The latest enhancement in ProxySQL v2.0.0 is native support for Galera Cluster. There is no longer any need to use an external script within the scheduler, as explained in this earlier post of ours.

This document will cover how to take advantage of the new feature and integrate ProxySQL with Percona XtraDB Cluster, so that ProxySQL monitors the Galera node status and performs the read/write split. To illustrate, we will use a cluster of 3 nodes, detailed below:

IP address     Hostname
172.16.1.112   db-node01
172.16.1.113   db-node02
172.16.1.114   db-node03

Galera Cluster Native Support in ProxySQL

So, let’s have a look at what is new for users in the admin interface of ProxySQL:
  • Tables to configure the cluster and monitor its status:

mysql> show tables ;
+--------------------------------------------+
| tables                                     |
+--------------------------------------------+
| [..]                                       |
| mysql_galera_hostgroups                    |
| [..]                                       |
| runtime_mysql_galera_hostgroups            |
| [..]                                       |
+--------------------------------------------+ 
mysql> show tables from monitor ;
+------------------------------------+
| tables                             |
+------------------------------------+
| [..]                               |
| mysql_server_galera_log            |
| [..]                               |
+------------------------------------+
  • Global variables to control the health check interval and timeout:
mysql> select * from global_variables where variable_name like '%monitor_galera%';
+-------------------------------------------+----------------+
| variable_name                             | variable_value |
+-------------------------------------------+----------------+
| mysql-monitor_galera_healthcheck_interval | 5000           |
| mysql-monitor_galera_healthcheck_timeout  | 800            |
+-------------------------------------------+----------------+
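Both variables are expressed in milliseconds. Should you need to tune them, they can be updated from the admin interface like any other variable; the values below are purely illustrative:

-- check every 2 seconds, time out after 1 second
UPDATE global_variables SET variable_value=2000 WHERE variable_name='mysql-monitor_galera_healthcheck_interval';
UPDATE global_variables SET variable_value=1000 WHERE variable_name='mysql-monitor_galera_healthcheck_timeout';
LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;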
  • Definition of the mysql_galera_hostgroups table:
mysql> show create table mysql_galera_hostgroups\G
*************************** 1. row ***************************
       table: mysql_galera_hostgroups
Create Table: CREATE TABLE mysql_galera_hostgroups (
    writer_hostgroup INT CHECK (writer_hostgroup>=0) NOT NULL PRIMARY KEY,
    backup_writer_hostgroup INT CHECK (backup_writer_hostgroup>=0 AND backup_writer_hostgroup<>writer_hostgroup) NOT NULL,
    reader_hostgroup INT NOT NULL CHECK (reader_hostgroup<>writer_hostgroup AND backup_writer_hostgroup<>reader_hostgroup AND reader_hostgroup>0),
    offline_hostgroup INT NOT NULL CHECK (offline_hostgroup<>writer_hostgroup AND offline_hostgroup<>reader_hostgroup AND backup_writer_hostgroup<>offline_hostgroup AND offline_hostgroup>=0),
    active INT CHECK (active IN (0,1)) NOT NULL DEFAULT 1,
    max_writers INT NOT NULL CHECK (max_writers >= 0) DEFAULT 1,
    writer_is_also_reader INT CHECK (writer_is_also_reader IN (0,1)) NOT NULL DEFAULT 0,
    max_transactions_behind INT CHECK (max_transactions_behind>=0) NOT NULL DEFAULT 0,
    comment VARCHAR,
    UNIQUE (reader_hostgroup),
    UNIQUE (offline_hostgroup),
    UNIQUE (backup_writer_hostgroup));
  • The fields have the following semantics:
writer_hostgroup: the ID of the hostgroup that will contain all the members that are writers.
backup_writer_hostgroup: if the cluster is running in multi-primary mode, there can be multiple writers (read_only=0); if the number of writers is larger than max_writers, the extra nodes are placed in this backup writer hostgroup.
reader_hostgroup: the ID of the hostgroup that will contain all the members that are readers; nodes that have read_only=1 will be assigned to this hostgroup.
offline_hostgroup: when ProxySQL’s monitoring determines a host is offline, it is moved into this hostgroup.
active: boolean value (0 or 1); when enabled, ProxySQL monitors the hostgroups and moves servers into the appropriate hostgroups accordingly.
max_writers: limits the number of nodes in the writer hostgroup; nodes in excess of this value are put into the backup_writer_hostgroup.
writer_is_also_reader: boolean value (0 or 1); when enabled, a node in the writer hostgroup will also belong to the reader hostgroup.
max_transactions_behind: the maximum number of writesets a node is allowed to have queued before ProxySQL shuns it to prevent stale reads (determined by querying the wsrep_local_recv_queue Galera status variable).
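For reference, the value that ProxySQL compares against max_transactions_behind can be inspected directly on any Galera node:

db-node01 mysql> show global status like 'wsrep_local_recv_queue';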

For reference : https://github.com/sysown/proxysql/wiki/Main-(runtime)#mysql_galera_hostgroups

Now we can start with the configuration:

We are configuring ProxySQL with one writer node and one backup writer node, which will take over when the primary writer node is not accessible for writes, while a third node is responsible only for handling SELECT traffic. First, we connect to the admin interface and begin the configuration:

mysql -P6032 -uadmin -padmin -h 127.0.0.1

We need to populate the mysql_servers table with information about our Galera nodes. The node with the lowest weight will be reconfigured into the backup_writer_hostgroup after the configuration is loaded to runtime.

INSERT INTO mysql_servers(hostgroup_id,hostname,port,weight) VALUES (2,'172.16.1.112',3306,100);
INSERT INTO mysql_servers(hostgroup_id,hostname,port,weight) VALUES (2,'172.16.1.113',3306,10);
INSERT INTO mysql_servers(hostgroup_id,hostname,port,weight) VALUES (3,'172.16.1.114',3306,100);

Now we can set up ProxySQL’s behavior for our Galera cluster. We set max_writers=1 and we also want a dedicated writer, which would not handle SELECT traffic (writer_is_also_reader=0) in the current setup.

INSERT INTO mysql_galera_hostgroups (writer_hostgroup,backup_writer_hostgroup,reader_hostgroup,offline_hostgroup,active,max_writers,writer_is_also_reader,max_transactions_behind) 
VALUES (2,4,3,1,1,1,0,100);

Now, we verify the configured tables:

mysql> select hostgroup_id,hostname,port,status,weight,max_connections from mysql_servers;
+--------------+--------------+------+--------+--------+-----------------+
| hostgroup_id | hostname     | port | status | weight | max_connections |
+--------------+--------------+------+--------+--------+-----------------+
| 2            | 172.16.1.112 | 3306 | ONLINE | 100    | 1000            |
| 2            | 172.16.1.113 | 3306 | ONLINE | 10     | 1000            |
| 3            | 172.16.1.114 | 3306 | ONLINE | 100    | 1000            |
+--------------+--------------+------+--------+--------+-----------------+
3 rows in set (0.00 sec)
mysql> select * from mysql_galera_hostgroups;
+------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
| writer_hostgroup | backup_writer_hostgroup | reader_hostgroup | offline_hostgroup | active | max_writers | writer_is_also_reader | max_transactions_behind | comment |
+------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
| 2                | 4                       | 3                | 1                 | 1      | 1           | 0                     | 100                     | NULL    |
+------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
1 row in set (0.00 sec)

We should not forget to load our mysql servers configuration to runtime and save it to disk:

LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;

After loading the configuration to runtime, we can see that host 172.16.1.113, which has the lower weight, got reconfigured into hostgroup 4. This happened because we configured ProxySQL with max_writers=1 and backup_writer_hostgroup=4. The backup writer node will only be used if node 172.16.1.112 fails.

mysql> select hostgroup,srv_host,status,ConnUsed,MaxConnUsed,Queries,Latency_us from stats.stats_mysql_connection_pool order by srv_host;
+-----------+--------------+--------+----------+-------------+-----------+------------+
| hostgroup | srv_host     | status | ConnUsed | MaxConnUsed | Queries   | Latency_us |
+-----------+--------------+--------+----------+-------------+-----------+------------+
| 2         | 172.16.1.112 | ONLINE | 3        | 4           | 930742390 | 118        |
| 4         | 172.16.1.113 | ONLINE | 0        | 0           | 0         | 136        |
| 3         | 172.16.1.114 | ONLINE | 1        | 1           | 233130345 | 123        |
+-----------+--------------+--------+----------+-------------+-----------+------------+
3 rows in set (0.00 sec)

Now it’s time to go ahead and define the query rules, which will handle the read/write split. The default_hostgroup for the application user in the mysql_users table is set to 2, i.e. the writer hostgroup (a sketch of creating such a user follows the rules below).
  • 1st rule: the query processor scans the query rules to find a match for the ^SELECT.* pattern; if a match is found, ProxySQL forwards these queries to destination_hostgroup=3.
  • 2nd rule: queries with a ^SELECT.* FOR UPDATE pattern should always be served from the writer hostgroup, so we set destination_hostgroup=2.

INSERT INTO mysql_query_rules (active, match_digest, destination_hostgroup, apply) VALUES (1, '^SELECT.*',3, 0);
INSERT INTO mysql_query_rules (active, match_digest, destination_hostgroup, apply) VALUES (1, '^SELECT.* FOR UPDATE',2, 1);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
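As mentioned above, the application user must exist in the mysql_users table with default_hostgroup=2. If you have not created it yet, a minimal sketch looks like this (the user name and password are placeholders):

-- application user whose unmatched queries default to the writer hostgroup
INSERT INTO mysql_users (username, password, default_hostgroup, active) VALUES ('appuser', 'apppass', 2, 1);
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;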
  • Make sure read_only=1 is enabled on any host you want placed in the reader hostgroup, since we configured writer_is_also_reader=0:
db-node03 mysql> set global read_only=ON;
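Once the monitor has picked up the change, you can check from the admin interface which hostgroup each node ended up in:

SELECT hostgroup_id, hostname, port, status FROM runtime_mysql_servers ORDER BY hostgroup_id;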

When you are done with the configuration, there is a useful table in ProxySQL’s monitor schema, shown below, which gives a quick view of the state of each node of the cluster as seen by ProxySQL:

mysql> select * from mysql_server_galera_log order by time_start_us desc limit 3;
+--------------+------+------------------+-----------------+-------------------+-----------+------------------------+-------------------+--------------+----------------------+---------------------------------+-------+
| hostname     | port | time_start_us    | success_time_us | primary_partition | read_only | wsrep_local_recv_queue | wsrep_local_state | wsrep_desync | wsrep_reject_queries | wsrep_sst_donor_rejects_queries | error |
+--------------+------+------------------+-----------------+-------------------+-----------+------------------------+-------------------+--------------+----------------------+---------------------------------+-------+
| 172.16.1.114 | 3306 | 1529510693289001 | 1234            | YES               | NO        | 0                      | 4                 | NO           | NO                   | NO                              | NULL  |
| 172.16.1.113 | 3306 | 1529510693287804 | 1209            | YES               | NO        | 0                      | 4                 | NO           | NO                   | NO                              | NULL  |
| 172.16.1.112 | 3306 | 1529510693286879 | 1158            | YES               | NO        | 0                      | 4                 | NO           | NO                   | NO                              | NULL  |
+--------------+------+------------------+-----------------+-------------------+-----------+------------------------+-------------------+--------------+----------------------+---------------------------------+-------+
3 rows in set (0.03 sec)

Time for Testing:

Let’s enable the Donor/Desync state on the writer node and check how ProxySQL handles the write traffic. When the writer node goes into Donor/Desync status, ProxySQL moves the write traffic to the backup writer node, promoting it from HG4 (the backup writer hostgroup) to HG2 (the writer hostgroup).

mysql> select hostgroup,srv_host,status,ConnUsed,MaxConnUsed,Queries,Latency_us from stats.stats_mysql_connection_pool order by srv_host;
+-----------+--------------+--------+----------+-------------+------------+------------+
| hostgroup | srv_host     | status | ConnUsed | MaxConnUsed | Queries    | Latency_us |
+-----------+--------------+--------+----------+-------------+------------+------------+
| 2         | 172.16.1.112 | ONLINE | 4        | 4           | 2295114892 | 131        |
| 4         | 172.16.1.113 | ONLINE | 0        | 0           | 0          | 162        |
| 3         | 172.16.1.114 | ONLINE | 1        | 1           | 539211603  | 142        |
+-----------+--------------+--------+----------+-------------+------------+------------+
3 rows in set (0.00 sec)

db-node01 mysql> set global wsrep_desync=ON;
Query OK, 0 rows affected (0.01 sec)

mysql> select hostgroup,srv_host,status,ConnUsed,MaxConnUsed,Queries,Latency_us from stats.stats_mysql_connection_pool order by srv_host;
+-----------+--------------+--------+----------+-------------+-----------+------------+
| hostgroup | srv_host     | status | ConnUsed | MaxConnUsed | Queries   | Latency_us |
+-----------+--------------+--------+----------+-------------+-----------+------------+
| 1         | 172.16.1.112 | ONLINE | 0        | 0           | 0         | 149        |
| 2         | 172.16.1.113 | ONLINE | 3        | 4           | 1156479   | 129        |
| 3         | 172.16.1.114 | ONLINE | 0        | 1           | 542028027 | 128        |
+-----------+--------------+--------+----------+-------------+-----------+------------+
3 rows in set (0.00 sec)

Note: the behaviour is the same when the primary writer node goes down and leaves the cluster: the traffic goes to the backup writer node, which is promoted to HG2 (the writer hostgroup), while the failed node is moved to HG1 (the offline hostgroup), as the output above shows. Apart from monitoring the wsrep_desync variable, ProxySQL also continuously checks the status of the wsrep_reject_queries and wsrep_sst_donor_rejects_queries variables and takes the required action when needed.
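To revert the test, disable desync again on the node; at the next health check, ProxySQL’s monitor should move it back into the writer hostgroup:

db-node01 mysql> set global wsrep_desync=OFF;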

Happy ProxySQLing!

Authored by: Ashwini Ahire