A blog series about ProxySQL and Galera…

This post is the first in a series of blog posts on how to easily use ProxySQL to scale out your application’s database workload on a Galera cluster. The series will explore the main concepts in configuring ProxySQL for Galera across three articles:

– A first introductory post describing the minimal configuration needed for ProxySQL to monitor and manage a Galera Cluster.
– A second post describing how to set up a read/write split configuration for our Galera Cluster, using ProxySQL query rules.
– A third and final post with examples on specific cluster configuration options and more detailed explanations on why and how ProxySQL changes the nodes states based on configuration changes and monitored variables.

Requirements

To illustrate how to configure ProxySQL for a Galera Cluster we will use a sample Galera Cluster composed of three nodes:

IP address    Hostname
172.17.0.2    db-node1
172.17.0.3    db-node2
172.17.0.4    db-node3

Configuring ProxySQL

Galera configuration support

For a detailed specification of the configuration variables and tables that ProxySQL provides specifically for Galera, please refer to the official documentation. This blogpost series assumes you are already familiar with the common deployment options for ProxySQL.
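
As a quick sanity check before we start (assuming a ProxySQL version with native Galera support, i.e. 2.0 or later), you can list the Galera-specific monitoring variables from the admin interface:

SELECT * FROM global_variables WHERE variable_name LIKE 'mysql-monitor_galera%';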

Configuring the Galera Cluster

We will start by configuring ProxySQL to monitor the Galera cluster. First, we connect to the ProxySQL admin console:

mysql -P6032 -uadmin -padmin -h 127.0.0.1

Now we add the information about our Galera cluster to the mysql_servers table. We will use the following hostgroups: writer_hostgroup=0, reader_hostgroup=1, backup_writer_hostgroup=2 and offline_hostgroup=4. After loading the configuration into runtime, the nodes with the lowest weight will be placed into the backup_writer_hostgroup; if all servers share the default weight=1, as in our configuration, the backup writers are selected at random:

DELETE FROM mysql_servers;

INSERT INTO mysql_servers(hostgroup_id,hostname,port) VALUES (0,'172.17.0.2',3306);
INSERT INTO mysql_servers(hostgroup_id,hostname,port) VALUES (0,'172.17.0.3',3306);
INSERT INTO mysql_servers(hostgroup_id,hostname,port) VALUES (1,'172.17.0.4',3306);

LOAD MYSQL SERVERS TO RUNTIME;

We continue by adding the configuration that defines how we want ProxySQL to behave for our Galera cluster workload. For this we make use of the table mysql_galera_hostgroups. Initially we are going to set max_writers=1 and writer_is_also_reader=1. This means that we will have 1 writer and 3 readers: the two nodes that are not selected as the writer are placed in the backup_writer_hostgroup, and all three nodes (the writer and the backup writers) are also placed in the reader hostgroup. This is because:

  1. As the documentation states, “writer_is_also_reader=1: nodes in writer_hostgroup and backup_writer_hostgroup are also in reader hostgroup.”
  2. The number of writers is limited to 1.

First we add the configuration to mysql_galera_hostgroups and then load mysql servers to runtime:

DELETE FROM mysql_galera_hostgroups;

INSERT INTO mysql_galera_hostgroups (writer_hostgroup, backup_writer_hostgroup, reader_hostgroup, offline_hostgroup, active, max_writers, writer_is_also_reader, max_transactions_behind, comment) VALUES (0, 2, 1, 4, 1, 1, 1, 100, NULL);

LOAD MYSQL SERVERS TO RUNTIME;

Let’s verify the configured tables:

mysql> select * from mysql_servers;
+--------------+------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname   | port | gtid_port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 0            | 172.17.0.2 | 3306 | 0         | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 0            | 172.17.0.3 | 3306 | 0         | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 1            | 172.17.0.4 | 3306 | 0         | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
3 rows in set (0.00 sec)
mysql> select * from mysql_galera_hostgroups\G
*************************** 1. row ***************************
       writer_hostgroup: 0
backup_writer_hostgroup: 2
       reader_hostgroup: 1
      offline_hostgroup: 4
                 active: 1
            max_writers: 1
  writer_is_also_reader: 1
max_transactions_behind: 100
                comment: NULL
1 row in set (0.00 sec)

And finally, we load the configuration to runtime and save it to disk for persistence:

LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
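
If you want to double-check what was persisted, the on-disk copy of the configuration is exposed through the disk schema in the admin interface. This is just a quick sanity check; the column list here is an illustrative subset:

SELECT hostgroup_id, hostname, port, status FROM disk.mysql_servers;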

Now it’s time to check the cluster’s runtime configuration and verify that our configuration options were correctly applied:

mysql> select hostgroup_id, hostname, port, gtid_port, status, weight from runtime_mysql_servers;
+--------------+------------+------+-----------+---------+--------+
| hostgroup_id | hostname   | port | gtid_port | status  | weight |
+--------------+------------+------+-----------+---------+--------+
| 0            | 172.17.0.2 | 3306 | 0         | SHUNNED | 1      |
| 2            | 172.17.0.2 | 3306 | 0         | ONLINE  | 1      |
| 2            | 172.17.0.3 | 3306 | 0         | ONLINE  | 1      |
| 1            | 172.17.0.3 | 3306 | 0         | ONLINE  | 1      |
| 1            | 172.17.0.2 | 3306 | 0         | ONLINE  | 1      |
| 1            | 172.17.0.4 | 3306 | 0         | ONLINE  | 1      |
| 0            | 172.17.0.4 | 3306 | 0         | ONLINE  | 1      |
| 0            | 172.17.0.3 | 3306 | 0         | SHUNNED | 1      |
+--------------+------------+------+-----------+---------+--------+
8 rows in set (0.00 sec)

Let’s see how each server ended up in its hostgroups:

  • Server: 172.17.0.2

    This server has been placed in the backup_writer_hostgroup(2) and in the reader_hostgroup(1). Additionally, it is in the writer_hostgroup(0) but with a status of SHUNNED. This is normal and expected: it prevents the server from receiving writes, since we have `max_writers=1` configured in the `mysql_galera_hostgroups` table.

    Hostgroup roles and re-configuration:

    1. Servers have been placed into the writer / backup_writer hostgroups because their MySQL READ_ONLY value is 0.
    2. Writers are additionally placed into the reader hostgroup because writer_is_also_reader=1.
    3. max_writers is set to 1, and server 172.17.0.4 has been randomly selected as the current writer since all the servers share the same `weight=1`.
  • Server: 172.17.0.3

    This server is in the exact same situation as 172.17.0.2.

  • Server: 172.17.0.4

    This server is the current unique writer and has been placed in the writer_hostgroup(0) and in the reader_hostgroup(1).
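
These placement decisions are driven by the values that ProxySQL’s monitor module collects from each node (READ_ONLY, wsrep_local_state, wsrep_local_recv_queue, and so on). As a quick way to see what the monitor is observing, assuming the default Galera health checks are enabled, you can query the Galera monitor log from the admin interface; the column list below is an illustrative subset and may vary slightly between ProxySQL versions:

SELECT hostname, port, read_only, wsrep_local_state, wsrep_local_recv_queue
FROM monitor.mysql_server_galera_log
ORDER BY time_start_us DESC
LIMIT 6;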

Changing cluster configuration

We have a fully configured cluster. Now we are going to see how configuration changes to max_writers and writer_is_also_reader make ProxySQL dynamically alter the cluster state, changing the active nodes’ statuses and hostgroups:

  • Increasing max_writers to 2:
    mysql> update mysql_galera_hostgroups set max_writers=2;
    mysql> LOAD MYSQL SERVERS TO RUNTIME;

    Now runtime_mysql_servers will report:

    mysql> select hostgroup_id, hostname, port, gtid_port, status, weight from runtime_mysql_servers;
    +--------------+------------+------+-----------+---------+--------+
    | hostgroup_id | hostname   | port | gtid_port | status  | weight |
    +--------------+------------+------+-----------+---------+--------+
    | 0            | 172.17.0.2 | 3306 | 0         | ONLINE  | 1      |
    | 2            | 172.17.0.3 | 3306 | 0         | ONLINE  | 1      |
    | 1            | 172.17.0.2 | 3306 | 0         | ONLINE  | 1      |
    | 1            | 172.17.0.3 | 3306 | 0         | ONLINE  | 1      |
    | 1            | 172.17.0.4 | 3306 | 0         | ONLINE  | 1      |
    | 0            | 172.17.0.4 | 3306 | 0         | ONLINE  | 1      |
    | 0            | 172.17.0.3 | 3306 | 0         | SHUNNED | 1      |
    +--------------+------------+------+-----------+---------+--------+
    7 rows in set (0.01 sec)
    

    As we can see, 172.17.0.2 has been selected as a second writer. The server with the highest weight is selected; if all servers share the same weight, a random server is chosen. 172.17.0.3 remains in the same state as before (SHUNNED in the writer_hostgroup, ONLINE in the backup_writer_hostgroup and the reader_hostgroup).
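
    If you wanted the writer selection to be deterministic rather than random, you could raise the weight of the preferred node. This is only a sketch and is not applied here (the rest of this post assumes all nodes keep the default weight=1); the value 100 is arbitrary:

    mysql> update mysql_servers set weight=100 where hostname='172.17.0.2';
    mysql> LOAD MYSQL SERVERS TO RUNTIME;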

  • Setting writer_is_also_reader to 0 (with max_writers set back to 1).
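
    Assuming the same admin session as before, the change is applied through the same mysql_galera_hostgroups table and then loaded to runtime:

    mysql> update mysql_galera_hostgroups set writer_is_also_reader=0, max_writers=1;
    mysql> LOAD MYSQL SERVERS TO RUNTIME;

    After the change, runtime_mysql_servers reports: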

    mysql> select hostgroup_id, hostname, port, gtid_port, status, weight from runtime_mysql_servers;
    +--------------+------------+------+-----------+---------+--------+
    | hostgroup_id | hostname   | port | gtid_port | status  | weight |
    +--------------+------------+------+-----------+---------+--------+
    | 0            | 172.17.0.2 | 3306 | 0         | SHUNNED | 1      |
    | 2            | 172.17.0.2 | 3306 | 0         | ONLINE  | 1      |
    | 2            | 172.17.0.3 | 3306 | 0         | ONLINE  | 1      |
    | 0            | 172.17.0.4 | 3306 | 0         | ONLINE  | 1      |
    | 0            | 172.17.0.3 | 3306 | 0         | SHUNNED | 1      |
    +--------------+------------+------+-----------+---------+--------+
    5 rows in set (0.00 sec)
    

    Now we can see that all the nodes in the writer_hostgroup and in the backup_writer_hostgroup remain untouched, while all the servers where READ_ONLY is 0 have been removed from the reader_hostgroup. This puts us in a situation where no backends are available in the reader hostgroup.

    In order to have readers available with these config options, we need to set a backend’s READ_ONLY to 1 in MySQL; that server will then be moved to the reader hostgroup.

    For example, let’s set READ_ONLY to 1 for server 172.17.0.3:

    mysql -h172.17.0.3 -P3306 -uadmin -padmin
    
    mysql> SET GLOBAL READ_ONLY=1;
    Query OK, 0 rows affected (0.02 sec)
    

    Now the server state looks like:

    mysql> select hostgroup_id, hostname, port, gtid_port, status, weight from runtime_mysql_servers;
    +--------------+------------+------+-----------+---------+--------+
    | hostgroup_id | hostname   | port | gtid_port | status  | weight |
    +--------------+------------+------+-----------+---------+--------+
    | 0            | 172.17.0.2 | 3306 | 0         | SHUNNED | 1      |
    | 2            | 172.17.0.2 | 3306 | 0         | ONLINE  | 1      |
    | 1            | 172.17.0.3 | 3306 | 0         | ONLINE  | 1      |
    | 0            | 172.17.0.4 | 3306 | 0         | ONLINE  | 1      |
    +--------------+------------+------+-----------+---------+--------+
    4 rows in set (0.00 sec)
    
  • Setting writer_is_also_reader to 2 (with max_writers set back to 1).
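
    Judging from the output below, READ_ONLY appears to have been reset to 0 on 172.17.0.3 before this step (it shows up again in the writer and backup_writer hostgroups). Assuming that, the change itself is applied the same way as before:

    mysql> update mysql_galera_hostgroups set writer_is_also_reader=2, max_writers=1;
    mysql> LOAD MYSQL SERVERS TO RUNTIME;

    Now runtime_mysql_servers reports: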
    mysql> select hostgroup_id, hostname, port, gtid_port, status, weight from runtime_mysql_servers;
    +--------------+------------+------+-----------+---------+--------+
    | hostgroup_id | hostname   | port | gtid_port | status  | weight |
    +--------------+------------+------+-----------+---------+--------+
    | 0            | 172.17.0.2 | 3306 | 0         | SHUNNED | 1      |
    | 2            | 172.17.0.2 | 3306 | 0         | ONLINE  | 1      |
    | 2            | 172.17.0.3 | 3306 | 0         | ONLINE  | 1      |
    | 1            | 172.17.0.2 | 3306 | 0         | ONLINE  | 1      |
    | 1            | 172.17.0.3 | 3306 | 0         | ONLINE  | 1      |
    | 0            | 172.17.0.4 | 3306 | 0         | ONLINE  | 1      |
    | 0            | 172.17.0.3 | 3306 | 0         | SHUNNED | 1      |
    +--------------+------------+------+-----------+---------+--------+
    7 rows in set (0.01 sec)
    

    With writer_is_also_reader=2, only the nodes in the backup_writer_hostgroup (172.17.0.2 and 172.17.0.3) are added to the reader hostgroup, while the current writer (172.17.0.4) is excluded from the reader_hostgroup.

Conclusion

As we have seen, ProxySQL offers a very flexible configuration model, and you can decide which configuration best suits your use case. We have covered the basics of how to configure a Galera Cluster with ProxySQL and what to expect from the different configuration options ProxySQL has to offer. For more detailed information about ProxySQL’s Galera support, please refer to the documentation, and of course, stay tuned for the next blog post in this series!

Happy ProxySQLing!