
How to run multiple ProxySQL instances

Some time ago I wrote about the ability to run multiple ProxySQL instances listening on the same port(s).
This capability has been present since ProxySQL 1.3.0, but it is not widely used.
Although the feature seems super interesting, how to use it is not entirely intuitive.
After a series of Q&A on Slack (mostly reported at the bottom of this blog post), I decided it was time to describe this feature properly.
This blog post tries to describe how to use this feature, starting from installation.

Note: this feature requires Linux kernel >= 3.9, because it relies on the SO_REUSEPORT socket option.
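A quick way to check the requirement on your own system (SO_REUSEPORT was introduced in kernel 3.9):

```shell
# SO_REUSEPORT, which this feature relies on, was added in Linux 3.9.
# A quick sanity check of the running kernel:
uname -r
```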

Installation

In this example, installation is performed using the deb packages from the GitHub repository:

root@nebula:~# wget -q https://github.com/sysown/proxysql/releases/download/v1.4.1/proxysql_1.4.1-ubuntu16_amd64.deb
root@nebula:~# 
root@nebula:~# dpkg -i proxysql_1.4.1-ubuntu16_amd64.deb 
Selecting previously unselected package proxysql.
(Reading database ... 269571 files and directories currently installed.)
Preparing to unpack .../proxysql_1.4.1-ubuntu16_amd64.deb ...
Unpacking proxysql (1.4.1) ...
Setting up proxysql (1.4.1) ...
Processing triggers for systemd (229-4ubuntu16) ...
Processing triggers for ureadahead (0.100.0-19) ...

During the installation, an empty datadir is created, as well as a minimal config file:

root@nebula:~# ls -l /var/lib/proxysql/
total 0
root@nebula:~# ls -l /etc/proxysql.cnf 
-rw-r--r-- 1 root root 4517 ago  1 13:23 /etc/proxysql.cnf

Starting ProxySQL

When ProxySQL starts, it normally starts as a daemon. As part of this process, it checks whether there is a pid file in the datadir and whether another proxysql instance is already running. If so, it exits immediately.
Therefore, running two instances of ProxySQL needs some extra configuration.
Although it is possible for two proxysql instances to share the same database file proxysql.db, that is a bit complex and we won't describe this scenario. We will instead run two proxysql instances, each with its own datadir and configuration file (a minimal configuration file!)
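The pid-file check described above can be sketched in a few lines of shell (a simplified illustration; proxysql performs this check internally, and the function name here is hypothetical):

```shell
# Simplified sketch of the startup check proxysql performs: refuse to
# start when the pid file points at a process that is still alive.
check_pidfile() {
  pidfile="$1"
  if [ -f "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null ; then
    echo "proxysql already running" >&2
    return 1
  fi
  return 0
}
```

This is exactly why two background instances need two datadirs: each daemon writes its own proxysql.pid in its datadir.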

Preparing the datadirs

We will simply create two directories in /var/lib/proxysql , each directory representing the datadir of a process.

root@nebula:~# mkdir /var/lib/proxysql/proxy01
root@nebula:~# mkdir /var/lib/proxysql/proxy02

Next, we will create two config files, one in each datadir. The only thing we will configure is the listener port of the Admin interface: without distinct ports, we would not be able to choose which ProxySQL's Admin to connect to.
Note that it is also possible to configure two different Unix domain sockets, but in this example we will use TCP sockets.
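For reference, a Unix-domain-socket variant of the same minimal config might look like this (the socket path is just an example):

```
datadir="/var/lib/proxysql/proxy01"
admin_variables=
{
  mysql_ifaces="/var/lib/proxysql/proxy01/admin.sock"
}
```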

root@nebula:~# cat > /var/lib/proxysql/proxy01/proxysql.cnf << EOF
> datadir="/var/lib/proxysql/proxy01"
> admin_variables=
> {
>   mysql_ifaces="0.0.0.0:6031"
> }
> EOF
root@nebula:~# cat > /var/lib/proxysql/proxy02/proxysql.cnf << EOF
> datadir="/var/lib/proxysql/proxy02"
> admin_variables=
> {
>   mysql_ifaces="0.0.0.0:6032"
> }
> EOF

Verify the files:

root@nebula:~# cat /var/lib/proxysql/proxy01/proxysql.cnf
datadir="/var/lib/proxysql/proxy01"
admin_variables=
{
  mysql_ifaces="0.0.0.0:6031"
}
root@nebula:~# cat /var/lib/proxysql/proxy02/proxysql.cnf
datadir="/var/lib/proxysql/proxy02"
admin_variables=
{
  mysql_ifaces="0.0.0.0:6032"
}

Now we can start two proxysql instances, each one with its config file:

root@nebula:~# proxysql -c /var/lib/proxysql/proxy01/proxysql.cnf
root@nebula:~# proxysql -c /var/lib/proxysql/proxy02/proxysql.cnf

Next, let’s check what is running:

root@nebula:~# ps -eF | grep proxysql | grep cnf
root     16383  1855  0  8307  7052   6 20:32 ?        00:00:00 proxysql -c /var/lib/proxysql/proxy01/proxysql.cnf
root     16384 16383  0 24920 31956   5 20:32 ?        00:00:00 proxysql -c /var/lib/proxysql/proxy01/proxysql.cnf
root     16410  1855  0  8307  3368   3 20:32 ?        00:00:00 proxysql -c /var/lib/proxysql/proxy02/proxysql.cnf
root     16411 16410  0 24920  8044   2 20:32 ?        00:00:00 proxysql -c /var/lib/proxysql/proxy02/proxysql.cnf

Perfect, we have four proxysql processes! Remember, when starting as a daemon, proxysql immediately forks:

  • the child process is the real service
  • the parent process checks the exit status of the child process and, if the child crashes, restarts it immediately (in under 15ms).

We can now verify that the two services are using a datadir each:

root@nebula:~# ls -l /var/lib/proxysql/
total 8
drwxr-xr-x 2 root root 4096 ago  1 20:32 proxy01
drwxr-xr-x 2 root root 4096 ago  1 20:32 proxy02
root@nebula:~# ls -l /var/lib/proxysql/proxy01/
total 108
-rw-r--r-- 1 root root    87 ago  1 20:30 proxysql.cnf
-rw------- 1 root root 98304 ago  1 20:32 proxysql.db
-rw------- 1 root root  1239 ago  1 20:32 proxysql.log
-rw-r--r-- 1 root root     6 ago  1 20:32 proxysql.pid
root@nebula:~# ls -l /var/lib/proxysql/proxy02/
total 108
-rw-r--r-- 1 root root    87 ago  1 20:30 proxysql.cnf
-rw------- 1 root root 98304 ago  1 20:32 proxysql.db
-rw------- 1 root root  1239 ago  1 20:32 proxysql.log
-rw-r--r-- 1 root root     6 ago  1 20:32 proxysql.pid

Verify listening processes

Next, let’s check which ports are used by the two proxysql instances:

root@nebula:~# netstat -ntap | grep proxysql
tcp        0      0 0.0.0.0:6031            0.0.0.0:*               LISTEN      16384/proxysql  
tcp        0      0 0.0.0.0:6032            0.0.0.0:*               LISTEN      16411/proxysql  
tcp        0      0 0.0.0.0:6033            0.0.0.0:*               LISTEN      16411/proxysql  
tcp        0      0 0.0.0.0:6033            0.0.0.0:*               LISTEN      16411/proxysql  
tcp        0      0 0.0.0.0:6033            0.0.0.0:*               LISTEN      16411/proxysql  
tcp        0      0 0.0.0.0:6033            0.0.0.0:*               LISTEN      16411/proxysql  
tcp        0      0 0.0.0.0:6033            0.0.0.0:*               LISTEN      16384/proxysql  
tcp        0      0 0.0.0.0:6033            0.0.0.0:*               LISTEN      16384/proxysql  
tcp        0      0 0.0.0.0:6033            0.0.0.0:*               LISTEN      16384/proxysql  
tcp        0      0 0.0.0.0:6033            0.0.0.0:*               LISTEN      16384/proxysql  

As expected, they are listening on port 6031 (pid 16384, proxysql1), port 6032 (pid 16411, proxysql2), and port 6033 (both pid 16384 and 16411).

But why is each proxysql process reported four times as listening on port 6033?
In proxysql, each MySQL Thread listens on that port. Because mysql-threads=4 by default, each proxysql process has 4 threads listening on port 6033. That is, multiple threads are listening on the same socket:

root@nebula:~# lsof -i:6033 -n
COMMAND    PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
proxysql 16384 root   15u  IPv4 705075      0t0  TCP *:6033 (LISTEN)
proxysql 16384 root   16u  IPv4 705076      0t0  TCP *:6033 (LISTEN)
proxysql 16384 root   17u  IPv4 705077      0t0  TCP *:6033 (LISTEN)
proxysql 16384 root   18u  IPv4 705078      0t0  TCP *:6033 (LISTEN)
proxysql 16411 root   15u  IPv4 700302      0t0  TCP *:6033 (LISTEN)
proxysql 16411 root   16u  IPv4 700303      0t0  TCP *:6033 (LISTEN)
proxysql 16411 root   17u  IPv4 700304      0t0  TCP *:6033 (LISTEN)
proxysql 16411 root   18u  IPv4 700305      0t0  TCP *:6033 (LISTEN)

As a result, 8 threads are listening on the same port, 4 threads for each proxysql process.

Create a user and a backend in ProxySQL

To test how connections are load balanced, we first need to create a user in ProxySQL to allow clients to connect.
We will also configure a backend.
Because each proxysql instance has its own configuration, both need to be configured.

root@nebula:~# for i in 6031 6032 ; do mysql -u admin -padmin -h 127.0.0.1 -P$i -e "INSERT INTO mysql_users (username,password) VALUES ('sbtest','sbtest'); LOAD MYSQL USERS TO RUNTIME;" ; done
mysql: [Warning] Using a password on the command line interface can be insecure.
mysql: [Warning] Using a password on the command line interface can be insecure.
root@nebula:~# for i in 6031 6032 ; do mysql -u admin -padmin -h 127.0.0.1 -P$i -e "INSERT INTO mysql_servers (hostname) VALUES ('127.0.0.1'); LOAD MYSQL SERVERS TO RUNTIME;" ; done
mysql: [Warning] Using a password on the command line interface can be insecure.
mysql: [Warning] Using a password on the command line interface can be insecure.

Load balance traffic

It is now time to check how the kernel load balances the traffic.
We will run a very simple query, SELECT 1, using mysqlslap:

root@nebula:~# time mysqlslap --create-schema=sbtest -u sbtest -psbtest -h 127.0.0.1 -P6033 -c 8 -i 1000 -q "SELECT 1"
mysqlslap: [Warning] Using a password on the command line interface can be insecure.
Benchmark
        Average number of seconds to run all queries: 0.000 seconds
        Minimum number of seconds to run all queries: 0.000 seconds
        Maximum number of seconds to run all queries: 0.004 seconds
        Number of clients running queries: 8
        Average number of queries per client: 1

real    0m1.131s
user    0m0.344s
sys     0m0.868s

How do we verify how many queries were executed and connections established on each proxysql?

root@nebula:~# for i in 6031 6032 ; do mysql -u admin -padmin -h 127.0.0.1 -P$i -e "SELECT * FROM stats_mysql_global WHERE VARIABLE_NAME IN ('Client_Connections_created','Questions')" 2> /dev/null ; done
+----------------------------+----------------+
| Variable_Name              | Variable_Value |
+----------------------------+----------------+
| Client_Connections_created | 4034           |
| Questions                  | 4033           |
+----------------------------+----------------+
+----------------------------+----------------+
| Variable_Name              | Variable_Value |
+----------------------------+----------------+
| Client_Connections_created | 3967           |
| Questions                  | 3967           |
+----------------------------+----------------+

The kernel load balanced the traffic almost evenly:

  • 50.41% of the connections to proxysql1
  • 49.59% of the connections to proxysql2

PROXYSQL PAUSE

Very old versions of ProxySQL (0.2, maybe 1.0) had a command that allowed the listener to be shut down gracefully. As ProxySQL became more modular and complex, this feature was somehow broken, but it was finally restored in 1.4.1, following a feature request.
The command PROXYSQL PAUSE stops accepting new connections, while continuing to process the requests of the current connections.
Why is this feature important? If there is a need to restart ProxySQL (for example, to perform an upgrade), it is possible to execute PROXYSQL PAUSE, wait for all the clients to disconnect, and finally stop the process, so that clients' connections are terminated gracefully.
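The "wait for all the clients to disconnect" step can be scripted by polling the Client_Connections_connected counter in stats_mysql_global. A hypothetical helper, assuming the admin credentials and port layout used in this post:

```shell
# Hypothetical helper: pause one instance, then wait until its client
# connections drain to zero (assumes admin/admin on the given admin port).
pause_and_drain() {
  port="$1"
  mysql -u admin -padmin -h 127.0.0.1 -P"$port" -e "PROXYSQL PAUSE"
  while : ; do
    conns=$(mysql -u admin -padmin -h 127.0.0.1 -P"$port" -N -s -e \
      "SELECT Variable_Value FROM stats_mysql_global WHERE Variable_Name='Client_Connections_connected'")
    [ "$conns" = "0" ] && break
    sleep 1
  done
}

# Usage against a running setup: pause_and_drain 6031, then stop the instance.
```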

Let's try this by running mysqlslap again for a longer period of time, and issuing PROXYSQL PAUSE on one of the proxies.

(Note: both proxies were erroneously restarted between the previous test and this one, therefore the pids changed and the counters were reset.)

root@nebula:~# ( ( sleep 2 ; mysql -u admin -padmin -h 127.0.0.1 -P6031 -e "PROXYSQL PAUSE") & ) ; time mysqlslap --create-schema=sbtest -u sbtest -psbtest -h 127.0.0.1 -P6033 -c 8 -i 300 -q "SELECT sleep(0.01)"
mysqlslap: [Warning] Using a password on the command line interface can be insecure.
mysql: [Warning] Using a password on the command line interface can be insecure.
Benchmark
        Average number of seconds to run all queries: 0.011 seconds
        Minimum number of seconds to run all queries: 0.010 seconds
        Maximum number of seconds to run all queries: 0.111 seconds
        Number of clients running queries: 8
        Average number of queries per client: 1

real    0m3.763s
user    0m0.204s
sys     0m0.632s

Let's check where the queries were executed. We expect proxysql1 to have received fewer connections, because its listener was stopped during the test:

root@nebula:~# for i in 6031 6032 ; do mysql -u admin -padmin -h 127.0.0.1 -P$i -e "SELECT * FROM stats_mysql_global WHERE VARIABLE_NAME IN ('Client_Connections_created','Questions')" 2> /dev/null ; done
+----------------------------+----------------+
| Variable_Name              | Variable_Value |
+----------------------------+----------------+
| Client_Connections_created | 644            |
| Questions                  | 644            |
+----------------------------+----------------+
+----------------------------+----------------+
| Variable_Name              | Variable_Value |
+----------------------------+----------------+
| Client_Connections_created | 1757           |
| Questions                  | 1756           |
+----------------------------+----------------+

One important thing to highlight is that mysqlslap didn't receive any error: this is because when the listener is stopped, current connections are not terminated.

Now let's check the listeners. Only one process is listening on port 6033:

root@nebula:~# netstat -ntlp | grep proxysql
tcp        0      0 0.0.0.0:6031            0.0.0.0:*               LISTEN      13909/proxysql  
tcp        0      0 0.0.0.0:6032            0.0.0.0:*               LISTEN      13936/proxysql  
tcp        0      0 0.0.0.0:6033            0.0.0.0:*               LISTEN      13936/proxysql  
tcp        0      0 0.0.0.0:6033            0.0.0.0:*               LISTEN      13936/proxysql  
tcp        0      0 0.0.0.0:6033            0.0.0.0:*               LISTEN      13936/proxysql  
tcp        0      0 0.0.0.0:6033            0.0.0.0:*               LISTEN      13936/proxysql  

That means that if we execute mysqlslap again, all traffic will go to proxysql2:

root@nebula:~# time mysqlslap --create-schema=sbtest -u sbtest -psbtest -h 127.0.0.1 -P6033 -c 8 -i 300 -q "SELECT sleep(0.01)"
mysqlslap: [Warning] Using a password on the command line interface can be insecure.
Benchmark
        Average number of seconds to run all queries: 0.011 seconds
        Minimum number of seconds to run all queries: 0.011 seconds
        Maximum number of seconds to run all queries: 0.018 seconds
        Number of clients running queries: 8
        Average number of queries per client: 1

real    0m3.817s
user    0m0.220s
sys     0m0.648s
root@nebula:~# for i in 6031 6032 ; do mysql -u admin -padmin -h 127.0.0.1 -P$i -e "SELECT * FROM stats_mysql_global WHERE VARIABLE_NAME IN ('Client_Connections_created','Questions')" 2> /dev/null ; done
+----------------------------+----------------+
| Variable_Name              | Variable_Value |
+----------------------------+----------------+
| Client_Connections_created | 644            |
| Questions                  | 644            |
+----------------------------+----------------+
+----------------------------+----------------+
| Variable_Name              | Variable_Value |
+----------------------------+----------------+
| Client_Connections_created | 4158           |
| Questions                  | 4156           |
+----------------------------+----------------+

Confirmed! All new requests went to proxysql2.

PROXYSQL RESUME

Surprise! :)
ProxySQL doesn't just have the command PROXYSQL PAUSE, it also has PROXYSQL RESUME: all the MySQL Threads inside ProxySQL start listening again on the listener port. Let's run mysqlslap again, and while it runs, execute PROXYSQL RESUME on proxysql1:

root@nebula:~# ( ( sleep 2 ; mysql -u admin -padmin -h 127.0.0.1 -P6031 -e "PROXYSQL RESUME") & ) ; time mysqlslap --create-schema=sbtest -u sbtest -psbtest -h 127.0.0.1 -P6033 -c 8 -i 300 -q "SELECT sleep(0.01)"
mysqlslap: [Warning] Using a password on the command line interface can be insecure.
mysql: [Warning] Using a password on the command line interface can be insecure.
Benchmark
        Average number of seconds to run all queries: 0.012 seconds
        Minimum number of seconds to run all queries: 0.011 seconds
        Maximum number of seconds to run all queries: 0.017 seconds
        Number of clients running queries: 8
        Average number of queries per client: 1

real    0m3.839s
user    0m0.224s
sys     0m0.700s

Let’s now check where connections were established and queries executed:

root@nebula:~# for i in 6031 6032 ; do mysql -u admin -padmin -h 127.0.0.1 -P$i -e "SELECT * FROM stats_mysql_global WHERE VARIABLE_NAME IN ('Client_Connections_created','Questions')" 2> /dev/null ; done
+----------------------------+----------------+
| Variable_Name              | Variable_Value |
+----------------------------+----------------+
| Client_Connections_created | 1210           |
| Questions                  | 1210           |
+----------------------------+----------------+
+----------------------------+----------------+
| Variable_Name              | Variable_Value |
+----------------------------+----------------+
| Client_Connections_created | 5993           |
| Questions                  | 5990           |
+----------------------------+----------------+

As expected, proxysql1 has now resumed operation and is processing requests!
And finally, netstat confirms that both proxies are listening on port 6033:

root@nebula:~# netstat -ntlp | grep proxysql
tcp        0      0 0.0.0.0:6031            0.0.0.0:*               LISTEN      13909/proxysql  
tcp        0      0 0.0.0.0:6032            0.0.0.0:*               LISTEN      13936/proxysql  
tcp        0      0 0.0.0.0:6033            0.0.0.0:*               LISTEN      13909/proxysql  
tcp        0      0 0.0.0.0:6033            0.0.0.0:*               LISTEN      13909/proxysql  
tcp        0      0 0.0.0.0:6033            0.0.0.0:*               LISTEN      13909/proxysql  
tcp        0      0 0.0.0.0:6033            0.0.0.0:*               LISTEN      13909/proxysql  
tcp        0      0 0.0.0.0:6033            0.0.0.0:*               LISTEN      13936/proxysql  
tcp        0      0 0.0.0.0:6033            0.0.0.0:*               LISTEN      13936/proxysql  
tcp        0      0 0.0.0.0:6033            0.0.0.0:*               LISTEN      13936/proxysql  
tcp        0      0 0.0.0.0:6033            0.0.0.0:*               LISTEN      13936/proxysql  

Conclusion

Running multiple ProxySQL instances on the same box, all listening on the same port(s), is a great way to provide HA, perform graceful upgrades, or test new functionality or configurations.
All this is completely transparent to the application.
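Putting the pieces together, an in-place upgrade of one instance could be outlined as follows. This is an untested sketch: the package name is a placeholder, and it assumes the layout from this post and that the PROXYSQL SHUTDOWN admin command is available in your version:

```shell
# Hypothetical outline: upgrade proxysql1 (admin on 6031) while proxysql2
# keeps serving port 6033.
upgrade_proxy01() {
  admin="mysql -u admin -padmin -h 127.0.0.1 -P6031"
  $admin -e "PROXYSQL PAUSE"      # new connections now go to proxysql2
  sleep 30                        # give existing clients time to finish
  $admin -e "PROXYSQL SHUTDOWN"   # stop this instance
  dpkg -i proxysql_NEW-ubuntu16_amd64.deb
  proxysql -c /var/lib/proxysql/proxy01/proxysql.cnf
}
```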

Q & A

Below is the Q&A we had on Slack on this subject.

Q. Wait, “Multiple ProxySQL instances can listen on same TCP port” – really? How did you do that?
A. Yes, actually, it's very easy: you just need Linux kernel 3.9+. Yet, a lot of production systems still use 2.6. (For example, CentOS 6.x used 2.6, while the current CentOS 7 uses 3.10.)

Q. Wow. The kernel must round robin incoming connections to listening processes? And keeps track by source IP and TCP port number?
A. Yes. Once a connection is bound to a process, it stays locked there, until it is disconnected.

Q. So tell me again how I can use this in ProxySQL.
A. The new command “PROXYSQL PAUSE” allows a graceful shutdown. If you have 2 proxies, you can issue PROXYSQL PAUSE in one of them, and from that moment on, all new connections will go to the other proxy. When the first proxy has no more connections, you can shut it down.

Q. And my connecting applications will never know the difference! I could run /usr/bin/proxysql01 and /usr/bin/proxysql02 both on port 6033, then PAUSE proxysql01, shut it down, and upgrade just that proxysql01 binary, while traffic still routes through 02?
A. Yes you can. If you run them in the foreground, that is easy. If you run them in the background, you need to specify 2 different datadirs, because otherwise the 2nd won't start, since it will find a pid file.

Q. Right, data directory management … though you could technically have both proxysql01 and proxysql02 share the same proxysql.db file? Admin them both separately, tell each to SAVE BLAH TO DISK, one config database, but then you are overwriting the db file so better be careful you know what you are doing!
A. Exactly. They can share the same proxysql.db, but using different database files makes things easier.

Q. I guess you better also run them on separate admin TCP ports — else what a nightmare.
A. Yes. Technically you can use different Unix domain sockets, but that just makes management more complex. Using different datadirs and different admin ports is way easier. Something like 6031, 6032, etc.

Q. But no matter what, I have to remember to make any configuration changes twice (or more), on each instance, right?
A. Yes, you have to manage each instance separately, and be careful to keep them in sync, since they are listening on the same host on the same port. Linux chooses which one gets each incoming proxysql connection and you cannot control that. If you have different RUNTIME configs on the same port you will get unpredictable results.
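One way to reduce drift is a tiny wrapper that applies the same admin statement to every instance (a hypothetical helper, assuming admin/admin on ports 6031 and 6032 as in this post):

```shell
# Hypothetical helper: run one admin SQL statement on every local instance.
admin_all() {
  sql="$1"
  for port in 6031 6032 ; do
    mysql -u admin -padmin -h 127.0.0.1 -P"$port" -e "$sql"
  done
}

# Usage: admin_all "LOAD MYSQL QUERY RULES TO RUNTIME;"
```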

Q. I guess I could carefully use different pid files but share one proxysql.db file, SAVE {CONFIG} TO DISK in my master admin instance, then LOAD {CONFIG} FROM DISK and LOAD {CONFIG} TO RUNTIME on the other instances, to make sure they all had the same changes.
A. Yes, that would work. Example, LOAD MYSQL QUERY RULES FROM DISK and LOAD MYSQL QUERY RULES TO RUNTIME would bring up changes into your secondary instance(s) after saving from primary.

Q. In summary, this feature seems powerful but could be tricky to manage.
A. Yes, for administration you must be careful here. It is only needed if you want to be able to upgrade proxysql in place without losing service for even a few seconds. Remember, you can still do a lot with proxysql in its more typical mode: one binary running and listening on one port. You can still accept thousands of connections and route them to mysql backends based on almost any criteria.