ProxySQL is a widely used load balancer for highly available MySQL setups. When we have an HA cluster it is always better to put a load balancer in front of the database for scaling, i.e. deciding which nodes can be used for writes and which for reads.
In this article, I will explain the following:
⇒ Installing and starting ProxySQL
⇒ How Percona XtraDB Cluster (PXC) can be used with ProxySQL with a read/write split
⇒ A read/write benchmark test
⇒ Automatic failover
To read more about ProxySQL, see the project wiki: https://github.com/sysown/proxysql/wiki
For this setup I have the following:
Linux Server : CentOS 6.10
PXC Version : 5.7.25-28-57 Percona XtraDB Cluster (GPL)
ProxySQL Version : ProxySQL 1.4.14
Proxy Node : 192.168.56.106
Galera Node1 : 192.168.56.102
Galera Node2 : 192.168.56.103
Galera Node3 : 192.168.56.107
The PXC setup itself can be done by following this doc.
Installing ProxySQL
The best way to install ProxySQL is from the Percona repo. To achieve this, first install the Percona repository on the server (if it is not already available).
Note : The following commands are run as the root user; if you are not root, please prefix the commands with sudo.
yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm
You can check the available rpm with the command below:
[root@centos ~]# yum list | grep proxysql
proxysql.x86_64 1.4.14-1.1.el6 @percona-release-x86_64
[root@centos ~]#
To install proxysql, run:
yum install proxysql
You can verify the same after installation:
[root@centos ~]# rpm -qa | grep proxysql
proxysql-1.4.14-1.1.el6.x86_64
[root@centos ~]#
Note : You need to have the mysql client installed on the proxy node.
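If the client is not present yet, it can be installed from the same Percona repo; for example (the package name below is an assumption and may differ for your PXC version):
yum install Percona-XtraDB-Cluster-client-57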
Starting ProxySQL
You need to start the service:
service proxysql start
or
/etc/init.d/proxysql start
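Once started, ProxySQL should be listening on its default ports: 6032 for the admin interface and 6033 for application traffic. A quick check (assuming these defaults) is:
# 6032 = admin interface, 6033 = application traffic
netstat -tlnp | grep proxysql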
ProxySQL provides two config files where your configuration lives:
proxysql-admin.cnf & proxysql.cnf
proxysql-admin.cnf holds all the admin variables; below is what I had:
cat /etc/proxysql-admin.cnf
# proxysql admin interface credentials.
export PROXYSQL_DATADIR='/var/lib/proxysql'
export PROXYSQL_USERNAME='admin'
export PROXYSQL_PASSWORD='admin'
export PROXYSQL_HOSTNAME='localhost'
export PROXYSQL_PORT='6032'
# PXC admin credentials for connecting to pxc-cluster-node.
export CLUSTER_USERNAME='admin'
export CLUSTER_PASSWORD='admin'
export CLUSTER_HOSTNAME='localhost'
export CLUSTER_PORT='3306'
# proxysql monitoring user. proxysql admin script will create this user in pxc to monitor pxc-nodes.
export MONITOR_USERNAME="monitor"
export MONITOR_PASSWORD="monitor"
# Application user to connect to pxc-node through proxysql
export CLUSTER_APP_USERNAME="sbuser"
export CLUSTER_APP_PASSWORD="sbpass"
# ProxySQL hostgroup IDs
export WRITER_HOSTGROUP_ID='10'
export READER_HOSTGROUP_ID='11'
# ProxySQL read/write configuration mode.
export MODE="singlewrite"
# max_connections default (used only when INSERTing a new mysql_servers entry)
export MAX_CONNECTIONS="1000"
# Determines the maximum number of writesets a node can have queued
# before the node is SHUNNED to avoid stale reads.
export MAX_TRANSACTIONS_BEHIND=100
# Connections to the backend servers (from ProxySQL) will use SSL
export USE_SSL="no"
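As a side note, the proxysql-admin script shipped with Percona's packages can read this file and set up the servers, monitoring user, application user and scheduler for you; the rest of this article performs the same steps manually so you can see what happens underneath. A sketch of the automated route would be:
proxysql-admin --config-file=/etc/proxysql-admin.cnf --enable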
Load balancing with ProxySQL
After installation you can log in to the admin interface using the credentials, hostname and port specified in the config file.
[root@standalone ~]# mysql -u admin -padmin -h 127.0.0.1 -P6032
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 15421
Server version: 5.5.30 (ProxySQL Admin Module)
Copyright (c) 2009-2019 Percona LLC and/or its affiliates
Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
Adding Cluster Nodes to ProxySQL
As mentioned in the config file, we will use two hostgroup IDs for writes & reads:
export WRITER_HOSTGROUP_ID='10' ⇒ nodes in this hostgroup serve writes
export READER_HOSTGROUP_ID='11' ⇒ nodes in this hostgroup serve reads
On the proxy node, execute the following queries in the admin interface:
INSERT INTO mysql_servers(hostgroup_id, hostname, port,weight,comment) VALUES (10,'192.168.56.102',3306,1000000,'WRITE');
INSERT INTO mysql_servers(hostgroup_id, hostname, port,weight,comment) VALUES (11,'192.168.56.103',3306,1000000,'READ');
INSERT INTO mysql_servers(hostgroup_id, hostname, port,weight,comment) VALUES (11,'192.168.56.107',3306,1000000,'READ');
Once added we need to load them and save to disk.
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
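To confirm the nodes were registered, you can query the mysql_servers table from the admin interface (the same query is used later in the failover section):
SELECT hostgroup_id, hostname, port, status, comment FROM mysql_servers;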
ProxySQL Monitoring
In order to enable monitoring of the PXC nodes in ProxySQL, we will create a monitoring user in the cluster.
Execute the query below on any one of the PXC nodes:
CREATE USER 'monitor'@'%' IDENTIFIED BY 'monitor';
GRANT USAGE ON *.* TO 'monitor'@'%';
Now configure the user in ProxySQL
UPDATE global_variables SET variable_value='monitor' WHERE variable_name="mysql-monitor_username";
UPDATE global_variables SET variable_value='monitor' WHERE variable_name="mysql-monitor_password";
Once added we need to load them and save to disk
LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;
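To confirm the monitoring credentials are active at runtime, you can check the monitor variables:
SELECT variable_name, variable_value FROM global_variables WHERE variable_name LIKE 'mysql-monitor_%';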
To confirm that monitoring is enabled, check the monitoring logs:
mysql> SELECT * FROM monitor.mysql_server_connect_log ORDER BY time_start_us DESC LIMIT 6;
+----------------+------+------------------+-------------------------+---------------+
| hostname | port | time_start_us | connect_success_time_us | connect_error |
+----------------+------+------------------+-------------------------+---------------+
| 192.168.56.103 | 3306 | 1558139759619458 | 1956 | NULL |
| 192.168.56.102 | 3306 | 1558139758921182 | 1336 | NULL |
| 192.168.56.107 | 3306 | 1558139758223625 | 1486 | NULL |
| 192.168.56.107 | 3306 | 1558139699644073 | 1453 | NULL |
| 192.168.56.102 | 3306 | 1558139698932583 | 1580 | NULL |
| 192.168.56.103 | 3306 | 1558139698220689 | 1135 | NULL |
+----------------+------+------------------+-------------------------+---------------+
6 rows in set (0.00 sec)
mysql> SELECT * FROM monitor.mysql_server_ping_log ORDER BY time_start_us DESC LIMIT 6;
+----------------+------+------------------+----------------------+------------+
| hostname | port | time_start_us | ping_success_time_us | ping_error |
+----------------+------+------------------+----------------------+------------+
| 192.168.56.103 | 3306 | 1558139789634032 | 590 | NULL |
| 192.168.56.107 | 3306 | 1558139789500496 | 385 | NULL |
| 192.168.56.102 | 3306 | 1558139789367311 | 416 | NULL |
| 192.168.56.102 | 3306 | 1558139779544502 | 530 | NULL |
| 192.168.56.103 | 3306 | 1558139779455180 | 480 | NULL |
| 192.168.56.107 | 3306 | 1558139779366472 | 500 | NULL |
+----------------+------+------------------+----------------------+------------+
6 rows in set (0.00 sec)
Query Rules
We will now configure ProxySQL to send read queries to the read nodes and everything else to the write node; you can also define exactly which types of SELECT should be diverted.
INSERT INTO mysql_query_rules (rule_id,active,match_pattern,destination_hostgroup,apply) VALUES (1,1,'^SELECT.*FOR UPDATE$',10,1), (2,1,'^SELECT',11,1);
Once added we need to load them and save to disk
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
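To confirm the rules are in place, and later to see which hostgroup actually served each kind of query, you can check the rules table and the query digest stats:
SELECT rule_id, active, match_pattern, destination_hostgroup, apply FROM mysql_query_rules;
SELECT hostgroup, digest_text, count_star FROM stats_mysql_query_digest ORDER BY count_star DESC LIMIT 5;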
Creating Application user
Now we will create the application user that the application will use to connect through ProxySQL.
Execute the query below on any one of the PXC nodes.
CREATE USER 'sbuser'@'192.168.56.106' IDENTIFIED BY 'sbpass';
GRANT ALL ON *.* TO 'sbuser'@'192.168.56.106';
Now add the user to the mysql_users table on ProxySQL:
INSERT INTO mysql_users (username,password,default_hostgroup) VALUES ('sbuser','sbpass',10);
Once added we need to load them and save to disk
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;
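With the user loaded, a quick sanity check from the proxy node is to connect through the traffic port (6033 by default) as the application user; a plain SELECT such as the one below should be routed to one of the read nodes according to the query rules above:
mysql -u sbuser -psbpass -h 127.0.0.1 -P6033 -e "SELECT @@hostname;"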
Scheduler
The scheduler is a cron-like implementation integrated inside ProxySQL with millisecond granularity. A bundled script, proxysql_galera_checker, checks the status of the nodes and handles failover automatically, i.e. if the write node goes down it will promote a reader node to be the write node.
Execute the command below on the proxy node:
INSERT INTO scheduler (active,interval_ms,filename,arg1,comment) VALUES (1,10000,'/usr/bin/proxysql_galera_checker','--config-file=/etc/proxysql-admin.cnf --write-hg=10 --read-hg=11 --writer-count=0 --mode=singlewrite --log=/var/lib/proxysql/cluster_one_proxysql_galera_check.log','cluster_one');
Once added we need to load them and save to disk
LOAD SCHEDULER TO RUNTIME;
SAVE SCHEDULER TO DISK;
You can check the scheduler entry from the admin interface:
mysql> select * from scheduler\G
*************************** 1. row ***************************
id: 1
active: 1
interval_ms: 10000
filename: /usr/bin/proxysql_galera_checker
arg1: --config-file=/etc/proxysql-admin.cnf --write-hg=10 --read-hg=11 --writer-count=0 --mode=singlewrite --log=/var/lib/proxysql/cluster_one_proxysql_galera_check.log
arg2: NULL
arg3: NULL
arg4: NULL
arg5: NULL
comment: cluster_one
1 row in set (0.00 sec)
As soon as the scheduler starts, it writes its checks to the configured log file.
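You can follow the checker's decisions in the log file configured in the scheduler entry:
tail -f /var/lib/proxysql/cluster_one_proxysql_galera_check.log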
Read/Write benchmark test
Now we can run sysbench to see how PXC behaves behind ProxySQL. To validate the split, I will run sysbench for writes, reads & read/write separately and show which nodes were used for each.
Before running the benchmark, we need to create a schema/database on any one of the cluster nodes:
create database sbtest;
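Since DDL is replicated by Galera, you can confirm the database also exists on the other nodes (run this on another PXC node):
mysql -e "SHOW DATABASES LIKE 'sbtest';"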
Now we need to create a table and load data into it.
sysbench --report-interval=5 --threads=4 --num-requests=0 --time=20 /usr/share/sysbench/tests/include/oltp_legacy/oltp.lua --mysql-user='sbuser' --mysql-password='sbpass' --oltp-table-size=10000 --mysql-host=127.0.0.1 --mysql-port=6033 prepare
[root@standalone]# sysbench --report-interval=5 --threads=4 --num-requests=0 --time=20 /usr/share/sysbench/tests/include/oltp_legacy/oltp.lua --mysql-user='sbuser' --mysql-password='sbpass' --oltp-table-size=10000 --mysql-host=127.0.0.1 --mysql-port=6033 prepare
sysbench 1.0.17 (using bundled LuaJIT 2.1.0-beta2)
Creating table 'sbtest1'...
Inserting 10000 records into 'sbtest1'
Creating secondary indexes on 'sbtest1'...
[root@standalone]#
Write test
sysbench '/usr/share/sysbench/oltp_update_index.lua' --table-size=1000000 --mysql-host=192.168.56.106 --mysql-db='sbtest' --mysql-user='sbuser' --mysql-password='sbpass' --mysql-port=6033 --time=30 --threads=4 --db-driver=mysql run
[root@standalone]# sysbench '/usr/share/sysbench/oltp_update_index.lua' --table-size=1000000 --mysql-host=192.168.56.106 --mysql-db='sbtest' --mysql-user='sbuser' --mysql-password='sbpass' --mysql-port=6033 --time=30 --threads=4 --db-driver=mysql run
sysbench 1.0.17 (using bundled LuaJIT 2.1.0-beta2)
Running the test with following options:
Number of threads: 4
Initializing random number generator from current time
Initializing worker threads...
Threads started!
SQL statistics:
queries performed:
read: 0
write: 14709
other: 11914
total: 26623
transactions: 26623 (887.24 per sec.)
queries: 26623 (887.24 per sec.)
ignored errors: 0 (0.00 per sec.)
reconnects: 0 (0.00 per sec.)
General statistics:
total time: 30.0051s
total number of events: 26623
Latency (ms):
min: 0.19
avg: 4.50
max: 146.37
95th percentile: 10.46
sum: 119913.56
Threads fairness:
events (avg/stddev): 6655.7500/73.86
execution time (avg/stddev): 29.9784/0.00
[root@standalone sysbench]#
Checking the stats table, you can see that the query count increased only on srv_host 192.168.56.102 (the write node):
mysql> select * from stats_mysql_connection_pool where hostgroup in (10,11) order by hostgroup,srv_host ;
+-----------+----------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| hostgroup | srv_host | srv_port | status | ConnUsed | ConnFree | ConnOK | ConnERR | Queries | Bytes_data_sent | Bytes_data_recv | Latency_us |
+-----------+----------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| 10 | 192.168.56.102 | 3306 | ONLINE | 0 | 4 | 4 | 0 | 26627 | 639100 | 0 | 455 |
| 11 | 192.168.56.103 | 3306 | ONLINE | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 452 |
| 11 | 192.168.56.107 | 3306 | ONLINE | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 512 |
+-----------+----------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
3 rows in set (0.00 sec)
Read Test
sysbench '/usr/share/sysbench/oltp_point_select.lua' --table-size=1000000 --mysql-host=192.168.56.106 --mysql-db='sbtest' --mysql-user=sbuser --mysql-password='sbpass' --mysql-port=6033 --time=30 --threads=4 --db-driver=mysql run
[root@standalone]# sysbench '/usr/share/sysbench/oltp_point_select.lua' --table-size=1000000 --mysql-host=192.168.56.106 --mysql-db='sbtest' --mysql-user=sbuser --mysql-password='sbpass' --mysql-port=6033 --time=30 --threads=4 --db-driver=mysql run
sysbench 1.0.17 (using bundled LuaJIT 2.1.0-beta2)
Running the test with following options:
Number of threads: 4
Initializing random number generator from current time
Initializing worker threads...
Threads started!
SQL statistics:
queries performed:
read: 259526
write: 0
other: 0
total: 259526
transactions: 259526 (8650.08 per sec.)
queries: 259526 (8650.08 per sec.)
ignored errors: 0 (0.00 per sec.)
reconnects: 0 (0.00 per sec.)
General statistics:
total time: 30.0015s
total number of events: 259526
Latency (ms):
min: 0.17
avg: 0.46
max: 20.90
95th percentile: 0.69
sum: 119449.74
Threads fairness:
events (avg/stddev): 64881.5000/3101.19
execution time (avg/stddev): 29.8624/0.01
[root@standalone sysbench]#
Checking the stats table, you can see the query counts on srv_host 192.168.56.103 & .107 move from 0 to new values, while node1 (192.168.56.102) remains the same. This shows that only the read nodes are used for reads, not the write node.
mysql> select * from stats_mysql_connection_pool where hostgroup in (10,11) order by hostgroup,srv_host ;
+-----------+----------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| hostgroup | srv_host | srv_port | status | ConnUsed | ConnFree | ConnOK | ConnERR | Queries | Bytes_data_sent | Bytes_data_recv | Latency_us |
+-----------+----------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| 10 | 192.168.56.102 | 3306 | ONLINE | 0 | 4 | 4 | 0 | 26627 | 639100 | 0 | 408 |
| 11 | 192.168.56.103 | 3306 | ONLINE | 0 | 3 | 3 | 0 | 128722 | 3089358 | 8954316 | 425 |
| 11 | 192.168.56.107 | 3306 | ONLINE | 0 | 3 | 3 | 0 | 130810 | 3139466 | 9087246 | 393 |
+-----------+----------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
3 rows in set (0.00 sec)
Read/Write Test
sysbench '/usr/share/sysbench/oltp_read_write.lua' --table-size=1000000 --mysql-host=192.168.56.106 --mysql-db='sbtest' --mysql-user='sbuser' --mysql-password='sbpass' --mysql-port=6033 --time=30 --threads=4 --db-driver=mysql run
[root@standalone]# sysbench '/usr/share/sysbench/oltp_read_write.lua' --table-size=1000000 --mysql-host=192.168.56.106 --mysql-db='sbtest' --mysql-user='sbuser' --mysql-password='sbpass' --mysql-port=6033 --time=30 --threads=4 --db-driver=mysql run
sysbench 1.0.17 (using bundled LuaJIT 2.1.0-beta2)
Running the test with following options:
Number of threads: 4
Initializing random number generator from current time
Initializing worker threads...
Threads started!
SQL statistics:
queries performed:
read: 86408
write: 17417
other: 19607
total: 123432
transactions: 6170 (205.54 per sec.)
queries: 123432 (4111.85 per sec.)
ignored errors: 2 (0.07 per sec.)
reconnects: 0 (0.00 per sec.)
General statistics:
total time: 30.0171s
total number of events: 6170
Latency (ms):
min: 8.76
avg: 19.45
max: 102.21
95th percentile: 27.17
sum: 120008.01
Threads fairness:
events (avg/stddev): 1542.5000/103.42
execution time (avg/stddev): 30.0020/0.00
[root@standalone sysbench]#
Checking the stats table, you can see the difference in the query counts: both the write node and the read nodes have increased, reflecting the mixed read/write workload.
mysql> select * from stats_mysql_connection_pool where hostgroup in (10,11) order by hostgroup,srv_host ;
+-----------+----------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| hostgroup | srv_host | srv_port | status | ConnUsed | ConnFree | ConnOK | ConnERR | Queries | Bytes_data_sent | Bytes_data_recv | Latency_us |
+-----------+----------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| 10 | 192.168.56.102 | 3306 | ONLINE | 0 | 4 | 4 | 0 | 63677 | 3249956 | 0 | 373 |
| 11 | 192.168.56.103 | 3306 | ONLINE | 0 | 3 | 3 | 0 | 172881 | 4250574 | 82798911 | 303 |
| 11 | 192.168.56.107 | 3306 | ONLINE | 0 | 3 | 3 | 0 | 173087 | 4251066 | 79370204 | 328 |
+-----------+----------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
3 rows in set (0.00 sec)
Automatic failover
If a node goes down, ProxySQL will automatically mark it as unavailable for connections and move it to the OFFLINE_SOFT/OFFLINE_HARD state.
OFFLINE_SOFT : Active transactions and connections will still be used, but no new traffic will be sent to the node.
OFFLINE_HARD : All the current traffic will be immediately terminated, and no new traffic will be sent.
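The galera checker bases these decisions on the node's Galera state; you can check it yourself on any PXC node, where a healthy, Synced node reports wsrep_local_state = 4:
SHOW GLOBAL STATUS LIKE 'wsrep_local_state%';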
Scenario 1 : Taking one of the read nodes down.
[root@galera2 ~]# hostname -I
192.168.56.103
[root@galera2 ~]# service mysql status
SUCCESS! MySQL (Percona XtraDB Cluster) running (2820)
[root@galera2 ~]# service mysql stop
Shutting down MySQL (Percona XtraDB Cluster).............. SUCCESS!
[root@galera2 ~]#
Checking the status on the proxy:
mysql> SELECT hostgroup_id,hostname,port,status,comment FROM mysql_servers;
+--------------+----------------+------+--------------+---------+
| hostgroup_id | hostname | port | status | comment |
+--------------+----------------+------+--------------+---------+
| 10 | 192.168.56.102 | 3306 | ONLINE | WRITE |
| 11 | 192.168.56.103 | 3306 | OFFLINE_HARD | READ |
| 11 | 192.168.56.107 | 3306 | ONLINE | READ |
+--------------+----------------+------+--------------+---------+
3 rows in set (0.00 sec)
mysql>
After bringing the node back up:
mysql> SELECT hostgroup_id,hostname,port,status,comment FROM mysql_servers;
+--------------+----------------+------+--------+---------+
| hostgroup_id | hostname | port | status | comment |
+--------------+----------------+------+--------+---------+
| 10 | 192.168.56.102 | 3306 | ONLINE | WRITE |
| 11 | 192.168.56.103 | 3306 | ONLINE | READ |
| 11 | 192.168.56.107 | 3306 | ONLINE | READ |
+--------------+----------------+------+--------+---------+
3 rows in set (0.00 sec)
mysql>
Scenario 2 : Taking the current write node down.
[root@galera1 ~]# hostname -I
192.168.56.102
[root@galera1 ~]# service mysql status
SUCCESS! MySQL (Percona XtraDB Cluster) running (2723)
[root@galera1 ~]# service mysql stop
Shutting down MySQL (Percona XtraDB Cluster)............... SUCCESS!
[root@galera1 ~]#
Checking the status on the proxy, you can see that ProxySQL has automatically promoted one of the read nodes to be the write node:
mysql> SELECT hostgroup_id,hostname,port,status,comment FROM mysql_servers;
+--------------+----------------+------+--------------+---------+
| hostgroup_id | hostname | port | status | comment |
+--------------+----------------+------+--------------+---------+
| 11 | 192.168.56.102 | 3306 | OFFLINE_HARD | READ |
| 11 | 192.168.56.103 | 3306 | ONLINE | READ |
| 10 | 192.168.56.107 | 3306 | ONLINE | WRITE |
+--------------+----------------+------+--------------+---------+
3 rows in set (0.00 sec)
mysql>
After bringing the node back up, it has rejoined as a READ node.
mysql> SELECT hostgroup_id,hostname,port,status,comment FROM mysql_servers;
+--------------+----------------+------+--------+---------+
| hostgroup_id | hostname | port | status | comment |
+--------------+----------------+------+--------+---------+
| 11 | 192.168.56.102 | 3306 | ONLINE | READ |
| 11 | 192.168.56.103 | 3306 | ONLINE | READ |
| 10 | 192.168.56.107 | 3306 | ONLINE | WRITE |
+--------------+----------------+------+--------+---------+
3 rows in set (0.00 sec)
mysql>
So whenever the write node goes down, ProxySQL automatically promotes another node to be the write node and makes sure writes won't fail; the failover is completely automatic.
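From the application's point of view nothing changes during a failover: it simply keeps connecting to the proxy node on port 6033 (exactly as sysbench did above), for example:
mysql -u sbuser -psbpass -h 192.168.56.106 -P6033 sbtest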