Tuesday, 24 September 2019

Percona Audit Plugin

In this blog I will show how to install and use the Percona audit plugin.


Why audit log?

Audit logs are useful for monitoring what happens inside the database server. They are where you keep track of who did what to the DB: which user connected, from which host, at what time, and which queries were executed.

The Percona Audit Log Plugin provides monitoring and logging of connection and query activity performed on a specific server.

All this information can be stored in XML, CSV, or JSON format, where each event has a NAME field, its own unique RECORD_ID field, a TIMESTAMP field, and information about the query.
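For instance, a JSON-format record looks roughly like the following (values are illustrative, and the exact field layout may vary slightly between plugin versions):

{"audit_record":{"name":"Query","record":"4707_2019-09-24T11:35:49","timestamp":"2019-09-24T11:37:47 UTC","command_class":"select","connection_id":"2","status":0,"sqltext":"SELECT 1","user":"root[root] @ localhost []","host":"localhost","os_user":"","ip":""}}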


Installation


The audit log plugin ships along with Percona Server, but it is not installed by default. We need to install the plugin to enable it.

We can check whether the audit log is enabled with the queries below.


mysql> select * from information_schema.PLUGINS where PLUGIN_NAME like '%audit%';
Empty set (0.01 sec)

mysql> show variables like '%audit%';
Empty set (0.00 sec)

mysql> 


To install the plugin, run the statement below on the MySQL command line.

INSTALL PLUGIN audit_log SONAME 'audit_log.so';


Before executing the above command, it is a good approach to add the audit-log-related configuration to the MySQL configuration file (my.cnf) first. Adding these variables beforehand means you will not have to restart the MySQL service; otherwise, a MySQL restart is required for the audit log settings to take effect.

####Percona Audit######
plugin-load = audit_log.so
audit_log_file = /var/log/mysql/audit.log
audit_log_format = CSV
audit_log_policy = QUERIES
audit_log_handler = FILE

plugin-load

Loads the plugin's shared library (audit_log.so) when the server starts, so the plugin remains enabled across restarts

audit_log_file

To specify the filename that’s going to store the audit log

audit_log_format

To specify the audit log format. The audit log plugin supports four log formats: OLD, NEW, JSON, and CSV


audit_log_policy

To specify which events should be logged. Values are: ALL, LOGINS, QUERIES, NONE
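Unlike the other settings above, audit_log_policy is dynamic, so it can also be changed at runtime without a restart, e.g.:

mysql> SET GLOBAL audit_log_policy='LOGINS';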


audit_log_handler

To configure where the audit log will be written, i.e., to syslog or a regular file (SYSLOG or FILE)
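For example, a hypothetical syslog-based variant of the my.cnf block above could use the syslog-related variables that appear in the variable listing further below (values shown are their defaults):

audit_log_handler = SYSLOG
audit_log_syslog_ident = percona-audit
audit_log_syslog_facility = LOG_USER
audit_log_syslog_priority = LOG_INFO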

After adding the above variables, you can run the command to install the audit log plugin.


mysql> INSTALL PLUGIN audit_log SONAME 'audit_log.so';
Query OK, 0 rows affected (0.02 sec)


mysql> select * from information_schema.PLUGINS where PLUGIN_NAME like '%audit%'\G
*************************** 1. row ***************************
           PLUGIN_NAME: audit_log
        PLUGIN_VERSION: 0.2
         PLUGIN_STATUS: ACTIVE
           PLUGIN_TYPE: AUDIT
   PLUGIN_TYPE_VERSION: 4.1
        PLUGIN_LIBRARY: audit_log.so
PLUGIN_LIBRARY_VERSION: 1.7
         PLUGIN_AUTHOR: Percona LLC and/or its affiliates.
    PLUGIN_DESCRIPTION: Audit log
        PLUGIN_LICENSE: GPL
           LOAD_OPTION: ON
1 row in set (0.00 sec)

mysql>



mysql> show variables like '%audit%';
+-----------------------------+--------------------------+
| Variable_name               | Value                    |
+-----------------------------+--------------------------+
| audit_log_buffer_size       | 1048576                  |
| audit_log_exclude_accounts  |                          |
| audit_log_exclude_commands  |                          |
| audit_log_exclude_databases |                          |
| audit_log_file              | /var/log/mysql/audit.log |
| audit_log_flush             | OFF                      |
| audit_log_format            | CSV                      |
| audit_log_handler           | FILE                     |
| audit_log_include_accounts  |                          |
| audit_log_include_commands  |                          |
| audit_log_include_databases |                          |
| audit_log_policy            | QUERIES                  |
| audit_log_rotate_on_size    | 0                        |
| audit_log_rotations         | 0                        |
| audit_log_strategy          | ASYNCHRONOUS             |
| audit_log_syslog_facility   | LOG_USER                 |
| audit_log_syslog_ident      | percona-audit            |
| audit_log_syslog_priority   | LOG_INFO                 |
+-----------------------------+--------------------------+
18 rows in set (0.01 sec)

mysql> 


Below is how the entries in the audit log file look.

[root@localhost ~]# cat /var/log/mysql/audit.log
"Query","39914_2019-09-24T11:35:49","2019-09-24T11:37:47 UTC","show_databases","2",0,"show databases","root[root] @ localhost []","localhost","","",""
"Query","39916_2019-09-24T11:35:49","2019-09-24T11:37:59 UTC","create_db","2",0,"create database test","root[root] @ localhost []","localhost","","",""
"Query","39917_2019-09-24T11:35:49","2019-09-24T11:38:01 UTC","select","2",0,"SELECT DATABASE()","root[root] @ localhost []","localhost","","",""
"Query","39921_2019-09-24T11:35:49","2019-09-24T11:38:03 UTC","show_tables","2",0,"show tables","root[root] @ localhost []","localhost","","",""


Now we have enabled the audit log for our MySQL database.
The Percona audit log plugin also provides variables that control which types of queries are logged and which are not.


audit_log_include_commands

Used to specify the list of commands for which filtering by SQL command type is applied. The value can be NULL or a comma-separated list of commands.
If this variable is set, audit_log_exclude_commands must be unset, and vice versa.

audit_log_exclude_commands

Used to specify the list of commands excluded by SQL command type filtering, i.e., those kinds of SQL statements won't be logged. The value can be NULL or a comma-separated list of commands.

If this variable is set, audit_log_include_commands must be unset, and vice versa.
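For instance, a hypothetical exclusion list that keeps read-only statements out of the log might be:

audit_log_exclude_commands='select,show_databases,show_tables'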

Remember, only one of these two variables can be in use at a time.

For example, if you want to log all the DML queries except SELECT, your audit_log_include_commands variable should look like below.



audit_log_include_commands='insert,insert_select,delete,delete_multi,update,update_multi'


The above will make sure that only inserts, deletes, and updates are logged.

You can also set it globally at runtime:



mysql> show variables like 'audit_log_include_commands';
+----------------------------+-------+
| Variable_name              | Value |
+----------------------------+-------+
| audit_log_include_commands |       |
+----------------------------+-------+
1 row in set (0.01 sec)

mysql> set global audit_log_include_commands='insert,insert_select,delete,delete_multi,update,update_multi';
Query OK, 0 rows affected (0.00 sec)

mysql> show variables like 'audit_log_include_commands';
+----------------------------+--------------------------------------------------------------+
| Variable_name              | Value                                                        |
+----------------------------+--------------------------------------------------------------+
| audit_log_include_commands | insert,insert_select,delete,delete_multi,update,update_multi |
+----------------------------+--------------------------------------------------------------+
1 row in set (0.00 sec)

mysql> 


I will execute some INSERT, DELETE, UPDATE, and SELECT statements to check what is being logged in the log file.
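For illustration (the table and exact statements here are hypothetical), a session like this against a table t1 in the test database:

mysql> insert into test.t1 values (1);
mysql> update test.t1 set id = 2 where id = 1;
mysql> delete from test.t1 where id = 2;
mysql> select * from test.t1;

would produce audit.log entries only for the DML statements, in the same CSV layout shown earlier, with no line for the SELECT:

"Query","39925_2019-09-24T11:35:49","2019-09-24T11:40:12 UTC","insert","2",0,"insert into test.t1 values (1)","root[root] @ localhost []","localhost","","",""
"Query","39926_2019-09-24T11:35:49","2019-09-24T11:40:15 UTC","update","2",0,"update test.t1 set id = 2 where id = 1","root[root] @ localhost []","localhost","","",""
"Query","39927_2019-09-24T11:35:49","2019-09-24T11:40:18 UTC","delete","2",0,"delete from test.t1 where id = 2","root[root] @ localhost []","localhost","","",""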




From the above logs you can see that I executed SELECT queries too, but they are not logged. In this way you can restrict your audit log file to capture only the required events.




Sunday, 25 August 2019

Making ProxySQL Highly Available

In this blog we will see how to set up ProxySQL for HA.

ProxySQL doesn't natively support any high-availability solution, so it is a single point of failure (meaning if the proxy server goes down, your application is down).

There are many ways to fix the SPOF issue.

Here we will be using Keepalived on the proxy servers (we need more than one ProxySQL server).

From the Keepalived doc,

  • Keepalived is a routing software written in C. 
  • The main goal of this project is to provide simple and robust facilities for loadbalancing and high-availability to Linux system and Linux based infrastructures. 
  • Load Balancing framework relies on well-known and widely used Linux Virtual Server (IPVS) kernel module providing Layer4 loadbalancing. 
  • Keepalived implements a set of checkers to dynamically and adaptively maintain and manage load balanced server pool according their health. 
  • On the other hand high-availability is achieved by VRRP protocol. VRRP is a fundamental brick for router failover. 
  • In addition, Keepalived implements a set of hooks to the VRRP finite state machine providing low-level and high-speed protocol interactions. 
  • In order to offer fastest network failure detection, Keepalived implements BFD protocol. 
  • VRRP state transition can take into account BFD hint to drive fast state transition. 
  • Keepalived frameworks can be used independently or all together to provide resilient infrastructures.


Our setup will be as below 



The single-proxy setup on a 3-node Galera cluster can be found in my previous blog here.

Here Keepalived will make sure one ProxySQL node is routing your queries to the PXC cluster; if the active node goes down, the passive node is promoted to active and the application keeps connecting to your database via that proxy. This is achieved by assigning the VIP to that particular proxy node.
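From the application side nothing changes on failover: it always connects to the VIP rather than to an individual proxy. Assuming ProxySQL is listening on its default client port 6033 (appuser here is a hypothetical application account), a connection looks like:

mysql -u appuser -p -h 192.168.1.50 -P 6033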

Below is the process to set up Keepalived.

We will configure the active proxy as MASTER and the passive proxy as BACKUP, i.e., if ProxySQL goes down on the active node, the proxy on the passive (backup) node takes over as master.

Here, rather than the Percona build of ProxySQL, I am using native ProxySQL 2.0.5, which supports native Galera (Link). The setup can be found in the blog.

Install Keepalived on both proxy servers:

yum install keepalived
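It is also worth enabling the service at boot (the status output further below shows it as 'disabled'):

systemctl enable keepalived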


Set up the Keepalived configuration file:

vi /etc/keepalived/keepalived.conf

Config file for master node

global_defs {
  # Run the check script as root
  script_user root
}
# Script used to check if Proxy is running
vrrp_script check_proxy {
 script "/bin/systemctl status proxysql.service"
 interval 2
 fall 2
 rise 2
}
# Virtual interface
# The node with the higher priority takes over the VIP during a failover
vrrp_instance VI_01 {
  state MASTER
  interface eth0
  virtual_router_id 51
  priority 101

  # The virtual ip address shared between the two loadbalancers
  virtual_ipaddress {
  192.168.1.50
  }
  track_script {
    check_proxy
  }
}

Config file for the backup node (identical except for state BACKUP and a lower priority)

global_defs {
  # Run the check script as root
  script_user root
}
# Script used to check if Proxy is running
vrrp_script check_proxy {
 script "/bin/systemctl status proxysql.service"
 interval 2
 fall 2
 rise 2
}
# Virtual interface
# The node with the higher priority takes over the VIP during a failover
vrrp_instance VI_01 {
  state BACKUP
  interface eth0
  virtual_router_id 51
  priority 100

  # The virtual ip address shared between the two loadbalancers
  virtual_ipaddress {
  192.168.1.50
  }
  track_script {
    check_proxy
  }
}


We need to start Keepalived on the active node first; Keepalived will set its state to MASTER and you can see the connections towards the PXC nodes.
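To start it:

systemctl start keepalived

Then check the status: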

systemctl status keepalived -l
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: active (running) since Sun 2019-08-25 06:06:14 UTC; 4s ago
  Process: 32626 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 32627 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─32627 /usr/sbin/keepalived -D
           ├─32628 /usr/sbin/keepalived -D
           └─32629 /usr/sbin/keepalived -D

Aug 25 06:06:14 proxysql01.localhost.com Keepalived_vrrp[32629]: VRRP_Script(check_proxy) succeeded
Aug 25 06:06:15 proxysql01.localhost.com Keepalived_vrrp[32629]: VRRP_Instance(VI_01) Transition to MASTER STATE
Aug 25 06:06:16 proxysql01.localhost.com Keepalived_vrrp[32629]: VRRP_Instance(VI_01) Entering MASTER STATE
Aug 25 06:06:16 proxysql01.localhost.com Keepalived_vrrp[32629]: VRRP_Instance(VI_01) setting protocol VIPs.
Aug 25 06:06:16 proxysql01.localhost.com Keepalived_vrrp[32629]: Sending gratuitous ARP on eth0 for 192.168.1.50
Aug 25 06:06:16 proxysql01.localhost.com Keepalived_vrrp[32629]: VRRP_Instance(VI_01) Sending/queueing gratuitous ARPs on eth0 for 192.168.1.50
Aug 25 06:06:16 proxysql01.localhost.com Keepalived_vrrp[32629]: Sending gratuitous ARP on eth0 for 192.168.1.50
Aug 25 06:06:16 proxysql01.localhost.com Keepalived_vrrp[32629]: Sending gratuitous ARP on eth0 for 192.168.1.50
Aug 25 06:06:16 proxysql01.localhost.com Keepalived_vrrp[32629]: Sending gratuitous ARP on eth0 for 192.168.1.50
Aug 25 06:06:16 proxysql01.localhost.com Keepalived_vrrp[32629]: Sending gratuitous ARP on eth0 for 192.168.1.50



Similarly, start Keepalived on the passive node; it enters the BACKUP state:

systemctl status keepalived -l
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: active (running) since Sun 2019-08-25 06:16:37 UTC; 3s ago
  Process: 23667 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 23668 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─23668 /usr/sbin/keepalived -D
           ├─23669 /usr/sbin/keepalived -D
           └─23670 /usr/sbin/keepalived -D

Aug 25 06:13:17 proxysql02.localhost.com Keepalived_vrrp[23670]: Registering Kernel netlink reflector
Aug 25 06:13:17 proxysql02.localhost.com Keepalived_vrrp[23670]: Registering Kernel netlink command channel
Aug 25 06:13:17 proxysql02.localhost.com Keepalived_vrrp[23670]: Registering gratuitous ARP shared channel
Aug 25 06:13:17 proxysql02.localhost.com Keepalived_vrrp[23670]: Opening file '/etc/keepalived/keepalived.conf'.
Aug 25 06:13:17 proxysql02.localhost.com Keepalived_vrrp[23670]: VRRP_Instance(VI_01) removing protocol VIPs.
Aug 25 06:13:17 proxysql02.localhost.com Keepalived_vrrp[23670]: Using LinkWatch kernel netlink reflector...
Aug 25 06:13:17 proxysql02.localhost.com Keepalived_vrrp[23670]: VRRP_Instance(VI_01) Entering BACKUP STATE
Aug 25 06:13:17 proxysql02.localhost.com Keepalived_vrrp[23670]: VRRP sockpool: [ifindex(2), proto(112), unicast(0), fd(10,11)]
Aug 25 06:13:17 proxysql02.localhost.com Keepalived_vrrp[23670]: VRRP_Script(check_proxy) succeeded

On the active proxy node you can see the VIP assigned:

 ip addr show eth0

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:78:77:b6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.12/24 brd 192.168.1.255 scope global noprefixroute dynamic eth0
       valid_lft 259107sec preferred_lft 259107sec
    inet 192.168.1.50/24 brd 192.168.1.255 scope global secondary eth0:0
       valid_lft forever preferred_lft forever
    inet6 fe80::1476:bc31:ebd5:a581/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever 

Testing the failover

To test this, I will stop the ProxySQL service on the active node:

systemctl stop proxysql.service

systemctl status proxysql.service

● proxysql.service - LSB: High Performance Advanced Proxy for MySQL
   Loaded: loaded (/etc/rc.d/init.d/proxysql; bad; vendor preset: disabled)
   Active: inactive (dead) since Sun 2019-08-25 06:50:26 UTC; 13s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 7925 ExecStop=/etc/rc.d/init.d/proxysql stop (code=exited, status=0/SUCCESS)

Aug 23 07:42:46 proxysql01.localhost.com systemd[1]: Starting LSB: High Performance Advanced Proxy for MySQL...
Aug 23 07:42:46 proxysql01.localhost.com su[11712]: (to proxysql) root on none
Aug 23 07:42:46 proxysql01.localhost.com proxysql[11710]: Starting ProxySQL: 2019-08-23 07:42:46 [INFO] Using config file /etc/proxysql.cnf
Aug 23 07:42:46 proxysql01.localhost.com proxysql[11710]: 2019-08-23 07:42:46 [INFO] SSL keys/certificates found in datadir (/var/lib/proxysql): loading them.
Aug 23 07:42:46 proxysql01.localhost.com proxysql[11710]: DONE!
Aug 23 07:42:46 proxysql01.localhost.com systemd[1]: Started LSB: High Performance Advanced Proxy for MySQL.
Aug 25 06:50:26 proxysql01.localhost.com systemd[1]: Stopping LSB: High Performance Advanced Proxy for MySQL...
Aug 25 06:50:26 proxysql01.localhost.com proxysql[7925]: Shutting down ProxySQL: DONE!
Aug 25 06:50:26 proxysql01.localhost.com systemd[1]: Stopped LSB: High Performance Advanced Proxy for MySQL.

Checking the status of Keepalived on the passive node:

systemctl status keepalived -l

● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: active (running) since Sun 2019-08-25 06:13:17 UTC; 37min ago
  Process: 23667 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 23668 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─23668 /usr/sbin/keepalived -D
           ├─23669 /usr/sbin/keepalived -D
           └─23670 /usr/sbin/keepalived -D

Aug 25 06:13:17 proxysql02.localhost.com Keepalived_vrrp[23670]: Registering Kernel netlink command channel
Aug 25 06:13:17 proxysql02.localhost.com Keepalived_vrrp[23670]: Registering gratuitous ARP shared channel
Aug 25 06:13:17 proxysql02.localhost.com Keepalived_vrrp[23670]: Opening file '/etc/keepalived/keepalived.conf'.
Aug 25 06:13:17 proxysql02.localhost.com Keepalived_vrrp[23670]: SECURITY VIOLATION - scripts are being executed but script_security not enabled.
Aug 25 06:13:17 proxysql02.localhost.com Keepalived_vrrp[23670]: VRRP_Instance(VI_01) removing protocol VIPs.
Aug 25 06:13:17 proxysql02.localhost.com Keepalived_vrrp[23670]: Using LinkWatch kernel netlink reflector...
Aug 25 06:13:17 proxysql02.localhost.com Keepalived_vrrp[23670]: VRRP_Instance(VI_01) Entering BACKUP STATE
Aug 25 06:13:17 proxysql02.localhost.com Keepalived_vrrp[23670]: VRRP sockpool: [ifindex(2), proto(112), unicast(0), fd(10,11)]
Aug 25 06:13:17 proxysql02.localhost.com Keepalived_vrrp[23670]: VRRP_Script(check_proxy) succeeded
Aug 25 06:50:31 proxysql02.localhost.com Keepalived_vrrp[23670]: VRRP_Instance(VI_01) Transition to MASTER STATE
Aug 25 06:50:32 proxysql02.localhost.com Keepalived_vrrp[23670]: VRRP_Instance(VI_01) Entering MASTER STATE
Aug 25 06:50:32 proxysql02.localhost.com Keepalived_vrrp[23670]: VRRP_Instance(VI_01) setting protocol VIPs.
Aug 25 06:50:32 proxysql02.localhost.com Keepalived_vrrp[23670]: Sending gratuitous ARP on eth0 for 192.168.1.50
Aug 25 06:50:32 proxysql02.localhost.com Keepalived_vrrp[23670]: VRRP_Instance(VI_01) Sending/queueing gratuitous ARPs on eth0 for 192.168.1.50
Aug 25 06:50:32 proxysql02.localhost.com Keepalived_vrrp[23670]: Sending gratuitous ARP on eth0 for 192.168.1.50
Aug 25 06:50:32 proxysql02.localhost.com Keepalived_vrrp[23670]: Sending gratuitous ARP on eth0 for 192.168.1.50
Aug 25 06:50:32 proxysql02.localhost.com Keepalived_vrrp[23670]: Sending gratuitous ARP on eth0 for 192.168.1.50
Aug 25 06:50:32 proxysql02.localhost.com Keepalived_vrrp[23670]: Sending gratuitous ARP on eth0 for 192.168.1.50
Aug 25 06:50:37 proxysql02.localhost.com Keepalived_vrrp[23670]: Sending gratuitous ARP on eth0 for 192.168.1.50
Aug 25 06:50:37 proxysql02.localhost.com Keepalived_vrrp[23670]: VRRP_Instance(VI_01) Sending/queueing gratuitous ARPs on eth0 for 192.168.1.50
Aug 25 06:50:37 proxysql02.localhost.com Keepalived_vrrp[23670]: Sending gratuitous ARP on eth0 for 192.168.1.50
Aug 25 06:50:37 proxysql02.localhost.com Keepalived_vrrp[23670]: Sending gratuitous ARP on eth0 for 192.168.1.50
Aug 25 06:50:37 proxysql02.localhost.com Keepalived_vrrp[23670]: Sending gratuitous ARP on eth0 for 192.168.1.50
Aug 25 06:50:37 proxysql02.localhost.com Keepalived_vrrp[23670]: Sending gratuitous ARP on eth0 for 192.168.1.50

You can see the floating IP has been assigned to the passive node as it transitions from BACKUP to MASTER state.
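One last note: the "SECURITY VIOLATION - scripts are being executed but script_security not enabled" line in the Keepalived log above is only a warning. It should go away if you also add enable_script_security next to script_user in global_defs (a minimal sketch, assuming the check script path is root-owned):

global_defs {
  script_user root
  enable_script_security
}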