Author: MyDBOPS

Scale-with-Maxscale-part5 (Multi-Master)

This is the fifth blog in our MaxScale series. Below is the list of our previous blogs, which provide deep insight into MaxScale and its use cases for different architectures.

Scale with Maxscale part -1 (Intro & Galera Cluster)

Scale with Maxscale part – 2 (Administration, Installation & operation)

Scale with Maxscale part – 3 (Replication M-S)

Scale with Maxscale part – 4 (Amazon Aurora)

Here we are going to discuss using MaxScale in a Multi-Master (M-M) environment, in both Active-Passive and Active-Active modes.
Test Environment:
Below are the details of the environment used for testing:
OS               : Debian 8 (Jessie)
MySQL Version    : 5.7.21-20-log Percona Server (GPL)
MaxScale version : maxscale-1.4.5-1.debian.jessie.x86_64 (GPL)
Master 1         :
Master 2         :
Master-Master Replication 
Setting up master-master replication is beyond the scope of this exercise; I will jump directly to the configuration of MaxScale with a multi-master setup.
Monitor Module for M-M
MaxScale comes with a special monitor module named "MMMON". It monitors the health of the servers and sets status flags, based on which the router module (Read/Write Splitter) routes connections.
Below is a basic configuration for Multi-Master using MaxScale.
[Splitter Service]

[Splitter Listener]
service=Splitter Service

[Replication Monitor]


Active-Active Setup:
Active-Active setup is where reads and writes are completely balanced between the servers. With this setup, I strongly recommend setting auto_increment_increment and auto_increment_offset to avoid conflicting writes.
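As a minimal sketch of how this is typically configured for a two-master setup (the values below are illustrative; the step size should equal the number of writers in your topology):

```ini
# my.cnf on Master 1
[mysqld]
auto_increment_increment = 2   # step size = number of masters
auto_increment_offset    = 1   # unique per master

# my.cnf on Master 2
[mysqld]
auto_increment_increment = 2
auto_increment_offset    = 2
```

With this, Master 1 generates auto-increment values 1, 3, 5, ... and Master 2 generates 2, 4, 6, ..., so the two writers never hand out the same ID.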
Active-Active setup with MaxScale. Below is how it looks from MaxScale:
MaxScale> list servers
Server | Address | Port | Connections | Status
master1 | | 3306 | 0 | Master, Running
master2 | | 3306 | 0 | Master, Running
Active Passive Setup:
Active-Passive setup is where writes happen on one node and reads are distributed among the servers. To achieve this, just enable "read_only=1" on one of the nodes; MaxScale identifies this flag and starts routing only read connections to that node.
The next question that arises immediately is: what happens when the writer (Active) node fails?
The answer is pretty simple: just disable read_only on the passive node. You can do it manually by logging in to the node, or automate it with MaxScale by integrating it into the failover script, which will be invoked during unplanned failures or planned maintenance.
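As a sketch, the manual promotion of the passive node boils down to a single statement (run as a user with the SUPER privilege):

```sql
-- On the passive node: allow writes; the MMMON monitor
-- will then flag it as Master and MaxScale routes writes to it.
SET GLOBAL read_only = 0;

-- Once the failed node is recovered, demote it before it rejoins:
-- SET GLOBAL read_only = 1;
```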
Active-Passive setup With Maxscale 
I have enabled read_only on Master 2; you can see that its status changed to 'Slave', as below.
MaxScale> list servers
Server | Address | Port | Connections | Status
master1 | | 3306 | 0 | Master, Running
master2 | | 3306 | 0 | Slave, Running
By setting up the above Multi-Master topology, we have ensured that there is one more DB node available for failover. This leaves MaxScale as a single point of failure for the application. We can build an HA setup for MaxScale using keepalived, with an IP switching between the nodes; or, if you are using AWS, you can go with a Network ELB on TCP ports, balancing connections between the MaxScale nodes.
Image Courtesy : Photo by Vincent van Zalinge on Unsplash

Back to basics: Isolation Levels In MySQL

In this blog, we will look at the very basic "I" of "ACID", an important property of transactions: ISOLATION.
Isolation defines the way the MySQL server (InnoDB) separates each transaction from other concurrently running transactions on the server, and ensures that transactions are processed in a reliable way. If transactions are not isolated, one transaction could modify data that another transaction is reading, creating data inconsistency. Isolation levels determine how isolated transactions are from each other.
MySQL supports all four isolation levels that the SQL standard defines. The four isolation levels are:

READ UNCOMMITTED
READ COMMITTED
REPEATABLE READ
SERIALIZABLE
The isolation level can be set globally or per session, based on our requirements.
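For example (note: from MySQL 5.7.20 the variable is transaction_isolation; older releases use tx_isolation):

```sql
-- Server-wide default for new sessions
SET GLOBAL transaction_isolation = 'REPEATABLE-READ';

-- Only for the current session
SET SESSION transaction_isolation = 'READ-COMMITTED';

-- Verify both scopes
SELECT @@global.transaction_isolation, @@session.transaction_isolation;
```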

Choosing the best isolation level has a great impact on the database; each level comes with a trade-off, so let's discuss each of them.
In the READ-UNCOMMITTED isolation level, there isn't much isolation between the transactions at all, i.e., no locks. A transaction can see changes to data made by other transactions that are not committed yet. This is the lowest isolation level and highly performant, since there is no overhead of maintaining locks; but with this isolation level there is always a chance of a "dirty read".
That means a transaction could be reading data that may never exist, because the other transaction that was updating the data rolled back its changes and never committed. Let's see the image below for a better understanding.

Suppose a transaction T1 modifies a row. If a transaction T2 reads the row and sees the modification even though T1 has not committed it, that is a dirty read. The problem is that if T1 rolls back, T2 doesn't know that and is left in a state of being "totally perplexed".
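The scenario can be reproduced with two sessions; the accounts table here is just a hypothetical example:

```sql
-- Session T1
START TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
-- not committed yet

-- Session T2
SET SESSION transaction_isolation = 'READ-UNCOMMITTED';
SELECT balance FROM accounts WHERE id = 1;
-- sees the reduced balance although T1 never committed: a dirty read

-- Session T1
ROLLBACK;
-- T2 has read data that never officially existed
```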
In the READ-COMMITTED isolation level, the dirty-read phenomenon is avoided, because uncommitted changes are not visible to any other transaction until they are committed. This is the default isolation level in most popular RDBMS software, but not in MySQL.
Within this isolation level, each SELECT uses its own snapshot of the data committed before the SELECT executed. Because each SELECT has its own snapshot, the trade-off is that the same SELECT, when run multiple times during the same transaction, can return different result sets. This phenomenon is called a non-repeatable read.

A non-repeatable read occurs when a transaction performs the same query twice but gets a different result set each time. Suppose T2 reads some of the rows, and T1 then changes a row and commits the change; now T2 reads the same row set and gets a different result, i.e., the initial read is non-repeatable.
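A two-session sketch of the same phenomenon under READ-COMMITTED (accounts is again a hypothetical table):

```sql
-- Session T2
SET SESSION transaction_isolation = 'READ-COMMITTED';
START TRANSACTION;
SELECT balance FROM accounts WHERE id = 1;   -- returns, say, 500

-- Session T1
UPDATE accounts SET balance = 400 WHERE id = 1;
COMMIT;

-- Session T2, still inside the same transaction
SELECT balance FROM accounts WHERE id = 1;   -- now returns 400: non-repeatable read
COMMIT;
```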
Read-committed is the recommended isolation level for Galera ( PXC, MariaDB Cluster ) and InnoDB clusters.
In the REPEATABLE-READ isolation level, the non-repeatable-read phenomenon is avoided; it is the default isolation level in MySQL. This isolation level returns the same result set throughout the transaction execution for the same SELECT, run any number of times during the progression of the transaction.
This is how it works: a snapshot is taken the first time the SELECT is run during the transaction, and the same snapshot is used throughout the transaction when the same SELECT is executed. A transaction running at this isolation level does not take into account any changes to data made by other transactions, regardless of whether the changes have been committed or not. This ensures that reads are always consistent (repeatable). Maintaining the snapshot causes extra overhead and can impact performance.
Although this isolation level solves the problem of non-repeatable reads, another possible problem is phantom reads.
A phantom is a row that appears where it was not visible before. InnoDB and XtraDB solve the phantom-read problem with multi-version concurrency control.
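A sketch of a phantom read with a hypothetical employees table (note that in InnoDB's REPEATABLE READ, plain SELECTs read from a snapshot and do not see the phantom; locking reads such as SELECT ... FOR UPDATE can):

```sql
-- Session T1
START TRANSACTION;
SELECT COUNT(*) FROM employees WHERE salary > 1000;  -- say, 10 rows

-- Session T2
INSERT INTO employees (name, salary) VALUES ('new hire', 2000);
COMMIT;

-- Session T1, same transaction: under isolation levels without
-- phantom protection, the same query can now return 11 rows;
-- the extra row is the "phantom"
SELECT COUNT(*) FROM employees WHERE salary > 1000;
```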

SERIALIZABLE completely isolates the effect of one transaction from others. It is similar to REPEATABLE READ, with the additional restriction that rows selected by one transaction cannot be changed by another until the first transaction finishes. The phantom-read phenomenon is avoided. This is the strongest possible isolation level. AWS Aurora does not support this isolation level.
Photo by Alberto Triano on Unsplash

ProxySQL Series: Handling resource-expensive (bad) queries in MySQL

This is our fourth blog in the ProxySQL Series

MySQL Replication Read-write Split up
Seamless Replication Switchover Using MHA
Mirroring MySQL Queries

This blog focuses on how to quickly find and address badly written queries using ProxySQL, without any downtime or change in application code.
When we get an incident about high usage on a production master, it is mostly because of an unexpected spike in traffic (QPS) or slow queries.
Below was the status during a regular weekly health check: in the PMM graphs we saw the master serving more than 15K queries while both slaves were serving ~6K.
We can also see similar issues when a new deployment goes live with an application bug.

The first thing that came to my mind was: why overload the master DB when we have slaves with enough resources?
There is a way to find offending queries without enabling the slow query log on the master.
In ProxySQL, the table [ stats_mysql_query_digest ] contains every executed query with extra details.
I have made a test case with sysbench.
Three methods are described here.
1. Route slow queries on Slave servers :
Below is the stats table output, with the top 5 slow queries taking the maximum execution time on the master.
select * from stats_mysql_query_digest where schemaname='sbtest_rw' order by sum_time desc limit 5;
| hostgroup | schemaname | username | digest | digest_text | count_star | sum_time |
| 0 | sbtest_rw | sysbench | 0x04E3DD532AB9CA82 | SELECT c FROM sbtest20 WHERE id=? | 4356291 | 96502488100 |
| 0 | sbtest_rw | sysbench | 0x573033E0E6EB75F4 | SELECT DISTINCT c FROM sbtest16 WHERE id BETWEEN ? AND ? ORDER BY c | 138156 | 39145641433 |
| 0 | sbtest_rw | sysbench | 0x135C4A683DA43FBB | SELECT c FROM sbtest7 WHERE id BETWEEN ? AND ? ORDER BY c | 138955 | 34528708401 |
| 0 | sbtest_rw | sysbench | 0xBAB6CF24BA4299CF | SELECT SUM(k) FROM sbtest16 WHERE id BETWEEN ? AND ? | 138788 | 20791121721 |
4 rows in set (0.01 sec)

For visibility, the output above was copied from a sysbench test.
Some important fields from the stats table:
– hostgroup – the hostgroup where the query was sent.
– digest – a hexadecimal hash that uniquely represents a query with its parameters stripped
– digest_text – the actual text with its parameters stripped
– count_star – the total number of times the query has been executed.
– last_seen – unix timestamp, the last moment (so far) when the query was routed through the proxy
– sum_time – the total time in microseconds spent executing queries of this type.

More on stats table :
From the above result, we can see that all these SELECT queries are going to the master, which is configured as hostgroup 0. So the solution is simple: determine which expensive statements should be moved to the reader nodes, and route them to the reader hostgroup.
Note: if you have multiple ProxySQL nodes, before creating the query rule on all of them, create the rule on only one ProxySQL server first, to make sure the rule is created successfully and is getting used correctly.
Once you are confident, create the rule on all ProxySQL nodes.
Current configuration :
Hostgroup 0 : writer host
Hostgroup 1 : reader host

INSERT INTO mysql_query_rules(rule_id,active,username,schemaname,digest,destination_hostgroup,apply)
VALUES (11,1,'sysbench','sbtest_rw','0x04E3DD532AB9CA82',1,1);
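Note that rules inserted on the admin interface take effect only after being loaded to runtime, and survive a restart only if saved to disk:

```sql
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
```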


MySQL [(none)]> select rule_id,active,username,schemaname,digest,match_digest,destination_hostgroup,apply from mysql_query_rules where rule_id=11;
| rule_id | active | username | schemaname | digest | match_digest | destination_hostgroup | apply |
| 11 | 1 | sysbench | sbtest_rw | 0x04E3DD532AB9CA82 | NULL | 1 | 1 |
1 row in set (0.00 sec)

Now verify the query rule and check for hits:
MySQL [(none)]> SELECT rule_id, hits, destination_hostgroup hg, digest, match_digest pattern FROM mysql_query_rules NATURAL JOIN stats_mysql_query_rules WHERE rule_id=11;
| rule_id | hits | hg | digest | pattern |
| 11 | 17389 | 1 | 0x04E3DD532AB9CA82 | NULL |
1 row in set (0.00 sec)

Once we get confirmation that the query rule is getting hits, we can go ahead and create the rule on all ProxySQL servers. We will write about ProxySQL clustering later.
for i in `seq 11 20` ; do
ssh 172.16.0.$i -- 'mysql -u admin -padmin -h -P6032 -e "INSERT INTO mysql_query_rules(rule_id,active,username,schemaname,digest,destination_hostgroup,apply) VALUES (11,1,\"sysbench\",\"sbtest_rw\",\"0x04E3DD532AB9CA82\",1,1); LOAD MYSQL QUERY RULES TO RUNTIME;"'
done

for i in `seq 11 20` ; do
ssh 172.16.0.$i -- 'mysql -u admin -padmin -h -P6032 -e "SAVE MYSQL QUERY RULES TO DISK"'
done

After forwarding a few SELECT queries from the master to the slaves, we can see the load gradually getting divided between master and slaves.
MySQL Questions from PMM:

Cpu Load Status from graphite :

There is another way if we have queries of a similar pattern: make use of the column match_digest.

match_digest : matches the regular expression against the digest of the query.

match_pattern : matches the regular expression against the unmodified text of the query.

The digest of a query [match_digest] can be smaller than the query itself (for example, a SELECT statement with several MB of data in an IN clause), and running a regex against a smaller string is surely faster.
INSERT INTO mysql_query_rules(rule_id,active,username,schemaname,match_digest,destination_hostgroup,apply) VALUES (21,1,"sysbench","sbtest_rw","^SELECT a.ip_id as ip_id, f.accountid as accountid,(| ) as fileid, f.size as size, a.service as service, a.login as login, s.ip as ip, s.type as servertype",1,1);

More on Query_rules:
2. Rewrite slow queries on-the-fly :
The query rewrite feature quickly allows us to isolate and correct problematic queries and improve performance.
In the below example, we found that an index was not getting used by a query (the optimiser stats might have been calculated wrongly), and the query ran smoothly after trying FORCE INDEX.
Normally this type of scenario needs a code change.
But ProxySQL gives us a way. How? See the below example.
insert into mysql_query_rules (rule_id,active,username,schemaname,match_pattern,replace_pattern,apply)
values (101,1,'sysbench','sbtest_rw',' as folderid FROM remotedl Join folder On Where folder.userid =',' as folderid FROM remotedl FORCE INDEX (folderid) Join folder On Where folder.userid =',0);

MySQL [(none)]> select rule_id,active,username,schemaname,match_pattern,replace_pattern,apply from mysql_query_rules where rule_id=101\G
*************************** 1. row ***************************
rule_id: 101
active: 1
username: sysbench
schemaname: sbtest_rw
match_pattern: as folderid FROM remotedl Join folder On Where folder.userid =
replace_pattern: as folderid FROM remotedl FORCE INDEX (folderid) Join folder On Where folder.userid =
apply: 0
1 row in set (0.00 sec)

We have rewritten the query on the fly to use the FORCE INDEX hint during execution.
More on Query Rewrite:
3. Block the queries :
Consider a situation where we had a spammer: parallel updates on a single table for a single account, and we decided to block all UPDATE queries coming for that specific account.

| LEFT(INFO,15) | COUNT(*) |
| UPDATE abc | 2262 |
| UPDATE dept | 6 |
| UPDATE card | 2 |

INSERT INTO mysql_query_rules (rule_id, active, match_pattern, OK_msg, apply) VALUES (1,1,"UPDATE abc SET updated_at=.*_status=.*45335","",1);

Done ! Now database stops receiving that query.

Important to note!
There are some exceptions where query routing gets disabled: when transaction_persistent is enabled for a user, the application executes all queries of a transaction on the same host, to get more accurate results.
Admin> SELECT username, transaction_persistent FROM mysql_users;
| username | transaction_persistent |
| user1 | 1 |
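transaction_persistent is set per user in the mysql_users table; as a sketch (using user1 from the output above):

```sql
UPDATE mysql_users SET transaction_persistent = 1 WHERE username = 'user1';
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;
```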

Tip: If you want fresh statistics to analyse, execute the below command to empty stats_mysql_query_digest.
select * from stats_mysql_query_digest_reset;
This blog post contains a few more examples of how you can identify potential queries from the digest table by creating complex queries to gather the required information.
Image Courtesy : Photo by Chris Liverani on Unsplash

Fulfilled Tablespace Encryption (TDE) in Percona Cluster

Encryption is a very important form of security, and it's becoming a standard from a compliance perspective to ensure PCI, PII and HIPAA compliance. Encryption needs to be performed for data at rest and data over the wire.
Data At Rest:

Until 5.6, to encrypt MySQL data files, only disk-level encryption was possible (Gazzang, LUKS).
From 5.7.11, MySQL added a feature for tablespace encryption, which encrypts only the InnoDB tablespaces, not the log files.

Percona Server 5.7.20-19 added a feature to encrypt binary logs and relay logs.
DBaaS offerings like RDS support TDE (Transparent Data Encryption); it's disk-level encryption through Amazon KMS.
On EC2 and Google Cloud servers, it's possible to create the disk as encrypted at creation time and store the MySQL data on that disk.

Data Over Wire:

Encryption over the wire (network) can be set up by configuring MySQL with SSL.

Now coming to the topic: we are about to explain how to make your PXC installation properly encrypted to ensure compliance is met.

Encrypting Client Traffic / Cluster Communication Using SSL
Encrypting Table space With TableSpace Encryption
Encrypting SST
Encrypting Binary/Relay logs With Percona Server Binlog Encryption

Below is the test environment used for this test case,
Operating System: Centos7.4
PXC Version : 5.7.21-20-57-log Percona XtraDB Cluster (GPL),
In this blog, we will not explain the basic installation and configuration of PXC.
Step 1: Generating SSL certificates
By default, during MySQL installation, self-signed certificates are created under "/var/lib/mysql"; it's always recommended to keep the below certificates in a separate directory, common across all cluster nodes.

ca-key.pem (CA key file)
ca.pem (CA certificate file)
server-key.pem (server key file)
server-cert.pem (server certificate file)

You can copy these files generated during installation to a separate location, or use "mysql_ssl_rsa_setup" to generate them into a specified directory as below; in this demo I have used "/etc/certs".
# mysql_ssl_rsa_setup -d /etc/certs

Generating a 2048 bit RSA private key
writing new private key to ‘ca-key.pem’
Generating a 2048 bit RSA private key……………………………………………………………………………………………………………………+++
writing new private key to ‘server-key.pem’
Generating a 2048 bit RSA private key
writing new private key to ‘client-key.pem’
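To have mysqld pick up these certificates for client connections, they can be referenced from the config; a minimal sketch assuming the /etc/certs directory used above:

```ini
[mysqld]
ssl-ca   = /etc/certs/ca.pem
ssl-cert = /etc/certs/server-cert.pem
ssl-key  = /etc/certs/server-key.pem
```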
Step 2: Enabling variables for encryption,
Encrypting Tablespace:
Tablespace encryption is done using the keyring plugin; make sure to have an individual tablespace for each table ("innodb_file_per_table=1"). You can refer here for TDE with a detailed explanation.
early-plugin-load =
keyring-file-data = /var/lib/mysql-keyring/keyring
Encrypting Replication Traffic :
Replication traffic in PXC majorly involves the following

Write-set replication, which is the major one
IST (Incremental State Transfer), which copies only the missing transactions from the DONOR to the JOINER
Service messages, which ensure all the nodes are synchronised

We can set the variable "pxc-encrypt-cluster-traffic = ON" under the mysqld section of the cnf to have the cluster traffic encrypted.
Encrypting SST traffic:
As you are aware, SST refers to State Snapshot Transfer. It's always recommended to use xtrabackup-v2 as the SST method, as it is a safer and faster way to transfer. While doing the SST, the keyfile has to be sent along for decryption, hence provide the keyfile location under [xtrabackup] and also provide the same SSL files under [sst].

[sst]
streamfmt = xbstream
encrypt = 4
tmpdir = /tmp
Encrypting Binlog/Relay log:
As you are aware, the binlog contains the change data (writes), and replication uses binlogs to copy data between master and slave. Anyone having access to the binlog basically has access to your DATA; this is a potential risk, which makes it essential to enable binlog encryption. Binlog encryption is supported from Percona Server for MySQL version 5.7.20-19. Below are the variables to enable binlog/relay-log encryption, along with the basic binlog variables; binlog encryption also needs the keyring plugin to be enabled.

It is mandatory to have the same setting on the slave server, which makes the relay log encrypted as well.
Consolidating all the configs, a working config for PXC will look as below:
port = 3306
socket = /var/lib/mysql/mysql.sock
datadir = /var/lib/mysql
user = mysql
log-error = /var/log/mysqld.err
wsrep_provider = /usr/lib64/galera3/
wsrep_cluster_address = gcomm://,,
wsrep_node_address =
wsrep_node_name = v11
wsrep_sst_method = xtrabackup-v2
wsrep_cluster_name = xtradbcluster
wsrep_auto_increment_control = ON
wsrep_sst_auth = sstuser:s3cretPass
innodb_autoinc_lock_mode = 2
server_id = 311
# binlog encryption
log_bin = mysql-bin

#Cluster encryption
pxc-encrypt-cluster-traffic = ON
early-plugin-load =
keyring-file-data = /var/lib/mysql-keyring/keyring

[sst]
streamfmt = xbstream
encrypt = 4
tmpdir = /tmp

[xtrabackup]
keyring-file-data = /var/lib/mysql-keyring/keyring
Now let's proceed to start the nodes. The first node will be bootstrapped; make sure to have the cluster address as "wsrep_cluster_address=gcomm://"
Below is the log from first node, which is limited to the SSL setting
2018-04-24T17:59:48.485704Z 0 [Note] WSREP: Passing config to GCS: base_dir = /var/lib/mysql/; base_host =; base_port = 4567; cert.log_conflicts = no; debug = no; evs.auto_evict = 0; evs.delay_margin = PT1S; evs.delayed_keep_period = PT30S; evs.inactive_check_period = PT0.5S; evs.inactive_timeout = PT15S; evs.join_retrans_period = PT1S; evs.max_install_timeouts = 3; evs.send_window = 10; evs.stats_report_period = PT1M; evs.suspect_timeout = PT5S; evs.user_send_window = 4; evs.view_forget_timeout = PT24H; gcache.dir = /var/lib/mysql/; gcache.freeze_purge_at_seqno = -1; gcache.keep_pages_count = 0; gcache.keep_pages_size = 0; gcache.mem_size = 0; = /var/lib/mysql//galera.cache; gcache.page_size = 128M; gcache.recover = yes; gcache.size = 8G; gcomm.thread_prio = ; gcs.fc_debug = 0; gcs.fc_factor = 1; gcs.fc_limit = 800; gcs.fc_master_slave= YES; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = no; gmcast.segment = 0; gmcast.version = 0; pc.announce_timeout = PT3S; pc.checksum = false; pc.ignore_quorum = false; pc.ignore_sb = false; pc.npvo = false; pc.recovery = 1; pc.version = 0; pc.wait_prim = true; pc.wait_prim_timeout = PT30S; pc.weight = 1; protonet.backend = asio; protonet.version = 0; repl.causal_read_timeout = PT30S; repl.commit_order = 3; repl.key_format = FLAT8; repl.max_ws_size = 2147483647; repl.proto_max = 8; socket.checksum = 2; socket.recv_buf_size = 212992; socket.ssl_ca = /etc/certs/ca.pem; socket.ssl_cert = /etc/certs/server-cert.pem; socket.ssl_cipher = AES128-SHA; socket.ssl_compression = YES; socket.ssl_key = /etc/certs/server-key.pem;

2018-04-24T17:59:48.619657Z 0 [Note] WSREP: Assign initial position for certification: 3, protocol version: -1
2018-04-24T17:59:48.619875Z 0 [Note] WSREP: Preparing to initiate SST/IST
2018-04-24T17:59:48.619910Z 0 [Note] WSREP: Starting replication
2018-04-24T17:59:48.619946Z 0 [Note] WSREP: Setting initial position to 300efff9-431e-11e8-98e0-e27f04800ef7:3
2018-04-24T17:59:48.620379Z 0 [Note] WSREP: Using CRC-32C for message checksums.
2018-04-24T17:59:48.620489Z 0 [Note] WSREP: initializing ssl context
2018-04-24T17:59:48.621385Z 0 [Note] WSREP: gcomm thread scheduling priority set to other:0
2018-04-24T17:59:48.621800Z 0 [Warning] WSREP: Fail to access the file (/var/lib/mysql//gvwstate.dat) error (No such file or directory). It is possible if node is booting forfirst time or re-booting after a graceful shutdown
2018-04-24T17:59:48.621810Z 0 [Note] WSREP: Restoring primary-component from disk failed. Either node is booting for first time or re-booting after a graceful shutdown
2018-04-24T17:59:48.622330Z 0 [Note] WSREP: GMCast version 0

2018-04-24T17:59:48.622647Z 0 [Note] WSREP: (476f28e5, ‘ssl://’) listening at ssl://
2018-04-24T17:59:48.622657Z 0 [Note] WSREP: (476f28e5, ‘ssl://’) multicast: , ttl: 1

2018-04-24T17:59:48.623454Z 0 [Note] WSREP: EVS version 0
2018-04-24T17:59:48.623800Z 0 [Note] WSREP: gcomm: connecting to group ‘xtradbcluster’, peer ”
2018-04-24T17:59:48.623858Z 0 [Note] WSREP: start_prim is enabled, turn off pc_recovery
2018-04-24T17:59:48.626063Z 0 [Note] WSREP: Node 476f28e5 state primary
2018-04-24T17:59:48.626176Z 0 [Note] WSREP: Current view of cluster as seen by this node
view (view_id(PRIM,476f28e5,1)
memb {
Now let's proceed to join the next node. Please find the logs below (DONOR → JOINER):
2018-04-24T18:09:50.293384Z 0 [Note] WSREP: (ae0ecc92, ‘ssl://’) listening at ssl://
2018-04-24T18:09:50.293395Z 0 [Note] WSREP: (ae0ecc92, ‘ssl://’) multicast: , ttl: 1
2018-04-24T18:09:50.294414Z 0 [Note] WSREP: EVS version 0
2018-04-24T18:09:50.294860Z 0 [Note] WSREP: gcomm: connecting to group ‘xtradbcluster’, peer ‘,’
2018-04-24T18:09:50.312625Z 0 [Note] WSREP: SSL handshake successful, remote endpoint ssl:// local endpoint ssl:// cipher: AES128-SHA compression: none
2018-04-24T18:09:50.313114Z 0 [Note] WSREP: SSL handshake successful,remote endpoint ssl:// local endpoint ssl:// cipher: AES128-SHA compression: none
2018-04-24T18:09:50.313615Z 0 [Note] WSREP: (ae0ecc92, ‘ssl://’) connection established to ae0ecc92 ssl://
2018-04-24T18:09:50.313636Z 0 [Warning] WSREP: (ae0ecc92, ‘ssl://’) address ‘ssl://’ points to own listenin address, blacklisting
2018-04-24T18:09:50.314606Z 0 [Note] WSREP: SSL handshake successful, remote endpoint ssl:// local endpoint ssl:// cipher: AES128-SHA compression: none
2018-04-24T18:09:51.305207Z 0 [Note] WSREP: Shifting OPEN -> PRIMARY (TO: 3)
2018-04-24T18:09:51.306246Z 2 [Note] WSREP: State transfer required:
Group state: 300efff9-431e-11e8-98e0-e27f04800ef7:3
Local state: 00000000-0000-0000-0000-000000000000:-1
2018-04-24T18:09:51.306312Z 2 [Note] WSREP: New cluster view: global state: 300efff9-431e-11e8-98e0-e27f04800ef7:3, view# 2: Primary, number of nodes: 2, my index: 1, protocol version 3
2018-04-24T18:09:51.306344Z 2 [Note] WSREP: Setting wsrep_ready to true
2018-04-24T18:09:51.306429Z 2 [Warning] WSREP: Gap in state sequence.Need state transfer.
2018-04-24T18:09:51.306433Z 2 [Note] WSREP: Setting wsrep_ready to false
2018-04-24T18:09:51.307298Z 0 [Note] WSREP: Initiating SST/IST transfer on JOINER side (wsrep_sst_xtrabackup-v2 –role ‘joiner’ –address ‘’ –datadir ‘/var/lib/mysql/’ –defaults-file ‘/etc/my.cnf’ –defaults-group-suffix ” –parent ‘5788’ ” )
2018-04-24T18:09:53.810961Z 0 [Note] WSREP: (ae0ecc92, ‘ssl://’) turning message relay requesting off
2018-04-24T18:09:57.217241Z 2 [Note] WSREP: Prepared SST/IST request: xtrabackup-v2|
2018-04-24T18:09:57.217414Z 2 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2018-04-24T18:09:57.217611Z 2 [Note] WSREP: REPL Protocols: 8 (3, 2)
2018-04-24T18:09:57.217739Z 2 [Note] WSREP: Assign initial position for certification: 3, protocol version: 3
2018-04-24T18:09:57.217901Z 0 [Note] WSREP: Service thread queue flushed.
2018-04-24T18:09:57.217974Z 2 [Note] WSREP: Check if state gap can be serviced using IST
2018-04-24T18:09:57.218020Z 2 [Note] WSREP: Local UUID: 00000000-0000-0000-0000-000000000000 != Group UUID: 300efff9-431e-11e8-98e0-e27f04800ef7
2018-04-24T18:09:57.218477Z 2 [Note] WSREP: State gap can’t be serviced using IST. Switching to SST
2018-04-24T18:09:57.218549Z 2 [Note] WSREP: Failed to prepare for incremental state transfer: Local state UUID (00000000-0000-0000-0000-000000000000) does not match group state UUID (300efff9-431e-11e8-98e0-e27f04800ef7): 1 (Operation not permitted) at galera/src/replicator_str.cpp:prepare_for_IST():538. IST will be unavailable.
2018-04-24T18:09:57.220866Z 0 [Note] WSREP: Member 1.0 (v12) requested state transfer from ‘*any*’. Selected 0.0 (v11)(SYNCED) as donor.
2018-04-24T18:09:57.221004Z 0 [Note] WSREP: Shifting PRIMARY -> JOINER (TO: 3)
2018-04-24T18:09:57.221495Z 2 [Note] WSREP: Requesting state transfer: success, donor: 0
2018-04-24T18:09:57.221534Z 2 [Note] WSREP: GCache history reset: 00000000-0000-0000-0000-000000000000:0 -> 300efff9-431e-11e8-98e0-e27f04800ef7:3
2018-04-24T18:10:06.909709Z WSREP_SST: [INFO] donor keyring received at: ‘/var/lib/mysql-keyring/donor-keyring’
2018-04-24T18:10:07.061101Z WSREP_SST: [INFO] Proceeding with SST………
2018-04-24T18:10:07.350117Z WSREP_SST: [INFO] …………Waiting for SST streaming to complete!
2018-04-24T18:10:24.368719Z 0 [Note] WSREP: 0.0 (v11): State transfer to 1.0 (v12) complete.
2018-04-24T18:10:24.370118Z 0 [Note] WSREP: Member 0.0 (v11) synced with group.
2018-04-24T18:10:24.474538Z WSREP_SST: [INFO] Preparing the backup at /tmp/pxc_sst_VRBd/sst_Teu6
2018-04-24T18:10:33.692709Z WSREP_SST: [INFO] Moving the backup to /var/lib/mysql/
2018-04-24T18:10:33.891894Z WSREP_SST: [INFO] Moving sst keyring into place: moving /var/lib/mysql-keyring/donor-keyring to /var/lib/mysql-keyring/keyring
2018-04-24T18:10:33.939633Z WSREP_SST: [INFO] Galera co-ords from recovery: 300efff9-431e-11e8-98e0-e27f04800ef7:3
2018-04-24T18:10:34.021154Z 0 [Note] WSREP: SST complete, seqno: 3
From the above logs, the node has undergone SST successfully with encryption and synced with the cluster.
In a future version of PXC (based on MySQL 8.0) we may get undo log and redo log encryption, which will make (most of) the data files at rest encrypted.

Partial (Optimised) JSON updates and Replication in MySQL 8.0

MySQL is the most widely used relational database management system in the open-source world. MySQL stepped into the NoSQL world by introducing the JSON data type in the MySQL 5.7 release. In this blog post I am going to explain one of the major optimisations made in JSON replication.
This was introduced in the MySQL 8.0.3 release.
What happened before 8.0.3 ?
Before MySQL 8.0.3, whatever changes (updates) were made to a JSON document, it was logged as the full document in the binary log and replicated the same way to the slave. The JSON data is stored internally as a blob. Below is an example of how it is logged as the full document in the binary log.
Example –
Server version – 5.7.22-log MySQL Community Server (GPL)

My Binlog settings,

Binlog_format = ROW
Binlog_row_image = minimal

Creating table with JSON data type,

mysql> create table json_repl (id int primary key, emp_de json);
Query OK, 0 rows affected (0.01 sec)   

Inserting the values,

mysql> insert into json_repl values(1,'{"tag": "desk1 addr1 history1 grade1", "book": "table1 and emp_historicaldata", "emp_id": "aaaa1", "designation": "MySQL DBA", "position_grade": "A1 grade1 and MASTER in MySQL"}');
Query OK, 1 row affected (0.01 sec)

Updating the record with JSON function JSON_REPLACE.

mysql> update json_repl set emp_de = json_replace(emp_de,'$.emp_id','a1');
Query OK, 1 row affected (0.01 sec)
Rows matched: 1  Changed: 1 Warnings: 0

Binlog view ,

mysqlbinlog -vvv --base64-output=DECODE-ROWS mydbopslabs13-bin.000011 > log57

cat log57

### UPDATE `json_store`.`json_repl`
###   @1=1 /* INT meta=0 nullable=0 is_null=0 */
### SET
###   @2=’\x00\x05\x00<B2>\x00\x27\x00\x03\x00*\x00\x04\x00.\x00\x06\x004\x00\x0b\x00?\x00\x0e\x00\x0cM\x00\x0ci\x00\x0c<87>\x00\x0c<8A>\x00\x0c<94>\x00tagbookemp_iddesignationposition_grade\x1bdesk1 addr1 history1 grade1\x1dtable1 and emp_historicaldata\x02a1\x09MySQL DBA\x1dA1 grade1 and MASTER in MySQL’ /* JSON meta=4 nullable=1 is_null=0 */
# at 666596
#180421 23:00:58 server id 1  end_log_pos 666627 CRC32 0xbf0a14ea       Xid = 138659
Here, the JSON_REPLACE function is used to update the value of the emp_id field, but the full document is logged in the binary log file.
What happened after 8.0.3 ?
From MySQL 8.0.3, the MySQL community introduced a new variable to avoid logging the full document into the binary log: "binlog_row_value_options". The valid values are PARTIAL_JSON or empty (''). More information is in the worklog (WL#2955).
Below, I have explained how the replication is differed from older versions .
Example –
MySQL Version – MySQL 8.0.3

My binlog settings (my.cnf),

binlog_format = ROW
binlog_row_image = minimal
binlog_row_value_options = PARTIAL_JSON
Creating table with JSON data type,

mysql> create table json_repl (id int primary key, emp_de json);
Query OK, 0 rows affected (0.01 sec)   

Inserting the values,

mysql> insert into json_repl values(1,'{"tag": "desk1 addr1 history1 grade1", "book": "table1 and emp_historicaldata", "emp_id": "aaaa1", "designation": "MySQL DBA", "position_grade": "A1 grade1 and MASTER in MySQL"}');
Query OK, 1 row affected (0.01 sec)

Updating the record with JSON function JSON_REPLACE.

mysql> update json_repl set emp_de = json_replace(emp_de,'$.emp_id','a1');
Query OK, 1 row affected (0.01 sec)
Rows matched: 1  Changed: 1 Warnings: 0

Binlog view,

mysqlbinlog -vvv --base64-output=DECODE-ROWS mydbopslabs15-bin.000008 > log803

cat log803

### UPDATE `json_store`.`json_repl`
###   @1=1 /* INT meta=0 nullable=0 is_null=0 */
### SET
###   @2=JSON_REPLACE(@2, '$.emp_id', 'a1') /* JSON meta=4 nullable=1 is_null=0 */
# at 516165
#180421 23:02:27 server id 3  end_log_pos 516196 CRC32 0x43c90650       Xid = 127163
Now, we can observe that only the updated field is logged in the binlog. I hope this clearly explains the optimisation for JSON.
How does it benefit a production environment?
Of course, there are many benefits. One of the major advantages is lower disk usage.
Lower disk usage (IO) –
By avoiding logging the full document, we save disk usage (IO). I was curious how much disk this approach can save, so I ran a test with 1 crore (10 million) records in both MySQL 5.7 and MySQL 8.0.3. Below I am sharing my findings and test results.
I initiated the below script to insert the records in both MySQL 5.7 & MySQL 8.0.3.

[sakthi@mydbopslabs13 mysql]# cat

#!/bin/bash
for i in `seq 2 10000000`; do
  mysql -e "insert into json_store.json_repl values($i,'{\"emp_id\": \"aaaa$i\", \"tag\": \"desk$i addr$i history$i grade$i\", \"book\": \"table$i and emp_historicaldata\", \"position_grade\": \"A$i grade$i and MASTER in MySQL\", \"designation\": \"MySQL DBA\"}');"
done
After inserting the records, I updated every row based on the primary key with the below script in both MySQL 5.7 and MySQL 8.0.3, so that we can finally analyse the binlog growth.
[sakthi@mydbopslabs13 mysql]# cat

#!/bin/bash
mysql -e "flush logs;"
for i in `seq 1 10000000`; do
  mysql -e "update json_store.json_repl set emp_de = json_replace(emp_de,'$.emp_id','a$i') where id=$i;"
done
I analysed the binlog growth in both MySQL 5.7 & MySQL 8.0.3 after the JSON updates. The result is really good.
Binlog Growth for update in MySQL 5.7,

-rw-r-----. 1 mysql mysql 1.1G  Apr 21 20:05 mydbopslabs13-bin.0000017
-rw-r-----. 1 mysql mysql 1.1G  Apr 21 20:34 mydbopslabs13-bin.0000018
-rw-r-----. 1 mysql mysql 1.1G  Apr 21 21:02 mydbopslabs13-bin.0000019
-rw-r-----. 1 mysql mysql 1.1G  Apr 21 21:29 mydbopslabs13-bin.0000020
-rw-r-----. 1 mysql mysql 1.1G  Apr 21 21:57 mydbopslabs13-bin.0000021
-rw-r-----. 1 mysql mysql 1.1G  Apr 21 22:31 mydbopslabs13-bin.0000022
-rw-r-----. 1 mysql mysql 652M Apr 21 23:00 mydbopslabs13-bin.0000023
Total – 6 GB 652 MB
Binlog Growth for update in MySQL 8.0,

-rw-r-----. 1 mysql mysql 1.1G  Apr 21 20:07 mydbopslabs15-bin.000012
-rw-r-----. 1 mysql mysql 1.1G  Apr 21 20:38 mydbopslabs15-bin.000013
-rw-r-----. 1 mysql mysql 1.1G  Apr 21 21:09 mydbopslabs15-bin.000014
-rw-r-----. 1 mysql mysql 1.1G  Apr 21 21:35 mydbopslabs15-bin.000015
-rw-r-----. 1 mysql mysql 1.1G  Apr 21 22:11 mydbopslabs15-bin.000016
-rw-r-----. 1 mysql mysql 74M   Apr 21 22:17 mydbopslabs15-bin.000017

Total – 5 GB 74 MB
Binlog Growth Graph –
From the analysis, we saved approximately 1.5 GB of disk space. The exact saving will vary based on your JSON data.
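The totals above were added up by hand from the listings; `du -ch <files> | tail -n1` produces the same roll-up in one line. A self-contained sketch on dummy files (the names and sizes are illustrative, not the binlogs above):

```shell
# Create two dummy "binlog" files and total their sizes the same way
mkdir -p /tmp/binlog_demo
head -c 1048576 /dev/zero > /tmp/binlog_demo/demo-bin.000001   # 1 MiB
head -c 2097152 /dev/zero > /tmp/binlog_demo/demo-bin.000002   # 2 MiB
du -ch --apparent-size /tmp/binlog_demo/demo-bin.* | tail -n1  # prints "3.0M  total"
```

On a real server, point du at the binlog directory instead, e.g. `du -ch mydbopslabs13-bin.* | tail -n1`.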
Less network traffic –
We transfer only a small amount of data over the network instead of the full document, which also saves unwanted network usage in replication.
Can avoid replication lag –
Replicating a big document (blob) can cause slower writes on the slave, so there is a chance of replication lag. This approach helps avoid it.
Limitations –
This only takes effect with a small set of JSON functions (JSON_SET, JSON_REPLACE and JSON_REMOVE); with any other expression the full document is logged even when PARTIAL_JSON is enabled.
The input column and the target column should be the same.
It only affects row-based replication environments. Statement-based replication always works as per its nature.
Conclusion –
JSON is very good for storing NoSQL data and works well for reads. MySQL introduced JSON support and keeps improving it; JSON will definitely play an important role in making MySQL better. At Mydbops, we keep testing new things on MySQL and related tools, and will be back with a new blog soon.
ProxySQL Series: Mirroring MySQL Queries

This is our third blog in the ProxySQL Series

MySQL Replication Read-write Split up

Seamless Replication Switchover Using MHA

This blog post explains one of the very important use cases of ProxySQL, called mirroring.
Mirroring in ProxySQL is not equivalent to MySQL replication, nor does it guarantee data consistency across servers, but there are still a few cases where this ProxySQL feature can be put to good use.

Let us consider a Master-Slave replication setup running behind ProxySQL on MySQL 5.6, with writes served by the master and reads by the slave. For more information, kindly refer to the previous blogs in the ProxySQL series.
In this post we will cover how to test production queries before upgrading the database to a newer version (5.7/8.0) or to a different MySQL fork (MariaDB / Percona).
To validate all production read queries on MySQL 5.7:
We have to create a new MySQL instance with MySQL 5.7, restore a backup onto this new instance, and set it as a slave of the existing MySQL 5.6 production master (as shown in the above diagram).
Enable Mirroring in ProxySQL:

Add new server details into ProxySQL under hostgroup 7 and then your final mysql_servers table will look like below.

Admin> INSERT INTO mysql_servers(hostgroup_id,hostname,port) VALUES (7,'',3306);

mysql> select hostgroup_id,hostname,port,status,weight from mysql_servers;
| hostgroup_id | hostname | port | status | weight |
| 0 | | 3306 | ONLINE | 100 |
| 1 | | 3306 | ONLINE | 100 |
| 1 | | 3306 | ONLINE | 100 |
| 7 | | 3306 | ONLINE | 100 |
4 rows in set (0.00 sec)

Create a query rule to match all incoming SELECTs and mirror them to hostgroup 7; the final mysql_query_rules table will look like below.

# Mirror Read Traffic to new host
INSERT INTO mysql_query_rules (rule_id,active,username,schemaname,match_digest,mirror_hostgroup,apply) VALUES (11,1,'sysbench','sbtest','^SELECT',7,1);

# Load And Save Configuration
Admin> LOAD MYSQL QUERY RULES TO RUNTIME;
Admin> SAVE MYSQL QUERY RULES TO DISK;

mysql> SELECT rule_id,active,username,schemaname,match_digest,destination_hostgroup,mirror_hostgroup,apply,mirror_flagOUT FROM mysql_query_rules;
| rule_id | active | username | schemaname | match_digest | destination_hostgroup | mirror_hostgroup | apply | mirror_flagOUT |
| 1 | 1 | sysbench | sbtest | ^SELECT | 1 | NULL | 0 | NULL |
| 2 | 1 | sysbench | sbtest | ^SELECT.*FOR UPDATE | 0 | NULL | 1 | NULL |
| 11 | 1 | sysbench | sbtest | ^SELECT | NULL | 7 | 1 | NULL |
3 rows in set (0.00 sec)

Start Sending Traffic
Let us simulate a short load using sysbench, with a small run time so the counters are easy to follow.
root@mydbops-labs21:/tmp# sysbench --test=oltp --oltp-table-size=10000 --mysql-db=sbtest --mysql-user=sysbench --db-driver=mysql --mysql-password=sysbench --mysql-host= --max-time=1 --oltp-read-only=off --max-requests=0 --mysql-port=6033 --num-threads=1 --db-ps-mode=disable run
sysbench 0.4.12: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Doing OLTP test.
Running mixed OLTP test
Using Special distribution (12 iterations, 1 pct of values are returned in 75 pct cases)
Using “BEGIN” for starting transactions
Using auto_inc on the id column
Threads started!

Time limit exceeded, exiting…

OLTP test statistics:
queries performed:
read: 266
write: 95
other: 38
total: 399
transactions: 19 (18.04 per sec.)
deadlocks: 0 (0.00 per sec.)
read/write requests: 361 (342.83 per sec.)
other operations: 38 (36.09 per sec.)

Test execution summary:
total time: 1.0530s
total number of events: 19
total time taken by event execution: 1.0522
per-request statistics:
min: 40.70ms
avg: 55.38ms
max: 82.73ms
approx. 95 percentile: 82.67ms

Threads fairness:
events (avg/stddev): 19.0000/0.00
execution time (avg/stddev): 1.0522/0.00

Check STATS Table

Confirm created rules are getting hits.

mysql> select * from stats_mysql_query_rules;
| rule_id | hits |
| 1 | 266 |
| 2 | 0 |
| 11 | 266 |
3 rows in set (0.00 sec)

Queries executed against each host :

mysql> select hostgroup,srv_host,srv_port,status,Queries from stats_mysql_connection_pool;
| hostgroup | srv_host | srv_port | status | Queries |
| 0 | | 3306 | ONLINE | 134 |
| 1 | | 3306 | ONLINE | 143 |
| 1 | | 3306 | ONLINE | 123 |
| 7 | | 3306 | ONLINE | 266 |
4 rows in set (0.00 sec)

To check what statements the application has executed:

mysql> select Command,Total_Time_us,Total_cnt from stats_mysql_commands_counters where Total_Time_us> 0 ;
| Command | Total_Time_us | Total_cnt |
| BEGIN | 6877 | 19 |
| COMMIT | 607616 | 19 |
| DELETE | 5774 | 19 |
| INSERT | 5488 | 19 |
| SELECT | 346073 | 532 |
| SHOW_TABLE_STATUS | 2570 | 1 |
| UPDATE | 19575 | 57 |
7 rows in set (0.00 sec)

With the digest we can see that the same SELECT queries have been mirrored to hostgroup 7.

mysql> SELECT hostgroup, username, digest_text, count_star, sum_time FROM stats_mysql_query_digest;
| hostgroup | username | digest_text | count_star | sum_time |
| 0 | sysbench | COMMIT | 19 | 607616 |
| 0 | sysbench | INSERT INTO sbtest values(?,?,?,?) | 19 | 5488 |
| 0 | sysbench | UPDATE sbtest set c=? where id=? | 19 | 5582 |
| 0 | sysbench | UPDATE sbtest set k=k+? where id=? | 38 | 13993 |
| 1 | sysbench | SELECT DISTINCT c from sbtest where id between ? and ? order by c | 19 | 13792 |
| 0 | sysbench | DELETE from sbtest where id=? | 19 | 5774 |
| 0 | sysbench | SHOW TABLE STATUS LIKE ? | 1 | 2570 |
| 0 | sysbench | BEGIN | 19 | 6877 |
| 1 | sysbench | SELECT c from sbtest where id=? | 190 | 114260 |
| 7 | sysbench | SELECT c from sbtest where id=? | 190 | 117746 |
| 7 | sysbench | SELECT DISTINCT c from sbtest where id between ? and ? order by c | 19 | 24826 |
| 7 | sysbench | SELECT c from sbtest where id between ? and ? | 19 | 9445 |
| 1 | sysbench | SELECT c from sbtest where id between ? and ? | 19 | 12620 |
| 7 | sysbench | SELECT SUM(K) from sbtest where id between ? and ? | 19 | 10228 |
| 7 | sysbench | SELECT c from sbtest where id between ? and ? order by c | 19 | 14617 |
| 1 | sysbench | SELECT SUM(K) from sbtest where id between ? and ? | 19 | 12037 |
| 1 | sysbench | SELECT c from sbtest where id between ? and ? order by c | 19 | 16502 |
17 rows in set (0.00 sec)

Test DML Queries
In a similar manner we can test DML (write) queries from the application using the below query rule.
INSERT INTO mysql_query_rules(rule_id,active,username,schemaname,match_digest,mirror_hostgroup,apply) VALUES (12,1,'sysbench','sbtest','^(INSERT|UPDATE|DELETE)',7,1);

We can check failed queries in the ProxySQL error log under /var/lib/proxysql/proxysql.log.
A few issues we commonly encountered in the past while testing against production:
2017-04-10 08:19:10 MySQL_Session.cpp:2243:handler(): [WARNING] Error during query on (7,,3306): 1055, Expression #2 of SELECT list is not in GROUP BY clause and contains nonaggregated column 'sbtest.i.pad' which is not functionally dependent on columns in GROUP BY clause; this is incompatible with sql_mode=only_full_group_by

2017-04-21 05:58:01 MySQL_Session.cpp:2253:handler(): [WARNING] Error during query on (7,,3306): 1364, Field 'headers' doesn't have a default value

#MySQL 5.7.17 (default value)
mysql> show global variables like '%sql_mode%';
| Variable_name | Value |
1 row in set (0.00 sec)

We fixed this by removing ONLY_FULL_GROUP_BY and STRICT_TRANS_TABLES from sql_mode in MySQL 5.7.
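One way to apply that fix at runtime is a sketch like the below, assuming the stock MySQL 5.7 default mode list (adjust the list to your own settings, and persist it in my.cnf):

```sql
-- MySQL 5.7 default sql_mode minus ONLY_FULL_GROUP_BY and STRICT_TRANS_TABLES
SET GLOBAL sql_mode = 'NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION';
```

Note that only new connections pick up the global value; existing sessions keep their session-level sql_mode.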
Important to note: ProxySQL mirroring should only be used for benchmarking, testing, upgrade validation and other activities that do not require correctness. The process adds some CPU utilisation on the box, so if you have several ProxySQL instances configured, enable mirroring on only one or two of them to avoid extra CPU usage on every box.
Ansible Best Practices

Managing a heterogeneous environment without automation can be tricky. We use Ansible as an automation tool to install, configure and manage DB infra at Mydbops. Using Ansible we overcame a lot of obstacles, and we would like to share our learning with the community (our presentation on Ansible).
Best Practices to write Ansible Plays

Use Native YAML Syntax
Ansible tasks and playbooks are written in YAML, which can be formatted in several ways:

shorthand/one-line (=)
structured map (:)
folded scalars (>)

Shorthand/one-line (=)

- name: Installing Percona Server For RedHat Family.
  yum: "name={{ item }} state=present"
  with_items: "{{ percona_packages }}"
  when: ansible_os_family == "RedHat" and 'percona' in group_names

Structured map (:)

- name: Installing Percona Server For RedHat Family.
  yum:
    name: "{{ item }}"
    state: present
  with_items: "{{ percona_packages }}"
  when: ansible_os_family == "RedHat" and 'percona' in group_names

folded scalars (>)

- name: Installing Percona Server For RedHat Family.
  yum: >
    name={{ item }}
    state=present
  with_items: "{{ percona_packages }}"
  when: ansible_os_family == "RedHat" and 'percona' in group_names
We prefer the shorthand/one-line (=) and structured map (:) styles for developing our playbooks/tasks. If a task requires more than two parameters, we use only the structured map (:) format, because it gives better readability, improves syntax highlighting, and keeps parameters stacked.

Human Meaningful Names with Variables
Meaningful Task Name
A task name should indicate its purpose very clearly. If someone else runs our playbook, the name needs to convey the purpose and the action of the task clearly to the end user.

- name: Installing Percona Server For RedHat Family.
  yum:
    name: "{{ item }}"
    state: present
  with_items: "{{ percona_packages }}"
  when: ansible_os_family == "RedHat" and 'percona' in group_names

TASK [mysql : Installing Percona Server For RedHat Family.] **********************************************************************************************************************
changed: [v11] => (item=[u'Percona-Server-server-56', u'Percona-Server-client-56', u'Percona-Server-shared-56'])

Meaningful Variable Name
As a best practice, we recommend prefixing variables with the source or target of the Task/Play it represents.

    - Percona-Server-server-{{ mysql_version|string|replace(".", "") }}
    - Percona-Server-client-{{ mysql_version|string|replace(".", "") }}
    - Percona-Server-shared-{{ mysql_version|string|replace(".", "") }}

For a bit of extra verbosity, you can also dramatically improve the readability of your plays by using human-meaningful names that communicate their purpose and usage to others, or even to yourself at a later date.

Project Layout

As the project grows larger, maintaining a clean directory layout is important. There are many different best practices for project layout; I personally follow the recommendation on the official Ansible site, which suits our requirements very well.

These two are my favorite tips:

Have a different inventory for each environment/client (this is important for multi-environment configurations).
Every inventory directory has two folders (host_vars and group_vars) and one file (hosts).
We group hosts based on purpose, e.g. MySQL installation, Linux tuning, PXC configuration, etc.
Using this group name, we create a file in the targeted inventory's group_vars directory that contains all the variables required by the roles.
The roles folder contains all roles; each role is developed with a focus on a single purpose.
We have a playbooks folder containing all play files; each play file maps an individual role to its hosts, where the host is the inventory group name specified in the third point.
In this design everything is static; only the inventories vary between executions.
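Put together, the layout described above looks roughly like this (all names are illustrative):

```
ansible.cfg
inventories/
├── production/
│   ├── group_vars/
│   │   └── mysql.yml     # variables for the "mysql" host group
│   ├── host_vars/
│   └── hosts
└── staging/
    ├── group_vars/
    ├── host_vars/
    └── hosts
playbooks/
└── mysql.yml             # play mapping the "mysql" group to the mysql role
roles/
└── mysql/
```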

We fully utilise this folder layout with a customised execution model. This approach will not suit everyone's requirements, but for someone developing Ansible for a multi-environment setup, it will surely serve the purpose.

Ansible Configuration
Ansible reads its configuration from the ANSIBLE_CONFIG environment variable, ansible.cfg in the current working directory, .ansible.cfg in the user's home directory, or /etc/ansible/ansible.cfg; whichever it finds first takes precedence over the others, so we need to be aware of this before executing our tasks.
ansible.cfg allows tuning many settings, which helps improve performance and customise behaviour to our needs.
[defaults]
inventory = ./inventories/vagrant_testing
forks = 50
ansible_managed = Mydbops Automation managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host}. Please dont change anything manually.

[ssh_connection]
pipelining = True
The inventory parameter defines the default inventory hosts file. If the user forgets to specify -i <host file>, Ansible automatically picks up the inventory value from ansible.cfg and executes the tasks on the matching host group or host from that file.
The forks parameter instructs Ansible to run commands in parallel using multiple forks. You can increase this value depending on how much your system and network connection can handle. If you set the value to 1, commands are executed serially, i.e. one after the other. The default value is 5.
ansible_managed is a variable that can be inserted into files written by Ansible's templating system. The {{ ansible_managed }} variable automatically expands to the value defined in our ansible.cfg, which helps users know how, by whom and when a file was created.
pipelining reduces the number of network operations required to execute a module on the remote server, which gives a significant performance improvement. However, if privilege escalation (become) is used and 'requiretty' is enabled in /etc/sudoers on the managed hosts, they conflict and the SSH connection cannot be established, so we need to make sure requiretty is disabled on all managed hosts.
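As a small illustration of ansible_managed (the template name here is hypothetical), a template header might look like:

```
# templates/my.cnf.j2 (hypothetical)
# {{ ansible_managed }}
[mysqld]
datadir = /var/lib/mysql
```

When the template module renders the file, {{ ansible_managed }} expands to the string defined in ansible.cfg, so anyone opening the file on the server can see that it is managed by automation.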
I believe this blog will help everyone. If anyone needs more details about any specific Ansible feature, please comment below and we will try to write a blog on the same.
Reading Amazon RDS MySQL/Aurora log file from terminal.

At Mydbops we support a good number of clients on AWS cloud (Aurora and RDS).
Amazon Relational Database Service (RDS) provides a cloud-based database service that is cost-efficient, resizable and easy to manage. As in any other DBaaS, if you need to analyse the log files (error log / slow log), you have to log in to the console and manually download the files.
Logging in to the console seems simple, but it becomes a complex operation when incorporated into day-to-day operations and automation. In this blog I would like to share my experience of making this a straightforward process: downloading the log files directly from the command line, without the console GUI.
Following tools are to be installed for this operation.

awscli – Cli utility for AWS Cloud
jq – Utility For Parse And Format JSON Data, Just like SED for JSON

For CentOS/RedHat,
yum install -y awscli jq

For Ubuntu/Debian,
apt-get install -y awscli jq

Step 1 – ( Configuring the AWS account )
After the installation, the Server should be configured with the respective AWS account as well as region using aws configure command.
Output –
[sakthi@mydbops_labs ~]$ aws configure
AWS Access Key ID : xxxxxxxxxxxxxx
AWS Secret Access Key : yyyyyyyyy/zzzzzzzzzz
Default region name : us-west-2
Default output format : json

To find the exact region name, you can use this link. My server is in Oregon, so I have used "us-west-2".

Step 2 – ( Check the available Instances and their Status )
The describe-db-instances sub-command is used to find instance details and status.
# aws rds describe-db-instances
[sakthi@mydbops_labs ~]$ aws rds describe-db-instances | jq '.DBInstances[] | select(.Engine) | .DBInstanceIdentifier + ": " + .DBInstanceStatus'
“localmaster: available”
“mysql-rds: available”
“slave-tuning: available”

Step 3 – ( Check the available log files for specific Instance )
The describe-db-log-files command is used to get the log details. Provide the DB instance identifier to get the log files for that instance.
# aws rds describe-db-log-files --db-instance-identifier localmaster
[sakthi@mydbops_labs ~]$ aws rds describe-db-log-files --db-instance-identifier localmaster | jq
{
  "DescribeDBLogFiles": [
    {
      "LastWritten": 1518969300000,
      "LogFileName": "error/mysql-error-running.log",
      "Size": 0
    },
    {
      "LastWritten": 1501165840000,
      "LogFileName": "mysqlUpgrade",
      "Size": 3189
    },
    {
      "LastWritten": 1519032900000,
      "LogFileName": "slowquery/mysql-slowquery.log",
      "Size": 1392
    }
  ]
}
Note: Size is given in bytes
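jq can also pick out the most recently written log file from that output, which is handy for scripting. A sketch against a saved sample, so it runs offline; in practice pipe `aws rds describe-db-log-files ...` straight into jq:

```shell
# Saved sample with the same shape as the aws output above
cat > /tmp/logfiles.json <<'EOF'
{ "DescribeDBLogFiles": [
    { "LastWritten": 1518969300000, "LogFileName": "error/mysql-error-running.log", "Size": 0 },
    { "LastWritten": 1519032900000, "LogFileName": "slowquery/mysql-slowquery.log", "Size": 1392 } ] }
EOF
# max_by(.LastWritten) selects the newest entry; -r prints it without quotes
jq -r '.DescribeDBLogFiles | max_by(.LastWritten) | .LogFileName' /tmp/logfiles.json
# prints: slowquery/mysql-slowquery.log
```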
Step 4 – ( Download the specific log file )
The download-db-log-file-portion command is used to download a log file. It must be given the db-instance-identifier and the log-file-name identified in the previous step, along with an output type.
[sakthi@mydbops_labs ~]$ aws rds download-db-log-file-portion --db-instance-identifier localmaster --log-file-name "slowquery/mysql-slowquery.log.13" --output text > rds_slow.log
[sakthi@mydbops_labs ~]$ ls -lrth | grep -i rds_slow.log
-rw-rw-r--.  1 sakthi sakthi  34K Feb 19 09:43 rds_slow.log
--output determines the format of the downloaded log data; it supports json, text and table.
Output Examples –
Table Output –
| LogFileData |
2018-03-23 11:55:11 47633684932352 [Warning] Aborted connection 425753 to db: 'unconnected' user: 'rdsadmin' host: 'localhost' (Got timeout reading communication packets)

JSON Output –
"LogFileData": "2018-03-23 11:55:11 47633684932352 [Warning] Aborted connection 425753 to db: 'unconnected' user: 'rdsadmin' host: 'localhost' (Got timeout reading communication packets)"

TEXT Output –
2018-03-23 11:55:11 47633684932352 [Warning] Aborted connection 425753 to db: 'unconnected' user: 'rdsadmin' host: 'localhost' (Got timeout reading communication packets)
Implementing In Production:
Below are some example scenarios showing how this can be implemented in production for effective database operations.
Scenario 1: (integrating with pt-query-digest)
We can easily feed the slow log files to the Percona tool pt-query-digest with a simple automation script. Below is a sample script that analyses the slow logs from my local master server with the Percona Toolkit.


#!/bin/bash
path=$HOME   # directory for downloaded logs and reports

echo -en "\navailable Instances - \n\n"
info=$(aws rds describe-db-instances | jq '.DBInstances[] | select(.Engine) | .DBInstanceIdentifier + ": " + .DBInstanceStatus' | awk '{ print $1, $2 }' | sed 's/"//g' | sed 's/,//g' | sed 's/://g' | nl)
echo "$info"
echo -en "\n"; read -p "enter instance name : " instance

if [[ $(echo "$info" | grep -sw "$instance" | wc -l) -eq 1 ]]; then
    rds_node=$(echo "$info" | grep -sw "$instance" | awk '{print $2}')
else
    echo "Instance not found"; exit 1
fi

echo -en "\navailable slow logs for $rds_node \n\n"
log_files=$(aws rds describe-db-log-files --db-instance-identifier $rds_node | grep slowquery | awk '{ print $2 }' | sed 's/"//g' | sed 's/,//g' | head -n5)

if [[ $(echo "$log_files" | wc -c) -gt 1 ]]; then
    echo "$log_files"
else
    echo -en "\nno slow log files .. exiting the script [script]\n"; exit 1
fi

echo -en "\n"; read -p "enter slow log file : " log_file
aws rds download-db-log-file-portion --db-instance-identifier $rds_node --log-file-name $log_file --output text > $path/recent.log

if [[ -f /usr/bin/pt-query-digest || -f /bin/pt-query-digest ]]; then
    pt-query-digest --limit=95% --set-vars="ssl_verify_mode='SSL_VERIFY_NONE'" $path/recent.log > $path/slow_pt_result.log
    echo -en "\n\nstatus ... [done] \n\no/p file - $path/slow_pt_result.log\n\nBye\n"
else
    echo -en "pt-query-digest not found [exit]\n"; exit 1
fi


[sakthi@mydbops_labs ~]$ bash

available Instances –

1 localmaster available
2 slave-tuning available
3 mysql-rds available

enter instance name : 2 #choosing the respective number

available slow logs for localmaster


enter slow log file : slowquery/mysql-slowquery.log.10
status … [done]
o/p file – /home/sakthi/slow_pt_result.log
Scenario 2 – (accessing the latest error log from any server in the region)
We can check a server's recent error log with a single command. This is really helpful in case a MySQL crash happens.
We will be reading the running log:
[sakthi@mydbops_labs ~]$ cat read_rds_error

#!/bin/bash
echo -en "\navailable Instances - \n\n"
aws rds describe-db-instances | jq '.DBInstances[] | select(.Engine) | .DBInstanceIdentifier + ": " + .DBInstanceStatus' | awk '{ print $1 }' | sed 's/"//g' | sed 's/,//g' | sed 's/://g'
echo -en "\n"; read -p "enter instance name : " instance
aws rds download-db-log-file-portion --db-instance-identifier $instance --log-file-name "error/mysql-error-running.log" --output text | less

Output –

[sakthi@mydbops_labs ~]$ read_rds_error

available Instances –

enter instance name : localmaster

Conclusion –
I believe this blog post will help make day-to-day DBA tasks easier in your RDS environment. We keep testing new things on Amazon RDS, and will be back with new stuff soon.