Author: TwinDB

TwinDB Backup Tool Release 2.15.7

Some time ago, our team released a new version of the TwinDB Backup Tool that contains minor fixes and some major improvements. In this article I want to highlight the major features in this release. Status refactoring The tool stores backup metadata in a JSON file that I will refer to further on as the status. The status is […]

Monitoring MySQL backups with Datadog and TwinDB Backup Tool

Monitoring MySQL backups is a vital part of a good backup solution. Recovery Time Objective and Recovery Point Objective are the most common disaster recovery metrics. The TwinDB Backup Tool together with Datadog allows you to monitor both of them. Recovery Point Objective Basically, Recovery Point Objective, aka RPO, means how much data you can lose if the […]

Verify MySQL Backups with TwinDB Backup Tool

If you don’t verify backups, you may safely assume you don’t have them. It happens often that MySQL backups are invalid or broken due to a software bug or some hidden corruption. If you are lucky, hours and days will be needed to resurrect a database from a bad backup copy. If you […]

MySQL users management using ProxySQL Tools

Recently, we implemented a set of new commands in ProxySQL Tools to manage MySQL users. You can create, delete, and modify users from the command line. These are the commands:

proxysql-tools galera user list
proxysql-tools galera user create username [--options]
proxysql-tools galera user delete username
proxysql-tools galera user modify username [--options]
proxysql-tools galera user set_password username

A user’s password can be set at creation time with the --password option or later on with the set_password sub-command.
Create user
Let’s add a MySQL user via ProxySQL Tools with the following parameters:
Username: mysql_user
Password: mysql_user_password
Active: True
Use SSL: True
Default hostgroup: 10
Default schema: information_schema
Schema locked: True
Transaction persistent: True
Fast forward: True
Backend: True
Frontend: True
Max connections: 10000

> proxysql-tools galera user create mysql_user --password mysql_user_password --active --use_ssl --default_hostgroup 10 --default_schema information_schema --schema_locked --transaction_persistent --fast_forward --backend --frontend --max_connections 10000
User mysql_user has created
> proxysql-tools galera user list
+------------+-------------------------------------------+--------+---------+-------------------+--------------------+---------------+------------------------+--------------+---------+----------+-----------------+
| username   | password                                  | active | use_ssl | default_hostgroup | default_schema     | schema_locked | transaction_persistent | fast_forward | backend | frontend | max_connections |
+------------+-------------------------------------------+--------+---------+-------------------+--------------------+---------------+------------------------+--------------+---------+----------+-----------------+
| mysql_user | *23AE809DDACAF96AF0FD78ED04B6A265E05AA257 | True   | True    | 10                | information_schema | True          | True                   | True         | True    | True     | 10000           |
+------------+-------------------------------------------+--------+---------+-------------------+--------------------+---------------+------------------------+--------------+---------+----------+-----------------+

Modify user
Let’s now modify a user:

> proxysql-tools galera user modify mysql_user --default_schema custom_schema
User mysql_user has modified
> proxysql-tools galera user list
+------------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
| username   | password                                  | active | use_ssl | default_hostgroup | default_schema | schema_locked | transaction_persistent | fast_forward | backend | frontend | max_connections |
+------------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
| mysql_user | *23AE809DDACAF96AF0FD78ED04B6A265E05AA257 | True   | True    | 10                | custom_schema  | True          | True                   | True         | True    | True     | 10000           |
+------------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+

Delete user
Now, let’s try to delete a user using ProxySQL Tools:

> proxysql-tools galera user delete mysql_user
User mysql_user has deleted
> proxysql-tools galera user list
User list is empty

Conclusion
The new features aim to improve MySQL user management in ProxySQL. You don’t need to manually add or change users via the mysql_users table; all changes can be made with ProxySQL Tools.
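For comparison, managing the same user manually through the ProxySQL admin interface would look roughly like this (a sketch; the exact set of mysql_users columns you need to set may differ):

mysql@proxysql> INSERT INTO mysql_users (username, password, active, default_hostgroup, default_schema, max_connections) VALUES ('mysql_user', 'mysql_user_password', 1, 10, 'information_schema', 10000);
mysql@proxysql> LOAD MYSQL USERS TO RUNTIME;
mysql@proxysql> SAVE MYSQL USERS TO DISK;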

Logical MySQL Backup Tool Mydumper 0.9.1 Packages

The latest Mydumper 0.9.1 version has been around for a while, but we hadn’t gotten around to building packages for it. New features and bugfixes were announced in the Mydumper 0.9.1 Release Notes.
Briefly, the new features are:

Full schema support for Mydumper/Myloader
Locking reduce options
GTIDs and Multisource Slave
Myloader single database restore
Bugfixes.

Good news: we now build packages not only for CentOS but for Ubuntu and Debian, too. The full list of supported systems:

CentOS 6 and 7
Ubuntu trusty and xenial
Debian jessie and stretch

The packages are distributed via the TwinDB Repository. To install it, run one of the following:
For CentOS

curl -s https://packagecloud.io/install/repositories/twindb/main/script.rpm.sh | sudo bash

For Ubuntu/Debian

curl -s https://packagecloud.io/install/repositories/twindb/main/script.deb.sh | sudo bash

For Chef, Puppet and others, check out the TwinDB Repository Installation Instructions.
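Once the repository is configured, the packages can be installed the usual way (assuming the package is named mydumper, which ships both mydumper and myloader):

# CentOS
yum install mydumper

# Ubuntu/Debian
apt-get update && apt-get install mydumper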
Building packages in Travis-CI
We use Travis-CI a lot for our projects. It runs unit and integration tests for us, as well as builds packages.
The last time we built mydumper packages, two years ago, we did it manually. That is not right, so we decided to use Travis-CI for the mydumper packages as well.
Travis-CI slaves run Ubuntu trusty, so to build packages for other operating systems we need to use Docker. Basically, we start a worker for each operating system and run make package-${PLATFORM} on each of them. The artifacts are then uploaded to the TwinDB Repository.

sudo: required
env:
  matrix:
    - PLATFORM=centos DISTRO=el OS_VERSION=6 DOCKER_IMAGE=centos:centos6
    - PLATFORM=centos DISTRO=el OS_VERSION=7 DOCKER_IMAGE=centos:centos7
    - PLATFORM=debian DISTRO=debian OS_VERSION=stretch DOCKER_IMAGE=debian:stretch
    - PLATFORM=debian DISTRO=debian OS_VERSION=jessie DOCKER_IMAGE=debian:jessie
    - PLATFORM=ubuntu DISTRO=ubuntu OS_VERSION=trusty DOCKER_IMAGE=ubuntu:trusty
    - PLATFORM=ubuntu DISTRO=ubuntu OS_VERSION=xenial DOCKER_IMAGE=ubuntu:xenial

services:
  - docker

script:
  - make package-${PLATFORM}

before_deploy:
  - ls -la
  - sudo find build -name "*.gz" -delete
  - sudo find build -name "*.dsc" -delete

deploy:
  provider: packagecloud
  repository: main
  username: twindb
  token:
    secure: "THfnc8jCVfSdcBM11WXbnDozqnn8HhgSGOqXit0u13nHvlW2C+8oBMyR0dEWVarZBISU+2rBrq2ESrNyWypNcPaA7W3sGObhbjjG2u8/F0I/lbKpQLs5sKVvo2gQadkGmgY2JDPVrp9yBABNpczz6d8fsFblJ56LjY2nPi7j43vLhtQoTdqxOj81fjc6kYCFRYMXLSuvkzcCbxyCen0V86hjE6sda4BlWSxaMAr+SUNXnH2bXD5uJm6YWwMjdpzO/2+mlGyPSB3PEp4qoAwchz5E4P+H1oeOr0WWDVeGcjo3/si3qs3mUKLEdk9/mUpzjzehS0vv9el3lpO/jsWmqZl6JieucMD1jVlZ/2OTB6O5rL50nupFqb0SGO+YgkmS4QgdxjVr+XWf+lnDzGg5al98d+pGNgrU1APmSXNAvMD0pCT6HTtv2SLphEUO2+REwjdA8S+rzs9fGV9I7Jg7IdjZrWRfDYbbCk+knEuoq95aXOS5A0kG1yytA21azLw8hi+qvA3n+bG1DZrQMJjIb5OROX/Ua/OumewqTaaSe9Eoh+AUTrIHJZzo6QMoRLverqITV5GWOzidCXxmitBrKi0+VKzdcjdW+DpcNdA+LTejBjqjDC6+dU1Ealcx3XP3DpznEauRw7tdOg0PgqlGUo8NWmjsalwP8Dm7FWPqPa0="
  dist: ${DISTRO}/${OS_VERSION}
  skip_cleanup: true
  # local-dir: build
  on:
    branch: release-0.9.1

The source code of mydumper 0.9.1, as well as the scripts to build the packages, can be found on our GitHub. Enjoy.

How we use ProxySQL Tools

When we started working with a customer, they asked us to set up a more reliable and performant MySQL database for them. At the time they were using Percona XtraDB Cluster with an ELB to balance read traffic between the PXC nodes. There was no writer redundancy: if the writer node was down, the app was down. The app, by the way, is a Magento-backed online shop. Everything was running in AWS.
We started designing the new database architecture. The first component we threw away was the ELB. A simple OLTP sysbench test showed that the ELB was a bottleneck. The ELB was capable of delivering around 6800 qps, while with ProxySQL the Percona XtraDB Cluster itself was the bottleneck. We squeezed 10k qps out of ProxySQL + PXC. That basically means we got all the ProxySQL features free of charge.
But one ProxySQL instance is a SPOF. To make ProxySQL highly available we added another instance and managed the pair with keepalived.
Eventually we arrived at the architecture described below.
Read/write query split
It was very important to distribute the load across the cluster. The load on the app was pretty substantial, and that eventually hit the database. We needed all nodes but the writer to serve reads. There must, however, be exactly one writer at all times.
We used a built-in scheduler to manage the nodes. However, we quickly realized that it wasn’t the best choice. Soon after deployment we hit some nasty bugs. For example, occasionally the scheduler made two nodes writers. That’s bad, because the app doesn’t really like errors on COMMIT. With Galera that’s possible due to its optimistic locking.
Worse things happened when the node we took backups from was selected as the writer.
The worst part was that the scheduler was written in shell. You cannot implement an efficient data model in shell. You cannot write unit tests in shell. I will elaborate later on, but the point is that shell is a bad choice for a job as complex as a ProxySQL scheduler.
So Ovais Tariq did an exceptional job and rewrote the scheduler in Python. The scheduler is part of our open source project ProxySQL Tools. Since then we have been able to cover the critical parts of the code with unit tests as well as integration tests.

db-prod-proxy02> select * from scheduler;
+----+--------+-------------+--------------------+--------+----------+------+------+------+---------+
| id | active | interval_ms | filename           | arg1   | arg2     | arg3 | arg4 | arg5 | comment |
+----+--------+-------------+--------------------+--------+----------+------+------+------+---------+
| 1  | 1      | 15000       | /bin/proxysql-tool | galera | register | NULL | NULL | NULL |         |
+----+--------+-------------+--------------------+--------+----------+------+------+------+---------+
1 row in set (0.00 sec)

High Availability
A single ProxySQL instance is a single point of failure (SPOF). We needed a second ProxySQL node. The two ProxySQL nodes are managed by keepalived. When the active ProxySQL node goes down, the second, passive node takes over and continues to serve the app’s requests.
Keepalived implements the VRRP protocol, originally designed to provide redundancy for a default gateway in L2 networks.
On Linux servers it basically moves an IP alias between two nodes.
Unfortunately, it doesn’t work that way in AWS. Amazon doesn’t let you assign an alias to an interface. So we came up with a different idea.
We created an Elastic Network Interface, assigned an IP address to it, and used that IP as a virtual IP. The application connects to this IP to reach the database.
When keepalived makes a node active (master in VRRP terminology), it calls a script given by the notify_master configuration option. That script is another tool from ProxySQL Tools. We wrote about it before, but essentially keepalived runs proxysql-tool aws notify_master. What this tool does is move the Elastic Network Interface (ENI) to the active node.

vrrp_instance proxysql_instance {
    state BACKUP
    notify_master "/bin/proxysql-tool aws notify_master"

}

Query Rewrite
From time to time we hit a nasty bug, #1258464, in Galera. The bug was confirmed four years ago but is still not fixed. I really look forward to Group Replication; it has a chance to perform better here.
It looks like Magento developers are stuck in 2004, when MyISAM was still a thing. For some unknown reason Magento regularly sends ALTER TABLE ENABLE/DISABLE KEYS. Occasionally that triggered the bug, and the writer node got blocked.
The query rewrite feature in ProxySQL helped us a lot. René suggested replacing the ALTER TABLE with some no-op query. So far it works like a charm.

db-prod-proxy02> select active, match_pattern, replace_pattern from mysql_query_rules;
+--------+-------------------------------+-------------------------------+
| active | match_pattern                 | replace_pattern               |
+--------+-------------------------------+-------------------------------+
| 1      | ^ALTER TABLE .* ENABLE KEYS$  | SET interactive_timeout=28800 |
| 1      | ^ALTER TABLE .* DISABLE KEYS$ | SET interactive_timeout=28800 |
+--------+-------------------------------+-------------------------------+
2 rows in set (0.00 sec)
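For reference, a rule like that can be added through the ProxySQL admin interface roughly as follows (a sketch; the rule_id value here is arbitrary):

mysql@proxysql> INSERT INTO mysql_query_rules (rule_id, active, match_pattern, replace_pattern, apply) VALUES (10, 1, '^ALTER TABLE .* DISABLE KEYS$', 'SET interactive_timeout=28800', 1);
mysql@proxysql> LOAD MYSQL QUERY RULES TO RUNTIME;
mysql@proxysql> SAVE MYSQL QUERY RULES TO DISK;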

TL;DR

ProxySQL is a performant and feature-rich proxy for Percona XtraDB Cluster.

ProxySQL Tools provides helper tools for ProxySQL (a scheduler and HA support).


How to backup MySQL 5.7

If you have ever tried to back up MySQL 5.7 with the Percona XtraBackup that comes in the percona-xtrabackup package, most likely you got an error message like this:

# innobackupex ./
170623 05:58:43 innobackupex: Starting the backup operation

IMPORTANT: Please check that the backup run completes successfully.
At the end of a successful backup run innobackupex
prints "completed OK!".

170623 05:58:44 version_check Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_group=xtrabackup' as 'dba' (using password: YES).
170623 05:58:44 version_check Connected to MySQL server
170623 05:58:44 version_check Executing a version check against the server...
170623 05:58:44 version_check Done.
170623 05:58:44 Connecting to MySQL server host: localhost, user: dba, password: set, port: not set, socket: not set
Error: Unsupported server version: '5.7.18-15'. Please report a bug at https://bugs.launchpad.net/percona-xtrabackup

This happens because percona-xtrabackup actually installs version 2.3, which doesn’t support MySQL 5.7.
To take copies of MySQL 5.7 you need to use Percona XtraBackup version 2.4 or higher.
Starting from version 2.13.0, TwinDB Backup uses Percona XtraBackup 2.4 to back up MySQL 5.7, 5.6, and earlier versions.
Configuring repositories
For Debian-based systems, run the following script:

wget https://repo.percona.com/apt/percona-release_0.1-4.$(lsb_release -sc)_all.deb
dpkg -i percona-release_0.1-4.$(lsb_release -sc)_all.deb
curl -s https://packagecloud.io/install/repositories/twindb/main/script.deb.sh | sudo bash

For RedHat-based systems, run this:

yum install http://www.percona.com/downloads/percona-release/redhat/0.1-4/percona-release-0.1-4.noarch.rpm
curl -s https://packagecloud.io/install/repositories/twindb/main/script.rpm.sh | sudo bash

Installing TwinDB Backup
For TwinDB Backup, just install the twindb-backup package.

# On Debian/Ubuntu

apt-get install twindb-backup

# On CentOS/RedHat

yum install twindb-backup

The percona-xtrabackup-24 package conflicts with percona-xtrabackup, so if you get a conflict error, just uninstall percona-xtrabackup first.
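For example, on CentOS the swap would look like this (a sketch; on Debian/Ubuntu use apt-get the same way):

yum remove percona-xtrabackup
yum install percona-xtrabackup-24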
Configuring TwinDB Backup
There are many ways to configure backups. See the documentation for a more detailed description of the options in the configuration file.
Along with the MySQL database I prefer to take copies of /etc as well. As for the destination, S3 is probably the obvious choice. So the configuration file /etc/twindb/twindb-backup.cfg should look something like this:

[source]
backup_dirs=/etc
backup_mysql=yes

[destination]
backup_destination=s3
keep_local_path=/var/log/mysql/DBBACKUP/

[s3]
AWS_ACCESS_KEY_ID=**
AWS_SECRET_ACCESS_KEY=***
AWS_DEFAULT_REGION=us-west-2
BUCKET=my_bucket_for_backups

[mysql]
mysql_defaults_file=/root/.my.cnf
full_backup=daily

[retention]
hourly_copies=24
daily_copies=7
weekly_copies=4
monthly_copies=12
yearly_copies=3

[retention_local]
hourly_copies=1
daily_copies=1
weekly_copies=0
monthly_copies=0
yearly_copies=0

[intervals]
run_hourly=yes
run_daily=yes
run_weekly=yes
run_monthly=yes
run_yearly=yes

Test run
The TwinDB Backup package installs a cron job, so you could let it run and check the backups an hour later. Or, of course, you can run it manually to make sure everything is configured correctly.

# Let's do a daily run. Cron will run exactly this command every day.
twindb-backup backup daily

On success the tool should not produce any output. If you hit any errors, drop me a message at support@twindb.com.
Backup verification
You don’t have backups if you don’t verify them. Unfortunately, too many users learned this lesson the hard way. So don’t be that user: do verify your backups.
TwinDB Backup is super friendly when it comes to restoring backup copies. It will take care of full or incremental backups, local or remote copies.
The first step in the verification process is to check what backups we actually have:

twindb-backup ls
2017-06-23 07:23:27,830: INFO: ls.list_available_backups():22: Local copies:
/var/backup/master1/daily/files/_etc-2017-06-23_06_58_11.tar.gz
/var/backup/master1/daily/mysql/mysql-2017-06-23_06_58_22.xbstream.gz
/var/backup/master1/hourly/files/_etc-2017-06-23_07_00_03.tar.gz
/var/backup/master1/hourly/files/_home-2017-06-23_07_00_12.tar.gz
/var/backup/master1/hourly/mysql/mysql-2017-06-23_07_00_14.xbstream.gz
2017-06-23 07:23:27,835: INFO: ls.list_available_backups():33: hourly copies:
/path/to/twindb-server-backups/master1/hourly/files/_etc-2017-06-23_07_00_03.tar.gz
/path/to/twindb-server-backups/master1/hourly/mysql/mysql-2017-06-23_07_00_14.xbstream.gz
2017-06-23 07:23:28,451: INFO: ls.list_available_backups():33: daily copies:
/path/to/twindb-server-backups/master1/daily/files/_etc-2017-06-23_06_58_11.tar.gz
/path/to/twindb-server-backups/master1/daily/mysql/mysql-2017-06-23_06_58_22.xbstream.gz
2017-06-23 07:23:29,073: INFO: ls.list_available_backups():33: weekly copies:
2017-06-23 07:23:29,704: INFO: ls.list_available_backups():33: monthly copies:
2017-06-23 07:23:30,337: INFO: ls.list_available_backups():33: yearly copies:

OK, we have some copies on local and remote storage. Let’s grab and restore the last hourly copy:

twindb-backup restore mysql /path/to/twindb-server-backups/master1/hourly/mysql/mysql-2017-06-23_07_00_14.xbstream.gz --dst /var/lib/mysql_restored
2017-06-23 07:28:59,242: INFO: restore.restore_from_mysql():308: Restoring /path/to/twindb-server-backups/master1/hourly/mysql/mysql-2017-06-23_07_00_14.xbstream.gz in /var/lib/mysql_restored

170623 07:29:06 innobackupex: Starting the apply-log operation

IMPORTANT: Please check that the apply-log run completes successfully.
At the end of a successful apply-log run innobackupex
prints "completed OK!".

innobackupex version 2.4.7 based on MySQL server 5.7.13 Linux (x86_64) (revision id: 05f1fcf)

InnoDB: Shutdown completed; log sequence number 2542148
170623 07:29:13 completed OK!
2017-06-23 07:29:13,279: INFO: restore.restore_from_mysql():380: Successfully restored /path/to/twindb-server-backups/master1/hourly/mysql/mysql-2017-06-23_07_00_14.xbstream.gz in /var/lib/mysql_restored.
2017-06-23 07:29:13,279: INFO: restore.restore_from_mysql():382: Now copy content of /var/lib/mysql_restored to MySQL datadir: cp -R /var/lib/mysql_restored/* /var/lib/mysql/
2017-06-23 07:29:13,279: INFO: restore.restore_from_mysql():383: Fix permissions: chown -R mysql:mysql /var/lib/mysql/
2017-06-23 07:29:13,279: INFO: restore.restore_from_mysql():385: Make sure innodb_log_file_size and innodb_log_files_in_group in /var/lib/mysql_restored/backup-my.cnf and in /etc/my.cnf are same.
2017-06-23 07:29:13,279: INFO: restore.restore_from_mysql():388: Original my.cnf is restored in /var/lib/mysql_restored/_config.
2017-06-23 07:29:13,279: INFO: restore.restore_from_mysql():390: Then you can start MySQL normally.

So the backup copy was restored successfully.
To be absolutely sure the backup copy is usable, I’d recommend running a test MySQL instance on /var/lib/mysql_restored and running some queries to make sure the data is there. A sketch of how that might look follows.
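A minimal sketch, assuming a locally installed mysqld binary; the port, socket, and sample query are illustrative assumptions, so adjust them to your system:

# chown -R mysql:mysql /var/lib/mysql_restored
# mysqld --user=mysql --datadir=/var/lib/mysql_restored --socket=/tmp/mysql_restored.sock --port=3307 &
# mysql --socket=/tmp/mysql_restored.sock -e 'SHOW DATABASES'

Once satisfied, shut the test instance down with mysqladmin --socket=/tmp/mysql_restored.sock shutdown.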

Setup high availability for ProxySQL via KeepAlived in AWS

Usually applications do not connect directly to Percona XtraDB Cluster but go through a proxy, ProxySQL for instance. However, if only one proxy node is used, it becomes a single point of failure. Not long ago Marco Tusa wrote about how to configure two ProxySQL nodes in front of an XtraDB Cluster. Deployed on EC2 instances, that approach doesn’t work, because Amazon doesn’t allow assigning a secondary IP address to an interface.
This post describes how to configure highly available ProxySQL with keepalived, proxysql-tools, and an AWS Elastic Network Interface (ENI).
The application connects to a single virtual IP. The VIP is assigned to an ENI, which is managed by keepalived. proxysql-tools moves the ENI between ProxySQL instances and monitors the health of the XtraDB Cluster nodes.
Setup Percona XtraDB Cluster

Node #1: 172.31.8.51
Node #2: 172.31.12.128
Node #3: 172.31.3.159

Setting up XtraDB Cluster is pretty straightforward and no different from any other case.

# wget https://repo.percona.com/apt/percona-release_0.1-4.$(lsb_release -sc)_all.deb
# sudo dpkg -i percona-release_0.1-4.$(lsb_release -sc)_all.deb
# sudo apt-get update
# sudo apt-get install percona-xtradb-cluster-57
# sudo service mysql stop

MySQL configuration file on other nodes differs only in wsrep_node_address.

# cat /etc/mysql/my.cnf
!includedir /etc/mysql/conf.d/
[mysqld]
datadir=/var/lib/mysql
user=mysql
wsrep_provider=/usr/lib/libgalera_smm.so
wsrep_cluster_address=gcomm://172.31.8.51,172.31.12.128,172.31.3.159
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
wsrep_node_address=172.31.8.51
wsrep_sst_method=xtrabackup-v2
wsrep_cluster_name=cluster_1
wsrep_sst_auth="sstuser:s3cretPass"

 
After this step, we can bootstrap our cluster. I did it from the first node with the following command:

# /etc/init.d/mysql bootstrap-pxc

When the first node has started, the cluster status can be checked with:

mysql> show status like 'wsrep%';
+----------------------------+--------------------------------------+
| Variable_name              | Value                                |
+----------------------------+--------------------------------------+
| wsrep_local_state_uuid     | cc00ee27-3433-11e7-84e5-4a9beaabe9c9 |
| ...                        | ...                                  |
| wsrep_local_state          | 4                                    |
| wsrep_local_state_comment  | Synced                               |
| ...                        | ...                                  |
| wsrep_cluster_size         | 1                                    |
| wsrep_cluster_status       | Primary                              |
| wsrep_connected            | ON                                   |
| ...                        | ...                                  |
| wsrep_ready                | ON                                   |
+----------------------------+--------------------------------------+

When all nodes are running, wsrep_cluster_size will be equal to the number of nodes in the cluster.
The cluster setup is complete at this point; a quick check is sketched below.
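Once all three nodes from this setup have joined, a quick sanity check is to re-run the size query; with the three-node layout above the expected value is 3:

mysql> show status like 'wsrep_cluster_size';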
Setup ProxySQL and KeepAlived

ProxySQL active: 172.31.24.212
ProxySQL passive: 172.31.19.155
ENI address: 172.31.26.237

It’s worth noting that the ProxySQL instances and the ENI must be on the same subnet.
First, we must configure our ProxySQL instances to proxy requests to the cluster nodes.

# apt install proxysql
# apt-get install percona-xtradb-cluster-client-5.7

Now we’ll set up ProxySQL to work with our Galera nodes. I will use the default hostgroup.

# service proxysql start
# mysql -u admin -p -h 127.0.0.1 -P 6032
mysql@proxysql1> INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (0,'172.31.8.51',3306);
mysql@proxysql1> INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (0,'172.31.12.128',3306);
mysql@proxysql1> INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (0,'172.31.3.159',3306);

Now we must create a user for monitoring the Percona XtraDB Cluster nodes in ProxySQL. You can do it on any node of the cluster.

mysql@node1> CREATE USER 'monitor'@'%' IDENTIFIED BY '*****';
mysql@node1> GRANT USAGE ON *.* TO 'monitor'@'%';

And update the settings in ProxySQL:

mysql@proxysql1> UPDATE global_variables SET variable_value='monitor' WHERE variable_name='mysql-monitor_username';
mysql@proxysql1> UPDATE global_variables SET variable_value='*****' WHERE variable_name='mysql-monitor_password';
mysql@proxysql1> LOAD MYSQL VARIABLES TO RUNTIME;
mysql@proxysql1> SAVE MYSQL VARIABLES TO DISK;

After these changes, let’s look at the monitoring and ping logs:

mysql@proxysql1> SELECT * FROM monitor.mysql_server_ping_log ORDER BY time_start_us LIMIT 6;
+---------------+------+------------------+----------------------+------------+
| hostname      | port | time_start_us    | ping_success_time_us | ping_error |
+---------------+------+------------------+----------------------+------------+
| 172.31.12.128 | 3306 | 1494492166885382 | 627                  | NULL       |
| 172.31.3.159  | 3306 | 1494492166887154 | 586                  | NULL       |
| 172.31.8.51   | 3306 | 1494492166888947 | 501                  | NULL       |
| 172.31.12.128 | 3306 | 1494492176885541 | 596                  | NULL       |
| 172.31.3.159  | 3306 | 1494492176887442 | 599                  | NULL       |
| 172.31.8.51   | 3306 | 1494492176889317 | 527                  | NULL       |
+---------------+------+------------------+----------------------+------------+
6 rows in set (0.00 sec)

The previous examples show that ProxySQL is able to connect and ping the nodes you added.
To enable monitoring of these nodes, load them at runtime:

mysql@proxysql1> LOAD MYSQL SERVERS TO RUNTIME;

Now we must create the user that will work with our XtraDB Cluster nodes.
To add a user, insert credentials into the mysql_users table:

mysql@proxysql1> INSERT INTO mysql_users (username,password) VALUES ('proxy_user','*****');
Query OK, 1 row affected (0.00 sec)
mysql@proxysql1> LOAD MYSQL USERS TO RUNTIME;

To provide read/write access to the cluster for ProxySQL, add this user on one of the Percona XtraDB Cluster nodes:

mysql@node2> CREATE USER 'proxy_user'@'%' IDENTIFIED BY '*****';
Query OK, 0 rows affected (0.01 sec)

mysql@node2> GRANT ALL ON *.* TO 'proxy_user'@'%';
Query OK, 0 rows affected (0.00 sec)

proxysql-tools can monitor the health of the XtraDB Cluster nodes and take failed nodes out of rotation.
Install it on your ProxySQL instances with pip install proxysql-tools and add it to the ProxySQL scheduler:

mysql@proxysql1> INSERT INTO scheduler (id,interval_ms,filename,arg1,arg2) VALUES (1, '15000', '/usr/local/bin/proxysql_tools', 'galera', 'register');
mysql@proxysql1> LOAD SCHEDULER TO RUNTIME;

To make sure that the script has been loaded, check the runtime_scheduler table:

mysql@proxysql1> SELECT * FROM runtime_scheduler\G
*************************** 1. row ***************************
id: 1
active: 1
interval_ms: 15000
filename: /usr/local/bin/proxysql_tools
arg1: galera
arg2: register
arg3: NULL
arg4: NULL
arg5: NULL
comment:
1 row in set (0.00 sec)

Repeat the steps above for the other ProxySQL instance.
Now the ProxySQL instances are configured, but we haven’t added high availability yet.
The idea is as follows.
keepalived on each ProxySQL node monitors the other one. When one of the nodes becomes active, we attach the ENI to it and configure the VIP on it. The application uses the VIP to connect to MySQL. No configuration changes are needed when the active ProxySQL node changes.
So, install keepalived on the ProxySQL instances and create a network interface in AWS on the same subnet as the ProxySQL instances.
proxysql-tools attaches the network interface to the active node. Let’s configure it:
Create the config file /etc/twindb/proxysql-tools.cfg with the content below:

# cat /etc/twindb/proxysql-tools.cfg
[proxysql]
# ProxySQL admin interface connectivity information
admin_host=127.0.0.1
admin_port=6032
admin_username=admin
admin_password=***

# MySQL user used by ProxySQL monitoring module to monitor MySQL servers
monitor_username=monitor
monitor_password=***

# Virtual IP for HA configuration
virtual_ip=172.31.26.237
virtual_netmask=255.255.240.0

[aws]
aws_access_key_id=***
aws_secret_access_key=***
aws_default_region=***

The virtual IP is the address of your ENI. After this, we must configure keepalived.

# cat /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server localhost
    smtp_connect_timeout 30
}
vrrp_script chk_proxysql {
    script "/usr/bin/mysqladmin ping -h 127.0.0.1 -P 3306 > /dev/null 2>&1"
    interval 1
    timeout 1
    fall 3
    rise 3
    user root
}
vrrp_instance proxysql_instance {
    notify_master "/usr/local/bin/proxysql-tool aws notify_master"
    virtual_router_id 41
    state BACKUP
    interface eth0
    dont_track_primary
    unicast_peer {
        172.31.19.155
    }
    priority 200
    authentication {
        auth_type PASS
        auth_pass 1066
    }
    track_script {
        chk_proxysql
    }
    nopreempt
    debug
}

unicast_peer is the IP on eth0 of the other ProxySQL node. proxysql-tool with the arguments aws notify_master attaches the ENI with the virtual IP to the instance. On the passive ProxySQL node the config is the same except for unicast_peer:

# cat /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server localhost
    smtp_connect_timeout 30
}
vrrp_script chk_proxysql {
    script "/usr/bin/mysqladmin ping -h 127.0.0.1 -P 3306 > /dev/null 2>&1"
    interval 1
    timeout 1
    fall 3
    rise 3
    user root
}
vrrp_instance proxysql_instance {
    notify_master "/usr/local/bin/proxysql-tool aws notify_master"
    virtual_router_id 41
    state BACKUP
    interface eth0
    dont_track_primary
    unicast_peer {
        172.31.24.212
    }
    priority 200
    authentication {
        auth_type PASS
        auth_pass 1066
    }
    track_script {
        chk_proxysql
    }
    nopreempt
    debug
}

And now you can start keepalived as a service. One of the ProxySQL nodes will become active, and /usr/local/bin/proxysql-tool aws notify_master will move the ENI to the active node and assign the VIP to it. A quick way to verify the setup is sketched below.
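To check the result, connect to MySQL through the VIP. ProxySQL listens for client traffic on port 6033 by default, so assuming the default listener port and the proxy_user created earlier:

# mysql -h 172.31.26.237 -P 6033 -u proxy_user -p -e 'SELECT @@hostname'

If the active ProxySQL node fails, keepalived promotes the passive one, the ENI moves with it, and the same command keeps working with no application changes.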

Howto Encrypt MySQL Backups on S3

TwinDB Backup has supported encrypted backup copies since version 2.11.0. As usual, the tool natively supports backup and restore operations; if backup copies are encrypted, the tool takes care of decryption.
Installing TwinDB Packages repository
I will work on a CentOS 7 system to show the example, but there are also packages for Ubuntu trusty and Debian jessie.
We host our packages on PackageCloud, which provides a great installation guide if you need to install the repo via Puppet, Chef, etc. The manual way is pretty straightforward as well: a PackageCloud script installs and configures the repository.

curl -s https://packagecloud.io/install/repositories/twindb/main/script.rpm.sh | sudo bash

Installing twindb-backup
Once the repository is ready it’s time to install the tool.

yum install twindb-backup

Let’s review what files the tool actually installs.

# rpm -ql twindb-backup
/opt
/opt/twindb-backup

/opt/twindb-backup/bin

/opt/twindb-backup/bin/twindb-backup

The RPM installs the files under /opt because we use Omnibus to package twindb-backup. We package, together with the tool itself, its own Python and dependencies. That way we make sure there are no conflicts and no surprises due to differing module versions.
The post-installation script also creates a cron config and a sample tool configuration file.

# cat /etc/cron.d/twindb-backup
@hourly root twindb-backup backup hourly
@daily root twindb-backup backup daily
@weekly root twindb-backup backup weekly
@monthly root twindb-backup backup monthly
@yearly root twindb-backup backup yearly

# cat /etc/twindb/twindb-backup.cfg
# NOTE: don't quote option values
# What to backup
[source]
backup_dirs=/etc /root /home
backup_mysql=no

# Destination
[destination]
# backup destination can be ssh or s3
backup_destination=ssh
keep_local_path=/var/backup/local

[s3]

# S3 destination settings

AWS_ACCESS_KEY_ID=XXXXX
AWS_SECRET_ACCESS_KEY=YYYYY
AWS_DEFAULT_REGION=us-east-1
BUCKET=twindb-backups

[ssh]

# SSH destination settings

backup_host=127.0.0.1
backup_dir=/tmp/backup
ssh_user=root
ssh_key=/root/.ssh/id_rsa

[mysql]

# MySQL

mysql_defaults_file=/etc/twindb/my.cnf

full_backup=daily

[retention]

# Remote retention policy

hourly_copies=24
daily_copies=7
weekly_copies=4
monthly_copies=12
yearly_copies=3

[retention_local]

# Local retention policy

hourly_copies=1
daily_copies=1
weekly_copies=0
monthly_copies=0
yearly_copies=0

[intervals]

# Run intervals

run_hourly=yes
run_daily=yes
run_weekly=yes
run_monthly=yes
run_yearly=yes

Preparing Encryption Key
We use GPG to encrypt the backups. The tool doesn’t manage the keys, so it is entirely the user’s responsibility to create the key and save a backup copy of it.
Let’s generate the key first.

# gpg --gen-key
gpg (GnuPG) 2.0.22; Copyright (C) 2013 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

gpg: directory `/root/.gnupg' created
gpg: new configuration file `/root/.gnupg/gpg.conf' created
gpg: WARNING: options in `/root/.gnupg/gpg.conf' are not yet active during this run
gpg: keyring `/root/.gnupg/secring.gpg' created
gpg: keyring `/root/.gnupg/pubring.gpg' created
Please select what kind of key you want:
(1) RSA and RSA (default)
(2) DSA and Elgamal
(3) DSA (sign only)
(4) RSA (sign only)
Your selection? 1
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048)
Requested keysize is 2048 bits
Please specify how long the key should be valid.
0 = key does not expire
<n> = key expires in n days
<n>w = key expires in n weeks
<n>m = key expires in n months
<n>y = key expires in n years
Key is valid for? (0) 0
Key does not expire at all
Is this correct? (y/N) y

GnuPG needs to construct a user ID to identify your key.

Real name: Aleksandr Kuzminsky
Email address: backups@twindb.com
Comment: Key for encrypting MySQL backups
You selected this USER-ID:
"Aleksandr Kuzminsky (Key for encrypting MySQL backups) <backups@twindb.com>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O

You don't want a passphrase - this is probably a *bad* idea!
I will do it anyway. You can change your passphrase at any time,
using this program with the option "--edit-key".

We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
gpg: /root/.gnupg/trustdb.gpg: trustdb created
gpg: key 8564B88A marked as ultimately trusted
public and secret key created and signed.

gpg: checking the trustdb
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0 valid: 1 signed: 0 trust: 0-, 0q, 0n, 0m, 0f, 1u
pub 2048R/8564B88A 2017-03-28
Key fingerprint = 441E 4B7A FD92 C0D5 4C6B 0C89 4AE0 849C 8564 B88A
uid Aleksandr Kuzminsky (Key for encrypting MySQL backups) <backups@twindb.com>
sub 2048R/0CE02576 2017-03-28

We don’t use a passphrase for the key.
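Since the tool doesn’t manage keys, export and safely store a copy of the freshly generated key right away; without the secret key the encrypted backups cannot be restored. A minimal sketch with standard GnuPG commands (the output paths are just examples):

# gpg --armor --export backups@twindb.com > /root/backup-key-public.asc
# gpg --armor --export-secret-keys backups@twindb.com > /root/backup-key-secret.asc

Keep the secret key somewhere outside this server.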
Preparing twindb-backup configuration
We need to change the default config. Let’s review the changes.

[source]
backup_dirs=/etc
backup_mysql=yes

It’s always nice to save backup copies of /etc. If you don’t want to back up directories, comment out backup_dirs.

# Destination
[destination]
# backup destination can be ssh or s3
backup_destination=s3
keep_local_path=/var/backup/local

We store backups in S3, and we will also keep a local copy (for faster restore times).

[s3]

# S3 destination settings

AWS_ACCESS_KEY_ID=XXXXX
AWS_SECRET_ACCESS_KEY=YYYYY
AWS_DEFAULT_REGION=us-east-1
BUCKET=twindb-backups

We will store backups in S3, so change these options to your key and bucket values.

[mysql]

# MySQL
mysql_defaults_file=/etc/twindb/my.cnf
full_backup=daily

The tool uses a defaults file to connect to MySQL, so specify it here.

# cat /etc/twindb/my.cnf
[client]
user=root

Don’t forget to chmod 600 /etc/twindb/my.cnf.
The config also tells the tool how often to take full copies; here, daily. The hourly copies will then store the difference between the last full copy and the current state: so-called differential backups.
To encrypt the backup copies, add a [gpg] section:

[gpg]
keyring = /root/.gnupg/pubring.gpg
secret-keyring = /root/.gnupg/secring.gpg
recipient = backups@twindb.com

It specifies where GnuPG can find the private and public keys of the recipient.
Optionally, you may want to change the local and remote retention policies, but the defaults should be good enough.
Test backup run
Now let’s run the tool manually to see how it works.

# twindb-backup backup daily

The tool should produce no output unless there is an error.
Listing available backup copies
The tool can tell you what backup copies are available now.

# twindb-backup ls
2017-03-28 05:32:40,412: INFO: ls.list_available_backups():22: Local copies:
/var/backup/local/d312b5e3a877/status
/var/backup/local/d312b5e3a877/daily/files/_etc-2017-03-28_05_32_26.tar.gz
/var/backup/local/d312b5e3a877/daily/mysql/mysql-2017-03-28_05_32_30.xbstream.gz
2017-03-28 05:32:40,417: INFO: ls.list_available_backups():33: hourly copies:
2017-03-28 05:32:41,087: INFO: ls.list_available_backups():33: daily copies:
s3://twindb-backup-test-0/d312b5e3a877/daily/files/_etc-2017-03-28_05_32_26.tar.gz.gpg
s3://twindb-backup-test-0/d312b5e3a877/daily/mysql/mysql-2017-03-28_05_32_30.xbstream.gz.gpg
2017-03-28 05:32:41,687: INFO: ls.list_available_backups():33: weekly copies:
2017-03-28 05:32:42,269: INFO: ls.list_available_backups():33: monthly copies:
2017-03-28 05:32:42,831: INFO: ls.list_available_backups():33: yearly copies:

The encrypted copies have a .gpg suffix. Note that the local copies are not encrypted.
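For reference, this is roughly what the tool does for you at restore time, should you ever need to decrypt a copy by hand. A sketch only: it assumes the copy has already been downloaded from S3 and that the keys generated earlier are in the local keyring:

# mkdir -p /var/lib/mysql_restored
# gpg --decrypt mysql-2017-03-28_05_32_30.xbstream.gz.gpg | gunzip | xbstream -x -C /var/lib/mysql_restored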
Restore MySQL from backup
Now we have a backup copy, s3://twindb-backup-test-0/d312b5e3a877/daily/mysql/mysql-2017-03-28_05_32_30.xbstream.gz.gpg. Let’s restore the MySQL database from it.

# twindb-backup restore mysql s3://twindb-backup-test-0/d312b5e3a877/daily/mysql/mysql-2017-03-28_05_32_30.xbstream.gz.gpg --dst restored

170328 05:39:49 innobackupex: completed OK!
2017-03-28 05:39:49,566: INFO: restore.restore_from_mysql():354: Successfully restored s3://twindb-backup-test-0/d312b5e3a877/daily/mysql/mysql-2017-03-28_05_32_30.xbstream.gz.gpg in restored.
2017-03-28 05:39:49,566: INFO: restore.restore_from_mysql():356: Now copy content of restored to MySQL datadir: cp -R restored/* /var/lib/mysql/
2017-03-28 05:39:49,566: INFO: restore.restore_from_mysql():357: Fix permissions: chown -R mysql:mysql /var/lib/mysql/
2017-03-28 05:39:49,566: INFO: restore.restore_from_mysql():359: Make sure innodb_log_file_size and innodb_log_files_in_group in restored/backup-my.cnf and in /etc/my.cnf are same.
2017-03-28 05:39:49,566: INFO: restore.restore_from_mysql():362: Original my.cnf is restored in restored/_config.
2017-03-28 05:39:49,566: INFO: restore.restore_from_mysql():364: Then you can start MySQL normally.

Now we have a restored database in the restored directory that we can copy to /var/lib/mysql.

# ls -la restored/
total 30756
drwxr-xr-x 6 root root     4096 Mar 28 05:39 .
dr-xr-x--- 5 root root     4096 Mar 28 05:39 ..
drwxr-xr-x 3 root root     4096 Mar 28 05:39 _config
-rw-r----- 1 root root      262 Mar 28 05:39 backup-my.cnf
-rw-r--r-- 1 root root  5242880 Mar 28 05:39 ib_logfile0
-rw-r--r-- 1 root root  5242880 Mar 28 05:39 ib_logfile1
-rw-r----- 1 root root 18874368 Mar 28 05:39 ibdata1
drwx------ 2 root root     4096 Mar 28 05:39 mysql
drwx------ 2 root root     4096 Mar 28 05:39 performance_schema
drwx------ 2 root root     4096 Mar 28 05:39 test
-rw-r----- 1 root root       89 Mar 28 05:39 xtrabackup_checkpoints
-rw-r----- 1 root root      562 Mar 28 05:39 xtrabackup_info
-rw-r----- 1 root root  2097152 Mar 28 05:39 xtrabackup_logfile

