Author: MySQL Performance Blog

Webinar Thursday June 22, 2017: Deploying MySQL in Production

Join Percona’s Senior Operations Engineer, Daniel Kowalewski as he presents Deploying MySQL in Production on Thursday, June 22, 2017 at 11:00 am PDT / 2:00 pm EDT (UTC-7).
Register Now

MySQL is famous for being something you can install and get going with in less than five minutes for development purposes. But normally you want to run MySQL in production, and at scale. This requires some planning and knowledge. So why not learn the best practices around installation, configuration, deployment and backup?
This webinar is a soup-to-nuts talk that will have you going from zero to hero in no time. It includes discussion of the best practices for installation, configuration, taking backups, monitoring, etc.
Register for the webinar here.

Daniel Kowalewski, Senior Technical Operations Engineer
Daniel has been designing and deploying solutions around MySQL for over ten years. He lives for those magic moments where response time drops by 90%, and loves adding more “nines” to everything.

Three Methods of Installing Percona Monitoring and Management

In this blog post, we’ll look at three different methods for installing Percona Monitoring and Management (PMM).
Percona offers multiple methods of installing Percona Monitoring and Management, depending on your environment and scale. I’ll also share comments on which installation methods we’ve decided to forego for now. Let’s begin by reviewing the three supported methods:

Virtual Appliance
Amazon Machine Image
Docker

Virtual Appliance
We ship an OVF/OVA method to make installation as simple as possible, with the least amount of effort required and at the lowest cost to you. You can leverage the investment in your virtualization deployment platform. OVF is an open standard for packaging and distributing virtual appliances, designed to be run in virtual machines.
Using OVA with VirtualBox as a first step is common in order to quickly play with a working PMM system, and get right to adding clients and observing activity within your own environment against your MySQL and MongoDB instances. But you can also use the OVA file for enterprise deployments. It is a flexible file format that can be imported into other popular hypervisor systems such as VMware, Red Hat Virtualization, XenServer, Microsoft System Center Virtual Machine Manager and others.
We’d love to hear your feedback on this installation method!
AWS AMI
We also provide an AWS AMI for easy scaling of PMM Server in AWS, so that you can deploy it onto any instance size required for your monitoring workload. Depending on the AWS region you’re in, you’ll need to choose the appropriate AMI ID. Soon we’ll be moving to the AWS Marketplace for even easier deployment. When this is implemented, you will no longer need to clone an existing AMI.
Docker
Docker is our most common production deployment method. It is easy (three commands) and scalable (tuning passed on the command line to Docker run). While we recognize that Docker is still a relatively new deployment system for many users, it is dramatically gaining adoption. It is also where Percona is investing the bulk of our development efforts. We deploy PMM Server as two Docker containers: one for storing the data that persists across restarts/upgrades, and the other for running the actual PMM Server binaries (Grafana, Prometheus, consul, Orchestrator, QAN, etc.).
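As a sketch of what that two-container deployment looks like (the image tag 1.1.5, port mapping and volume paths follow the PMM documentation of the time and are examples; substitute the latest release and your preferred port):

```shell
# Pull the PMM Server image (tag is an example; use the latest release)
docker pull percona/pmm-server:1.1.5

# Create the data-only container that persists metrics and configuration
# across restarts/upgrades
docker create \
  -v /opt/prometheus/data \
  -v /opt/consul-data \
  -v /var/lib/mysql \
  -v /var/lib/grafana \
  --name pmm-data \
  percona/pmm-server:1.1.5 /bin/true

# Run the PMM Server container itself, reusing the volumes from pmm-data
docker run -d \
  -p 80:80 \
  --volumes-from pmm-data \
  --name pmm-server \
  --restart always \
  percona/pmm-server:1.1.5
```

Tuning (memory limits, feature toggles and so on) is passed as -e environment variables on the docker run line.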
Where are the RPM/DEB/tar.gz packages?!
A common question I hear is why doesn’t Percona support binary-based installation?
We hear you: RPM/DEB/tar.gz methods are commonly used today for many of your own applications. Percona is striving for simplicity in our deployment of PMM Server, and we spend considerable development and QA effort validating the specific versions of Grafana/Prometheus/QAN/consul/Orchestrator all work seamlessly together.
Percona wants to ensure OS compatibility and long-term support of PMM, and to do binary distribution “right” means it can quickly get expensive to build and QA across all the popular Linux distributions available today. We’re in no way against binary distributions. For example, see our list of the nine supported platforms for which we provide bug fix support.
Percona decided to focus our development efforts on stability and features, and less on the number of supported platforms. Hence the hyper-focus on Docker. We don’t have any current plans to move to a binary deployment method for PMM, but we are always open to hearing your feedback. If there is considerable interest, then please let me know via the comments below. We’ll take these thoughts into consideration for PMM planning in the second half of 2017.
Which other methods of installing Percona Monitoring and Management would you like to see?

MySQL Triggers and Updatable Views

In this post we’ll review how MySQL triggers can affect queries.
Contrary to what the documentation states, we can activate triggers even while operating on views:
https://dev.mysql.com/doc/refman/5.7/en/triggers.html
Important: MySQL triggers activate only for changes made to tables by SQL statements. They do not activate for changes in views, nor by changes to tables made by APIs that do not transmit SQL statements to the MySQL server.
Be on the lookout if you use and depend on triggers: this documented limitation does not hold for updatable views! We have reported a documentation bug for this, but figured it wouldn’t hurt to mention it in a short blog post, too. The link to the bug in question is here:
https://bugs.mysql.com/bug.php?id=86575
Now, we’ll go through the steps we took to test this, and their outputs. These are for the latest MySQL version (5.7.18), but the same results were seen in 5.5.54, 5.6.35, and MariaDB 10.2.5.
First, we create the schema, tables and view needed:
mysql> CREATE SCHEMA view_test;
Query OK, 1 row affected (0.00 sec)
mysql> USE view_test;
Database changed
mysql> CREATE TABLE `main_table` (
   ->   `id` int(11) NOT NULL AUTO_INCREMENT,
   ->   `letters` varchar(64) DEFAULT NULL,
   ->   `numbers` int(11) NOT NULL,
   ->   `time` time NOT NULL,
   ->   PRIMARY KEY (`id`),
   ->   INDEX col_b (`letters`),
   ->   INDEX cols_c_d (`numbers`,`letters`)
   -> ) ENGINE=InnoDB  DEFAULT CHARSET=latin1;
Query OK, 0 rows affected (0.31 sec)
mysql> CREATE TABLE `table_trigger_control` (
   ->   `id` int(11),
   ->   `description` varchar(255)
   -> ) ENGINE=InnoDB  DEFAULT CHARSET=latin1;
Query OK, 0 rows affected (0.25 sec)
mysql> CREATE VIEW view_main_table AS SELECT * FROM main_table;
Query OK, 0 rows affected (0.02 sec)
Indexes are not really needed to prove the point, but were initially added to the tests for completeness. They make no difference in the results.
Then, we create the triggers for all possible combinations of [BEFORE|AFTER] and [INSERT|UPDATE|DELETE]. We will use the control table to have the triggers insert rows, so we can check if they were actually called by our queries.
mysql> CREATE TRIGGER trigger_before_insert BEFORE INSERT ON main_table FOR EACH ROW
   -> INSERT INTO table_trigger_control VALUES (NEW.id, 'BEFORE INSERT');
Query OK, 0 rows affected (0.05 sec)
mysql> CREATE TRIGGER trigger_after_insert AFTER INSERT ON main_table FOR EACH ROW
   -> INSERT INTO table_trigger_control VALUES (NEW.id, 'AFTER INSERT');
Query OK, 0 rows affected (0.05 sec)
mysql> CREATE TRIGGER trigger_before_update BEFORE UPDATE ON main_table FOR EACH ROW
   -> INSERT INTO table_trigger_control VALUES (NEW.id, 'BEFORE UPDATE');
Query OK, 0 rows affected (0.19 sec)
mysql> CREATE TRIGGER trigger_after_update AFTER UPDATE ON main_table FOR EACH ROW
   -> INSERT INTO table_trigger_control VALUES (NEW.id, 'AFTER UPDATE');
Query OK, 0 rows affected (0.05 sec)
mysql> CREATE TRIGGER trigger_before_delete BEFORE DELETE ON main_table FOR EACH ROW
   -> INSERT INTO table_trigger_control VALUES (OLD.id, 'BEFORE DELETE');
Query OK, 0 rows affected (0.18 sec)
mysql> CREATE TRIGGER trigger_after_delete AFTER DELETE ON main_table FOR EACH ROW
   -> INSERT INTO table_trigger_control VALUES (OLD.id, 'AFTER DELETE');
Query OK, 0 rows affected (0.05 sec)
As you can see, they will insert the ID of the row in question, and the combination of time/action appropriate for each one. Next, we will proceed in the following manner:

INSERT three rows in the main table
UPDATE the second
DELETE the third

The reasoning behind doing it against the base table is to check that the triggers are working correctly, and doing what we expect them to do.
mysql> INSERT INTO main_table VALUES (1, 'A', 10, time(NOW()));
Query OK, 1 row affected (0.01 sec)
mysql> INSERT INTO main_table VALUES (2, 'B', 20, time(NOW()));
Query OK, 1 row affected (0.14 sec)
mysql> INSERT INTO main_table VALUES (3, 'C', 30, time(NOW()));
Query OK, 1 row affected (0.17 sec)
mysql> UPDATE main_table SET letters = 'MOD' WHERE id = 2;
Query OK, 1 row affected (0.03 sec)
Rows matched: 1  Changed: 1  Warnings: 0
mysql> DELETE FROM main_table WHERE id = 3;
Query OK, 1 row affected (0.10 sec)
And we check our results:
mysql> SELECT * FROM main_table;
+----+---------+---------+----------+
| id | letters | numbers | time     |
+----+---------+---------+----------+
|  1 | A       |      10 | 15:19:14 |
|  2 | MOD     |      20 | 15:19:14 |
+----+---------+---------+----------+
2 rows in set (0.00 sec)
mysql> SELECT * FROM table_trigger_control;
+------+---------------+
| id   | description   |
+------+---------------+
|    1 | BEFORE INSERT |
|    1 | AFTER INSERT  |
|    2 | BEFORE INSERT |
|    2 | AFTER INSERT  |
|    3 | BEFORE INSERT |
|    3 | AFTER INSERT  |
|    2 | BEFORE UPDATE |
|    2 | AFTER UPDATE  |
|    3 | BEFORE DELETE |
|    3 | AFTER DELETE  |
+------+---------------+
10 rows in set (0.00 sec)
Everything is working as it should, so let’s move on with the tests that we really care about. We will again take the three steps mentioned above, but this time directly on the view.
mysql> INSERT INTO view_main_table VALUES (4, 'VIEW_D', 40, time(NOW()));
Query OK, 1 row affected (0.02 sec)
mysql> INSERT INTO view_main_table VALUES (5, 'VIEW_E', 50, time(NOW()));
Query OK, 1 row affected (0.01 sec)
mysql> INSERT INTO view_main_table VALUES (6, 'VIEW_F', 60, time(NOW()));
Query OK, 1 row affected (0.11 sec)
mysql> UPDATE view_main_table SET letters = 'VIEW_MOD' WHERE id = 5;
Query OK, 1 row affected (0.04 sec)
Rows matched: 1  Changed: 1  Warnings: 0
mysql> DELETE FROM view_main_table WHERE id = 6;
Query OK, 1 row affected (0.01 sec)
Query OK, 1 row affected (0.01 sec)
And we check our tables:
mysql> SELECT * FROM main_table;
+----+----------+---------+----------+
| id | letters  | numbers | time     |
+----+----------+---------+----------+
|  1 | A        |      10 | 15:19:14 |
|  2 | MOD      |      20 | 15:19:14 |
|  4 | VIEW_D   |      40 | 15:19:34 |
|  5 | VIEW_MOD |      50 | 15:19:34 |
+----+----------+---------+----------+
4 rows in set (0.00 sec)
mysql> SELECT * FROM view_main_table;
+----+----------+---------+----------+
| id | letters  | numbers | time     |
+----+----------+---------+----------+
|  1 | A        |      10 | 15:19:14 |
|  2 | MOD      |      20 | 15:19:14 |
|  4 | VIEW_D   |      40 | 15:19:34 |
|  5 | VIEW_MOD |      50 | 15:19:34 |
+----+----------+---------+----------+
4 rows in set (0.00 sec)
mysql> SELECT * FROM table_trigger_control;
+------+---------------+
| id   | description   |
+------+---------------+
|    1 | BEFORE INSERT |
|    1 | AFTER INSERT  |
|    2 | BEFORE INSERT |
|    2 | AFTER INSERT  |
|    3 | BEFORE INSERT |
|    3 | AFTER INSERT  |
|    2 | BEFORE UPDATE |
|    2 | AFTER UPDATE  |
|    3 | BEFORE DELETE |
|    3 | AFTER DELETE  |
|    4 | BEFORE INSERT |
|    4 | AFTER INSERT  |
|    5 | BEFORE INSERT |
|    5 | AFTER INSERT  |
|    6 | BEFORE INSERT |
|    6 | AFTER INSERT  |
|    5 | BEFORE UPDATE |
|    5 | AFTER UPDATE  |
|    6 | BEFORE DELETE |
|    6 | AFTER DELETE  |
+------+---------------+
20 rows in set (0.00 sec)
As seen in the results, all triggers were executed, even though the queries were run against the view. Since this was an updatable view, it worked. Conversely, if we try the same operations on a non-updatable view, they fail (we can force ALGORITHM = TEMPTABLE to test it).
mysql> CREATE ALGORITHM=TEMPTABLE VIEW view_main_table_temp AS SELECT * FROM main_table;
Query OK, 0 rows affected (0.03 sec)
mysql> INSERT INTO view_main_table_temp VALUES (7, 'VIEW_H', 70, time(NOW()));
ERROR 1471 (HY000): The target table view_main_table_temp of the INSERT is not insertable-into
mysql> UPDATE view_main_table_temp SET letters = 'VIEW_MOD' WHERE id = 5;
ERROR 1288 (HY000): The target table view_main_table_temp of the UPDATE is not updatable
mysql> DELETE FROM view_main_table_temp WHERE id = 5;
ERROR 1288 (HY000): The target table view_main_table_temp of the DELETE is not updatable
As mentioned before, MariaDB shows the same behavior. The difference, however, is that the MariaDB documentation is correct about the limitations, since it only states the following:
https://mariadb.com/kb/en/mariadb/trigger-limitations/
Triggers cannot operate on any tables in the mysql, information_schema or performance_schema database.
Corollary to the Discussion
It’s always good to thoroughly check the documentation, but it’s also necessary to test things and prove the documentation is showing the real case (bugs can be found everywhere, not just in the code :)).

Blog Poll: What Operating System Do You Run Your Production Database On?

In this post, we’ll use a blog poll to find out what operating system you use to run your production database servers.
As databases grow to meet more challenges and expanding application demands, they must try to get the maximum amount of performance out of available resources. How they work with an operating system can affect many variables, and help or hinder performance. The operating system you use for your database can impact consumable choices (such as hardware and memory). It can also impact your choice of database engine (or vice versa).
Please let us know what operating system you use to run your database. For this poll, we’re asking which operating system you use to actually run your production database server (not the base operating system).
If you’re running virtualized Linux on Windows, please select Linux as the operating system in use. Pick up to three that apply. Add any thoughts or other options in the comments section:
Note: There is a poll embedded within this post, please visit the site to participate in this post’s poll.
Thanks in advance for your responses – they will help the open source community determine how database environments are being deployed.

ProxySQL Admin Interface Is Not Your Typical MySQL Server!

In this blog post, I’ll look at how ProxySQL Admin behaves in some unusual and unexpected ways from a MySQL perspective.
ProxySQL allows you to connect to its admin interface using the MySQL protocol and use familiar tools, like the MySQL command line client, to manage its configuration as a set of configuration tables. This ability may trick you into thinking that you’re working with a stripped-down MySQL server – and expect it to behave like MySQL. 
It would be a mistake to think this! In fact, ProxySQL embeds an SQLite database to store its configuration. As such, it behaves much closer to SQLite!
Below, I’ll show you a few things that confused me at first. All of these are as of ProxySQL 1.3.6 (in case behavior changes in the future).
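One way to convince yourself of this: the configuration persisted under the “disk” database is a plain SQLite file, which you can open directly with the stock sqlite3 client (a sketch; stop ProxySQL first to avoid concurrent access, and note the path assumes the default location shown later by “show databases”):

```shell
# Inspect ProxySQL's on-disk configuration directly with SQLite
sqlite3 /var/lib/proxysql/proxysql.db '.tables'
sqlite3 /var/lib/proxysql/proxysql.db 'SELECT hostname, port FROM mysql_servers;'
```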
Fake support for the USE command
mysql> show databases;
+-----+---------+-------------------------------+
| seq | name    | file                          |
+-----+---------+-------------------------------+
| 0   | main    |                               |
| 2   | disk    | /var/lib/proxysql/proxysql.db |
| 3   | stats   |                               |
| 4   | monitor |                               |
+-----+---------+-------------------------------+
4 rows in set (0.00 sec)
mysql> select database();
+------------+
| DATABASE() |
+------------+
| admin      |
+------------+
1 row in set (0.00 sec)
mysql> use stats;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> select database();
+------------+
| DATABASE() |
+------------+
| admin      |
+------------+
1 row in set (0.00 sec)
mysql> use funkydatabase;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
So here we can see that:

There is a concept of multiple databases in the ProxySQL admin interface
The ProxySQL admin interface supports the select database(); function, which always returns the same value regardless of the database you tried to set. Typically it will be “admin” or “stats”, depending on which user you used to connect to the database.
You can use the “use” command to change the database – but it does not really change the database. Supporting this command is required, because many MySQL clients will not connect without it.

Invisible tables
mysql> show tables;
+--------------------------------------+
| tables                               |
+--------------------------------------+
| global_variables                     |
| mysql_collations                     |
| mysql_query_rules                    |
| mysql_replication_hostgroups         |
| mysql_servers                        |
| mysql_users                          |
| runtime_global_variables             |
| runtime_mysql_query_rules            |
| runtime_mysql_replication_hostgroups |
| runtime_mysql_servers                |
| runtime_mysql_users                  |
| runtime_scheduler                    |
| scheduler                            |
+--------------------------------------+
13 rows in set (0.00 sec)
mysql> show tables from stats;
+--------------------------------+
| tables                         |
+--------------------------------+
| global_variables               |
| stats_mysql_commands_counters  |
| stats_mysql_connection_pool    |
| stats_mysql_global             |
| stats_mysql_processlist        |
| stats_mysql_query_digest       |
| stats_mysql_query_digest_reset |
| stats_mysql_query_rules        |
+--------------------------------+
8 rows in set (0.00 sec)
mysql> select count(*) from stats_mysql_commands_counters;
+----------+
| count(*) |
+----------+
| 52       |
+----------+
1 row in set (0.00 sec)
We can query the list of tables in our default database (which can’t be changed), and we also get the list of tables in the “stats” database with very familiar MySQL syntax. But we can also query the “stats” tables directly without specifying the “stats” database, even though they are not shown in “show tables” for our current database.
Again this is SQLite behavior!
Strange Create Table syntax
mysql> show create table scheduler\G
*************************** 1. row ***************************
      table: scheduler
Create Table: CREATE TABLE scheduler (
   id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
   active INT CHECK (active IN (0,1)) NOT NULL DEFAULT 1,
   interval_ms INTEGER CHECK (interval_ms>=100 AND interval_ms<=100000000) NOT NULL,
   filename VARCHAR NOT NULL,
   arg1 VARCHAR,
   arg2 VARCHAR,
   arg3 VARCHAR,
   arg4 VARCHAR,
   arg5 VARCHAR,
   comment VARCHAR NOT NULL DEFAULT '')
1 row in set (0.00 sec)
If we look at the ProxySQL Admin interface table structure, we see it is not quite MySQL: it uses CHECK constraints and doesn’t specify a length for VARCHAR. This is because it is a SQLite table definition.
SHOW command nuances
The ProxySQL Admin interface supports SHOW PROCESSLIST and even SHOW FULL PROCESSLIST commands, but not all the commands match the MySQL server output:
mysql> show processlist;
+-----------+---------------+--------+-----------+---------+---------+--------+
| SessionID | user          | db     | hostgroup | command | time_ms | info   |
+-----------+---------------+--------+-----------+---------+---------+--------+
| 129       | proxysql_user | sbtest | 10        | Query   | 14      | COMMIT |
| 130       | proxysql_user | sbtest | 10        | Query   | 16      | COMMIT |
| 131       | proxysql_user | sbtest | 10        | Query   | 9       | COMMIT |
| 133       | proxysql_user | sbtest | 10        | Query   | 0       | COMMIT |
| 134       | proxysql_user | sbtest | 10        | Query   | 5       | COMMIT |
...
| 191       | proxysql_user | sbtest | 10        | Query   | 4       | COMMIT |
| 192       | proxysql_user | sbtest | 10        | Query   | 1       | COMMIT |
+-----------+---------------+--------+-----------+---------+---------+--------+
62 rows in set (0.01 sec)
SHOW VARIABLES works, as does SHOW GLOBAL VARIABLES, but not SHOW SESSION VARIABLES.
SHOW STATUS doesn’t work as expected:
mysql> show status;
ERROR 1045 (#2800): near “show”: syntax error
As you can see, while some typical MySQL commands and constructs work, others don’t. This is by design: ProxySQL implemented some of the commands to make it easy and familiar for MySQL users to navigate the ProxySQL interface. But don’t get fooled! It is not MySQL, and doesn’t always behave as you would expect.
You’ve been warned!

MySQL Encryption at Rest – Part 1 (LUKS)

In this first of a series of blog posts, we’ll look at MySQL encryption at rest.
At Percona, we work with a number of clients that require strong security measures for PCI, HIPAA and PHI compliance, where data managed by MySQL needs to be encrypted “at rest.” As with all things open source, there are several options for meeting the MySQL encryption at rest requirement. In this three-part series, we cover several popular options for encrypting data and present the various pros and cons of each solution. You may want to evaluate which parts of these tutorials work best for your situation before using them in production.
Part one of this series is implementing disk-level encryption using crypt+LUKS.
In MySQL 5.7, InnoDB has built-in encryption features. This solution has some cons, however. Specifically, InnoDB tablespace encryption doesn’t cover undo logs, redo logs or the main ibdata1 tablespace. Additionally, binary logs and slow query logs are not covered by InnoDB encryption.
Using crypt+LUKS, we can encrypt everything (data + logs) under one umbrella – provided that all files reside on the same disk. If you separate the various logs on to different partitions, you will have to repeat the tutorial below for each partition.
LUKS Tutorial
The Linux Unified Key Setup (LUKS) is the current standard for disk encryption. In the examples below, the block device /dev/sda4 on CentOS 7 is encrypted using a generated key, and then mounted as the default MySQL data directory at /var/lib/mysql.
WARNING! Loss of the key means complete loss of data! Be sure to have a backup of the key.
Install the necessary utilities:
# yum install cryptsetup

Creating, Formatting and Mounting an Encrypted Disk
The cryptsetup command initializes the volume and sets an initial key/passphrase. Please note that the key is not recoverable, so do not forget it. Take the time now to decide where you will securely store a copy of this key. LastPass Secure Notes are a good option, as they allow file attachments. This enhances our backup later on.
Create a passphrase for encryption. Choose something with high entropy (i.e., lots of randomness). Here are two options (pick one):
# openssl rand -base64 32
# date | md5 | rev | head -c 24 | md5 | tail -c 32
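A sketch of capturing the generated passphrase into a file with restrictive permissions, so it can be checksummed and archived (the file name is illustrative; move the result into your vault afterwards):

```shell
# Generate a 32-byte random passphrase, base64-encoded, readable only by us
umask 077
openssl rand -base64 32 > luks-passphrase.txt

# Sanity check: base64 of 32 random bytes is always 44 characters
[ "$(tr -d '\n' < luks-passphrase.txt | wc -c)" -eq 44 ] && echo "passphrase generated"
```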
Next, we need to initialize and format our partition for use with LUKS. Any mounted points using this block device must be unmounted beforehand.
WARNING! This command will delete ALL DATA ON THE DEVICE! BE SURE TO COMPLETE ANY BACKUPS BEFORE YOU RUN THIS!
# cryptsetup -c aes-xts-plain -v luksFormat /dev/sda4
You will be prompted for a passphrase. Provide the phrase you generated above. After you provide a passphrase, you now need to “open” the encrypted disk and provide a device mapper name (i.e., an alias). It can be anything, but for our purposes, we will call it “mysqldata”:
# cryptsetup luksOpen /dev/sda4 mysqldata
You will be prompted for the passphrase you used above. On success, you should see the device show up:
# ls /dev/mapper/
lrwxrwxrwx 1 root root 7 Jun 2 11:50 mysqldata -> ../dm-0
You can now format this encrypted block device and create a filesystem:
# mkfs.ext4 /dev/mapper/mysqldata
Now you can mount the encrypted block device you just formatted:
# mount /dev/mapper/mysqldata /var/lib/mysql
Unfortunately, you cannot add this to /etc/fstab to automount on a server reboot, since the key is needed to “open” the device. Please keep in mind that if your server ever reboots, MySQL will not start, since the data directory is unavailable until the device is opened and mounted (we will look at how to make this work using scripts in Part Two of this series).
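Until then, the manual sequence after a reboot is short (a sketch, using the device and alias names from this tutorial; the MySQL service name varies by distribution):

```shell
# Re-open the encrypted device (prompts for the passphrase), remount it,
# then start MySQL
cryptsetup luksOpen /dev/sda4 mysqldata
mount /dev/mapper/mysqldata /var/lib/mysql
systemctl start mysqld
```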
Creating a Backup of Encryption Information
The header of a LUKS block device contains information regarding the current encryption key(s). Should this ever get damaged, or if you need to recover because you forgot the new passphrase, you can restore this header information:
# cryptsetup luksHeaderBackup --header-backup-file ${HOSTNAME}_`date +%Y%m%d`_header.dat /dev/sda4
Go ahead and make a SHA1 of this file now to verify that it doesn’t get corrupted later on in storage:
# sha1sum ${HOSTNAME}_`date +%Y%m%d`_header.dat
GZip the header file. Store the SHA1 and the .gz file in a secure location (for example, attach it to the secure note created above). Now you have a backup of the key you used and a backup of the header which uses that key.
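The backup, checksum and compression steps above can be bundled into one small script (a sketch; file naming follows the convention used above):

```shell
# Back up the LUKS header, checksum it, then compress it for archival
BACKUP="${HOSTNAME}_$(date +%Y%m%d)_header.dat"
cryptsetup luksHeaderBackup --header-backup-file "$BACKUP" /dev/sda4
sha1sum "$BACKUP" > "$BACKUP.sha1"
gzip "$BACKUP"
# Store both $BACKUP.gz and $BACKUP.sha1 in your secure vault
```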
Unmounting and Closing a Disk
If you know you will be storing a disk, or just want to make sure the contents are not visible (i.e., mounted), you can unmount and “close” the encrypted device:
# umount /var/lib/mysql/
# cryptsetup luksClose mysqldata
In order to mount this device again, you must “open” it and provide one of the keys.
Rotating Keys (Adding / Removing Keys)
Various compliance and enforcement rules dictate how often you need to rotate keys. You cannot rotate or change a key in place. LUKS supports up to eight keys per device. You must first add a new key to any free slot (other than the slot occupied by the key you are trying to remove), and then remove the older key.
Take a look at the existing header information:
# cryptsetup luksDump /dev/sda4
LUKS header information for /dev/sda4
Version: 1
Cipher name: aes
Cipher mode: cbc-essiv:sha256
Hash spec: sha1
Payload offset: 4096
MK bits: 256
MK digest: 81 37 51 6c d5 c8 32 f1 7a 2d 47 7c 83 62 70 d9 f7 ce 5a 6e
MK salt: ae 4b e8 09 c8 7a 5d 89 b0 f0 da 85 7e ce 7b 7f
47 c7 ed 51 c1 71 bb b5 77 18 0d 9d e2 95 98 bf
MK iterations: 44500
UUID: 92ed3e8e-a9ac-4e59-afc3-39cc7c63e7f6
Key Slot 0: ENABLED
Iterations: 181059
Salt: 9c a9 f6 12 d2 a4 2a 3d a4 08 b2 32 b0 b4 20 3b
69 13 8d 36 99 47 42 9c d5 41 35 8c b3 d0 ff 0e
Key material offset: 8
AF stripes: 4000
Key Slot 1: DISABLED
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: DISABLED
Here we can see a key is currently occupying “Key Slot 0”. We can add a key to any DISABLED key slot. Let’s use slot #1:
# cryptsetup luksAddKey --key-slot 1 -v /dev/sda4
Enter any passphrase:
Key slot 0 unlocked.
Enter new passphrase for key slot:
Verify passphrase:
Command successful.
LUKS asks for “any” passphrase to authenticate us. Had there been keys in other slots, we could have used any one of them. As only one is currently saved, we have to use it. We can then add a new passphrase for slot 1.
Now that we have saved the new key in slot 1, we can remove the key in slot 0.
# cryptsetup luksKillSlot /dev/sda4 0
Enter any remaining LUKS passphrase:
No key available with this passphrase.
In the example above, the existing passphrase stored in slot 0 was used. This is not allowed. You cannot provide the passphrase for the same slot you are attempting to remove.
Repeat this command and provide the passphrase for slot 1, which was added above. We are now able to remove the passphrase stored in slot 0:
# cryptsetup luksKillSlot /dev/sda4 0
Enter any remaining LUKS passphrase:
# cryptsetup luksDump /dev/sda4
LUKS header information for /dev/sda4
Version: 1
Cipher name: aes
Cipher mode: cbc-essiv:sha256
Hash spec: sha1
Payload offset: 4096
MK bits: 256
MK digest: 81 37 51 6c d5 c8 32 f1 7a 2d 47 7c 83 62 70 d9 f7 ce 5a 6e
MK salt: ae 4b e8 09 c8 7a 5d 89 b0 f0 da 85 7e ce 7b 7f
47 c7 ed 51 c1 71 bb b5 77 18 0d 9d e2 95 98 bf
MK iterations: 44500
UUID: 92ed3e8e-a9ac-4e59-afc3-39cc7c63e7f6
Key Slot 0: DISABLED
Key Slot 1: ENABLED
Iterations: 229712
Salt: 5d 71 b2 3a 58 d7 f8 6a 36 4f 32 d1 23 1a df df
cd 2b 68 ee 18 f7 90 cf 58 32 37 b9 02 e1 42 d6
Key material offset: 264
AF stripes: 4000
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: DISABLED
After you change the passphrase, it’s a good idea to repeat the header dump steps we performed above and store the new passphrase in your vault.
Conclusion
Congratulations, you have now learned how to encrypt and mount a partition using LUKS! You can now use this mounted device just like any other. You can also restore a backup and start MySQL.
In Part Two, we will cover using InnoDB tablespace encryption.

Webinar June 7, 2017: MySQL In the Cloud – Migration, Best Practices, High Availability, Scaling

Join Percona’s CEO and Founder Peter Zaitsev as he presents MySQL In the Cloud: Migration, Best Practices, High Availability, Scaling on Wednesday, June 7, 2017, at 10 am PDT / 1:00 pm EDT (UTC-7).
Register Now
Businesses are moving many of the systems and processes they once owned to offsite “service” models: Platform as a Service (PaaS), Software as a Service (SaaS), Infrastructure as a Service (IaaS), etc. These services are usually referred to as being “in the cloud” – meaning that the infrastructure and management of the service in question are not maintained by the enterprise using the service.
When it comes to database environment and infrastructure, more and more enterprises are moving to MySQL in the cloud to manage this vital part of their business organization. We often refer to database services provided in the cloud as Database as a Service (DBaaS). The next question after deciding to move your database to the cloud is “How do I plan properly so as to avoid a disaster?”
Before moving to the cloud, it is important to carefully define your database needs, plan for the migration and understand what putting a solution into production entails. This webinar discusses the following subjects on moving to the cloud:

Public and private cloud
Migration to the cloud
Best practices
High availability
Scaling

Register for the webinar here.

Peter Zaitsev, Percona CEO and Founder
Peter Zaitsev co-founded Percona and assumed the role of CEO in 2006. As one of the foremost experts on MySQL strategy and optimization, Peter leveraged both his technical vision and entrepreneurial skills to grow Percona from a two-person shop to one of the most respected open source companies in the business. With over 150 professionals in 20+ countries, Peter’s venture now serves over 3000 customers – including the “who’s who” of internet giants, large enterprises and many exciting startups. Percona was named to the Inc. 5000 in 2013, 2014 and 2015.
Peter was an early employee at MySQL AB, eventually leading the company’s High Performance Group. A serial entrepreneur, Peter co-founded his first startup while attending Moscow State University where he majored in Computer Science. Peter is a co-author of High Performance MySQL: Optimization, Backups, and Replication, one of the most popular books on MySQL performance. Peter frequently speaks as an expert lecturer at MySQL and related conferences, and regularly posts on the Percona Database Performance Blog. Fortune and DZone often tap Peter as a contributor, and his recent ebook Practical MySQL Performance Optimization is one of percona.com’s most popular downloads.

Webinar May 31, 2017: Online MySQL Backups with Percona XtraBackup

Please join Percona’s solution engineer, Dimitri Vanoverbeke as he presents Online MySQL Backups with Percona XtraBackup on Wednesday, May 31, 2017 at 7:00 am PDT / 10:00 am EDT (UTC-7).
Register Now

Percona XtraBackup is a free, open source, complete online backup solution for all versions of Percona Server, MySQL® and MariaDB®. Percona XtraBackup provides:

Fast and reliable backups
Uninterrupted transaction processing during backups
Savings on disk space and network bandwidth with better compression
Automatic backup verification
Higher uptime due to faster restore time

This webinar will discuss the different features of Percona XtraBackup, including:

Full and incremental backups
Compression, streaming, and encryption of backups
Backing up to the cloud (swift)
Percona XtraDB Cluster / Galera Cluster
Percona Server specific features
MySQL 5.7 support
Tools that use Percona XtraBackup
Limitations

Register for the webinar here.

Dimitri Vanoverbeke, Solution Engineer

At the age of seven, Dimitri received his first computer. Since then he has felt addicted to anything with a digital pulse. Dimitri has been active in IT professionally since 2003, when he took various roles from internal system engineering to consulting. Prior to joining Percona, Dimitri worked as a consultant for a leading open source software consulting firm in Belgium. During his career, Dimitri has become familiar with a broad range of open source solutions and with the devops philosophy. Whenever he’s not glued to his computer screen, he enjoys traveling, cultural activities, basketball and the great outdoors.
 

Percona Server for MySQL 5.7.18-15 is Now Available

Percona announces the GA release of Percona Server for MySQL 5.7.18-15 on May 26, 2017. Download the latest version from the Percona web site or the Percona Software Repositories. You can also run Docker containers from the images in the Docker Hub repository.
Based on MySQL 5.7.18, including all the bug fixes in it, Percona Server for MySQL 5.7.18-15 is the current GA release in the Percona Server for MySQL 5.7 series. Percona provides completely open source and free software. Find release details in the 5.7.18-15 milestone at Launchpad.

Bugs Fixed:

The server would crash when querying a partitioned table with a single partition. Bug fixed #1657941 (upstream #76418).
Running a query on an InnoDB table with an ngram full-text parser and a LIMIT clause could lead to a server crash. Bug fixed #1679025 (upstream #85835).

The release notes for Percona Server for MySQL 5.7.18-15 are available in the online documentation. Please report any bugs on the Launchpad bug tracker.

How to Save and Load Docker Images to Offline Servers

In this post, we’ll see how to make Docker images available to servers that don’t have access to the Internet (i.e., machines where docker pull <image_name> does not work).
As a specific example, we will do this with the latest Percona Monitoring and Management Docker images, since we had requests for this from users and customers. With the following steps, you’ll be able to deploy PMM within your secure network, without access to the Internet. Additionally, the same steps can be used when you need to upgrade the containers’ version in future releases.
There are two ways in which we can do this:

the easy way, by using docker save and docker load, or
the not-so-easy way, by setting up our own registry

We’ll focus on the first option, since the second is more convoluted. If you need your own registry, you are probably solving a bigger problem than getting a single image past a firewall. Check out the Docker online docs if option two fits your situation better.
As of this writing, 1.1.3 is the latest PMM version, so this is what we’ll use in the example. An image name is composed of three parts:

user_account/ (note the ‘/’ at the end); or empty string (and no ‘/’) for the official Docker repo
image_name
:tag (note the ‘:’ at the beginning)

The PMM Docker images have the following syntax: percona/pmm-server:1.1.3, but you can change this in the following examples to whatever image name you want, and it will work just the same. Before moving on to the commands needed, let’s imagine that serverA is the machine that has access to the Internet and serverB is the machine behind the firewall.
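As a side note, these three parts can be teased apart with plain shell parameter expansion, which can be handy in deployment scripts. This is just an illustrative sketch (the variable names are invented for it, and it assumes the full account/name:tag form):

```shell
# Split an image reference into account, name, and tag (illustrative only;
# assumes the reference contains both a '/' and a ':').
ref="percona/pmm-server:1.1.3"
account="${ref%%/*}"   # "percona"
rest="${ref#*/}"       # "pmm-server:1.1.3"
name="${rest%%:*}"     # "pmm-server"
tag="${ref##*:}"       # "1.1.3"
echo "$account $name $tag"
```

For an official Docker repo image (no account and no `/`), you would need an extra check before the first expansion, since `${ref%%/*}` returns the whole string when there is no slash.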
The steps are simple enough. On serverA, get the image, and save it to a file:
serverA> docker pull percona/pmm-server:1.1.3
1.1.3: Pulling from percona/pmm-server
45a2e645736c: Pull complete
7a3c6f252004: Pull complete
2cc1d8878ff1: Pull complete
6c49ea4e9955: Pull complete
bc4630d3a194: Pull complete
75f0952c00bd: Pull complete
79d583a1689c: Pull complete
5a820193ac79: Pull complete
927a0614b164: Pull complete
Digest: sha256:5310b23066d00be418a7522c957b2da4155a63c3e7b08663327aef075674bc2e
Status: Downloaded newer image for percona/pmm-server:1.1.3
serverA> docker save percona/pmm-server:1.1.3 > ~/pmm-server_1.1.3.tar
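Image tars compress well, and docker load reads gzip-compressed archives directly, so you can optionally gzip the file before copying it over. The sketch below keeps the docker commands as comments (they need Docker on both machines, and the file names are just examples) and uses a stand-in file to show that the compress/restore round trip preserves the archive byte-for-byte:

```shell
# With Docker available, you would compress at save time on serverA:
#   docker save percona/pmm-server:1.1.3 | gzip > ~/pmm-server_1.1.3.tar.gz
# and on serverB load the gzipped archive directly, no gunzip step needed:
#   docker load < ~/pmm-server_1.1.3.tar.gz
# Stand-in round trip showing the archive survives compression intact:
printf 'stand-in for the image tar' > /tmp/pmm-demo.tar
gzip -c /tmp/pmm-demo.tar > /tmp/pmm-demo.tar.gz
gunzip -c /tmp/pmm-demo.tar.gz > /tmp/pmm-demo.restored
cmp /tmp/pmm-demo.tar /tmp/pmm-demo.restored && echo "archives match"
```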
Now, all you need to do is move the generated tar file to serverB (using scp or any other means), and execute the following:
serverB> docker load < ~/pmm-server_1.1.3.tar
serverB> docker images
REPOSITORY           TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
percona/pmm-server   1.1.3               acc9af2459a4        3 weeks ago         1.146 GB
Now you’ll be able to use the image as if you had run docker pull percona/pmm-server:1.1.3:
serverB> docker create … percona/pmm-server:1.1.3 /bin/true
301a9e89ee95886f497482038aa6601d6cb2e21c0532e1077fa44213ef597f38
serverB> docker run -d … percona/pmm-server:1.1.3
dbaffa80f62bc0b80239b922bbc746d828fbbeb212a638cfafea92b827141abb
serverB> curl http://localhost | grep "Percona Monitoring and Management"

                   <p>Percona Monitoring and Management (PMM) is a free and open-source
solution for managing and monitoring performance on MySQL and MongoDB, and provides
time-based analysis of performance to ensure that your data works as efficiently as
possible.</p>

Lastly, let me add the relevant documentation links, so you have them at hand, if needed:
https://www.percona.com/doc/percona-monitoring-and-management/deploy/server/docker.html
https://docs.docker.com/engine/reference/commandline/save/
https://docs.docker.com/engine/reference/commandline/load/
