Author: MySQL Performance Blog

Percona Toolkit 3.0.3 is Now Available

Percona announces the release of Percona Toolkit 3.0.3 on May 19, 2017.
Percona Toolkit is a collection of advanced command-line tools that perform a variety of MySQL and MongoDB server and system tasks too difficult or complex for DBAs to perform manually. Percona Toolkit, like all Percona software, is free and open source.
You can download Percona Toolkit packages from the website or install them from the official repositories.
This release includes the following changes:
New Features

Added the --skip-check-slave-lag option for pt-table-checksum, pt-online-schema-change, and pt-archiver. This option can be used to specify a list of servers on which to skip checking for slave lag (see the sketch after this list).

1642754: Added support for collecting replication slave information in pt-stalk.

PT-111: Added support for collecting information about variables from Performance Schema in pt-stalk. For more information, see 1642753.

PT-116: Added the --[no]use-insert-ignore option for pt-online-schema-change to force or prevent using IGNORE on INSERT statements. For more information, see 1545129.
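
A minimal usage sketch of the new --skip-check-slave-lag option (host names and credentials here are hypothetical; the option takes a DSN and is repeatable, once per slave to skip):

# hypothetical hosts and credentials, shown only to illustrate the option
pt-table-checksum \
  --skip-check-slave-lag h=slave2.example.com \
  --skip-check-slave-lag h=slave3.example.com \
  h=master.example.com,u=checksum_user,p=checksum_pass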

Bug Fixes

PT-115: Fixed OptionParser to accept repeatable DSNs.

PT-126: Fixed pt-online-schema-change to correctly parse comments. For more information, see 1592072.

PT-128: Fixed pt-stalk to include memory usage information. For more information, see 1510809.

PT-130: Fixed pt-mext to work with non-empty RSA public key. For more information, see 1587404.

PT-132: Fixed pt-online-schema-change to enable --no-drop-new-table when --no-swap-tables and --no-drop-triggers are used.

You can find release details in the release notes. Report bugs in Toolkit’s launchpad bug tracker.

Percona Live Open Source Database Conference 2017 Slides and Videos Available

The slides and videos from the Percona Live Open Source Database Conference 2017 are available for viewing and download. The videos and slides cover the keynotes, breakout sessions and MySQL and MongoDB 101 sessions.
To view slides, go to the Percona Live agenda, select the talk you want slides for from the schedule, and click through to the talk web page. The slides are available below the talk description. There is also a page with all the slides, searchable by topic, talk title, speaker, company or keywords.
To view videos, go to the Percona Live 2017 video page. The available videos are searchable by topic, talk title, speaker, company or keywords.
There are a few slides and videos outstanding due to unforeseen circumstances. However, we will upload those as they become available.
Some examples of videos and slide decks from the Percona Live conference:
MongoDB 101: Efficient CRUD Queries in MongoDB
Adamo Tonete, Senior Technical Engineer, Percona
Video: https://www.percona.com/live/17/content/efficient-crud-queries-mongodb
Slides: https://www.percona.com/live/17/sessions/efficient-crud-queries-mongodb
MySQL 101: Choosing a MySQL High Availability Solution
Marcos Albe, Principal Technical Services Engineer, Percona
Video: https://www.percona.com/live/17/content/choosing-mysql-high-availability-solution
Slides: https://www.percona.com/live/17/sessions/choosing-mysql-high-availability-solution
Breakout Session: Using the MySQL Document Store
Mike Zinner, Sr. Software Development Director, and Alfredo Kojima, Sr. Software Development Manager, Oracle
Video: https://www.percona.com/live/17/content/using-mysql-document-store
Slides: https://www.percona.com/live/17/sessions/using-mysql-document-store
Keynote: Continuent is Back! But What Does Continuent Do Anyway?
Eero Teerikorpi, Founder and CEO, and MC Brown, VP Products, Continuent
Video: https://www.percona.com/live/17/content/continuent-back-what-does-continuent-do-anyway
Slides: https://www.percona.com/live/17/sessions/continuent-back-what-does-continuent-do-anyway
Please let us know if you have any issues. Enjoy the videos!

Percona Live Europe 2017: Dublin, Ireland!
This year’s Percona Live Europe will take place September 25th-27th, 2017, in Dublin, Ireland. Put it on your calendar now! Information on speakers, talks, sponsorship and registration will be available in the coming months.
We have developed multiple sponsorship options to allow participation at a level that best meets your partnering needs. Our goal is to create a significant opportunity for our partners to interact with Percona customers, other partners and community members. Sponsorship opportunities are available for Percona Live Europe 2017.
Download a prospectus here.
We look forward to seeing you there!

Percona Server for MySQL 5.5.55-38.8 is Now Available

Percona announces the release of Percona Server for MySQL 5.5.55-38.8 on May 10, 2017. Based on MySQL 5.5.55, including all the bug fixes in it, Percona Server for MySQL 5.5.55-38.8 is now the current stable release in the 5.5 series.
Percona Server for MySQL is open-source and free. You can find release details in the 5.5.55-38.8 milestone on Launchpad. Downloads are available here and from the Percona Software Repositories.
New Features:

Percona Server 5.5 packages are now available for Ubuntu 17.04 (Zesty Zapus).

Bugs Fixed:

If bitmap write I/O errors happened in the background log tracking thread while a FLUSH CHANGED_PAGE_BITMAPS was executing concurrently, the server could crash. Bug fixed #1651656.
Querying TABLE_STATISTICS in combination with a stored function could lead to a server crash. Bug fixed #1659992.
Queries from the INNODB_CHANGED_PAGES table would needlessly read potentially incomplete bitmap data past the needed LSN range. Bug fixed #1625466.
It was not possible to configure basedir as a symlink. Bug fixed #1639735.

Other bugs fixed: #1688161, #1683456, #1670588 (upstream #84173), #1672389, #1675623, #1660243, #1677156, #1680061, #1680510 (upstream #85838), #1683993, #1684012, #1684025, and #1674281.
Find the release notes for Percona Server for MySQL 5.5.55-38.8 in our online documentation. Report bugs on the launchpad bug tracker.

How much disk space should I allocate for Percona Monitoring and Management?

I heard a frequent question at last week’s Percona Live conference regarding Percona Monitoring and Management (PMM): How much disk space should I allocate for PMM Server?
First, let’s review the three components of Percona Monitoring and Management that consume non-negligible disk space:

Prometheus data source for the time series metrics
Query Analytics (QAN), which uses Percona Server with XtraDB (Percona's enhanced version of the InnoDB storage engine)
Orchestrator, also backed by Percona Server with XtraDB

Of these, you'll find that Prometheus is generally your largest consumer of disk space. Prometheus hits a steady state of disk utilization once you reach the defined storage.local.retention period. If you deploy Percona Monitoring and Management 1.1.3 (the latest stable version), you'll be using a retention period of 30 days. "Steady state" in this case means you're not adding or removing nodes frequently, since each node comes with its own 1k-7k metrics to be scraped. Prometheus stores one time series per metric scraped, and automatically trims chunks (like InnoDB pages) from the tail of the time series once they exceed the retention period (so the disk requirement per static list of metrics remains "fixed" for the retention period).
However, if you’re in a dynamic environment with nodes being added and removed frequently, or you’re on the extreme end like these guys who re-deploy data centers every day, steady state for Prometheus may remain an elusive goal. The guidance you find below may help you establish at least a minimum disk provisioning threshold.
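
For instance, assuming a Docker-based PMM Server deployment, the retention period can be set when the container is created via the METRICS_RETENTION environment variable (a sketch; verify the variable name against the documentation for your PMM version):

# METRICS_RETENTION value assumed here; 720h = 30 days
docker run -d \
  -p 80:80 \
  --volumes-from pmm-data \
  --name pmm-server \
  -e METRICS_RETENTION=720h \
  percona/pmm-server:1.1.3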

QAN is based on a web application and uses Percona Server 5.7.17 as its datastore. The Percona QAN agent runs one instance per monitored MySQL server, and obtains queries from either the slow log or Performance Schema. It performs analysis locally to generate a list of unique queries and their corresponding metrics: min, max, avg, med, and p95. There are dimensions based on Tmp table, InnoDB, Query time, Lock time, etc. Check the schema for a full listing, as there are actually 149 columns on this table (SHOW CREATE TABLE pmm.query_class_metrics\G). While the table is wide, it isn't too long: PMM Demo is ~9mil rows, which is approximately 1 row per distinct query per minute per host.
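If you want to see where your own QAN space is going, a quick sketch (assuming the default pmm schema name shown above) is to ask information_schema for the largest tables:

-- top 5 tables in the pmm schema by on-disk footprint
SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024) AS size_mb,
       table_rows
FROM information_schema.tables
WHERE table_schema = 'pmm'
ORDER BY (data_length + index_length) DESC
LIMIT 5;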
Finally, there is Orchestrator. While the disk requirements for Orchestrator are not zero, they are certainly dwarfed by Prometheus and QAN.  As you’ll read below, Percona’s Orchestrator footprint is a measly ~250MB, which is a rounding error. I’d love to hear other experiences with Orchestrator and how large your InnoDB footprint is for a large or active cluster.
For comparison, here is the resource consumption from Percona’s PMM Demo site:

~47k time series

25 hosts, which is on average ~1,900 time series/host, some are +4k

8-day retention for metrics in Prometheus
Prometheus data is ~40GB

Which should not increase until we add more hosts, as this isn't a dynamic Kubernetes environment

QAN db is 6.5GB

We don’t currently prune records, so this will continue to grow
90% of space consumed is in query_class_metrics, which is ~9mil rows
Our first record is from ~September 2016, but most of the growth has occurred in the past three months
This is MySQL QAN only, the MongoDB nodes don’t write anything into QAN (yet… we’re working on QAN for MongoDB and hope to ship this quarter!!)

Orchestrator db is ~250MB

audit table is 97% of the space consumed, ~2mil rows

So back to the original question: How much space should I allocate for Percona Monitoring and Management Server? The favorite answer at Percona is “It Depends®,” and this case is no different. Using PMM Demo as our basis, 46GB / 25 hosts / 8 days = ~230MB/host/day or ~6.9GB/host/30 day retention period. For those of you running 50 instances in PMM, you should be provisioning ~400GB of disk.
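A back-of-the-envelope sketch of that calculation in shell (figures taken from the PMM Demo numbers above; substitute your own host count and retention):

# ~46GB observed / 25 hosts / 8 days of retention
echo "46 * 1024 / 25 / 8" | bc        # ~235 MB/host/day
# 30-day retention for 50 hosts, before headroom:
echo "scale=1; 0.23 * 30 * 50" | bc   # ~345 GB, so provision ~400GB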
Of course, your environment is likely to be different and directly related to what you do and don’t enable. For example, a fully verbose Percona Server 5.7.17 configuration file like this:
## PMM Enhanced options
long_query_time=0
log_slow_rate_limit=100
log_slow_rate_type=query
log_slow_verbosity=full
log_slow_admin_statements=ON
log_slow_slave_statements=ON
slow_query_log_always_write_time=1
slow_query_log_use_global_control=all
innodb_monitor_enable=all
userstat=1
query_response_time_stats=ON
performance_schema_instrument='%=on'
with none of the mysqld_exporter features disabled:
--disable-binlogstats
--disable-processlist
--disable-queryexamples
--disable-tablestats
--disable-userstats
can lead to an instance that has +4k metrics and will push you above 230MB/host/day. This is what the top ten metrics and hosts by time series count from the PMM Demo look like:

What does the future hold related to minimizing disk space consumption?

The PMM development team is working on the ability to purge a node’s data without access to the instance

Today you need to call pmm-admin purge from the instance itself, which becomes impossible if you've already terminated or decommissioned the instance! (See the sketch after this list.)

We are following Prometheus' efforts on the third-generation storage rewrite in Prometheus 2.0, which replaces the current engine (including its LevelDB-backed indices) wholesale
Again we are following Prometheus’ efforts on Remote Read / Remote Write so we can provide a longer-term storage model for users seeking > 30 days (another popular topic at PL2017)

Allows us to store less granular data (every 5s vs. every 1s)
Usage of Graphite, OpenTSDB, and InfluxDB as secondary data stores on the Remote end
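
For reference, the current instance-side purge workflow looks roughly like this (a sketch; the service arguments are examples, so check pmm-admin --help on your version):

# run on the monitored instance itself, before decommissioning it
pmm-admin purge mysql:metrics
pmm-admin purge linux:metrics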

I’d love to hear about your own experiences using Percona Monitoring and Management, and specifically the disk requirements you’ve faced! Please share them with us via the comments below, or feel free to drop me a line directly michael.coburn@percona.com. Thanks for reading!

Storing UUID and Generated Columns

A lot of things have been said about UUID, and storing UUID in an optimized way. Now that we have generated columns, we can store the decomposed information inside the UUID and merge it again with generated columns. This blog post demonstrates this process.
First, to establish a base case, I used a simple table with one char field, which I called uuid_char. I used this table with and without a primary key:
CREATE TABLE uuid_char (
uuid char(36) CHARACTER SET utf8 NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
CREATE TABLE uuid_char_pk (
uuid char(36) CHARACTER SET utf8 NOT NULL,
PRIMARY KEY (uuid)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
I performed the tests on a local VM over MySQL 5.7.17 for 30 seconds, with only two threads, because I wanted to just compare the executions:
sysbench \
--oltp-table-size=100000000 \
--test=/usr/share/doc/sysbench/tests/db/insert_uuid_generated_columns.uuid_char.lua \
--oltp-tables-count=4 \
--num-threads=2 \
--mysql-user=root \
--max-requests=5000000 \
--report-interval=5 \
--max-time=30 \
--mysql-db=generatedcolumn \
run
One pair of executions is with the UUID generated by sysbench, which simulates the UUID that comes from the app:
rs = db_query("INSERT INTO uuid_char (uuid) VALUES " .. string.format("('%s')", c_val))
An alternative execution is for when the UUID is generated by the MySQL function uuid():
rs = db_query("INSERT INTO uuid_char (uuid) VALUES (uuid())")
Below we can see the results: 

The inserts are faster without a PK (but only by 5%), and using the uuid() function doesn’t impact performance.
Now, let’s see the alternative method, which is decomposing the UUID. It has four main information sets:

Timestamp: this is a number with seven decimals.
MAC: the MAC address of the device that creates the UUID
Unique value: this value guards against duplicates (collision scenarios)
UUID version: this will always be “1”, as we are going to use version 1. If you are going to use another version, you will need to review the functions that I used.

The structure of the table that we’ll use is:
CREATE TABLE `uuid_generated` (
`timestamp` decimal(18,7) unsigned NOT NULL,
`mac` bigint(20) unsigned NOT NULL,
`temp_uniq` binary(2) NOT NULL,
PRIMARY KEY (`timestamp`,`mac`,`temp_uniq`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
To understand how a UUID is unwrapped, I used this stored procedure (which receives a UUID and inserts it into the table):
CREATE PROCEDURE ins_generated_uuid (uuid char(38))
begin
set @hex_timestamp = concat(substring(uuid, 16, 3), substring(uuid, 10, 4), substring(uuid, 1, 8));
set @timestamp = concat(conv(@hex_timestamp,16,10) div 10000000 - (141427 * 24 * 60 * 60),'.',right(conv(@hex_timestamp,16,10),7));
set @mac = conv(right(uuid,12),16,10);
set @temp_uniq = unhex(substring(uuid,20,4));
insert into uuid_generated (timestamp,mac,temp_uniq) values (@timestamp,@mac,@temp_uniq);
end ;;
Explanation:

@hex_timestamp is a temporary variable that collects the timestamp in hexadecimal format from the different sections of the UUID
@timestamp transforms the hexadecimal timestamp to a decimal number
@mac pulls the last number in the UUID (a MAC address) so we can store it as a bigint
@temp_uniq is a value that preserves uniqueness, which is why we store it as binary and place it at the end of the primary key
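
A quick usage sketch of the procedure (feeding it MySQL's own uuid() output; remember the procedure was created under a ;; delimiter):

-- insert a freshly generated version-1 UUID, then look at the decomposed row
call ins_generated_uuid(uuid());
select timestamp, mac, hex(temp_uniq) from uuid_generated;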

If I want to get the UUID back, I can use these two generated columns:
`hex_timestamp` char(40) GENERATED ALWAYS AS (conv(((`timestamp` * 10000000) + (((141427 * 24) * 60) * 600000000)),10,16)) VIRTUAL,
`uuid_generated` char(38) GENERATED ALWAYS AS (concat(right(`hex_timestamp`,8),'-',substr(`hex_timestamp`,4,4),'-1',left(`hex_timestamp`,3),'-',convert(hex(`temp_uniq`) using utf8),'-',lpad(conv(`mac`,10,16),12,'0'))) VIRTUAL,
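To check the round trip on the full table defined below (uuid_generated_char_plus), a sketch comparing the stored and rebuilt values, assuming rows were inserted with the uuid column populated:

-- roundtrip_ok should be 1 for every row
select uuid, uuid_generated,
       uuid = uuid_generated as roundtrip_ok
from uuid_generated_char_plus limit 5;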
We performed tests over five scenarios:

Without the generated columns, the insert used data generated dynamically
Same as before, but we added a char field that stores the UUID
With the char field, and adding the generated column
We used the stored procedure detailed before to insert the data into the table
We also tested the performance using triggers

The difference between the Base and the previous table structure with Primary Keys is very small. So, the new basic structure has no impact on performance.
We see that Base and +Char Field have the same performance. So leaving a char field has no performance impact (it just uses more disk space).
Using generated columns impacts performance. This is expected, as the generated values must be computed and type-checked before the row is inserted.
Finally, the use of triggers and stored procedures has the same impact on performance.
These are the three table structures:
CREATE TABLE `uuid_generated` (
`timestamp` decimal(18,7) unsigned NOT NULL,
`mac` bigint(20) unsigned NOT NULL,
`temp_uniq` binary(2) NOT NULL,
PRIMARY KEY (`timestamp`,`mac`,`temp_uniq`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
CREATE TABLE `uuid_generated_char` (
`timestamp` decimal(18,7) unsigned NOT NULL,
`mac` bigint(20) unsigned NOT NULL,
`temp_uniq` binary(2) NOT NULL,
`uuid` char(38) DEFAULT NULL,
PRIMARY KEY (`timestamp`,`mac`,`temp_uniq`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
CREATE TABLE `uuid_generated_char_plus` (
`timestamp` decimal(18,7) unsigned NOT NULL,
`mac` bigint(20) unsigned NOT NULL,
`temp_uniq` binary(2) NOT NULL,
`uuid` char(38) DEFAULT NULL,
`hex_timestamp` char(40) GENERATED ALWAYS AS (conv(((`timestamp` * 10000000) + (((141427 * 24) * 60) * 600000000)),10,16)) VIRTUAL,
`uuid_generated` char(38) GENERATED ALWAYS AS (concat(right(`hex_timestamp`,8),'-',substr(`hex_timestamp`,4,4),'-1',left(`hex_timestamp`,3),'-',convert(hex(`temp_uniq`) using utf8),'-',lpad(conv(`mac`,10,16),12,'0'))) VIRTUAL,
PRIMARY KEY (`timestamp`,`mac`,`temp_uniq`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
And this is the trigger:
DROP TRIGGER IF EXISTS ins_uuid_generated;
delimiter ;;
CREATE TRIGGER ins_uuid_generated BEFORE INSERT ON uuid_generated_char
FOR EACH ROW
begin
set @hex_timestamp = concat(substring(NEW.uuid, 16, 3), substring(NEW.uuid, 10, 4), substring(NEW.uuid, 1, 8));
set NEW.timestamp = concat(conv(@hex_timestamp,16,10) div 10000000 - (141427 * 24 * 60 * 60),'.',right(conv(@hex_timestamp,16,10),7));
set NEW.mac = conv(right(NEW.uuid,12),16,10);
set NEW.temp_uniq = unhex(substring(NEW.uuid,20,4));
end ;;
delimiter ;

Conclusions
Decomposing the UUID is an alternative way to store it in an ordered fashion, but it won't speed up inserts. It does make it simpler to execute queries over a range of dates, and to look at the rows for a particular device, as you can filter on the MAC (it is recommended to add an index for it). Generated columns give you the ability to rebuild the UUID as a single string.

Percona Live 2017: Day One Keynotes

Welcome to the first day of the Percona Live Open Source Database Conference 2017, and the first set of Percona Live keynotes!
It's a beautiful day in Santa Clara, but we wouldn't know it, because we're inside the Hyatt Regency Convention Center listening to various rock stars in the open source database community talk about open source technologies. Day one of the conference kicked off with four keynote talks, all of which discussed issues and technologies that are addressed by open source solutions:
Percona Welcoming Keynote
Peter Zaitsev (Percona)
Peter Zaitsev, CEO of Percona, welcomed everyone to Percona Live Open Source Database Conference 2017 and discussed the history of Percona Live. Percona Live has grown significantly over the years, changing from just a MySQL conference into an open source database technology conference. This change has mirrored the growth and acceptance of open source technologies in what traditionally were commercial marketplaces.
Continuent is back! But what does Continuent do Anyway?
Eero Teerikorpi, MC Brown (Continuent)
Eero Teerikorpi from Continuent discussed how Continuent as a company has developed over the years (from startup, to acquisition by VMware, and back to a separate entity) around its Tungsten database replication and clustering product. Eero explained how Continuent's swan logo represents the long relationship it has had with its customers, and how its Tungsten product is an outstanding replication solution. Eero asked select Continuent customers to tell everyone how they use its multi-site/multi-master and advanced replication solutions, and to explain how Tungsten helped them solve database replication issues.
Open Source Database Ecosystem
Peter Zaitsev (Percona), Colin Charles (Percona), Dmitry Andreev (Yandex), Justin Teller (Facebook), Tal Levy (Elastic Search), Björn Rabenstein (SoundCloud Ltd.), Paul Dix (InfluxData), Michael J. Freedman (TimescaleDB)
For our third keynote, we had a panel of speakers from a variety of open source companies: Yandex, Facebook, Elastic Search, SoundCloud, InfluxData, TimescaleDB and, of course, Percona. The topic was the explosion of time series data. Percona sees time series databases as a trend of 2017, hence the idea of having quick five-minute lightning talks from stellar projects. With the increasing availability of IoT devices and the data they provide, time series data is more and more important in the database landscape. All of these companies provide solutions that anyone interested in capturing, monitoring, and analyzing time series data could use.
SQLite: The Most Important Software Component That Many People Have Never Heard Of
Richard Hipp (SQLite.org)
The final keynote this morning was a discussion with the creator of SQLite, one of the most widely used database engines. Rather than using a client-server model, SQLite is embedded in the application. There are more instances of SQLite running today than all other database engines combined. This talk reviewed the features, capabilities, limitations, and usage patterns of SQLite, and asked why you are not using SQLite more yourself.
All the keynotes today highlighted the many different aspects of the open source database community that come together to solve database challenges. Percona Live runs through Thursday 4/27. Check out tomorrow’s keynotes here, as well as the numerous breakout sessions with top open source database experts.

Percona XtraDB Cluster Transaction Replay Anomaly

In this blog post, we’ll look at a transaction replay anomaly in Percona XtraDB Cluster.
Introduction
Percona XtraDB Cluster/Galera replays a transaction if its data is non-conflicting but the transaction happens to hold conflicting locks.
Anomaly
Let’s understand this with an example:

Let’s assume a two-node cluster (node-1 and node-2)
Base table “t” is created as follows:

create database test;
use test;
create table t (i int, c char(20), primary key pk(i)) engine=innodb;
insert into t values (1, 'abc'), (2, 'abc'), (4, 'abc');
select * from t;
mysql> select * from t;
+---+------+
| i | c    |
+---+------+
| 1 | abc  |
| 2 | abc  |
| 4 | abc  |
+---+------+

node-2 starts running a transaction (trx-2):

trx-2: update t set c = 'pqr';

node-2 creates a write-set and is just about to replicate it. At the same time, node-1 executes the following transaction (trx-1) and is the first to add it to the group channel (before node-2 adds trx-2):

trx-1: insert into t values (3, 'a');

trx-1 is replicated to node-2, which proceeds with the apply action. Since there is a lock conflict (though no certification conflict), node-2's local transaction (trx-2) is aborted and scheduled for replay.
trx-1 causes the addition of (3, 'a'), and then node-2's transaction is REPLAYed.
The REPLAY is done using the pre-created write-set that only modifies the existing entries (1, 2, 4).

End-result:
mysql> select * from t;
+---+------+
| i | c    |
+---+------+
| 1 | pqr  |
| 2 | pqr  |
| 3 | a    |
| 4 | pqr  |
+---+------+

At first, nothing looks wrong. If you look closely, however, the REPLAYed transaction "update t set c = 'pqr'" is the last to commit. But its effect is not fully visible, as there is still a row (3, 'a') that has 'a' instead of 'pqr'.

| mysql-bin.000003 |  792 | Gtid        | 2 |  857 | SET @@SESSION.GTID_NEXT= '6706fa1f-e3df-ee18-6621-c4e0bae533bd:4' |
| mysql-bin.000003 |  857 | Query       | 2 |  925 | BEGIN |
| mysql-bin.000003 |  925 | Table_map   | 2 |  972 | table_id: 219 (test.t) |
| mysql-bin.000003 |  972 | Write_rows  | 2 | 1014 | table_id: 219 flags: STMT_END_F |
| mysql-bin.000003 | 1014 | Xid         | 2 | 1045 | COMMIT /* xid=4 */ |
| mysql-bin.000003 | 1045 | Gtid        | 3 | 1110 | SET @@SESSION.GTID_NEXT= '6706fa1f-e3df-ee18-6621-c4e0bae533bd:5' |
| mysql-bin.000003 | 1110 | Query       | 3 | 1187 | BEGIN |
| mysql-bin.000003 | 1187 | Table_map   | 3 | 1234 | table_id: 219 (test.t) |
| mysql-bin.000003 | 1234 | Update_rows | 3 | 1324 | table_id: 219 flags: STMT_END_F |
| mysql-bin.000003 | 1324 | Xid         | 3 | 1355 | COMMIT /* xid=5 */ |
+------------------+------+-------------+---+------+--------------------------------------------------------------------+
21 rows in set (0.00 sec)

We have used a simple char string, but if there were a constraint here (say, that c must equal some value X after the UPDATE completes), then the constraint would be violated even though the application reports the UPDATE as a success.
It is interesting to note what happens on node-1:

node-1 applies the local transaction (trx-1) and then gets the replicated write-set from node-2 (trx-2), which has changes only for rows (1, 2, 4). Data consistency between the nodes is therefore not compromised.

BEWARE: Increasing fc_limit can affect SELECT latency

In this blog post, we’ll look at how increasing the fc_limit can affect SELECT latency.
Introduction
Recent Percona XtraDB Cluster optimizations have exposed fc_limit contention. It was always there, but was never exposed as the Commit Monitor contention was more significant. As it happens with any optimization, once we solve the bigger contention issues, smaller contention issues start popping up. We have seen this pattern in InnoDB, and Percona XtraDB Cluster is no exception. In fact, it is good because it tells us that we are on the right track.
If you haven’t yet checked the performance blogs, then please visit here and here.
What is FC_LIMIT?
Percona XtraDB Cluster has the concept of Flow Control. If any member of the cluster (not garbd) is unable to match the apply speed with the incoming replicated write-set speed, then a queue builds up. If this queue crosses some threshold (dictated by gcs.fc_limit), then flow control kicks in. Flow control causes members of the cluster to temporarily halt or slow down so that the slower node can catch up.
The user can, of course, disable this by setting wsrep_desync=1 on the slower node, but make sure you understand the effect of doing so. Unless you have a good reason, you should avoid setting it.
mysql> show status like 'wsrep_flow_control_interval';
+-----------------------------+------------+
| Variable_name               | Value      |
+-----------------------------+------------+
| wsrep_flow_control_interval | [ 16, 16 ] |
+-----------------------------+------------+
1 row in set (0.01 sec)

Increasing fc_limit
Until recently, the default fc_limit was 16 (starting with Percona XtraDB Cluster 5.7.17-29.20, the default is 100). This worked until now, since Percona XtraDB Cluster failed to scale and rarely hit the limit of 16. With new optimizations, Percona XtraDB Cluster nodes can process more write-sets in a given time period, and thereby can replicate more write-sets (anywhere in the range of three to ten times). Of course, the replicating/slave nodes are also performing at a higher speed. But depending on the slave threads, it is easy to start hitting this limit.
So what is the solution?

Increase fc_limit from 16 to something really big. Say 1600.

Is this correct?
YES and NO.
Why YES?

If you don’t care about the freshness of data on the replicated nodes, then increasing the limit to a higher value is not an issue. Say setting it to 10K means that the replicating node is holding 10K write-sets to replicate, and a SELECT fired during this time will not view changes from these 10K write-sets.
But if you insist on having fresh data, then Percona XtraDB Cluster has a solution for this (set wsrep_sync_wait=7).
Setting wsrep_sync_wait places the SELECT request in a queue that is serviced only after the write-sets already replicated at the point the SELECT was fired have been applied. If the queue has 8K write-sets, then the SELECT is placed at position 8K+1 and gets serviced only when all those 8K write-sets are done. This dramatically increases SELECT latency and can cause monitoring alarms to fire. (The setting can at least be scoped to a session, as sketched below.)
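A sketch of enabling the freshness check for a single session only, so that the latency penalty is limited to the reads that need it:

-- on the replicating node, only for this session:
SET SESSION wsrep_sync_wait = 7;
SELECT sum(k) FROM sbtest1 WHERE id > 5000 AND id < 50000;
SET SESSION wsrep_sync_wait = 0;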

Why NO?

For the reason mentioned above, we feel it is not a good idea to increase the fc_limit beyond some value unless you don’t care about data freshness and in turn don’t care to set wsrep_sync_wait.
We did a small experiment with the latest Percona XtraDB Cluster release to understand the effects.

- Started a 2-node cluster.
- Fired a 64-thread workload on node-1 of the cluster.
- node-2 acts as a replicating slave without any active workload.
- Set wsrep_sync_wait=7 on node-2 to ensure data freshness.
Using default fc_limit (= 16)
-----------------------------
mysql> select sum(k) from sbtest1 where id > 5000 and id < 50000;
+-------------+
| sum(k)      |
+-------------+
| 22499552612 |
+-------------+
1 row in set (0.03 sec)
Increasing it from 16 -> 1600
-----------------------------
mysql> set global wsrep_provider_options="gcs.fc_limit=1600";
Query OK, 0 rows affected (0.00 sec)
mysql> select sum(k) from sbtest1 where id > 5000 and id < 50000;
+-------------+
| sum(k)      |
+-------------+
| 22499552612 |
+-------------+
1 row in set (0.46 sec)
That is a whopping 15x increase in SELECT latency.
Increasing it even further (1600 -> 25000)
------------------------------------------
mysql> set global wsrep_provider_options="gcs.fc_limit=25000";
Query OK, 0 rows affected (0.00 sec)
mysql> select sum(k) from sbtest1 where id > 5000 and id < 50000;
+-------------+
| sum(k)      |
+-------------+
| 22499552612 |
+-------------+
1 row in set (7.07 sec)
Note: wsrep_sync_wait=7 enforces the check for INSERT/UPDATE/DELETE as well as SELECT. We highlighted the SELECT example, as that is the most concerning at first glance, but latency for the other statements also increases for the same reasons.
Conclusion
Let’s conclude with the following observation:

Avoid increasing fc_limit to an insanely high value as it can affect SELECT latency (if you are running a SELECT session with wsrep_sync_wait=7 for data freshness).

Percona Server for MySQL in Docker Swarm with Secrets

This quick post demonstrates using Percona Server for MySQL in Docker Swarm with some new authentication provisioning practices.
Some small changes to the startup script for the Percona-Server container image allow us to specify a file that contains the password value to set as our root user's secret. "Why do we need this functionality?" I hear you cry. When we use an environment variable, it's not terribly hard to locate the value someone has set as their database root password. Environment variables are not well suited for sensitive data, and we preach against leaving important passwords in easy-to-reach places, so moving towards something more secure while retaining usability is desirable. I'll detail the current methods, the problems, and finish off with Docker Secrets, which in my opinion is the right direction to be heading.
Environment Variables
I’ll elaborate on the main reason why we would want to change from the default given behavior. In the documentation for using the MySQL/Percona and MariaDB containers, we are invited to start containers with an environment variable to control what the instance’s root password is set as upon startup. Let’s demonstrate with the latest official Percona-Server image from Percona’s repository of images on the Docker Hub registry:
moore@chom:~$ docker pull percona/percona-server:latest
latest: Pulling from percona/percona-server
e12c678537ae: Already exists
65ab4b835640: Pull complete
f63269a127d1: Pull complete
757a4fef28b8: Pull complete
b0cb547a5105: Pull complete
4214179ba9ea: Pull complete
155dafd2fd9c: Pull complete
848020b1da10: Pull complete
771687fe7e8b: Pull complete
Digest: sha256:f3197cac76cccd40c3525891ce16b0e9f6d650ccef76e993ed7a22654dc05b73
Status: Downloaded newer image for percona/percona-server:latest
Then start a container:
moore@chom:~$ docker run -d \
--name percona-server-1 \
-e MYSQL_ROOT_PASSWORD='secret' \
percona/percona-server
d08f299a872f1408c142b58bc2ce8e59004acfdb26dca93d71f5e9367b4f2a57
moore@chom:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d08f299a872f percona/percona-server “/entrypoint.sh ” 32 seconds ago Up 31 seconds 3306/tcp percona-server-1
Looks good, eh? Let’s inspect this container a little closer to reveal why this method is flawed:
moore@chom:~$ docker inspect --format '{{ index (index .Config.Env) 0}}' percona-server-1
MYSQL_ROOT_PASSWORD=secret
*facepalm*
We don’t want the root password exposed here, not really. If we wanted to use this method in docker-compose files, we would also be storing passwords inline, which isn’t considered a secure practice.
Environment File
Another approach is to use an environment file. This is simply a file that we can provide to docker run or docker-compose in order to instantiate the environment variables within the container. It's a construct for convenience. So, just to illustrate that we have the same problem, the next example uses the mechanism of an environment file for our database container:
moore@chom:~$ echo 'MYSQL_ROOT_PASSWORD=secret' > /tmp/ps.env
moore@chom:~$ docker run -d --name percona-server-2 --env-file=/tmp/ps.env percona/percona-server
d5105d044673bd5912e0e29c2f56fa37c5f174d9d2a4811ceaba284092837c84
moore@chom:~$ docker inspect --format '{{ index (index .Config.Env) 0}}' percona-server-2
MYSQL_ROOT_PASSWORD=secret
NOTE: Shortly after starting, this container failed because we didn't provide MySQL root password options.
While we’re not specifying it in our docker run command or our docker-compose.yml file, the password value remains on our filesystem within the environment file. Again, not ideal.
Password File
Using a password file obscures the password from the inspect output. Let's roll through the steps we would use to leverage this new option. With our new Percona-Server image, we're going to start a container, but first let's create an arbitrary file containing our desired password:
moore@chom:~$ docker:cloud> echo "secret" > /tmp/mysql_pwd_file
Now start a container where we’re going to bind mount the file, and use our new environment variable to point to it:
moore@chom:~$ docker run -v /tmp/mysql_pwd_file:/tmp/mysqlpwd --name percona-secret \
-e MYSQL_ROOT_PASSWORD_FILE=/tmp/mysqlpwd percona/percona-server:latest
With the same inspect command, let’s show that there’s no snooping on our password value:
moore@chom:~$ docker inspect --format '{{ index (index .Config.Env) 0}}' percona-secret
MYSQL_ROOT_PASSWORD_FILE=/tmp/mysqlpwd
We are only revealing the path where the password was read from within the container. For eagle-eyed readers: this file was just bind mounted in the docker run command, and it's still on the host's filesystem.
moore@chom:~$ cat /tmp/mysql_pwd_file
secret
moore@chom:~$ docker exec percona-secret cat /tmp/mysqlpwd
secret
Not perfect, because we need to have that file available on all of our Docker hosts, but it works and we’re closer to a more robust solution.
Docker Secrets
The main reason for the new environment variable is to leverage the Docker Secrets feature. Available since Docker version 1.13 (17.03 is now GA), Docker Secrets is only available in the Docker Swarm workflow. If you're not already working with Docker Swarm mode, I can't recommend it enough. It's part of Docker Engine, simple to get started with, and intuitive; since 1.13 it is compatible with docker-compose files. You don't need a cluster of hardware, either: it's entirely valid to use Docker Swarm on a single node. This allows you to test on your local environment with ease.
I won’t waste pixels explaining what’s already well documented in official channels, but in summary: Docker secrets is a new feature that allows us to keep sensitive information out of source code and configuration files. Secrets are stored in the Raft log which is encrypted and replicated throughout the Docker Swarm cluster. The protection and distribution come for free out of the box, which is a fantastic feature if you ask me.
So, let’s create a Docker Secret. Please note that I’ve moved to my Docker Swarm installation for this next part:
moore@chom:~$ docker:cloud> docker info | egrep -i 'swarm|version'
Server Version: 17.03.0-ce
Swarm: active
Operating as a swarm manager, we have the ability to create a new secret to serve as our root user's password:
moore@chom:~$ docker:cloud> echo "{secret_string}" | docker secret create mysql_root_password -
ugd8dx0kae9hbyt4opbolukgi
We can list all of our existing secrets:
moore@chom:~$ docker:cloud> docker secret ls
ID NAME CREATED UPDATED
ugd8dx0kae9hbyt4opbolukgi mysql_root_password Less than a second ago Less than a second ago
Now that our secret has been created, it's obscured from us; we are unable to see its value.
moore@chom:~$ docker secret inspect mysql_root_password
[
    {
        "ID": "ugd8dx0kae9hbyt4opbolukgi",
        "Version": {
            "Index": 905780
        },
        "CreatedAt": "2017-04-11T23:33:08.118037434Z",
        "UpdatedAt": "2017-04-11T23:33:08.118037434Z",
        "Spec": {
            "Name": "mysql_root_password"
        }
    }
]
Now we can use our secret to set our authentication for the MySQL instance by doing the following:
moore@chom:~$ docker service create \
--name percona-secret \
--secret mysql_root_password \
-e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql_root_password \
percona/percona-server:latest
You can see that instead of docker run, I've issued the swarm equivalent, docker service create, which starts a new Percona-Server container in the scope of my Swarm workflow. I'm also using the --secret option to tell Docker to mount my secret in the container, where it gets mounted as a file under the path /run/secrets/{secret_name}. The final point here: I'm passing MYSQL_ROOT_PASSWORD_FILE=/path/to/secret as an environment variable to let the startup script know where to find the file with my secret value for the root password. Once the startup routine has completed and the container has started successfully, I can connect to my container to test that the password was set correctly:
moore@chom:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
397bdf9b75f9 percona/percona-server “/entrypoint.sh ” 46 seconds ago Up 44 seconds 3306/tcp percona-secret.1.9xvbneset9363dr5xv4fqqxua
moore@chom:~$ docker exec -ti 397bdf9b75f9 bash
mysql@397bdf9b75f9:/$ cat /run/secrets/mysql_root_password
{secret_string}
mysql@397bdf9b75f9:/$ mysql -uroot -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.7.17-11 Percona Server (GPL), Release '11', Revision 'f60191c'
Copyright (c) 2009-2016 Percona LLC and/or its affiliates
Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
The secret can be shared with any container where it's necessary, simply by telling Docker to use the secret when instantiating the container. For example, if I wanted to start an application container such as a WordPress instance, I could use a secret object to share credentials to the data source safely and consistently, as sketched below.
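A sketch of wiring the same secret into a WordPress service (assuming the official wordpress image, which reads _FILE variants of its environment variables; the service and host names here are illustrative):

moore@chom:~$ docker service create \
--name wordpress \
--secret mysql_root_password \
-e WORDPRESS_DB_HOST=percona-secret \
-e WORDPRESS_DB_USER=root \
-e WORDPRESS_DB_PASSWORD_FILE=/run/secrets/mysql_root_password \
wordpress:latest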
This method is totally viable for other forms of sensitive data. For example, I can generate SSL certificates and use Docker secrets to add them to my containers for encrypted replication or forcing secure logins from remote clients. I’m still thinking of all the possible use cases for this option and no doubt will have some more to share with you in the near future.
Please share your comments, suggestions and corrections in the comments below. Thank you for reading.

Simplified Percona XtraDB Cluster SSL Configuration

In this blog post, we'll look at a feature recently added to Percona XtraDB Cluster 5.7.16 that makes it easier to configure Percona XtraDB Cluster SSL for all related communications. It uses mode "encrypt=4" and configures SSL for both IST/Galera communications and SST communications using the same SSL files. "encrypt=4" is a new encryption mode added in Percona XtraDB Cluster 5.7.16 (we'll cover it in a later blog post).
If this option is used, this will override all other Galera/SST SSL-related file options. This is to ensure that a consistent configuration is applied.
Using this option also means that the Galera/SST communications are using the same keys as client connections.
Example
This example shows how to startup a cluster using this option. We will use the default SSL files created by the bootstrap node. Basically, there are two steps:

Set pxc_encrypt_cluster_traffic=ON on all nodes
Ensure that all nodes share the same SSL files

Step 1: Configuration (on all nodes)
We enable the pxc_encrypt_cluster_traffic option in the configuration files on all nodes. The default value of this option is "OFF", so we enable it here:
[mysqld]
pxc_encrypt_cluster_traffic=ON

Step 2: Start up the bootstrap node
After initializing and starting up the bootstrap node, the datadir will contain the necessary data files. Here is some SSL-related log output:
[Note] Auto generated SSL certificates are placed in data directory.
[Warning] CA certificate ca.pem is self signed.
[Note] Auto generated RSA key files are placed in data directory.
The required files are ca.pem, server-cert.pem and server-key.pem, which are the Certificate Authority (CA) file, the server certificate and the server private key, respectively.
Step 3: Copy the SSL files to all other nodes
Galera views the cluster as a set of homogeneous nodes, so the same configuration is expected on all nodes. Therefore, we have to copy the CA file, the server’s certificate and the server’s private key. By default, MySQL names these: ca.pem, server-cert.pem, and server-key.pem, respectively.
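A sketch of the copy step, assuming passwordless SSH between nodes and the default datadir (adjust paths and host names to your installation):

# from the bootstrap node, copy the three files to each joiner node:
scp /var/lib/mysql/ca.pem /var/lib/mysql/server-cert.pem /var/lib/mysql/server-key.pem \
    node2:/var/lib/mysql/
# ensure the mysql user owns them on the target:
ssh node2 "chown mysql:mysql /var/lib/mysql/ca.pem /var/lib/mysql/server-cert.pem /var/lib/mysql/server-key.pem"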
Step 4: Start up the other nodes
This is some log output showing that the SSL certificate files have been found. The other nodes should be using the files that were created on the bootstrap node.
[Note] Found ca.pem, server-cert.pem and server-key.pem in data directory. Trying to enable SSL support using them.
[Note] Skipping generation of SSL certificates as certificate files are present in data directory.
[Warning] CA certificate ca.pem is self signed.
[Note] Skipping generation of RSA key pair as key files are present in data directory.
This is some log output (with log_error_verbosity=3) showing the SST reporting on the configuration used:
WSREP_SST: [DEBUG] pxc_encrypt_cluster_traffic is enabled, using PXC auto-ssl configuration
WSREP_SST: [DEBUG] with encrypt=4 ssl_ca=/my/data//ca.pem ssl_cert=/my/data//server-cert.pem ssl_key=/my/data//server-key.pem

Customization
The “ssl-ca”, “ssl-cert”, and “ssl-key” options in the “[mysqld]” section can be used to specify the location of the SSL files. If these are not specified, then the datadir is searched (using the default names of “ca.pem”, “server-cert.pem” and “server-key.pem”).
[mysqld]
pxc_encrypt_cluster_traffic=ON
ssl-ca=/path/to/ca.pem
ssl-cert=/path/to/server-cert.pem
ssl-key=/path/to/server-key.pem
If you want to implement this yourself, the equivalent configuration file options are:
[mysqld]
wsrep_provider_options="socket.ssl_key=server-key.pem;socket.ssl_cert=server-cert.pem;socket.ssl_ca=ca.pem"
[sst]
encrypt=4
ssl-ca=ca.pem
ssl-cert=server-cert.pem
ssl-key=server-key.pem

How it works

Determine the location of the SSL files

Uses the values if explicitly specified (via the “ssl-ca”, “ssl-cert” and “ssl-key” options in the “[mysqld]” section)
If the SSL file options are not specified, we look in the data directory for files named “ca.pem”, “server-cert.pem” and “server-key.pem” for the CA file, the server certificate, and the server key, respectively.

Modify the configuration

Overrides the values for socket.ssl_ca, socket.ssl_cert, and socket.ssl_key in wsrep_provider_options in the “[mysqld]” section.
Sets “encrypt=4” in the “[sst]” section.
Overrides the values for ssl-ca, ssl-cert and ssl-key in the “[sst]” section.

This is not a dynamic setting, and is only available on startup.
