Month: January 2018

Percona Monitoring and Management 1.7.0 (PMM) Is Now Available

Percona announces the release of Percona Monitoring and Management 1.7.0. Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring MySQL and MongoDB performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently […]

Aurora Hash Join Optimization (with a Gentle Reminder on Lab Features)

The Aurora hash join feature for relational databases has been around for a while now. But unlike MySQL's Block Nested Loop algorithm, the Aurora hash join only caters to a specific number of use cases. When used properly with the optimizer, it can provide great benefits for certain workloads. Below we'll see a brief example of a quick win.
This new feature is available in Aurora lab mode version 1.16. Because this is a lab feature, it's important to test your queries before upgrading, especially if you are looking to scale up to the new R4 instances before the Super Bowl, to avoid hitting the same problem I discuss below.
When lab mode is enabled and hash_join is ON, you can verify the optimizer feature from the optimizer_switch variable:
mysql> SELECT @@aurora_version, @@aurora_lab_mode, @@optimizer_switch\G
*************************** 1. row ***************************
  @@aurora_version: 1.16
 @@aurora_lab_mode: 1
@@optimizer_switch: index_merge=on,…,hash_join=on,hash_join_cost_based=on
Hash joins work well when joining large result sets because – unlike block nested loop in the same query – the optimizer scans the larger table and matches it against the hashed smaller table instead of the other way around. Consider the tables and query below:
| tbl      | rows     |
| branches | 55143    |
| users    | 103949   |
| history  | 27168887 |
SELECT COUNT(*)
FROM branches b
INNER JOIN users u ON (b.u_id = u.u_id)
INNER JOIN history h ON (u.u_id = h.u_id);
With hash joins enabled, we can see from the Extra column in the EXPLAIN output how it builds the join conditions:
mysql> EXPLAIN
    -> SELECT COUNT(*)
    -> FROM branches b
    -> INNER JOIN users u ON (b.u_id = u.u_id)
    -> INNER JOIN history h ON (u.u_id = h.u_id);
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
| 1 | SIMPLE | u | index | PRIMARY | PRIMARY | 4 | NULL | 103342 | Using index |
| 1 | SIMPLE | h | ALL | NULL | NULL | NULL | NULL | 24619023 | Using join buffer (Hash Join Outer table h) |
| 1 | SIMPLE | b | index | user_id | user_id | 4 | NULL | 54129 | Using index; Using join buffer (Hash Join Inner table b) |
Without hash joins, it's an (almost) straightforward product of all three tables:
mysql> SET optimizer_switch='hash_join=off';
Query OK, 0 rows affected (0.02 sec)
mysql> EXPLAIN
    -> SELECT COUNT(*)
    -> FROM branches b
    -> INNER JOIN users u ON (b.u_id = u.u_id)
    -> INNER JOIN history h ON (u.u_id = h.u_id);
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
| 1 | SIMPLE | h | ALL | NULL | NULL | NULL | NULL | 24619023 | NULL |
| 1 | SIMPLE | u | eq_ref | PRIMARY | PRIMARY | 4 | percona.h.u_id | 1 | Using index |
| 1 | SIMPLE | b | ref | user_id | user_id | 4 | percona.h.u_id | 7 | Using index |
Now, the execution times, first with hash joins enabled:
mysql> SELECT COUNT(*)
    -> FROM branches b
    -> INNER JOIN users u ON (b.u_id = u.u_id)
    -> INNER JOIN history h ON (u.u_id = h.u_id);
| COUNT(*) |
| 128815553 |
1 row in set (1 min 6.95 sec)
mysql> SET optimizer_switch='hash_join=off';
Query OK, 0 rows affected (0.01 sec)
mysql> SELECT COUNT(*)
    -> FROM branches b
    -> INNER JOIN users u ON (b.u_id = u.u_id)
    -> INNER JOIN history h ON (u.u_id = h.u_id);
| COUNT(*) |
| 128815553 |
1 row in set (2 min 28.27 sec)
Clearly with this optimization enabled, we have more than a 50% gain from the example query.
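The strategy behind this gain can be sketched in a few lines of Python. This is an illustration of the classic hash-join algorithm, not Aurora's actual implementation: hash the smaller input once, then probe the hash while scanning the larger input a single time. The sample rows are invented.

```python
def hash_join(small, large, small_key, large_key):
    # Build phase: hash the smaller relation on its join key.
    buckets = {}
    for row in small:
        buckets.setdefault(row[small_key], []).append(row)
    # Probe phase: scan the larger relation once, matching against the hash,
    # instead of re-scanning one side per row as a nested loop would.
    for row in large:
        for match in buckets.get(row[large_key], []):
            yield {**match, **row}

users = [{"u_id": 1, "name": "ann"}, {"u_id": 2, "name": "bob"}]
history = [{"u_id": 1, "event": "login"}, {"u_id": 1, "event": "logout"},
           {"u_id": 3, "event": "login"}]

joined = list(hash_join(users, history, "u_id", "u_id"))
print(len(joined))  # two history rows match u_id=1
```

The build cost is paid once on the small side, which is why the optimizer wants the smaller table hashed and the larger one scanned, not the other way around.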
Now, while this type of query might be rare, most of us know to avoid really large JOINs, as they do not scale. But at some point, you may find queries where the optimizer decides to take advantage of the feature. Here is an excerpt from an actual production query I recently worked on, showing the good execution plan versus the one using a hash join.
The two EXPLAIN outputs differ only in one row: without a hash join, the optimizer uses an index and the query executes normally. With hash joins enabled, the optimizer thought it was better to use one instead:

*************************** 3. row ***************************
id: 1
select_type: SIMPLE
table: t
type: eq_ref
possible_keys: PRIMARY,r_type_id_ix,r_id_r_type_id_dt_ix
key: PRIMARY
key_len: 4
ref: db.x.p_id
rows: 1
Extra: Using where

*************************** 3. row ***************************
id: 1
select_type: SIMPLE
table: t
type: index
possible_keys: PRIMARY,r_type_id_ix,r_id_r_type_id_dt_ix
key: r_id_r_type_id_dt_ix
key_len: 18
ref: NULL
rows: 715568233
Extra: Using where; Using index; Using join buffer (Hash Join Inner table t)

Needless to say, it caused problems. Unfortunately, a bug exists on Aurora 1.16 where hash joins cannot be turned off selectively from the parameter group (the feature is enabled by default there). If you try, you get the error “Error saving: Invalid parameter value: hash_join=off for: optimizer_switch”. The only way to disable the feature is to turn off lab mode, which requires an instance restart. An alternative is to simply issue SET optimizer_switch='hash_join=off'; from the application, especially if you rely on some of the other lab mode features in Aurora.
To summarize, the new hash join feature is a great addition. But as it’s a lab feature, be careful when upgrading!

Choosing the best indexes for MySQL query optimization

Many of our users, developers and database administrators, keep asking our team about EverSQL’s indexing recommendations algorithm.
So, we decided to write about it.
This tutorial won't detail all the internals of the algorithm, but rather tries to lay down the most important aspects of indexing in simple terms.
Also, and most importantly, we’ll present practical examples for properly indexing your tables and queries by relying on a set of rules, rather than on guessing.
Our focus in this tutorial is on MySQL, MariaDB and PerconaDB databases. This information may be relevant for other database vendors as well, but in some cases it may not apply.
Which indexes should I create for my SQL query?
As a general rule of thumb, you can follow these steps for building compound indexes, when trying to optimize your SQL query:

Start by listing all tables used in your query. Create a separate list for each of the subqueries in your query.
So if you have one SELECT query with 2 SELECT subqueries inside, you should have 3 lists, one for each of them, containing the tables referenced in them.
At the end of this process, you will potentially add a list of columns for each of these tables, in each of the query lists.
The left-most columns in any of your indexes should match the columns in the query's equality comparisons (e.g., age = 25).
You can add several columns, as long as all of them are compared to a constant with an equality operator.
Then, you should choose a single column which will be the ‘range column’. MySQL only supports one range column in each index.
Therefore, you should look at all the comparisons with a range operator (>, >=, <, <=, IN(), BETWEEN, LIKE) and choose the one that will filter the most rows.
Add that column as the next column in your index for that table.
You can get some more information here about the reasons for adding the equality columns before range columns (the slide is written by a team member of MySQL’s optimizer team).
If no range columns exist in the query, you can add the columns from the GROUP BY clause.
If no range columns exist in the query and no GROUP BY clause, you can add the columns from the ORDER BY clause.
In some cases, it makes sense to also create a separate index that holds the ORDER BY clause's columns, as MySQL sometimes chooses to use it. Please note though that for this to happen, the index should contain all columns from the ORDER BY clause, in the same order and with the same direction (ASC / DESC). This doesn't guarantee that the database's optimizer will pick this index rather than the WHERE compound index, but it's worth a try.
At the end, add relevant columns from the SELECT clause, which might allow MySQL to use the index as a covering index. A covering index is an index that contains all columns from both the filtering and selection clauses of the query. Such an index allows the database to run the query solely from the index, without having to access the table. In many cases this approach is significantly faster.

Let’s look at an example to clarify:
SELECT id, first_name, last_name, age FROM employees WHERE first_name = 'John' AND last_name = 'Brack' AND age > 25 ORDER BY age ASC;
For this query, we’ll start with adding the columns first_name and last_name, which are compared with an equality operator. Then, we’ll add the age column which is compared with a range condition. No need to have the ORDER BY clause indexed here, as the age column is already in the index. Last but not least, we’ll add id from the SELECT clause to the index to have a covering index.
So to index this query properly, you should add the index: employees (first_name, last_name, age, id).
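You can observe the covering-index effect in any engine that exposes its query plan. Here is a small sketch using SQLite's EXPLAIN QUERY PLAN from Python; the index-design principle is the same, although MySQL's EXPLAIN output looks different, and the table content here is invented (in SQLite, the INTEGER PRIMARY KEY id is the rowid, so it is implicitly covered by the index):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, "
            "first_name TEXT, last_name TEXT, age INTEGER)")
# Equality columns first, then the single range column, per the rules above.
con.execute("CREATE INDEX idx_emp ON employees (first_name, last_name, age)")
con.execute("INSERT INTO employees VALUES (1, 'John', 'Brack', 30)")

plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT id, first_name, last_name, age FROM employees "
    "WHERE first_name = 'John' AND last_name = 'Brack' AND age > 25 "
    "ORDER BY age ASC").fetchall()
# The plan detail should report a SEARCH using a COVERING INDEX,
# with no separate sort step, since the index also provides the order.
print(plan[0][-1])
```

If you drop the index, the same query plan degrades to a full table scan, which is exactly the difference the compound index is meant to eliminate.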
The above is a very simplified pseudo-algorithm that will allow you to build simple indexes for rather simple SQL queries.
If you’re looking for a way to automate this process, while also adding the benefit of a proprietary indexing algorithm and query optimization, you can try out EverSQL Query Optimizer which does all the heavy lifting for you.
What not to do when indexing (or writing SQL queries)?
We gathered some of the most common mistakes we see programmers and database administrators do when writing queries and indexing their tables.
Indexing each and every column in the table separately
In most cases, MySQL won’t be able to use more than one index for each table in the query.
Therefore, when creating a separate index for each column in the table, the database is bound to perform only one of the search operations using an index, and the rest of them will be significantly slower, as the database can’t use an index to execute them.
We recommend using compound indexes (explained later in this article) rather than single-column indexes.
The OR operator in filtering conditions
Consider this query: SELECT a, b FROM tbl WHERE a = 3 OR b = 8.
In many cases, MySQL won’t be able to use an index to apply an OR condition, and as a result, this query is not index-able.
Therefore, we recommend avoiding such OR conditions and, where possible, splitting the query into two parts combined with a UNION DISTINCT (or even better, UNION ALL, if you know there won't be any duplicate results).
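The rewrite can be sketched as follows, again using SQLite from Python for illustration (the table and data are invented): each branch of the UNION can use its own single-column index, and the result set is the same as with the OR form.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl (a INTEGER, b INTEGER)")
# One single-column index per branch of the rewritten query.
con.execute("CREATE INDEX idx_a ON tbl (a)")
con.execute("CREATE INDEX idx_b ON tbl (b)")
con.executemany("INSERT INTO tbl VALUES (?, ?)",
                [(3, 1), (5, 8), (3, 8), (7, 7)])

or_rows = con.execute(
    "SELECT a, b FROM tbl WHERE a = 3 OR b = 8 ORDER BY a, b").fetchall()
# UNION (i.e., UNION DISTINCT) removes the row matched by both branches.
union_rows = con.execute(
    "SELECT a, b FROM tbl WHERE a = 3 "
    "UNION "
    "SELECT a, b FROM tbl WHERE b = 8 ORDER BY a, b").fetchall()
print(or_rows == union_rows)
```

Note that UNION ALL would keep the duplicate row (3, 8) here, since it matches both branches; that is why UNION ALL is only safe when the branches cannot overlap.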
The order of columns in an index is important
Let’s say I hand you my contacts phone book which is ordered by the contact’s first name and ask you to count how many people are there named “John” in the book. You’ll grab the book in both hands and say “no problem”. You will navigate to the page that holds all names starting with John, and start counting from there.
Now, let's say I change the assignment and hand you a phone book that is ordered by the contact's last name, but still ask you to count all contacts with the first name "John". How would you approach that? Well, the database scratches its head in this situation as well.
Now let's look at an SQL query to demonstrate the same behavior with the MySQL optimizer: SELECT first_name, last_name FROM contacts WHERE first_name = 'John';
Having the index contacts (first_name, last_name) is ideal here, because the index starts with our filtering condition and ends with another column in the SELECT clause.
But having the reverse index contacts (last_name, first_name) is rather useless, as the database can't use the index for filtering: the column we need is second in the index, not first.
The conclusion from this example is that the order of columns in an index is rather important.
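The phone-book analogy is easy to reproduce against a query planner. A SQLite sketch of the general principle (data invented, MySQL's EXPLAIN output differs): with the index leading on first_name, the planner reports a SEARCH on the index rather than a full scan.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE contacts (first_name TEXT, last_name TEXT)")
# Index leading with the column we filter on.
con.execute("CREATE INDEX idx_fn_ln ON contacts (first_name, last_name)")
con.execute("INSERT INTO contacts VALUES ('John', 'Brack')")

plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT first_name, last_name "
    "FROM contacts WHERE first_name = 'John'").fetchall()
# Expect a SEARCH using idx_fn_ln, not a full table SCAN.
print(plan[0][-1])
```

If the only index were (last_name, first_name), the filter on first_name could not seek into it, and the planner would have to scan instead.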
Adding redundant indexes
Indexes are magnificent when trying to optimize your SQL queries and they can improve performance significantly.
But they come with a downside as well. Each index you create must be kept updated and in sync when changes occur in your database. So for each INSERT / UPDATE / DELETE, all relevant indexes must be updated. This update can take some time, especially with large tables and indexes.
Therefore, do not create indexes unless you know you’ll need them.
Also, we highly recommend analyzing your database once in a while, searching for any redundant indexes that can be removed.

MySQL Connector/Java 8.0.9-rc has been released

Dear MySQL users,
MySQL Connector/Java 8.0.9-rc is the first Release Candidate
of the 8.0 branch of MySQL Connector/J, providing an insight into
upcoming features. It is suitable for use with MySQL Server versions
5.5, 5.6, 5.7, and 8.0. It supports the Java Database Connectivity
(JDBC) 4.2 API.
This release includes the following new features and changes, also
described in more detail on
MySQL Connectors and other MySQL client tools and applications now
synchronize the first digit of their version number with the (highest)
MySQL server version they support.
This change makes it easy and intuitive to decide which client version
to use for which server version.
As always, we recommend that you check the “CHANGES” file in the download
archive to be aware of changes in behavior that might affect your application.
To download MySQL Connector/Java 8.0.9-rc, see the “Development
Releases” tab at
Changes in MySQL Connector/J 8.0.9 (2018-01-30, Release
Functionality Added or Changed
* X DevAPI: In the process of refining the definition of
the X DevAPI to cover the most relevant usage scenarios,
the following API components have been removed from the X
DevAPI implementation for Connector/J:
+ Components that support DDLs for views, including
the createView(), dropView(), and modifyView()
methods.
+ Components that support DDLs for tables, including
the createTable(), dropTable(), and modifyTable()
methods.
+ Components that support session configurations,
including the SessionConfig object, the
PersistenceHandler interface, the PasswordHandler
interface, and the SessionConfigManager class.
* X DevAPI: Added the setSavepoint(), rollbackTo(), and
releaseSavepoint() methods to the Session interface to
support the SAVEPOINT
statements. See MySQL Connector/J X DevAPI Reference
( for more
* X DevAPI: A new patch() function has been added to the
ModifyStatement interface. The function accepts a
JSON-like object describing document changes and applies
them to documents matched by the modify() filter. See
MySQL Connector/J X DevAPI Reference
( for more
* X DevAPI: The createIndex() method for the Collection
interface now has a new syntax. See MySQL Connector/J X
DevAPI Reference
( for more
* X DevAPI: Added the following methods for single-document
operations in the X DevAPI:
+ replaceOne()
+ addOrReplaceOne()
+ getOne()
+ removeOne()
See MySQL Connector/J X DevAPI Reference
( for more
* X DevAPI: Setters and getters methods have been added for
the configuration properties with the MysqlDataSource,
MysqlXADataSource, and MysqlConnectionPoolDataSource
* X DevAPI: The connection property enabledTLSProtocols can
now be used to select the allowed TLS versions for an X
Protocol connection to the server.
* Connector/J now supports the new caching_sha2_password
authentication plugin, which is the default
authentication plugin for MySQL 8.0.4 and later (see
Caching SHA-2 Pluggable Authentication
gable-authentication.html) for details).
To authenticate accounts with the caching_sha2_password
plugin, either a secure connection to the server using SSL
or an unencrypted connection that supports password exchange
using an RSA key pair (enabled by setting one or both of the
connection properties allowPublicKeyRetrieval and
serverRSAPublicKeyFile) must be used.
Because earlier versions of Connector/J 8.0 do not
support the caching_sha2_password authentication plugin
and therefore will not be able to connect to accounts
that authenticate with the new plugin (which might
include the root account created by default during a new
installation of a MySQL 8.0 Server), it is highly
recommended that you upgrade now to Connector/J 8.0.9, to
help ensure that your applications continue to work
smoothly with the latest MySQL 8.0 Server.
* Connector/J now takes advantage of the MySQL Server 8.0
data dictionary by making the connection property
useInformationSchema true by default; this makes
Connector/J, by default, access the data dictionary more
efficiently by querying tables in the INFORMATION_SCHEMA.
See INFORMATION_SCHEMA and Data Dictionary Integration
for details. Users can still set useInformationSchema to false,
but for MySQL 8.0.3 and later, some data dictionary queries might
then fail, due to deprecations of older data dictionary features.
* In the past, query texts were always passed as strings to
QueryInterceptor methods, even if the texts were not
actually used by them. Now, only suppliers for the texts
are passed, and the texts are only extracted by get()
calls on the suppliers.
Bugs Fixed
* The connection property nullNamePatternMatchesAll, when
set to false (which was the default value), caused some
DatabaseMetaData methods to throw an error when a null
search string was used with them. The behavior was not
compliant with the JDBC specification, which requires
that a search criterion be ignored when a null search
string is used for it. The connection property has now
been removed from Connector/J 8.0. (Bug #26846249, Bug
* Trying to print the query in a PreparedStatement using
the toString() method after it has been closed resulted
in an exception (No operations allowed after statement
closed) being thrown. (Bug #26748909)
* When working with MySQL Server 8.0, an update or delete
statement for a CONCUR_UPDATABLE ResultSet failed when
the ResultSet’s primary keys included a boolean column
and the character set used was not latin1. (Bug
* Connector/J failed to recognize a server greeting error
it received during a handshake with the server and parsed
the error message as a normal greeting packet, causing an
ArrayIndexOutOfBoundsException to be thrown. (Bug
On Behalf of the MySQL/Oracle Release Engineering Team,
Hery Ramilison

MySQL Cluster 7.6.4 is out

MySQL Cluster 7.6.4 DMR is out. This new version contains a number of goodies.
1) The local checkpoint algorithm has been rewritten. The new checkpointing is designed to scale to at least 16 TBytes of DataMemory sizes. Checkpoints will be much faster, this decr…

How To Achieve PCI Compliance for MySQL & MariaDB with ClusterControl – The Replay

Watch and listen to Laurent Blume, Unix Systems Engineer & PCI Specialist and Vinay Joosery, CEO at Severalnines, as they discuss all there is to know about how to achieve PCI compliance for MySQL & MariaDB with ClusterControl in the replay of our latest webinar.

The Payment Card Industry Data Security Standard (PCI-DSS) is a set of technical and operational requirements defined by the PCI Security Standards Council (PCI SSC) to protect cardholder data. These standards apply to all entities that store, process or transmit cardholder data – with requirements for software developers and manufacturers of applications and devices used in those transactions.

Download whitepaper

 PCI Compliance for MySQL & MariaDB with ClusterControl

PCI data that resides in a MySQL or MariaDB database must of course also adhere to these requirements, and database administrators must follow best practices to ensure the data is secured and compliant. The PCI standards are stringent and can easily require a spiraling amount of time spent on meeting their requirements. Database administrators can end up overwhelmed when using software that was not designed for compliance, often because it long predates PCI itself, as is the case for most database systems in use today.

That is why, as often as possible, reliable tools must be chosen to help with that compliance, easing out the crucial parts. Each time the compliance for one requirement can be shown to be implemented, working, and logged accordingly, time will be saved. If well-designed, it will only require regular software upgrades, a yearly review and a moderate amount of tweaking to follow the standard’s evolution over time.

This webinar focuses on the PCI-DSS requirements for a MySQL or MariaDB database back-end managed by ClusterControl, in order to help meet these requirements. It provides a MySQL- and MariaDB-focused overview of what the PCI standards mean, how they impact database management, and valuable tips and tricks on how to achieve PCI compliance for MySQL & MariaDB with ClusterControl.

Watch it here


Introduction to the PCI-DSS standards
The impact of PCI on database management
Step by step review of the PCI requirements
How to meet the requirements for MySQL & MariaDB with ClusterControl


Laurent Blume, Unix Systems Engineer, PCI Specialist

Laurent’s career in IT started in 2000, his work since evolved from POS terminals for a jewelry store chain to infrastructure servers in a government aerospace R&D organization, even touching supercomputers. One constant throughout was the increasing need for security.

For the past 6 years, he has been in charge of first implementing, then keeping up with the PCI-DSS compliance of critical transnational payment authorization systems. Its implementation for databases has been an essential part of the task. For the last few years, it has expanded to the design and productization of MariaDB cluster backends for mobile contactless payments.

Vinay Joosery, CEO & Co-Founder, Severalnines

Vinay is a passionate advocate and builder of concepts and business around distributed database systems.

Prior to co-founding Severalnines, Vinay held the post of Vice-President EMEA at Pentaho Corporation – the Open Source BI leader. He has also held senior management roles at MySQL / Sun Microsystems / Oracle, where he headed the Global MySQL Telecoms Unit, and built the business around MySQL’s High Availability and Clustering product lines. Prior to that, Vinay served as Director of Sales & Marketing at Ericsson Alzato, an Ericsson-owned venture focused on large scale real-time databases.



MySQL Connector/ODBC 5.3.10 has been released

MySQL Connector/ODBC 5.3.10, a new version of the ODBC driver for the MySQL database management system, has been released.
The available downloads include both a Unicode driver and an ANSI driver based on the same modern codebase. Please select the driver type you need based on the type of your application – Unicode or ANSI. Server-side prepared statements are enabled by default. It is suitable for use with any MySQL version from 5.5.
This is the fifth release of the MySQL ODBC driver conforming to the ODBC 3.8 specification. It contains implementations of key 3.8 features, including self-identification as an ODBC 3.8 driver, streaming of output parameters (supported for binary types only), and support for the SQL_ATTR_RESET_CONNECTION connection attribute (for the Unicode driver only).
Also, Connector/ODBC 5.3 introduces a GTK+-based setup library providing a GUI DSN setup dialog on some Unix-based systems, currently included in the Debian 7/8/9, EL6/OL6, EL7/OL7 (64-bit only), Fedora 24/25/26, FreeBSD 10/11, SLES 12, Ubuntu 14/16/17 packages. Other new features in the 5.3 driver are FileDSN and Bookmarks support.
The release is now available in source and binary form for a number of platforms from our download pages at
For information on installing, please see the documentation at
Functionality Added or Changed

SQL query timeout (SQL_ATTR_QUERY_TIMEOUT) support was added. (Bug #26474362, Bug #69416)
When building Connector/ODBC from source, users now have a choice of linking dynamically or statically to the MySQL client library. Dynamic linking is selected by default. See Building Connector/ODBC from a Source Distribution on Windows ( or Building Connector/ODBC from a Source Distribution on Unix ( for details. However, the binary distributions of Connector/ODBC from Oracle remain statically linked to the client library.

Bugs Fixed

Fixed an OpenRecordSet memory leak due to get_session_variable() not freeing a result for errors. (Bug #27155880, Bug #88143)
Calling MySQLDriverConnect with the pcbConnStrOut argument set to NULL caused an unexpected failure. (Bug #27101767, Bug #88371)
Connector/ODBC now compiles on MySQL 5.5. Thanks to Vadim Zeitlin for the patch. (Bug #26633971, Bug #87413)

Enjoy and thanks for the support!

Advance Your Oracle Database Career and Save Money

You have passed your SQL exam and created a strong foundation on which to begin building your Oracle Database career. What now? If you are looking for the next steps on your blueprint for a lasting career, look no further. This post will guide you on what to do next.

Continue building your career by earning the Oracle Database 12c R2 Administrator Certified Associate (OCA) certification. 

The Oracle Database 12cR2 Administration | 1Z0-072 exam is currently in beta for the deeply discounted price of $50 USD. That's nearly $200 USD off the regular price of an Oracle certification exam! This opportunity is only available for a limited time. Register now.

Earning an Oracle Database Administrator OCA certification exposes you to the fundamental concepts and skills DBAs need for daily operational management and maintenance. Earning this certification positions you to seamlessly move forward to the Oracle Certified Professional level when you’re ready. 

While training is not required for this certification, the Oracle Database 12c R2: Install and Upgrade Workshop (coming soon) and the Oracle Database 12c R2: Administration Workshop courses from Oracle University, can propel you towards certification.

Visit and register for exam 1Z1-072. Get all preparation details, including exam objectives, number of questions, time allotments, and pricing on the Oracle Certification Website.

Be among the first to obtain this new Oracle Certified Associate (OCA) certification by registering for the beta exam today.



Build a Strong Database Career by Starting With a Solid Foundation

The Secret to Choosing the Oracle Database Certification that is Right for You
