Author: Øystein Grøvlen

Speaking at FOSDEM

I will be speaking at FOSDEM this coming Sunday (February 4) on histogram support in MySQL 8.0. If you are at FOSDEM and want to learn how you can use histograms to improve your query execution plans, visit the MySQL and Friends devroom at 11:10. Also, please check out the entire program for the MySQL devroom. It is full of interesting talks.

Going Beyond Tabular EXPLAIN

A while ago, Lukas Eder posted a very interesting article on query optimizations that do not depend on the cost model. We often call such optimizations query transformations since they can be applied by rewriting the query.
In his blog post, Lukas investigated how different database systems handle different opportunities for query transformations. For MySQL, he complained that in some cases, the output from EXPLAIN is not sufficient to tell what is going on. However, as I will show in this blog post, there are other ways to get the information that he was looking for.
The EXPLAIN Warning
What may easily be overlooked when executing EXPLAIN for a query, is that it literally comes with a warning. This warning shows the query after the query transformations have been applied. Let’s look at the query Lukas used to explore Transitive Closure:
SELECT first_name, last_name, film_id
FROM actor a
JOIN film_actor fa ON a.actor_id = fa.actor_id
WHERE a.actor_id = 1;
If we run EXPLAIN for this query, and display the associated warning, we see:
mysql> EXPLAIN SELECT first_name, last_name, film_id FROM actor a JOIN film_actor fa ON a.actor_id = fa.actor_id WHERE a.actor_id = 1;
+----+-------------+-------+------------+-------+---------------+---------+---------+-------+------+----------+-------------+
| id | select_type | table | partitions | type  | possible_keys | key     | key_len | ref   | rows | filtered | Extra       |
+----+-------------+-------+------------+-------+---------------+---------+---------+-------+------+----------+-------------+
|  1 | SIMPLE      | a     | NULL       | const | PRIMARY       | PRIMARY | 2       | const |    1 |   100.00 | NULL        |
|  1 | SIMPLE      | fa    | NULL       | ref   | PRIMARY       | PRIMARY | 2       | const |   19 |   100.00 | Using index |
+----+-------------+-------+------------+-------+---------------+---------+---------+-------+------+----------+-------------+
2 rows in set, 1 warning (0,00 sec)

mysql> SHOW WARNINGS;
+-------+------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Level | Code | Message                                                                                                                                                                                                       |
+-------+------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Note  | 1003 | /* select#1 */ select 'PENELOPE' AS `first_name`,'GUINESS' AS `last_name`,`sakila`.`fa`.`film_id` AS `film_id` from `sakila`.`actor` `a` join `sakila`.`film_actor` `fa` where (`sakila`.`fa`.`actor_id` = 1) |
+-------+------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0,00 sec)
Note that the warning message contains `sakila`.`fa`.`actor_id` = 1. In other words, the predicate actor_id = 1 has been applied to the film_actor table because of transitive closure.
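The rewrite itself is straightforward to sketch. Below is a minimal, illustrative Python model (names and structure are mine, not MySQL code) of how a constant bound to one side of a join equality propagates to the other side:

```python
def propagate_constants(equalities, constants):
    """Propagate constant bindings through column equalities,
    a simplified model of the transitive-closure rewrite.

    equalities: iterable of (col1, col2) join predicates
    constants:  dict mapping column -> constant from the WHERE clause
    """
    bindings = dict(constants)
    changed = True
    while changed:
        changed = False
        for c1, c2 in equalities:
            # A constant on either side of the equality binds the other side
            for a, b in ((c1, c2), (c2, c1)):
                if a in bindings and b not in bindings:
                    bindings[b] = bindings[a]
                    changed = True
    return bindings


# The join predicate and WHERE condition from the example query
derived = propagate_constants([("a.actor_id", "fa.actor_id")],
                              {"a.actor_id": 1})
assert derived == {"a.actor_id": 1, "fa.actor_id": 1}
```

The loop keeps propagating until no new bindings are derived, so chains of equalities (a = b AND b = c AND a = 1) are also handled.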
The above warning message also illustrates another aspect of MySQL query optimization: since the query specifies the value of the primary key of the actor table, the primary key look-up is done already in the optimizer phase. Hence, the warning shows that columns from the actor table have been replaced by the column values of the requested row.

Structured EXPLAIN
MySQL 5.6 introduced Structured EXPLAIN. Its output describes the query plan in JSON format. This output contains additional information compared to traditional EXPLAIN. For example, while the tabular EXPLAIN only says “Using where” when a condition is applied when reading a table, the JSON output will display the actual condition. Let’s look at the output for the first example on Removing “Silly” Predicates:
mysql> EXPLAIN FORMAT=JSON SELECT * FROM film WHERE release_year = release_year;
{
  "query_block": {
    "select_id": 1,
    "cost_info": {
      "query_cost": "212.00"
    },
    "table": {
      "table_name": "film",
      "access_type": "ALL",
      "rows_examined_per_scan": 1000,
      "rows_produced_per_join": 100,
      "filtered": "10.00",
      "cost_info": {
        "read_cost": "192.00",
        "eval_cost": "20.00",
        "prefix_cost": "212.00",
        "data_read_per_join": "78K"
      },
      "used_columns": [
        "film_id",
        "title",
        "description",
        "release_year",
        "language_id",
        "original_language_id",
        "rental_duration",
        "rental_rate",
        "length",
        "replacement_cost",
        "rating",
        "special_features",
        "last_update"
      ],
      "attached_condition": "(`sakila`.`film`.`release_year` = `sakila`.`film`.`release_year`)"
    }
  }
}
The value for “attached_condition” shows that MySQL does not simplify the condition release_year = release_year to release_year IS NOT NULL. So Lukas is right in his educated guess that MySQL does not optimize this. However, if we look at a similar silly predicate on a NOT NULL column, we see that such predicates are eliminated by MySQL:
mysql> EXPLAIN FORMAT=JSON SELECT * FROM film WHERE film_id = film_id;
{
  "query_block": {
    "select_id": 1,
    "cost_info": {
      "query_cost": "212.00"
    },
    "table": {
      "table_name": "film",
      "access_type": "ALL",
      "rows_examined_per_scan": 1000,
      "rows_produced_per_join": 1000,
      "filtered": "100.00",
      "cost_info": {
        "read_cost": "12.00",
        "eval_cost": "200.00",
        "prefix_cost": "212.00",
        "data_read_per_join": "781K"
      },
      "used_columns": [
        "film_id",
        "title",
        "description",
        "release_year",
        "language_id",
        "original_language_id",
        "rental_duration",
        "rental_rate",
        "length",
        "replacement_cost",
        "rating",
        "special_features",
        "last_update"
      ]
    }
  }
}
In this case, there is no “attached_condition” in the JSON output. In other words, the silly predicate has been removed.
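The reason the two cases differ is that `col = col` is true exactly when `col` is not NULL. A small sketch (the function name is mine) of the semantically valid rewrites for such self-equalities; note that MySQL, as shown above, performs only the NOT NULL elimination and leaves the nullable case untouched:

```python
def simplify_self_equality(column, nullable):
    """Simplify a trivial predicate `col = col`.

    For a NOT NULL column the predicate is always true and can be
    dropped entirely; for a nullable column it is only true when the
    value is not NULL, so the safe rewrite is `col IS NOT NULL`.
    """
    if nullable:
        return f"{column} IS NOT NULL"
    return None  # predicate eliminated entirely


# film_id is NOT NULL, release_year is nullable in the sakila schema
assert simplify_self_equality("film_id", nullable=False) is None
assert simplify_self_equality("release_year", nullable=True) == "release_year IS NOT NULL"
```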
Optimizer Trace
Predicate Merging investigates whether the optimizer will merge two predicates on the same column. There are actually two different aspects here:

Whether predicates are merged and evaluated as a single predicate
Whether key ranges as specified by the query are merged when setting up index range scans
The latter is the more important of the two, since it ensures that no more rows than necessary are accessed. Lukas concludes that MySQL does predicate merging based on the estimated number of rows that tabular EXPLAIN shows:

mysql> EXPLAIN SELECT *
    -> FROM actor
    -> WHERE actor_id IN (2, 3, 4)
    -> AND actor_id IN (1, 2, 3);
+----+-------------+-------+------------+-------+---------------+---------+---------+------+------+----------+-------------+
| id | select_type | table | partitions | type  | possible_keys | key     | key_len | ref  | rows | filtered | Extra       |
+----+-------------+-------+------------+-------+---------------+---------+---------+------+------+----------+-------------+
|  1 | SIMPLE      | actor | NULL       | range | PRIMARY       | PRIMARY | 2       | NULL |    2 |   100.00 | Using where |
+----+-------------+-------+------------+-------+---------------+---------+---------+------+------+----------+-------------+
1 row in set, 1 warning (0,00 sec)

Here EXPLAIN shows that the estimated number of rows to be read is 2, and this fits with how many rows satisfy the merged predicates. If we look at the output from structured EXPLAIN, we get a different picture:
mysql> EXPLAIN FORMAT=JSON SELECT * FROM actor WHERE actor_id IN (2, 3, 4) AND actor_id IN (1, 2, 3);
{
  "query_block": {
    "select_id": 1,
    "cost_info": {
      "query_cost": "2.81"
    },
    "table": {
      "table_name": "actor",
      "access_type": "range",
      "possible_keys": [ "PRIMARY" ],
      "key": "PRIMARY",
      "used_key_parts": [ "actor_id" ],
      "key_length": "2",
      "rows_examined_per_scan": 2,
      "rows_produced_per_join": 2,
      "filtered": "100.00",
      "cost_info": {
        "read_cost": "2.41",
        "eval_cost": "0.40",
        "prefix_cost": "2.81",
        "data_read_per_join": "560"
      },
      "used_columns": [ "actor_id", "first_name", "last_name", "last_update" ],
      "attached_condition": "((`sakila`.`actor`.`actor_id` in (2,3,4)) and (`sakila`.`actor`.`actor_id` in (1,2,3)))"
    }
  }
}

As you can see from "attached_condition", the predicates are not actually merged; it is only the key ranges that are merged. Structured EXPLAIN does not show which key ranges are used for the index range scan. However, we can use Optimizer Trace to find this information. (See instructions on how to obtain the optimizer trace.)
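Before turning to the trace, note that the row estimate of 2 matches a plain set intersection of the two IN lists. A quick sketch (illustrative only, not MySQL's implementation):

```python
def merge_in_lists(*in_lists):
    """Intersect the key sets of several IN predicates on the same
    column, the way merged key ranges behave for an index range scan."""
    result = set(in_lists[0])
    for values in in_lists[1:]:
        result &= set(values)
    return sorted(result)


# The two IN lists from the example query: only 2 and 3 survive
assert merge_in_lists([2, 3, 4], [1, 2, 3]) == [2, 3]
```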
The optimizer trace for the above query contains this part:

"chosen_range_access_summary": {
  "range_access_plan": {
    "type": "range_scan",
    "index": "PRIMARY",
    "rows": 2,
    "ranges": [
      "2 <= actor_id <= 2",
      "3 <= actor_id <= 3"
    ]
  },
  "rows_for_plan": 2,
  "cost_for_plan": 2.41,
  "chosen": true
}

This shows that the range optimizer has actually set up two ranges here: one for actor_id = 2 and one for actor_id = 3. (In other words, the range optimizer does not seem to take into account that actor_id is an integer column.) It is the same story for Lukas' other query investigating predicate merging. That query specifies two overlapping key ranges:
SELECT *
FROM film
WHERE film_id BETWEEN 1 AND 100
AND film_id BETWEEN 99 AND 200;
Also in this case structured EXPLAIN shows that predicates are not merged, while optimizer trace shows that the key ranges are merged:
"chosen_range_access_summary": {
  "range_access_plan": {
    "type": "range_scan",
    "index": "PRIMARY",
    "rows": 2,
    "ranges": [
      "99 <= film_id <= 100"
    ]
  },
  "rows_for_plan": 2,
  "cost_for_plan": 2.41,
  "chosen": true
}

Since this query has range predicates instead of equality predicates, MySQL will here set up a single range scan from 99 to 100.

Concluding Remarks

In this blog post, I have shown some examples of how you can get additional information about a query plan beyond what you can see in the output of tabular EXPLAIN. There is a lot of other information that you can deduce from the EXPLAIN warning, structured EXPLAIN, or Optimizer Trace beyond what I have discussed here. That can be the topic of later blog posts.

MySQL 8.0: Improved performance with CTE

Last week I published a blog post on the MySQL Server Blog where I showed how the execution time of a query was reduced by 50% by using a Common Table Expression (CTE) instead of a view.

In the coming weeks, there will be several opportunities to attend a presentation on CTEs and another SQL feature that will arrive in MySQL 8.0, window functions. This presentation is part of the Oracle MySQL Innovation Day that will be held both in the Bay Area (April 28) and in the Boston area (May 2). I will also give the presentation at the Boston MySQL Meetup on May 1st. I hope to see some of you there!


What to Do When the MySQL Optimizer Overestimates the Condition Filtering Effect

In my previous blog post, I showed an example of how the MySQL optimizer found a better join order by taking into account the filtering effects of conditions. I also explained that for non-indexed columns the filtering estimate is just a guess, and that there is a risk of non-optimal query plans if the guess is off.
We have received a few bug reports on performance regressions when upgrading from 5.6 to 5.7 that are caused by the optimizer overestimating the filtering effect. In most cases, the cause of the regression is inaccurate filtering estimates for equality conditions on non-indexed columns with low cardinality. In this blog post, I will discuss three ways to handle such regressions:

Create an index
Use an optimizer hint to change the join order
Disable condition filtering

First, I will show an example where overestimating the condition filtering effects gives a non-optimal query plan.

Example: DBT-3 Query 21
We will look at Query 21 in the DBT-3 benchmark:
SELECT s_name, COUNT(*) AS numwait
FROM supplier
JOIN lineitem l1 ON s_suppkey = l1.l_suppkey
JOIN orders ON o_orderkey = l1.l_orderkey
JOIN nation ON s_nationkey = n_nationkey
WHERE o_orderstatus = 'F'
  AND l1.l_receiptdate > l1.l_commitdate
  AND EXISTS (SELECT * FROM lineitem l2
              WHERE l2.l_orderkey = l1.l_orderkey
                AND l2.l_suppkey <> l1.l_suppkey)
  AND NOT EXISTS (SELECT * FROM lineitem l3
                  WHERE l3.l_orderkey = l1.l_orderkey
                    AND l3.l_suppkey <> l1.l_suppkey
                    AND l3.l_receiptdate > l3.l_commitdate)
  AND n_name = 'JAPAN'
GROUP BY s_name
ORDER BY numwait DESC, s_name
LIMIT 100;
Query 21 is called the Suppliers Who Kept Orders Waiting Query. In MySQL 5.7, Visual EXPLAIN shows the following query plan for Query 21:

The four tables of the join are joined from left to right, starting with a full table scan of the orders table. There are also two dependent subqueries on the lineitem table that will be executed for each row of the outer lineitem table. The execution time for this query plan is almost 25 seconds on a scale factor 1 DBT-3 database. This is more than ten times as long as with the query plan used in MySQL 5.6!
The filtered column of tabular EXPLAIN shows the optimizer’s estimates for the condition filter effects (some of the columns have been removed to save space):

+----+--------------------+----------+--------+---------+---------+----------+----------------------------------------------------+
| id | select_type        | table    | type   | key     | rows    | filtered | Extra                                              |
+----+--------------------+----------+--------+---------+---------+----------+----------------------------------------------------+
|  1 | PRIMARY            | orders   | ALL    | NULL    | 1500000 |    10.00 | Using where; Using temporary; Using filesort       |
|  1 | PRIMARY            | l1       | ref    | PRIMARY |       4 |    33.33 | Using where                                        |
|  1 | PRIMARY            | supplier | eq_ref | PRIMARY |       1 |   100.00 | Using index condition                              |
|  1 | PRIMARY            | nation   | ALL    | NULL    |      25 |     4.00 | Using where; Using join buffer (Block Nested Loop) |
|  3 | DEPENDENT SUBQUERY | l3       | ref    | PRIMARY |       4 |    30.00 | Using where                                        |
|  2 | DEPENDENT SUBQUERY | l2       | ref    | PRIMARY |       4 |    90.00 | Using where                                        |
+----+--------------------+----------+--------+---------+---------+----------+----------------------------------------------------+

This shows that the optimizer assumes that the condition o_orderstatus = ‘F’ is satisfied by 10% of the rows in the orders table. Hence, the optimizer thinks that it will be possible to filter out a lot of orders early by starting with the orders table. However, the truth is that almost 50% of the rows have the requested order status. In other words, by overestimating the filtering effect for orders, query plans that start with the orders table will appear to be less costly than is actually the case.
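The effect of the misestimate is easy to quantify with the numbers above (the 10% estimate and the 1,500,000 rows from the EXPLAIN output, and the roughly 50% actual selectivity mentioned in the text):

```python
table_rows = 1_500_000   # rows in the orders table (from EXPLAIN)
estimated_filter = 0.10  # optimizer's guess for o_orderstatus = 'F'
actual_filter = 0.50     # roughly half the rows actually have status 'F'

# The optimizer believes starting with orders leaves 150,000 rows to
# join against lineitem, while in reality about 750,000 rows survive,
# a 5x underestimate of the join fan-out.
estimated_fanout = table_rows * estimated_filter
actual_fanout = table_rows * actual_filter
```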
We will now look at how we can influence the optimizer to pick a better query plan for this query.

Option 1: Create an Index
As mentioned, the optimizer does not have any statistics on non-indexed columns. So one way to improve the optimizer’s precision is to create an index on the column. For Query 21, since the filtering estimate for o_orderstatus is way off, we can try to see what happens if we create an index on this column:
CREATE INDEX i_o_orderstatus ON orders(o_orderstatus);

With this index, the query plan has changed:

+----+--------------------+----------+--------+---------------+------+----------+----------------------------------------------+
| id | select_type        | table    | type   | key           | rows | filtered | Extra                                        |
+----+--------------------+----------+--------+---------------+------+----------+----------------------------------------------+
|  1 | PRIMARY            | nation   | ALL    | NULL          |   25 |    10.00 | Using where; Using temporary; Using filesort |
|  1 | PRIMARY            | supplier | ref    | i_s_nationkey |  400 |   100.00 | NULL                                         |
|  1 | PRIMARY            | l1       | ref    | i_l_suppkey   |  600 |    33.33 | Using where                                  |
|  1 | PRIMARY            | orders   | eq_ref | PRIMARY       |    1 |    50.00 | Using where                                  |
|  3 | DEPENDENT SUBQUERY | l3       | ref    | PRIMARY       |    4 |    30.00 | Using where                                  |
|  2 | DEPENDENT SUBQUERY | l2       | ref    | PRIMARY       |    4 |    90.00 | Using where                                  |
+----+--------------------+----------+--------+---------------+------+----------+----------------------------------------------+

We see from the EXPLAIN output that the estimated filtering effect for orders is now 50%. Given that, the optimizer prefers a different join order, starting with the nation table. This is the same join order as in MySQL 5.6, and the execution time with this plan is 2.5 seconds. Instead of accessing 50% of all orders, the query will now just access orders for suppliers in Japan. However, this improvement comes at the cost of having to maintain an index that will probably never be used!
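The underlying rule is simple: without statistics, MySQL assumes a fixed 10% for an equality condition on a non-indexed column, while with an index it can estimate the selectivity from the number of distinct values. A sketch (the function name is mine):

```python
def equality_selectivity(distinct_values=None):
    """Estimated fraction of rows matching `col = const`.

    Without statistics, MySQL falls back to a fixed 10% guess; with
    index statistics, it can use 1 / number-of-distinct-values.
    """
    if distinct_values is None:
        return 0.10
    return 1.0 / distinct_values


assert equality_selectivity() == 0.10       # non-indexed column: guesstimate
assert equality_selectivity(25) == 0.04     # e.g. 25 distinct nations -> 4%
```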
Looking at Query 21, there is also an equality condition on another column without an index: n_name of the nation table. For this column, 10% is actually too high an estimate. There are 25 nations in the table; hence, the correct estimate should be 4%. What if we, instead, create an index on this column?
DROP INDEX i_o_orderstatus ON orders;
CREATE INDEX i_n_name ON nation(n_name);

Then we get this query plan:

+----+--------------------+----------+--------+---------------+------+----------+----------------------------------------------+
| id | select_type        | table    | type   | key           | rows | filtered | Extra                                        |
+----+--------------------+----------+--------+---------------+------+----------+----------------------------------------------+
|  1 | PRIMARY            | nation   | ref    | i_n_name      |    1 |   100.00 | Using index; Using temporary; Using filesort |
|  1 | PRIMARY            | supplier | ref    | i_s_nationkey |  400 |   100.00 | NULL                                         |
|  1 | PRIMARY            | l1       | ref    | i_l_suppkey   |  600 |    33.33 | Using where                                  |
|  1 | PRIMARY            | orders   | eq_ref | PRIMARY       |    1 |    10.00 | Using where                                  |
|  3 | DEPENDENT SUBQUERY | l3       | ref    | PRIMARY       |    4 |    30.00 | Using where                                  |
|  2 | DEPENDENT SUBQUERY | l2       | ref    | PRIMARY       |    4 |    90.00 | Using where                                  |
+----+--------------------+----------+--------+---------------+------+----------+----------------------------------------------+

In this case, our new index is actually used! Since scanning a table with 25 rows takes a negligible part of the total execution time, the savings for Query 21 are insignificant, but there might be other queries where such an index could be more useful.

Option 2: Join Order Hint
Instead of trying to improve statistics to get a better query plan, we can use hints to influence the optimizer’s choice of query plan. The STRAIGHT_JOIN hint can be used to change the join order. It comes in two flavors:

STRAIGHT_JOIN right after SELECT:

SELECT STRAIGHT_JOIN … FROM t1, t2, t3

Tables are joined in the order specified in the FROM clause (i.e., t1 → t2 → t3).

STRAIGHT_JOIN used as a join operator:

… FROM t1 STRAIGHT_JOIN t2 …

The left-hand table will be processed before the right-hand table (i.e., … t1 → … → t2 …).

We will use the second variant and specify that nation should be processed before orders:

SELECT s_name, COUNT(*) AS numwait
FROM supplier
JOIN lineitem l1 ON s_suppkey = l1.l_suppkey
JOIN nation ON s_nationkey = n_nationkey
STRAIGHT_JOIN orders ON o_orderkey = l1.l_orderkey
WHERE o_orderstatus = 'F'
  AND l1.l_receiptdate > l1.l_commitdate
  AND EXISTS (SELECT * FROM lineitem l2
              WHERE l2.l_orderkey = l1.l_orderkey
                AND l2.l_suppkey <> l1.l_suppkey)
  AND NOT EXISTS (SELECT * FROM lineitem l3
                  WHERE l3.l_orderkey = l1.l_orderkey
                    AND l3.l_suppkey <> l1.l_suppkey
                    AND l3.l_receiptdate > l3.l_commitdate)
  AND n_name = 'JAPAN'
GROUP BY s_name
ORDER BY numwait DESC, s_name
LIMIT 100;

This way we force the optimizer to pick a query plan where nation comes before orders, and the resulting query plan is the "good one":

+----+--------------------+----------+--------+---------------+------+----------+----------------------------------------------+
| id | select_type        | table    | type   | key           | rows | filtered | Extra                                        |
+----+--------------------+----------+--------+---------------+------+----------+----------------------------------------------+
|  1 | PRIMARY            | nation   | ALL    | NULL          |   25 |    10.00 | Using where; Using temporary; Using filesort |
|  1 | PRIMARY            | supplier | ref    | i_s_nationkey |  400 |   100.00 | NULL                                         |
|  1 | PRIMARY            | l1       | ref    | i_l_suppkey   |  600 |    33.33 | Using where                                  |
|  1 | PRIMARY            | orders   | eq_ref | PRIMARY       |    1 |    10.00 | Using where                                  |
|  3 | DEPENDENT SUBQUERY | l3       | ref    | PRIMARY       |    4 |    30.00 | Using where                                  |
|  2 | DEPENDENT SUBQUERY | l2       | ref    | PRIMARY       |    4 |    90.00 | Using where                                  |
+----+--------------------+----------+--------+---------------+------+----------+----------------------------------------------+

In order to use STRAIGHT_JOIN, we had to rearrange the tables in the FROM clause. This is a bit cumbersome, and to avoid it, MySQL 8.0 introduces new join order hints that use the new optimizer hint syntax. Using this syntax, we can add hints right after SELECT and avoid editing the rest of the query. In the case of Query 21, we can add hints like
SELECT /*+ JOIN_PREFIX(nation) */ …

or

SELECT /*+ JOIN_ORDER(nation, orders) */ …

to achieve the desired query plan.

Option 3: Disable Condition Filtering

Many optimizer features can be disabled by setting the optimizer_switch variable. The following statement will make the optimizer not use condition filtering estimates:

SET optimizer_switch='condition_fanout_filter=off';

Looking at the query plan as presented by EXPLAIN, we see that filtering is no longer taken into account:

+----+--------------------+----------+--------+---------------+------+----------+----------------------------------------------+
| id | select_type        | table    | type   | key           | rows | filtered | Extra                                        |
+----+--------------------+----------+--------+---------------+------+----------+----------------------------------------------+
|  1 | PRIMARY            | nation   | ALL    | NULL          |   25 |   100.00 | Using where; Using temporary; Using filesort |
|  1 | PRIMARY            | supplier | ref    | i_s_nationkey |  400 |   100.00 | NULL                                         |
|  1 | PRIMARY            | l1       | ref    | i_l_suppkey   |  600 |   100.00 | Using where                                  |
|  1 | PRIMARY            | orders   | eq_ref | PRIMARY       |    1 |   100.00 | Using where                                  |
|  3 | DEPENDENT SUBQUERY | l3       | ref    | PRIMARY       |    4 |   100.00 | Using where                                  |
|  2 | DEPENDENT SUBQUERY | l2       | ref    | PRIMARY       |    4 |   100.00 | Using where                                  |
+----+--------------------+----------+--------+---------------+------+----------+----------------------------------------------+

Note that you can set optimizer_switch at session level. Hence, it is possible to disable condition filtering for individual queries. However, this requires extra round-trips to the server to set optimizer_switch before and after the execution of the query.

(Option 4: Wait for Histograms)
We are working on improving the statistics available to the optimizer by introducing histograms. A histogram provides more detailed information about the data distribution in a table column. With histograms, the optimizer will be able to estimate pretty accurately the filtering effects also for conditions on non-indexed columns. Until then, you will have to resort to one of the options presented above to improve bad query plans caused by inaccurate filtering estimates.

MySQL 5.7: Improved JOIN Order by Taking Condition Filter Effect into Account

One of the major challenges for a query optimizer is to correctly estimate how many rows qualify from each table of a join. If the estimates are wrong, the optimizer may choose a non-optimal join order. Before MySQL 5.7, the estimated number of rows from a table only took into account the conditions from the WHERE clause that were used to set up the access method (e.g., the size of an index range scan). This often led to row estimates that were far too high, resulting in very wrong cost estimates for join plans.

To improve this, MySQL 5.7 introduced a cost model that considers the entire WHERE condition when estimating the number of qualifying rows from each table. This model estimates the filtering effect of the table's conditions. As shown in the above figure, the condition filter effect will reduce the estimated number of rows from tx that will lead to an index look-up on tx+1. For more details on how condition filtering works, see two earlier blog posts on this topic: part1, part2.

Taking condition filtering into account, the join optimizer will in many cases be able to find a more optimal join order. However, there are cases where the optimizer will overestimate the filtering effect and choose a non-optimal query plan. In this blog post, I will show an example of a query that benefits from condition filtering, and in a follow-up blog post I will discuss what can be done if condition filtering does not have the desired effect.
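The model can be summarized as: the estimated fan-out of a table is the access-method row estimate scaled by the filtering effect of the remaining applicable conditions, which are assumed to be independent. A minimal sketch (names are mine):

```python
def estimated_fanout(access_rows, condition_filters):
    """Rows expected to flow from a table into the next join stage:
    the access-method row estimate scaled by the filtering effect of
    all other applicable conditions (assumed to be independent)."""
    fanout = access_rows
    for f in condition_filters:
        fanout *= f
    return fanout


# Pre-5.7 behaviour: only the access method counts (no extra filters)
assert estimated_fanout(1000, []) == 1000
# 5.7: an additional equality condition with a 10% filtering estimate
# reduces the fan-out to roughly 100 rows
```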
Example: DBT-3 Query 8

To show the benefits of condition filtering, we will look at Query 8 in the DBT-3 benchmark:

SELECT o_year,
       SUM(CASE WHEN nation = 'FRANCE' THEN volume ELSE 0 END) / SUM(volume) AS mkt_share
FROM (SELECT EXTRACT(YEAR FROM o_orderdate) AS o_year,
             l_extendedprice * (1 - l_discount) AS volume,
             n2.n_name AS nation
      FROM part
      JOIN lineitem ON p_partkey = l_partkey
      JOIN supplier ON s_suppkey = l_suppkey
      JOIN orders ON l_orderkey = o_orderkey
      JOIN customer ON o_custkey = c_custkey
      JOIN nation n1 ON c_nationkey = n1.n_nationkey
      JOIN region ON n1.n_regionkey = r_regionkey
      JOIN nation n2 ON s_nationkey = n2.n_nationkey
      WHERE r_name = 'EUROPE'
        AND o_orderdate BETWEEN '1995-01-01' AND '1996-12-31'
        AND p_type = 'PROMO BRUSHED STEEL') AS all_nations
GROUP BY o_year
ORDER BY o_year;

Query 8 is called the National Market Share Query, and it finds the market share in Europe for French suppliers of a given part type. You do not need to understand this query in detail. The main point is that 8 tables are joined, and that it is important to find an efficient join order for the query to perform well. In MySQL 5.6, Visual EXPLAIN shows the following query plan for Query 8:

Tables are joined from left to right, starting with a full table scan of the region table, doing index look-ups into all other tables, with the part table as the last table. The execution time for this query is around 6 seconds on a scale factor 1 DBT-3 database.

If we look at the WHERE clause of the query, we see that there is one condition with high selectivity: we are interested in just one out of many possible part types. That the region should be Europe, and that the time period is restricted to two years, will still leave many candidate rows, so these conditions have low selectivity. In other words, a query plan that puts the part table at the end is not optimal.
If part was processed early, we would be able to filter out many rows, and the number of index look-ups into other tables could be significantly reduced. In MySQL 5.7, the use of condition filtering estimates changes the query plan for Query 8, and the new query plan looks as follows:

As you can see, part is now processed early. (We see that region is still processed first, but this is a small table with only 5 rows, of which only one row matches its condition. Hence, it will not impact the fan-out of the join.) This new query plan takes 0.5 seconds to execute. In other words, execution time is reduced by 92% by changing the join order! The main saving is that, instead of going through all orders for customers in Europe, one will only look at orders of parts of a given type.

(The observant reader will have noticed another difference between the Visual EXPLAIN diagrams: the box around the tables is no longer present in the 5.7 diagram. If you look at Query 8, you will notice that the many-table join is in a subquery in the FROM clause, a.k.a. a derived table. In MySQL 5.6 and earlier, the result of a derived table was materialized in a temporary table before the main query was executed. The extra box in the 5.6 diagram represents such a temporary table. In MySQL 5.7, derived tables are merged into the outer query if possible, and the materialization of the derived table is avoided.)
To see the optimizer's estimates for condition filter effects, we can look at the filtered column of tabular EXPLAIN (some of the columns have been removed to save space):

+----+-------------+----------+--------+---------------------+--------+----------+----------------------------------------------------+
| id | select_type | table    | type   | key                 | rows   | filtered | Extra                                              |
+----+-------------+----------+--------+---------------------+--------+----------+----------------------------------------------------+
|  1 | SIMPLE      | region   | ALL    | NULL                |      5 |    20.00 | Using where; Using temporary; Using filesort       |
|  1 | SIMPLE      | part     | ALL    | NULL                | 200000 |    10.00 | Using where; Using join buffer (Block Nested Loop) |
|  1 | SIMPLE      | lineitem | ref    | i_l_partkey_suppkey |     30 |   100.00 | Using index condition                              |
|  1 | SIMPLE      | supplier | eq_ref | PRIMARY             |      1 |   100.00 | Using where                                        |
|  1 | SIMPLE      | n2       | eq_ref | PRIMARY             |      1 |   100.00 | NULL                                               |
|  1 | SIMPLE      | orders   | eq_ref | PRIMARY             |      1 |    50.00 | Using where                                        |
|  1 | SIMPLE      | customer | eq_ref | PRIMARY             |      1 |   100.00 | Using where                                        |
|  1 | SIMPLE      | n1       | eq_ref | PRIMARY             |      1 |    20.00 | Using where                                        |
+----+-------------+----------+--------+---------------------+--------+----------+----------------------------------------------------+

We can see that there are 4 tables for which the optimizer assumes some filtering: region, part, orders, and n1 (nation). The optimizer has three sources for its estimates:

Range estimates: For indexed columns, the range optimizer will ask the storage engine for an estimate of how many rows are within a given range. This estimate will be pretty accurate. For our query, this is the case for the region and orders tables. Europe is 1 out of 5 regions, and the requested two-year period contains 50% of the orders in the database.

Index statistics: For indexed columns, MySQL also keeps statistics on the average number of rows per value (records_per_key). This can be used to estimate the filtering effect of conditions that refer to columns of preceding tables in the join order. For our query, this is the case for n1. The query has two conditions involving n1, but the condition on n_nationkey is used for the index look-up and will not contribute to extra filtering. On the other hand, the condition on n_regionkey will provide some extra filtering, and since records_per_key tells us that there are 5 distinct values for this column, the estimated filtering effect will be 20%. The assumption is that values are evenly distributed. If the distribution is skewed, the filtering estimate will be less accurate. Another cause of estimation errors is that index statistics are based on sampling, so they may not precisely reflect the actual distribution of values.

Guesstimates: For non-indexed columns, MySQL does not have any statistics. The optimizer will then resort to heuristics. For equality conditions, the filtering effect is assumed to be 10%. The DBT-3 database does not have an index on p_type. Hence, the filtering effect for the part table will be set to 10%. This is actually a bit off, since there are 150 different part types. However, in this case, just assuming that there is some filtering made the optimizer choose a better plan.

Caveat

As shown in this blog post, taking condition filtering into account may give better query plans. However, as discussed, there is a risk that the filtering estimates are inaccurate, especially for conditions on non-indexed columns. Another cause of estimation errors is the assumption that columns are not correlated. Hence, when there are conditions on correlated columns, the optimizer will overestimate the filtering effect of those conditions.

We have received a few bug reports on performance regressions when upgrading from 5.6 to 5.7 that are caused by the optimizer overestimating the filtering effect. In most cases, this is because the query contains equality conditions on non-indexed columns with low cardinality, so the guesstimate of 10% is too optimistic. In my next blog post, I will discuss what can be done when this happens. We are also working on adding histograms to MySQL. Histograms will give a much better basis for estimating the filtering effects of conditions on non-indexed columns.
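To illustrate the correlated-columns caveat above with numbers: under the independence assumption, two 10% conditions combine to 1%, even if they in fact select exactly the same rows (the selectivities below are made-up round numbers):

```python
sel_a = 0.10  # selectivity of a condition on column a
sel_b = 0.10  # selectivity of a condition on column b

# Independence assumption: the combined filtering effect is the product
assumed_combined = sel_a * sel_b

# If the two columns are perfectly correlated (the conditions select the
# same rows), the real combined selectivity is just 10%, so the optimizer
# overestimates the filtering effect by a factor of 10.
actual_combined = sel_a
```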

Presentations at pre-FOSDEM’17 MySQL Day

I am currently at FOSDEM in Brussels, attending a lot of interesting presentations in the MySQL and Friends devroom. Yesterday the MySQL community team organized a pre-FOSDEM MySQL Day, where I delivered two talks. The slides for my talks can be found on Slideshare:

MySQL 8.0: Common Table Expressions
Using Optimizer Hints to Improve MySQL Query Performance

Improving the Stability of MySQL Single-Threaded Benchmarks

I have for some years been running the queries of the DBT-3 benchmark, both to verify the effect of new query optimization features and to detect any performance regressions that may have been introduced. However, I have had issues with getting stable results. While repeated runs of a query are very stable, I can get quite different results if I restart the server. As an example, I got a coefficient of variation (COV) of 0.1% for 10 repeated executions of a query on the same server, while the COV for the average runtime of 10 such experiments was over 6%! With such large variation in results, significant performance regressions may go unnoticed. I have tried a lot of things to get more stable runs, and in this blog post I will write about the things that I have found to have positive effects on stability. At the end, I will also list the things I have tried that did not show any positive effects.

Test Environment

First, I will describe the setup for the tests I run. All tests are run on a 2-socket box running Oracle Linux 7. The CPUs are Intel Xeon E5-2690 (Sandy Bridge) with 8 physical cores @ 2.90 GHz. I always bind the MySQL server to a single CPU, using taskset or numactl, and Turbo Boost is disabled. The computer has 128 GB of RAM, and the InnoDB buffer pool is big enough to contain the entire database (4 GB buffer pool for scale factor 1 and 32 GB buffer pool for scale factor 10). Each test run is as follows:

1. Start the server
2. Run a query 20 times in sequence
3. Repeat step 2 for all DBT-3 queries
4. Stop the server

The result for each query will be the average execution time of the last 10 runs. The reason for the long warm-up period is that, from experience, when InnoDB's Adaptive Hash Index is on, you will need 8 runs or so before execution times are stable. As I wrote above, the variance within each test run is very small, but the difference between test runs can be large.
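For reference, the COV quoted above is just the standard deviation relative to the mean. A small helper (the timings below are made-up, chosen only to mimic the kind of within-run stability described):

```python
from statistics import mean, stdev


def coefficient_of_variation(times):
    """COV: sample standard deviation relative to the mean, in percent."""
    return 100.0 * stdev(times) / mean(times)


# Ten hypothetical repeated runs on the same server: very stable,
# COV well below 0.2%
same_server = [1.000, 1.001, 0.999, 1.000, 1.002,
               0.998, 1.000, 1.001, 0.999, 1.000]
assert coefficient_of_variation(same_server) < 0.2
```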
The variance is somewhat improved by picking the best result out of three test runs, but it is still not satisfactory. Also, a full test run on a scale factor 10 database takes 9 hours, so I would like to avoid having to repeat the tests multiple times.

Address Space Layout Randomization

A MySQL colleague mentioned that he had heard about some randomization that could be disabled. After some googling, I learned about Address Space Layout Randomization (ASLR). This is a security technique used to prevent an attacker from being able to determine where code and data are located in the address space of a process. I also found instructions on Stack Overflow for how to disable it on Linux. Turning off ASLR sure made a difference! Take a look at this graph that shows the average execution time for Query 12 in ten different test runs with and without ASLR (all runs are with a scale factor 1 DBT-3 database on MySQL 8.0.0 DMR). I will definitely make sure ASLR is disabled in future tests!

Adaptive Hash Index

InnoDB maintains an Adaptive Hash Index (AHI) for frequently accessed pages. The hash index speeds up look-ups on the primary key, and is also useful for secondary index accesses since a primary key look-up is needed to get from the index entry to the corresponding row. Some DBT-3 queries run twice as slow if I turn off AHI, so it definitely has some value. However, experience shows that I have to repeat a query several times before the AHI is actually built for all pages accessed by the query. I plan to write another blog post where I discuss AHI in more detail. Until I stumbled across ASLR, turning off AHI was my best bet for stabilizing the results. After disabling ASLR, also turning off AHI shows only a slight improvement in stability. However, there are other reasons for turning off AHI. I have sometimes observed that with AHI on, a change of query plan for one query may affect the execution time of subsequent queries.
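For reference, this is how ASLR can be inspected and disabled on Linux. A sketch assembled from standard tooling, not from the original post; the system-wide change requires root:

```shell
# Check the current ASLR setting (2 = full randomization, the default)
cat /proc/sys/kernel/randomize_va_space

# Disable ASLR system-wide until the next reboot (run as root)
sudo sysctl -w kernel.randomize_va_space=0

# Alternatively, disable address randomization for a single process
# only, by launching it under setarch with -R (--addr-no-randomize):
setarch "$(uname -m)" -R mysqld --defaults-file=/etc/my.cnf
```

The per-process variant is handy when you do not want to weaken ASLR for everything else running on the benchmark host.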
I suspect the reason is that the content of the AHI after a query has been run may change with a different query plan. Hence, the next query may be affected if it accesses the same data pages. Turning off AHI also means that I no longer need the long warm-up period for the timing to stabilize. I can then repeat each query 10 times instead of 20 times. This means that even if many of the queries take longer to execute, the total time to execute a test run will be lower. Because of the above, I have decided to turn off AHI in most of my test runs. However, I will run with AHI on once in a while, just to make sure that there are no major regressions in AHI performance.

Preload Data and Indexes

I also tried to start each test run with a set of queries that sequentially scan all tables and indexes. My thinking was that this could give a more deterministic placement of data in memory. Before I turned off ASLR, preloading had a very good effect on stability when AHI was disabled. With ASLR off, the difference is less significant, but there is still a slight improvement. Below is a table that shows some results for all combinations of the settings discussed so far. Ten test runs were performed for each combination on a scale factor 1 database. The numbers shown are the average difference between the best and the worst run over all queries, and the largest difference between the best and the worst run for a single query.

| ASLR | AHI | Preload | Avg(MAX-MIN) | Max(MAX-MIN) |
|------|-----|---------|--------------|--------------|
| ✔    | ✔   | ✘       | 6.18%        | 14.75%       |
| ✔    | ✘   | ✘       | 4.65%        | 14.79%       |
| ✔    | ✔   | ✔       | 5.56%        | 14.65%       |
| ✔    | ✘   | ✔       | 2.18%        | 5.05%        |
| ✘    | ✔   | ✘       | 1.66%        | 3.94%        |
| ✘    | ✘   | ✘       | 1.58%        | 3.58%        |
| ✘    | ✔   | ✔       | 1.66%        | 3.78%        |
| ✘    | ✘   | ✔       | 1.09%        | 3.27%        |

From the above table it is clear that the most stable runs are achieved by using preloading in combination with disabling both ASLR and AHI. For one of the DBT-3 queries, using preloading on a scale factor 10 database leads to higher variance within a test run. While the COV within a test run is below 0.2% for all queries without preloading, query 21 has a COV of 3% with preloading.
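The Avg(MAX-MIN) and Max(MAX-MIN) figures in the table can be computed along these lines. This is a hypothetical sketch of one plausible definition (spread relative to the best run), with an input format invented for the example: one line per query, holding that query's average runtime in each test run:

```shell
# Each input line: <query id> followed by the per-test-run average
# runtimes of that query. For every query, compute the relative spread
# 100 * (MAX - MIN) / MIN, then report the average and the maximum
# spread over all queries.
spread() {
  awk '{
         min = $2; max = $2
         for (i = 3; i <= NF; i++) {
           if ($i < min) min = $i
           if ($i > max) max = $i
         }
         d = 100 * (max - min) / min
         sum += d; n++
         if (d > maxd) maxd = d
       }
       END { printf "Avg(MAX-MIN)=%.2f%% Max(MAX-MIN)=%.2f%%\n", sum / n, maxd }'
}

# Example with two queries and three test runs each:
printf 'Q1 1.00 1.02 1.01\nQ2 2.00 2.10 2.05\n' | spread
# → Avg(MAX-MIN)=3.50% Max(MAX-MIN)=5.00%
```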
I am currently investigating this, and I have indications that the variance can be reduced by setting the memory placement policy to interleave. I guess the reason is that with a 32 GB InnoDB buffer pool, one will not be able to allocate all memory locally to the CPU where the server is running.

What Did Not Have an Effect?

Here is a list of things I have tried that did not seem to have a positive effect on the stability of my results:

- Different governors for CPU frequency scaling. I have chosen the performance governor because it “sounds good”, but I did not see any difference when using the powersave governor instead. I also tried turning off the Intel pstate driver, but did not notice any difference in that case either.
- Binding the MySQL server to a single core or thread (instead of a CPU).
- Binding the MySQL client to a single CPU.
- Different settings for the NUMA memory placement policy using numactl. (With the possible exception of using interleave for scale factor 10, as mentioned above.)
- Different memory allocation libraries (jemalloc, tcmalloc). jemalloc actually seemed to increase the instability.
- Disabling InnoDB buffer pool preloading: innodb_buffer_pool_load_at_startup = off
- Setting innodb_old_blocks_time = 0

Conclusion

My tests have shown that I get better stability of test results if I disable both ASLR and AHI, and that combining this with preloading of all tables and indexes will in many cases further improve the stability of my test setup.

I welcome any comments and suggestions on how to further increase the stability of MySQL benchmarks. I do not claim to be an expert in this field, and any input will be highly appreciated.

Presentations at OpenWorld and PerconaLive

During the last month, I presented on MySQL both at Oracle OpenWorld and at Percona Live in Amsterdam. The slides from the presentations have been uploaded to the conference web sites, and you can also find them on Slideshare:

- How to Analyze and Tune MySQL Queries for Better Performance (tutorial at OpenWorld)
- MySQL 8.0: Common Table Expressions (presented at both conferences)
