
Fun with Bugs #63 – On Bugs Detected by ASan

Among other things, Geir Hoydalsvik stated in his nice post yesterday: “We’ve fixed a number of bugs detected by UBsan and Asan.”
This is indeed true; I already noted many related bugs fixed in the recent MySQL 8.0.4. But I think a couple of details are missing from the blog post. First of all, there is still a notable number of bugs detected by ASan, or noted in builds with ASan, that remain “Verified”. Second, who actually found and reported these bugs? I decided to do a quick search and present my summary to clarify these details. Let me start with the list of “Verified” or “Open” bugs in the public MySQL bugs database, starting from the oldest one:

Bug #69715 – “UBSAN: Item_func_mul::int_op() mishandles 9223372036854775809*-1”. The oldest related “Verified” bug I found was reported back in 2013 by Arthur O’Dwyer. Shane Bester from Oracle kindly keeps checking it with recent and upcoming releases, so we know that even ‘9.0.0-dmr-ubsan’ (built on 20 October 2017) was still affected.

Bug #80309 – “some innodb tests fail with address sanitizer (WITH_ASAN)”. It was reported by Richard Prohaska and has remained “Verified” for more than two years already.

Bug #80581 – “rpl_semi_sync_[non_]group_commit_deadlock crash on ASan, debug”. This bug reported by Laurynas Biveinis from Percona two years ago is still “Verified”.

Bug #81674 – “LeakSanitizer-enabled build fails to bootstrap server for MTR”. This bug reported by Laurynas Biveinis affects only MySQL 5.6, but still, why not backport the fix from 5.7?

Bug #82026 – “Stack buffer overflow with --ssl-cipher=<more than 4K characters>”. This bug, detected by ASan, was noted by Yura Sorokin from Percona and reported by Laurynas Biveinis.

Bug #82915 – “SIGKILL myself when using innodb_limit_optimistic_insert_debug=2 and drop table”. ASan debug builds are affected. This bug was reported by Roel Van de Paar from Percona.

Bug #85995 – “Server error exit due to empty datadir causes LeakSanitizer errors”. This bug in MySQL 8.0.1 (that had to affect anyone who runs tests on ASan debug builds on a regular basis) was reported by Laurynas Biveinis and has stayed “Verified” for almost a year.

Bug #87129 – “Unstable test main.basedir”. This test problem reported by Laurynas Biveinis affects ASan builds, among others. See also his Bug #87190 – “Test main.group_by is unstable”.

Bug #87201 – “XCode 8.3.3+ -DWITH_UBSAN=ON bundled protobuf build error”. Yet another (this time macOS-specific) bug found by Laurynas Biveinis.

Bug #87295 – “Test group_replication.gr_single_primary_majority_loss_1 produces warnings”. Potential bug in group replication noted by Laurynas Biveinis in ASan builds.

Bug #87923 – “ASan reporting a memory leak on merge_large_tests-t”. This bug by Laurynas Biveinis is still “Verified”, while Tor Didriksen’s comment states that it is resolved with the fix for Bug #87922 (which is closed as fixed in MySQL 8.0.4). Why not close this one as well?

Bug #89438 – “LeakSanitizer errors on xplugin unit tests”. As Laurynas Biveinis found, X Plugin unit tests report errors with LeakSanitizer.

Bug #89439 – “LeakSanitizer errors on GCS unit tests”. Yet another bug report for MySQL 8.0.4 by Laurynas Biveinis.

Bug #89961 – “add support for clang ubsan”. This request was made by Tor Didriksen from Oracle. It is marked as “fixed in 8.0.12”. It means we may get MySQL 8.0.11 released soon. That’s why I decided to mention the bug here.
There were also a few other test failures noted on ASan debug builds. I skipped them to make this post shorter.

Personally, I do not run builds or tests with ASan on a regular basis. I appreciate Oracle’s efforts to make the code warning-free, UBSan- and ASan-clean, and to fix bugs found with ASan. But I’d also want them to process all or most of the related bugs in the public database properly before making announcements of new related achievements, and to clearly admit and appreciate the large amount of help and contribution from specific community members (mostly Laurynas Biveinis in this case).

Percona engineers seem to test ASan builds of MySQL 5.7 and 8.0 (or Percona’s closely related versions) regularly, for years, and contribute back public bug reports. I suspect they found way more related bugs than Oracle’s internal QA. I think we should explicitly thank them for this contribution that made MySQL better!

Fun with Bugs #59 – On MySQL Bug Reports I am Subscribed to, Part II

The New Year (that starts on Monday!) gives a good opportunity to change something in our lives, to start doing something new, better or different. Let’s assume I failed with all of these so far, as I am again posting about MySQL bugs here.

Since my previous post on this topic I’ve subscribed to 15 more MySQL bugs, and being on a combination of public holidays and vacation now gives me a good opportunity to review these bug reports. Here they are, starting from the most recent:

Bug #89065 – “sync_binlog=1 on a busy server and slow binary log filesystem stalls slaves”. I do not remember seeing multiple threads in “Finished reading one binlog; switching to next binlog” state, but it would be interesting to see this bug report processed properly.

Bug #89051 – “EXPLAIN output is different for same content depending when index was added”. The way the optimizer decides on “range” vs “ref” access is always interesting. Here, based on a recent comment by Øystein Grøvlen, the bug is actually that “Cost values are not correct when optimizer switch from ref-access to range-access in order to use more key parts”.

Bug #88914 – “Potential null pointer dereference at pointer node->undo_recs (row0purge.cc)”. It’s funny to see many bugs becoming private as “security” ones and, at the same time, this bug, where the reporter suspects it is exploitable, being “Open” and ignored for more than two weeks…

Bug #88891 – “Filtered replication leaves GTID holes with create database if not exists”. I can not even explain how much I “like” all kinds of GTIDs I have to deal with, especially such long lists of GTIDs as may be created in the cases described in this report.

Bug #88863 – “COUNT(*) can sometimes return bogus value”. Now, this is a really funny bug! It must be some race condition, and I’d really prefer to see this bug fixed soon.

Bug #88844 – “MySQL crash with InnoDB assertion failure in file pars0pars.cc”. Nice crash (that I’ve never seen before) quickly reproduced by Shane Bester.

Bug #88834 – “Uneven slowdown on systems with many users”. What can be better for speeding up connections than checking the list of users one by one, especially when there are thousands of users?

Bug #88827 – “innodb uses too much space in the PK for concurrent inserts into the same table”. As Mark Callaghan put it: “I am not sure my reproduction details will ever satisfy Sinisa but I don’t mind if you don’t fix this because I care more about MyRocks today and this bug makes MyRocks look better.” We (Facebook’s MySQL, MariaDB and Percona Server users) do have MyRocks, but why should poor Oracle MySQL users suffer? Let’s hope Sinisa Milivojevic will process the bug fast, with all the details clarified there 🙂

Bug #88791 – “Binary log for generated column contains new value as WHERE clause, not old value”. Generated columns and binary logging, what could go wrong?

Bug #88764 – “”ALTER TABLE MODIFY…” takes time even if leaving table as is”. Any simple test cases that come to my mind do NOT let me reproduce this problem, but I feel some potential as soon as more exotic cases like partitioning or data directory settings are considered. Let’s wait for the bug reporter to clarify.

Bug #88720 – “Inconsistent and unsafe FLUSH behavior in terms of replication”. Nice summary of problems from Przemyslaw Malkowski. One more reason for me to hate GTIDs, honestly.

Bug #88694 – “MySQL accepts wildcard for database name for table level grant but won’t use it”. One more problem with privileges reported by Daniël van Eeden.

Bug #88674 – “Regression CREATE TBL from 5.7.17 to 20 (part #2: innodb_file_per_table = OFF).” and Bug #88673 – “Regression CREATE TBL from 5.7.17 to 20 (part #1: innodb_file_per_table = ON).” – these two bugs were reported by Jean-François Gagné and clearly show some things that Oracle does wrong when fixing (private, “security”) bugs…

Bug #88671 – “ALL + BNL chosen over REF in query plan with a simple SELECT + JOIN”. In this case the optimizer (probably) does not take costs into account properly when choosing a block nested loop join vs the usual “ref” index access. Maybe it is also just a matter of missing/wrong statistics. It would be interesting to find out eventually.

Bug #88666 – “I_S FILES : all rows are displayed whatever the user privileges”. Yet another bug report from Jocelyn Fournier. I am actually surprised by the number of bugs related to privileges that I have noticed recently.

Bug #88633 – “Auto_increment value on a table is less than max(id) can happen”. It seems only MySQL 5.7.x is affected, but not 5.6.x.

Bug #88623 – “Modifying virtually generated column is not online”. Maybe by design, but still surprising.

Bug #88534 – “XA may lost prepared transaction and cause different between master and slave”. XA transactions with MySQL are still a sure way to disaster, I think. See also Bug #87526, which should appear in the next part of this series…
Stay tuned for more posts about MySQL bugs from me in the New Year of 2018!

How to Find Processlist Thread id in gdb

I was involved in a discussion on some complex MySQL-related problem where we had to study backtraces of all threads in gdb (produced by the thread apply all bt command, if you ever forget it) in a hope to find out why MySQL hangs. In the process the question appeared of how to find the thread id for each thread, to match it against previously collected outputs of SHOW PROCESSLIST and SHOW ENGINE INNODB STATUS.

I assumed I knew the answer, as I had to find this out recently enough for this blog post (and before that for a real customer case). The idea is simple: find a frame where the function has a parameter of THD * type (usually named thd), like this:

#10 0x0000000000cb47fe in do_command (thd=0x7f32512b7000)
    at /usr/src/debug/percona-server-5.7.18-15/percona-server-5.7.18-15/sql/sql_parse.cc:960

and check the thread_id item of this structure.

In that blog post it looked as simple as just referring to thd of do_command’s frame without even checking much:

(gdb) thread 2
[Switching to thread 2 (Thread 0x7f7f5ce02b00 (LWP 9232))]
#0  pthread_cond_timedwait@@GLIBC_2.3.2 ()
    at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238
238     ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S: No such file or directory.
(gdb) p do_command::thd->thread_id
$9 = 14

I prefer to double check my suggestions before making them, so I immediately tried this with my CentOS 6.9 VM running a recent Percona Server 5.7.x by default, since the times when I worked at Percona:

[root@centos ~]# gdb -p `pidof mysqld`
GNU gdb (GDB) Red Hat Enterprise Linux (7.2-92.el6)
...
Loaded symbols for /usr/lib64/mysql/plugin/tokudb_backup.so
0x00007f550ad35383 in poll () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install glibc-2.12-1.209.el6_9.2.x86_64 jemalloc-3.6.0-1.el6.x86_64 keyutils-libs-1.4-5.el6.x86_64 krb5-libs-1.10.3-65.el6.x86_64 libaio-0.3.107-10.el6.x86_64 libcom_err-1.41.12-23.el6.x86_64 libgcc-4.4.7-18.el6.x86_64 libselinux-2.0.94-7.el6.x86_64 libstdc++-4.4.7-18.el6.x86_64 nss-softokn-freebl-3.14.3-23.3.el6_8.x86_64 numactl-2.0.9-2.el6.x86_64 openssl-1.0.1e-57.el6.x86_64 zlib-1.2.3-29.el6.x86_64
(gdb) thread 1
[Switching to thread 1 (Thread 0x7f550d2b2820 (LWP 1978))]
#0  0x00007f550ad35383 in poll () from /lib64/libc.so.6
(gdb) p do_command::thd->thread_id
No frame is currently executing in block do_command(THD*).
(gdb) thread 2
[Switching to thread 2 (Thread 0x7f54d837b700 (LWP 2183))]
#0  0x00007f550ad35383 in poll () from /lib64/libc.so.6
(gdb) p do_command::thd->thread_id
Cannot take address of method thread_id.
(gdb) call do_command::thd->thread_id()
Cannot evaluate function -- may be inlined

As you can see, I started to check threads one by one and apply that good old trick. Thread 1 had no frame executing do_command(), but I did not give up and proceeded to the next thread, as I knew I had at least one active connection (I had checked the output of SHOW PROCESSLIST). There I got a surprise: no way to get the thread_id from thd. I used tab completion, so I know that thread_id (a variable or a method) exists, but the attempt to call it also failed, as you can see.

This is a problem with using gdb-based “tricks” over code that evolves/changes over time. The last time I used p do_command::thd->thread_id it was probably for MariaDB 10.1.x, and the item was there.
But in MySQL 5.7 (and all forks based on it) there were many code changes, so we should be ready for changes in unexpected places.

I had not added more comments on finding the thread id to that discussion; I made a note to myself and then, later, decided to check the source code of MySQL 5.7 (I did not have the Percona Server 5.7 sources at hand, but they hardly differ in such basic details) to find out what had changed in the THD structure so that thread_id is no longer just a variable. I expected to see the structure defined in sql/sql_class.h, as in the past, but grep will help to find this out even if it’s no longer the case:

[root@centos mysql-server]# grep -n "class THD" sql/*.h
sql/debug_sync.h:27:class THD;
sql/derror.h:24:class THD;
sql/event_data_objects.h:40:class THD;
...
sql/sql_class.h:1412:class THD :public MDL_context_owner,
sql/sql_class.h:4175:    raise_error() or raise_warning() methods provided by class THD.
sql/sql_cmd.h:25:class THD;
...

I found the following there:

class THD :public MDL_context_owner,
           public Query_arena,
           public Open_tables_state
{
...
private:
  my_thread_id  m_thread_id;
public:
...
  /**
    Assign a value to m_thread_id by calling
    Global_THD_manager::get_new_thread_id().
  */
  void set_new_thread_id();
  my_thread_id thread_id() const { return m_thread_id; }
...

So, in MySQL 5.7 thread_id() is, indeed, a method that was inlined, and essentially it returns the private m_thread_id item. Benefits of C++… I have highlighted the Global_THD_manager singleton as well, because during my next gdb sessions I found out that the simple global list of threads is also gone, and in 5.7 everything is done via that Global_THD_manager. This is a topic for some other post, though.

At least now I know what to do in gdb:

...
(gdb) thread 7
[Switching to thread 7 (Thread 0x7f54d8236700 (LWP 2275))]
#0  0x00007f550ad35383 in poll () from /lib64/libc.so.6
(gdb) p do_command::thd->m_thread_id
$1 = 86
(gdb) p do_command::thd->m_main_security_ctx
$3 = {m_user = {m_ptr = 0x7f5500fdaf90 "myuser", m_length = 6,
    m_charset = 0x1ded640, m_alloced_length = 8, m_is_alloced = true},
  m_host = {m_ptr = 0x7f54d98ab090 "localhost", m_length = 9,
    m_charset = 0x1ded640, m_alloced_length = 16, m_is_alloced = true},
  m_ip = {m_ptr = 0x7f54f0eb0210 "127.0.0.1", m_length = 9,
    m_charset = 0x1ded640, m_alloced_length = 16, m_is_alloced = true},
  m_host_or_ip = {m_ptr = 0x7f54d98ab090 "localhost", m_length = 9,
    m_charset = 0x1ded640, m_alloced_length = 0, m_is_alloced = false},
  m_external_user = {m_ptr = 0x15167ab "", m_length = 0,
    m_charset = 0x1ded640, m_alloced_length = 0, m_is_alloced = false},
  m_priv_user = "myuser", '\000' <repeats 89 times>, m_priv_user_length = 6,
  m_proxy_user = '\000' <repeats 161 times>, m_proxy_user_length = 0,
  m_priv_host = "localhost", '\000' <repeats 51 times>,
  m_priv_host_length = 9, m_master_access = 1589248, m_db_access = 0,
  m_password_expired = false}
...

So, I know that the specific thread 7 was for a session with id 86 in the output of SHOW PROCESSLIST, and (from m_main_security_ctx, also a new name for old things in 5.7) I know it was a session of myuser connecting locally.

To summarize, there were notable changes in MySQL 5.7 in the THD structure and in thread management-related code in general, so make sure to re-check your “old gdb tricks” when you start working with 5.7. Reading the code helps. Unfortunately (for gdb beginners like me) a lot of C++ approaches were introduced, including singletons, iterators based on templates instead of simple doubly linked lists, etc., so one has to work hard to adapt to these.
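Since checking threads one by one quickly gets tedious, here is a minimal sketch of my own (not from the discussion above) of a gdb Python helper that walks all threads and prints the processlist id for those currently executing do_command(). It assumes a gdb built with Python support and the MySQL 5.7 m_thread_id member shown above; the file name mysql_thread_ids.py is just an example.

import gdb

class MySQLThreadIds(gdb.Command):
    """Print PROCESSLIST ids for all threads currently inside do_command()."""

    def __init__(self):
        super(MySQLThreadIds, self).__init__("mysql-thread-ids", gdb.COMMAND_USER)

    def invoke(self, arg, from_tty):
        for thread in gdb.selected_inferior().threads():
            thread.switch()  # make this the current thread, like "thread N"
            try:
                # same expression as "p do_command::thd->m_thread_id"
                tid = gdb.parse_and_eval("do_command::thd->m_thread_id")
                print("gdb thread %d -> processlist id %s" % (thread.num, tid))
            except gdb.error:
                # no do_command() frame here: an idle or background thread
                pass

MySQLThreadIds()

After (gdb) source mysql_thread_ids.py, running mysql-thread-ids should print one line per connection thread and silently skip the rest.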
I hope to discuss some of my further findings and new “C++ specific” and “MySQL 5.7 specific” approaches to studying MySQL in gdb in my upcoming posts.

Fun with Bugs #46 – On Some Bugs I’ve Reported During the Year of 2016

It’s time to summarize the year of 2016. As a kind of a weird summary, in this post I’d like to share a list of MySQL bug reports I’ve created in 2016 that still remain “Verified” today:

Bug #79831 – “Unexpected error message on crash-safe slave with max_relay_log_size set”. According to Umesh this is not repeatable with 5.7. The fact that I’ve reported the bug on January 4 probably means I was working at that time. I should not repeat this mistake again next year.

Bug #80067 – “Index on BIT column is NOT used when column name only is used in WHERE clause”. People say the same problem happens with INT and, what may be even less expected, BOOLEAN columns.

Bug #80424 – “EXPLAIN output depends on binlog_format setting”. Who could expect that?

Bug #80619 – “Allow slave to filter replication events based on GTID”. In this feature request I’ve suggested to implement filtering by GTID pattern, so that we can skip all events originating from a specific master on some slave in a complex replication chain.

Bug #82127 – “Deadlock with 3 concurrent DELETEs by UNIQUE key”. It’s clear that the manual is not even close to explaining how the locks are really set “by design” in this weird case. See the comments in MDEV-10962 for some explanations. Nobody from Oracle even tried to really explain how things are designed to work.

Bug #82212 – “mysqlbinlog can produce events larger than max_allowed_packet for mysql”. This happens for encoded row-based events. There should be some way to take this overhead into account while creating the binary log, IMHO.

Bug #83024 – “Internals manual does not explain COM_SLEEP in details”. One day you’ll see Sleep for some 17 seconds logged into the slow query log, and may start to wonder why…

Bug #83248 – “Partition pruning is not working with LEFT JOIN”. You may find some interesting related ideas in MDEV-10946.

Bug #83640 – “Locks set by DELETE statement on already deleted record”. This case shows that the design of locking in InnoDB does produce really weird outcomes sometimes. This is not about a “missing manual”, this is about an extra lock set that is absolutely NOT needed (a gap X lock on a record in the secondary unique index is set when the same transaction already has the next key lock on it). As a side note, I keep finding, explaining and reporting weird or undocumented details in InnoDB locking for years, yet my talk about InnoDB locks was not accepted by Oracle once again for OOW in 2016. What do I know about the subject and who even cares about those locks…

Bug #83708 – “uint expression is used for the value that is passed as my_off_t for DDL log”. I was really shocked by this finding. I assumed that all uint vs unsigned long long improper casts were already found. It seems I was mistaken.

Bug #83912 – “Time spent sleeping before entering InnoDB is not measured/reported separately”. The use case that led me to reporting this bug is way more interesting than the fact that some wait is not instrumented in performance_schema. You may see more related bug reports from me next year.

Bug #83950 – “LOAD DATA INFILE fails with an escape character followed by a multi-byte one”. This single bug (and related bugs and stories) was the original topic for issue #46 of my “Fun With Bugs” series. I was not able to write everything I wanted properly over the last 3 weeks, but trust me: it’s a great story, of “Let’s Make America Great Again” style.
With the goal for LOAD DATA to behave exactly as INSERT when wrong utf8 data is inserted, Oracle changed the way LOAD DATA works back and forth, with the last change (back) happening in 5.7.17: “Incompatible Change: A change made in MySQL 5.7.8 for handling of multibyte character sets by LOAD DATA was reverted due to the replication incompatibility (Bug #24487120, Bug #82641)”. I just can not keep up with all the related fun people have in replication environments thanks to these ongoing changes… It’s incredible.

Bug #84004 – “Manual misses details on MDL locks set and released for online ALTER TABLE”. Nothing new: locks in MySQL are not properly/completely documented, metadata locks included. Yes, they are documented better now, after 11+ years of my continuous efforts (of a kind), but we are “not there yet”. I am still waiting for a job offer to join the MySQL Documentation Team, by the way :)

Bug #84173 – “mysqld_safe --no-defaults & silently does NOT work any more”. The recent MySQL 5.7.17 release not only gave us the new Group Replication plugin and introduced incompatible changes. In a hope to fix security issues, it also comes with a pure regression – for the first time in the last 11 years mysqld_safe --no-defaults stopped working for me! By the way, mysqld_safe is still NOT safe in the sense that 5.7.17 tried to enforce, and one day (really soon) you will find out why.

Bug #84185 – “Not all “Statements writing to a table with an auto-increment…” are unsafe”. If you do something like DELETE FROM `table` WHERE some_col IN (SELECT some_id FROM `other_table`) where `table` has an auto_increment column, why should anyone care about it? We do not generate the value, we delete rows… This bug report was actually created by Hartmut Holzgraefe and the test case comes from Elena Stepanova (see MDEV-10170). I want to take this opportunity to thank them and other colleagues from MariaDB for their hard work and cooperation during the year of 2016. Thanks to Umesh (who processed most of my bug reports), Sinisa Milivojevic and Miguel Solorzano for their verifications of my bug reports this year.

In conclusion I should say that, no matter how pointless you may consider this activity, I still suggest you report each and every problem that you have with MySQL and can not understand after reading the manual as a public MySQL bug. Now, re-read my 4 years old post on this topic and have a Happy and Fruitful New Year 2017!

Testing (again) LOAD DATA on MySQL 5.6, 5.7, 8.0 (non-GA) and MariaDB 10.0, 10.1 and 10.2 (non-GA)

I’ve lately been compiling and generating .deb packages for several recent MySQL and MariaDB versions, and I wanted to try them more in depth, especially MySQL 8.0 and MariaDB 10.2, both of which are still in development.
Several people have already given their first impressions (or will do soon), and testing early is the best way to catch bugs and regressions and get them fixed before the official release. In fact, as I will comment later, I ran into breaking bugs on both MySQL 8.0 and MariaDB 10.2, which I immediately reported, or joined others on their bug reports.
Last time I did a similar test, I found an important 30% regression in single-thread performance between 5.6 and 5.7, and I wasn’t the only one reporting single-thread performance concerns. This is why I included 3 generations of major database versions: the latest minor release (as of December 2016) of the old but stable 5.6/10.0; the latest GA versions 5.7/10.1; and the next, in-development versions 8.0/10.2 (exact versions shown below).
Very recently, there was a call for better MySQL benchmarks, and I cannot agree more with the general idea. However, I have to make some disclaimers: the following are “test results”, closer to regression testing than to benchmarking. Benchmarking is hard, and I do not have the time or the resources to do it properly and extensively. What I wanted to do is a very specific test of a very specific operation (LOAD DATA in a single thread) under very specific circumstances/configuration [in some cases, bad configuration practices] to see if a previously occurring problem was still there. This is not a mere thought experiment: it will help me tune the import/backup recovery process better, and it did allow me to get familiar with the newest versions’ particular idiosyncrasies. Clearly, as you will discover, I do not yet know how to tune the latest unreleased versions (who knows at this point), so join me in the discovery process.
Secondly, the test I will be doing (LOAD DATA) does not include secondary indexes or JOINs. Again, this will test import times, but not other more common operations like point selects, updates and range selects. Thirdly, I am comparing both Generally Available (stable) versions and versions still in development. The latter can change a lot between now and the release date. I repeat again: this will in no way be representative of the overall performance of MySQL or MariaDB versions. If you come away with the idea that “X is better than Y” based on a chart shown here, you will probably be very wrong. You have been warned.
I hope, however, that some of these results will be helpful to some fellow DBAs and MySQL/MariaDB users and developers, and that is why I am sharing them early. You may also help me explain some of the results I am getting, which I may not 100% understand yet.
The setup
Server versions:
Oracle MySQL 5.6.34
Oracle MySQL 5.7.16
Oracle MySQL 8.0.0-dmr (not “generally available”)
MariaDB Server 10.0.28
MariaDB Server 10.1.19
MariaDB Server 10.2.2 (non “generally available”)
All versions were compiled from the source code downloaded from their official websites, with the recommended options (-DBUILD_CONFIG=mysql_release).
Hardware (desktop grade, no Xeon or a proper RAID setup):
Intel(R) Core(TM) i7-4770K CPU @ 3.50GHz (x86_64 Quad-core with hyperthreading)
16 GB of RAM
Single, desktop-grade Samsung SSD 850 PRO 512GB
OS and configuration:
Debian GNU/Linux 8.6 “Jessie”
datadir formatted as xfs, mounted with noatime option, all on top of LVM
Several server configurations were used, commented on in each individual test.
Dataset:
I used again the nodes.csv I used last time, a 3,700,635,579-byte text (tab-separated) file containing 46,741,126 lines (rows) with an old OSM node data dump for Spain:
171773 38.6048402 -0.0489871 4 2012-08-25 00:37:46 12850816 472193 rubensd
171774 38.6061981 -0.0496867 2 2008-01-19 10:23:21 666916 9250 j3m
171775 38.6067166 -0.0498342 2 2008-01-19 10:23:21 666916 9250 j3m
171776 38.6028122 -0.0497136 5 2012-08-25 00:26:40 12850816 472193 rubensd
171933 40.4200658 -3.7016652 6 2011-11-29 05:37:42 9984625 33103 sergionaranja
Those rows will be loaded using the load_data_04.py script I used last time, which basically creates a table like this:
CREATE TABLE `nodes` (
`id` bigint PRIMARY KEY,
`lat` decimal(9,7),
`lon` decimal(10,7),
`version` int,
`timestamp` timestamp,
`changeset` bigint,
`uid` bigint,
`user` varchar(255)
);
And then executes LOAD DATA, then commits:
# finishing
if (options.engine == 'MyISAM'):
    cursor.execute('FLUSH TABLES')
else:
    db.commit()

cursor.close()
db.close()
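For reference, the core of the import (the part timed in these tests) boils down to a single server-side LOAD DATA statement followed by one commit. The following is a simplified sketch of my own, not the exact contents of load_data_04.py; the connection parameters and the /tmp path (chosen to satisfy the secure_file_priv setting discussed below) are assumptions for illustration:

import MySQLdb

# connection parameters are placeholders for illustration
db = MySQLdb.connect(host='localhost', user='test', passwd='test', db='test')
cursor = db.cursor()
cursor.execute("SET autocommit = 0")
# single-threaded, server-side import of the tab-separated dump
cursor.execute("""
    LOAD DATA INFILE '/tmp/nodes.csv'
    INTO TABLE nodes
    FIELDS TERMINATED BY '\\t'
    LINES TERMINATED BY '\\n'
""")
db.commit()
cursor.close()
db.close()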
General impressions and breaking bugs found

Setting up the server for the first time now differs between MySQL and MariaDB. While MariaDB continues using the mysql_install_db script, MySQL, since 5.7, uses the mysqld server binary directly. Also, while I have to thank Oracle for focusing on security, if you are going to set up a server just for pure testing, the default account creation options can be a bit annoying. I have to say thanks because there is an additional --initialize-insecure option which simplifies the setup for one-time setups like this benchmark or a jenkins test, and that you probably should not be using in proper production. Maybe it could be helpful for automatic deploys, where accounts are taken care of by a script.
For MySQL, I found a problem in 8.0.0 in which mysqld --initialize did not work with binary collations (nor did it start if I did the initialization with the default collation and then started the server with binary as the default encoding). The error I got was:
mysqld: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'binary' at line 1
[ERROR] Data Dictionary initialization failed.
This was clearly caused by the latest feature, the new InnoDB data dictionary. I reported this as Bug #83706 and apparently it will be fixed for 8.0.1.
Binary collation is very important for Wikipedias, because some time ago (and even recently, with the latest extensions to the standard) full Unicode support was difficult to get, and we need to support 300 languages. Also, we have certain needs like custom collation support (e.g. several collations on the same table, numeric sorting). As a consequence of that, most text data is now stored in raw binary. For these tests, I had to disable the binary character set for some tests on 8.0.0, which I have appropriately noted below.
For MariaDB, I ran into an issue that had already been reported (MDEV-10540), in which mysql_install_db hung if the log_bin option was enabled. While this bug could be worked around by disabling log_bin just for the installation, the option is so common and important that I think the issue should be solved as soon as possible.
I also ran into MDEV-1124, MariaDB Server promoting an external business, a mistake which hopefully will be fixed by the release of 10.1.20.
In order to perform the tests, I had to destroy and recreate the datadir, starting from scratch on each run (rm + --initialize). This is also a testimony that you no longer have binary compatibility between server vendors, due to the differences in the mysql schema, performance_schema tables, the new data dictionary, etc.
Another thing to notice is that, because of the latest security patches, you have to explicitly enable secure_file_priv in order to use the LOAD DATA query. You probably do not want to do that on a production server; more details about the issue in CVE-2016-6662 and the other recently related issues.
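A quick way to confirm where the server will allow LOAD DATA to read from is to check the variable itself. This is a tiny illustrative sketch of my own (connection parameters are placeholders), not part of the original test scripts:

import MySQLdb

db = MySQLdb.connect(host='localhost', user='test', passwd='test')
cursor = db.cursor()
cursor.execute("SHOW GLOBAL VARIABLES LIKE 'secure_file_priv'")
# with the configurations used below this prints ('secure_file_priv', '/tmp')
print(cursor.fetchone())
db.close()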
Default config testing
While testing with the default config is the #1 sin one could commit while benchmarking (because you are testing the vendor defaults, not doing a real performance comparison), my test is silly and simple enough for this to be interesting and worth spending some time on. Long gone are the times in which MySQL’s defaults were horrible, and Oracle seems very focused lately on trying to set up good defaults that will be useful for the majority of people. But let’s also be honest: these options (together with what packagers set up) are also the ones that 90% of the MySQL users out there will use, because they are not running dedicated services that require a lot of tuning, nor do they have a dedicated DBA to do it.
So I went and set up MySQL with just a bunch of administrative options. In particular I would like to note that by default:
No binary log enabled in all versions
Default buffer pool size is 134217728 for all versions
innodb_log_file_size defaults to 50MB
[client]
port = 3306
socket = /tmp/mysql.sock

[mysqld]

# administrative options
user = mysql
socket = /tmp/mysql.sock
port = 3306
datadir = /srv/sqldata
basedir = /opt/mysql # mysql is a symbolic link to each basedir
tmpdir = /srv/tmp
# prevent imports/exports to arbitrary dirs
secure_file_priv = /tmp
Some interesting config differences between 5.6 and MariaDB 10.0 to take into account: performance schema is disabled on MariaDB, and many buffers and caches have different sizes:
25,26c5,7
< back_log 150

> back_log 80
61a43
> default_tmp_storage_engine InnoDB
67a51,53
> end_markers_in_json OFF
> enforce_gtid_consistency OFF
> eq_range_index_dive_limit 10
101,103c82
< host_cache_size 128

> host_cache_size 279
208c170
< innodb_open_files 400

> innodb_open_files 2000
259,261c210
< join_buffer_size 131072
< join_buffer_space_limit 2097152

> join_buffer_size 262144
263c212
< key_buffer_size 134217728

> key_buffer_size 8388608
292a243
> master_info_repository FILE
294c245
< max_allowed_packet 1048576

> max_allowed_packet 4194304
319d268
< mrr_buffer_size 262144
341,353c289,304
< optimizer_selectivity_sampling_limit 100
< optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_merge_sort_intersection=off,engine_condition_pushdown=off,index_condition_pushdown=on,derived_merge=on,derived_with_keys=on,firstmatch=on,loosescan=on,materialization=on,in_to_exists=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on,mrr=off,mrr_cost_based=off,mrr_sort_keys=off,outer_join_with_cache=on,semijoin_with_cache=on,join_cache_incremental=on,join_cache_hashed=on,join_cache_bka=on,optimize_join_buffer_size=off,table_elimination=on,extended_keys=on,exists_to_in=on
< optimizer_use_condition_selectivity 1
< performance_schema OFF

> optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,engine_condition_pushdown=on,index_condition_pushdown=on,mrr=on,mrr_cost_based=on,block_nested_loop=on,batched_key_access=off,materialization=on,semijoin=on,loosescan=on,firstmatch=on,subquery_materialization_cost_based=on,use_index_extensions=on
> performance_schema ON
388,390c337,338
< query_cache_size 0
< query_cache_strip_comments OFF
< query_cache_type ON

> query_cache_size 1048576
> query_cache_type OFF
399a349
> relay_log_info_repository FILE
415,417c357,358
< secure_auth OFF

> secure_auth ON
423a371,373
> slave_allow_batching OFF
> slave_checkpoint_group 512
> slave_checkpoint_period 300
425,426d374
< slave_ddl_exec_mode IDEMPOTENT
< slave_domain_parallel_threads 0
431,432c379,381
< slave_parallel_max_queued 131072
< slave_parallel_threads 0

> slave_parallel_workers 0
> slave_pending_jobs_size_max 16777216
441c390
< sort_buffer_size 2097152

> sort_buffer_size 262144
447c396
< sql_mode

> sql_mode STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION
465,467c414,416
< sync_master_info 0
< sync_relay_log 0
< sync_relay_log_info 0

> sync_master_info 10000
> sync_relay_log 10000
> sync_relay_log_info 10000
469,471c418,421
< table_definition_cache 400
< table_open_cache 400
< thread_cache_size 0

> table_definition_cache 1400
> table_open_cache 2000
> table_open_cache_instances 1
> thread_cache_size 9
474,479c424
< thread_pool_size 8
< thread_pool_stall_limit 500
< thread_stack 294912

> thread_stack 262144
Some interesting config differences between MySQL 5.7 and MySQL 8.0: Barracuda is now the default InnoDB file format and innodb_large_prefix is also enabled by default. Also, some extra variables related to the new data dictionary:
93c93
< ignore_db_dirs

> information_schema_stats CACHED
122d121
< innodb_checksums ON
136,138d134
< innodb_file_format Barracuda
< innodb_file_format_check ON
< innodb_file_format_max Barracuda
143c139
< innodb_flush_method

> innodb_flush_method fsync
163d158
< innodb_large_prefix ON
165d159
< innodb_locks_unsafe_for_binlog OFF
208d201
< innodb_stats_sample_pages 8
213d205
< innodb_support_xa ON
331a324
> performance_schema_error_size 1043
348c341
< performance_schema_max_memory_classes 320

> performance_schema_max_memory_classes 350
354c347
< performance_schema_max_rwlock_classes 40

> performance_schema_max_rwlock_classes 50
360c353
< performance_schema_max_statement_classes 193

> performance_schema_max_statement_classes 201
370a364
> persisted_globals_load ON
404a399
> schema_definition_cache 256
465a461
> stored_program_definition_cache 256
468d463
< sync_frm ON
475a471
> tablespace_definition_cache 256
Some interesting config differences between MariaDB 10.1 and MariaDB 10.2: some extra features integrated from the XtraDB engine or made compatible with more recent versions of MySQL; the buffer pool dump enabled by default; Barracuda set as the default file format; SQL strict mode; new checksum algorithms; deprecated variables removed; an option for CTEs; and server_id now defaulting to 1:
16c16
< aria_recover NORMAL

> aria_recover_options NORMAL
31c31
< binlog_checksum NONE

> binlog_checksum CRC32
47c47,48

> check_constraint_checks ON
121c122
< innodb_adaptive_hash_index_partitions 1

> innodb_adaptive_hash_index_parts 8
123d123
< innodb_additional_mem_pool_size 8388608
136c136,137
< innodb_buffer_pool_dump_at_shutdown OFF

> innodb_buffer_pool_chunk_size 134217728
> innodb_buffer_pool_dump_at_shutdown ON
138c139
< innodb_buffer_pool_dump_pct 100

> innodb_buffer_pool_dump_pct 25
140c141
< innodb_buffer_pool_instances 8

> innodb_buffer_pool_instances 1
142c143
< innodb_buffer_pool_load_at_startup OFF

> innodb_buffer_pool_load_at_startup ON
144d144
< innodb_buffer_pool_populate OFF
148c148
< innodb_checksum_algorithm INNODB

> innodb_checksum_algorithm crc32
150d149
< innodb_cleaner_lsn_age_factor HIGH_CHECKPOINT
158d156
< innodb_corrupt_table_action assert
161a160
> innodb_default_row_format dynamic
171d169
< innodb_empty_free_list_algorithm BACKOFF
177d174
< innodb_fake_changes OFF
180c177
< innodb_file_format Antelope

> innodb_file_format Barracuda
182c179
< innodb_file_format_max Antelope

> innodb_file_format_max Barracuda
183a181
> innodb_fill_factor 100
187a186
> innodb_flush_sync ON
192d190
< innodb_foreground_preflush EXPONENTIAL_BACKOFF
210,212c208
< innodb_kill_idle_transaction 0
< innodb_large_prefix OFF
< innodb_lock_schedule_algorithm fcfs

> innodb_large_prefix ON
214d209
< innodb_locking_fake_changes ON
221c212
< innodb_log_checksum_algorithm INNODB

> innodb_log_checksums ON
225a217
> innodb_log_write_ahead_size 8192
227,228d218
< innodb_max_bitmap_file_size 104857600
< innodb_max_changed_pages 1000000
230c220
< innodb_max_dirty_pages_pct_lwm 0.001000

> innodb_max_dirty_pages_pct_lwm 0.000000
233c223
< innodb_mirrored_log_groups 1

> innodb_max_undo_log_size 1073741824
243a234
> innodb_page_cleaners 1
248c239,240
< innodb_purge_threads 1

> innodb_purge_rseg_truncate_frequency 128
> innodb_purge_threads 4
256d247
< innodb_sched_priority_cleaner 19
259,261d249
< innodb_show_locks_held 10
< innodb_show_verbose_locks 0
< innodb_simulate_comp_failures 0
275c263
< innodb_strict_mode OFF

> innodb_strict_mode ON
279a268
> innodb_temp_data_file_path ibtmp1:12M:autoextend
283,284c272,273
< innodb_track_changed_pages OFF
< innodb_undo_directory .

> innodb_undo_directory ./
> innodb_undo_log_truncate OFF
289d277
< innodb_use_global_flush_log_at_trx_commit ON
292,293d279
< innodb_use_stacktrace OFF
< innodb_use_sys_malloc ON
352a339
> max_recursive_iterations 4294967295
388c375
< optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_merge_sort_intersection=off,engine_condition_pushdown=off,index_condition_pushdown=on,derived_merge=on,derived_with_keys=on,firstmatch=on,loosescan=on,materialization=on,in_to_exists=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on,mrr=off,mrr_cost_based=off,mrr_sort_keys=off,outer_join_with_cache=on,semijoin_with_cache=on,join_cache_incremental=on,join_cache_hashed=on,join_cache_bka=on,optimize_join_buffer_size=off,table_elimination=on,extended_keys=on,exists_to_in=on,orderby_uses_equalities=off

> optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_merge_sort_intersection=off,engine_condition_pushdown=off,index_condition_pushdown=on,derived_merge=on,derived_with_keys=on,firstmatch=on,loosescan=on,materialization=on,in_to_exists=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on,mrr=off,mrr_cost_based=off,mrr_sort_keys=off,outer_join_with_cache=on,semijoin_with_cache=on,join_cache_incremental=on,join_cache_hashed=on,join_cache_bka=on,optimize_join_buffer_size=off,table_elimination=on,extended_keys=on,exists_to_in=on,orderby_uses_equalities=on,condition_pushdown_for_derived=on
413c400
< performance_schema_max_statement_classes 178

> performance_schema_max_statement_classes 184
465c452,456
< server_id 0

> server_id 1
479a471
> slave_parallel_workers 0
508a501
> standards_compliant_cte ON
520c513,514
< thread_cache_size 0

> table_open_cache_instances 8
> thread_cache_size 151
525a520,521
> thread_pool_prio_kickup_timer 1000
> thread_pool_priority auto
528c524
< thread_stack 295936

> thread_stack 297984
These are the results I got:

                                     5.6.34       5.7.16       8.0.0-dmr    10.0.28      10.1.18      10.2.2
avg run (seconds)                    193.378      215.2826667  186.081      194.032      195.0173333  198.0726667
median (seconds)                     194.017      215.425      187.855      197.746      199.459      197.354
rows inserted/s                      241708.6018  217115.1385  251186.9885  240893.9041  239676.7774  235979.6876
insert throughput compared to 5.6    100.00%      89.83%       103.92%      99.66%       99.16%       97.63%
If you prefer a graph:
Nothing surprising here: for MySQL, the 5.7 regression still shows, now only around 10%; not sure if because of the work already done to mitigate it in BUG#75981, or just the different hardware I used compared to my last test. The good news is that the regression seems solved for 8.0, getting even better results than 5.6. We will see if that holds when we normalize the configuration options.
Regarding MariaDB, 10.0 shows a similar performance to 5.6 (probably within the range of potential measurement errors). What is noticeable is a very small degradation for 10.1 and 10.2. Nothing that worries me: normally more features create a small overhead (the same has happened in MySQL in the past), which is in most cases worth paying, and many times not even noticeable under normal load. There is also still time for 10.2 to optimize potential problems.
WMF-like config
Let’s get a bit more serious: let’s test a configuration used in real life and let’s uniformize configurations. Wikimedia core production databases use this configuration. There are several things, however, that I have to comment on that will affect this test:
The tests here were not done on the same production machines, and some of the configuration has been heavily tuned for large amounts of available memory and a hardware RAID controller, neither of which is available for the test
Some of the configuration options are MariaDB 10-dependent, with our build. They do not work on MySQL, or on other versions of MariaDB; in those cases those options were disabled. Some of them work and were kept, but they may have a negative impact in this context.
It is obviously not optimized for the specific LOAD DATA use case
WMF configuration is MariaDB 10-focused, so I expect the MySQL server not to be properly tuned for it.
SSL was not used, probably not interesting for a single connection
Binary collation is used; as I commented before, it is the chosen charset/collation for MediaWiki on WMF servers. As 8.0.0-dmr had the bug I mentioned not allowing that collation, I had to use the default collation there. Beware, that could skew its results
This is the configuration used for this test:
[client]
port = 3306
socket = /tmp/mysql.sock

[mysqld]

# administrative options
user = mysql
socket = /tmp/mysql.sock
port = 3306
# MariaDB only
#extra-port = 3307
datadir = /srv/sqldata
basedir = /opt/mysql
tmpdir = /srv/tmp
server_id = 1

skip-external-locking
skip-name-resolve
temp-pool

# Always start in read_only mode.
# Master selection will be handled by orchestration.
read_only = 1

# prevent imports/exports to arbitrary dirs
secure_file_priv = /tmp
# thread and connection handling
# Mariadb only
#thread_handling = pool-of-threads
# Mariadb only
#thread_pool_stall_limit = 100
# Mariadb only
#thread_pool_size = 32
# Mariadb only
#thread_pool_max_threads = 2000
max_connections = 10000
back_log = 500
# MariaDB only
# extra_max_connections = 10
max_connect_errors = 1000000000
max_allowed_packet = 32M
connect_timeout = 3
query_cache_size = 0
query_cache_type = 0
event_scheduler = 1
log-warnings = 0
thread_stack = 192K
thread_cache_size = 300
interactive_timeout = 28800
wait_timeout = 3600
transaction-isolation = REPEATABLE-READ

# binary log an replication options
log-bin
log-slave-updates
sync-binlog = 1
binlog-cache-size = 1M
max-binlog_size = 1000M
binlog-format = STATEMENT
expire-logs-days = 7
slave-transaction-retries = 4294967295

# table limits
tmp-table-size = 64M
max-heap-table-size = 64M
table-open-cache = 50000
table-definition-cache = 40000
open-files-limit = 200000

# optimizer options
# MariaDB only
#sql-mode = IGNORE_BAD_TABLE_OPTIONS
optimizer_switch = 'mrr=on,mrr_cost_based=on'
# MariaDB only
# ,mrr_sort_keys=on,optimize_join_buffer_size=on'
# MariaDB only
#use-stat-tables = preferably

# charsets and collations
character_set_server = binary
character_set_filesystem = binary
collation_server = binary

# InnoDB options
default-storage-engine = InnoDB
innodb_file_per_table = 1
innodb_buffer_pool_size = 12G
innodb_log_file_size = 2G
innodb_flush_log_at_trx_commit = 1
innodb_flush_method = O_DIRECT
innodb_thread_concurrency = 0
innodb_io_capacity = 1000
# MariaDB only
# innodb_stats_sample_pages = 16
innodb_stats_method = nulls_unequal
# MariaDB only
#aria_pagecache_buffer_size = 1G
# MariaDB only
#join_cache_level = 8

# dump and load innodb buffer at start and stop
innodb_buffer_pool_load_at_startup = 1
innodb_buffer_pool_dump_at_shutdown = 1

# profiling and metrics
# Enabling performance_schema (disabled by default in MariaDB10)
performance_schema = 1
# downsizing performance schema memory usage: T99485
performance_schema_max_thread_instances = 500
performance_schema_max_cond_instances = 1000
performance_schema_accounts_size = 300
performance_schema_hosts_size = 300
performance_schema_events_statements_history_size = 10
performance_schema_events_statements_history_long_size = 1000
performance_schema_events_waits_history_size = 10
performance_schema_events_waits_history_long_size = 1000
performance_schema_events_stages_history_size = 10
performance_schema_events_stages_history_long_size = 1000
performance_schema_max_mutex_instances = 5000
performance_schema_max_rwlock_instances = 2000
performance_schema_max_socket_instances = 500
performance_schema_max_table_instances = 1000

[mysqldump]

quick
max_allowed_packet = 32M
As you can see, the biggest changes were:
Buffer pool increased to 12GB, so that the whole file can fit into memory
Log file size increased to 2GB, also so that the whole file can fit into the transaction log
Performance schema enabled by default, this is very important for our query monitoring
No pool of connections (not available on MySQL)
Very large open_file_limits and max_connections, which requires downsizing the P_S parameters
These are the results:

                                     5.6.34       5.7.16       8.0.0-dmr        10.0.28      10.1.18      10.2.2
avg (seconds)                        212.501      234.3543333  220.1906667 (*)  185.5396667  187.598      210.6946667
median (seconds)                     212.48       234.608      220.123 (*)      188.16       191.393      209.287
rows inserted/s                      219957.2049  199446.391   212275.6914 (*)  251919.8554  249155.7799  221842.9481
insert throughput compared to 5.6    100.00%      90.68%       96.51% (*)       114.53%      113.27%      100.86%
(*) Using the UTF-8 collation because of an 8.0.0 bug, which may impact the results
Again, if you prefer some graph bars:
The main conclusion that I can draw from here is that my current performance configuration is not MySQL- or MariaDB 10.2-ready. While in the case of 10.0 and 10.1 the performance has gotten around 11% better, for MariaDB 10.2 and MySQL it has gotten worse, not better.
One thing that is not in the above summary is that MariaDB’s results, unlike MySQL’s, even if better on average, have a higher amount of execution time variability. All tests done on the 3 MySQL versions ended up within a second of each other. MariaDB results end up in a 10-second range. Throughput is not the only variable to take into account when doing performance testing: stability of timings can be important in many cases to provide a stable response time.
The other thing to comment on: do these results show a more important regression for 10.2? Let’s do another test to confirm it.
Insecure config
Let’s try to test a configuration that optimizes for LOAD DATA speed, without regard for security (disabling features that you normally wouldn’t want to disable in production: the doublewrite buffer, InnoDB checksums, …).
This was the configuration used:
[client]
port = 3306
socket = /tmp/mysql.sock

[mysqld]

# administrative options
user = mysql
socket = /tmp/mysql.sock
port = 3306
datadir = /srv/sqldata
basedir = /opt/mysql
tmpdir = /srv/tmp

#binlog disabled

# prevent imports/exports to arbitrary dirs
secure_file_priv = /tmp
max_connections = 500

query_cache_size = 0
query_cache_type = 0

# optimizer options
# MariaDB only
sql-mode = ''

# charsets and collations
#character_set_server = binary
#character_set_filesystem = binary
#collation_server = binary

# InnoDB options
default-storage-engine = InnoDB
innodb_file_per_table = 1
innodb_buffer_pool_size = 12G
innodb_log_file_size = 2G
innodb_flush_log_at_trx_commit = 1

# profiling and metrics
performance_schema = 0
skip-innodb-doublewrite
innodb_flush_neighbors = 0
innodb_checksum_algorithm = none
And again, the results:

                                     5.6.34      5.7.16       8.0.0-dmr    10.0.28      10.1.18      10.2.2
avg (seconds)                        146.038     167.482      172.653      147.721      152.1256667  155.683
median (seconds)                     146.016     167.054      171.364      147.889      151.937      155.414
rows inserted/s                      320061.395  279081.4894  270722.9298  316414.9038  307253.3848  300232.6908
insert throughput compared to 5.6    100.00%     87.20%       84.58%       98.86%       96.00%       93.80%
Graphically:
The trend is similar to the previous results, with a 20-25% improvement in performance compared to the last test case. Except one: version 8.0 seems to perform worse than on the previous configuration. This doesn’t make sense to me, but the results, like in the previous case, are consistent and repeatable. Is there a regression hidden here?
Regarding MariaDB, this configuration seems more stable (load-wise) for it, with less variability between runs. Still a downwards trend for newer versions.
One more result: SELECT count(*)
While checking that all the records were being inserted correctly, and no rows were lost in the process, I ran into a curious case of different count/full scan speeds. When doing:
SELECT count(*) FROM nodes;
I got the following timings on console:

                      5.6.34    5.7.16    8.0.0-dmr  10.0.28          10.1.18         10.2.2
run time (seconds)    8.31 sec  4.81 sec  5.02 sec   1 min 47.88 sec  2 min 5.08 sec  6.61 sec
For those still obsessed with getting a nice bar chart every time:

I knew there were some optimizations of SELECT count(*) that made it faster for InnoDB (not instant, like MyISAM), but I didn’t know they were enabled as early as 5.6. It seems MariaDB only merged that feature (or a similar one doing the same) starting with 10.2.
Conclusions
Providing data without an explanation is a bad practice; however, I believe this will help as a starting point to debug (by me, or anyone else who would want to reproduce the results; I think I have given enough information to do that) and see which code points are blocking this memory-only, single-thread-concurrency specific test. Again, remember this is a very specific test and not a proper benchmark to base decisions on. The intended goals, which were to 1) identify regressions, 2) identify bugs, 3) get a first quick look at the options changed and 4) start working with the newest versions, were all accomplished.
As a followup, I would like to explore more in depth those potentially observed regressions on 8.0 and 10.2, and see if it is just a case of adapting the configuration or if there is an underlying cause slowing them down. If they happen to be bugs, there should be plenty of time to fix them before a stable release from both vendors. The other thing that I would like to analyze next is this very same load type, but with InnoDB compression, and compare both the performance and the resulting file sizes.
I hope this has been helpful for someone else other than me, too. I have probably made some mistakes along the way, please tell me so on a comment or @jynus.

MySQL Support Engineer’s Chronicles, Issue #1

I have a habit of keeping a log of everything I work on (or even just read about while working) every day. It’s just a plain text file, with date labels, where most lines are URLs of issues and bugs I contributed to somehow, plus my comments on what I did and why. But there are also code fragments, texts, copy-pasted outputs and links to interesting sites, articles and documentation.

Some of the lines from this log I post on Facebook separately as I add them; these are mostly related to MySQL bugs and other things that annoy me. I know that some of my Facebook friends actually do not like this kind of posts at all… When I used LiveJournal actively, I posted digests of my log from time to time (you can check what was there if you read Russian). But those digests contained everything, from funny videos of owls to tips for setting up Informix on Mac OS X, with few MySQL-related things in between. For this blog I decided to resurrect those digests, limit them to things related to MySQL and related forks and technologies, and try to make them regular. The idea is to post what I considered interesting and important at the end of every week. So, this is my first issue of what I am going to call “MySQL Support Engineer’s Chronicles”. Let’s see how it is going to work long term…

This week started with more news related to MyRocks. First of all, it was announced that MyRocks will be included in MariaDB 10.2. A new forum was also set up in Google Groups for the related discussions, myrocks-dev. I see only 3 topics there so far, but I still joined this group. I’ve also spent some time this week testing how well MyRocks scales for SELECTs like those in the famous Bug #68079. The good news is that (while InnoDB after that bug fix still seems to work faster) for that use case MyRocks now runs something like 20% faster than the last time I checked (end of April, 2016). Both InnoDB in 5.7.15 and MyRocks scale to 32+ threads on my old QuadCore box, that is, they run the same number of queries faster with more threads doing the job. I’ll share more details on this test case later, when I get a chance to spend more time running tests with higher concurrency and checking for the bottlenecks.

Other good news were about the MySQL & Friends Devroom at FOSDEM 2017. The call for papers is open! FOSDEM is my favorite conference and the only one I am going to participate in by all means every year (as I did in 2015 and 2016). I am going to submit several talks and, besides something with “gdb” in the title, I am thinking about a talk on some special bugs (like Bug #68079, Bug #28404, or Bug #82968) and their influence on MySQL users and further development (maybe even a separate talk per each of these bugs), some new MariaDB 10.2 features (like recursive CTEs) or locking in MyRocks.

I’ve spent some time this week checking different ways to compress data in InnoDB tables (and I expect more work on this next week). Originally the idea was to show that MariaDB’s compression really works. I am still thinking about writing a separate blog post showing step by step how it works and why it does not, or why somebody may assume it does not (hints: check the du and ls -ls commands, don’t use just ls -l to make conclusions). For now just check this blog post by my colleague Jan Lindstrom and do not hesitate to try it on an HDD with, say, an ext4 filesystem.
If you care enough to try different compression algorithms (you have to compile MariaDB from source for this), check also this great study by Percona.

Unfortunately these days I can not avoid Galera clusters and related issues, so I spent some time reading and trying to find some proper documentation on what may happen when a poor user just decides to shut down all nodes in the cluster (for whatever reason) and then start the cluster again. SST may happen, and this is rarely good. I ended up checking this old post by my former colleague Przemysław Malkowski, some hints by Severalnines and Galera bugs (like Issue #283 and Issue #344). I was reading and thinking about step-by-step checks for some blog post, but my colleague Geoff Montee had already summarized the current state of the documentation in this Galera Issue #431, “Document that the GCache is invalidated when a node restarts”. Take extra care when you decide to shut down the last remaining Galera cluster node!

While checking for known bugs for one of the problems I discussed, I noted this MySQL bug, Bug #80580, “count(*) much slower on 5.7 than 5.6”. Unfortunately there was no public test case, even though the bug was “Verified”. I do not like to see bug reports like that. So, I’ve added my test case as a comment. Enjoy, and keep thinking that MySQL 5.7 is the best of them all (besides the already available 8.0, which added indexes to the tables in Performance Schema). It just has some performance problems with queries that do SELECT count(*) on InnoDB tables, also in the case of fulltext indexes, see this recent Bug #83398 by Daniël van Eeden. But nobody cares much about a few minor bugs for corner cases (and about those who process those bugs, but that’s a different topic)…

By the way, Justin Swanhart had already added support for MySQL 8 to his ps_history schema/tool. Read his blog post for more details. Probably this is one of the first open source third-party tools to support MySQL 8 explicitly. Correct me if I am wrong, but the only other one I know about is MySQL Sandbox by Giuseppe Maxia.

During some relatively non-busy time on Friday I decided to upgrade one of my Ubuntu VMs (the one running 15.04) to 16.04, and did that successfully (first to 15.10, then to 16.04). It’s nice to know that Ubuntu has allowed seamless in-place upgrades since 12.04. I had Percona Server installed there from their .deb packages, historically, so I made sure it was also upgraded to the latest version for 16.04. In the process I hit a small problem with their repository that Percona tried to warn us all about in this blog post. Unfortunately, as you can see from the comments (mine and another user’s), the suggested way to avoid the warning does not work as expected. I’ve got no reply from Percona about this.

Also on Friday Alibaba finally released their AliSQL as open source on GitHub. I cloned and built it immediately on my Ubuntu 14.04 netbook, and it works! It identifies itself as just MySQL 5.6.32, but read here about some of the great features now available. No bugs found so far, but I had no time to try hard.

So, these were some of the things I read about, was excited about, complained about and worked on during this week as a support engineer in MariaDB. Do you consider this kind of digest useful?

The fastest MySQL Sandbox setup ever!

MySQL-Sandbox 3.1.11 introduces a new utility, different from anything I have put before in the MySQL Sandbox toolkit. make_sandbox_from_url downloads a tiny MySQL tarball from a repository and installs it straight away. As of today, the following packages are available:

Major release   Version   Package size (what you download)   Expanded size (storage used)   Original size (not included)
5.0             5.0.96    20M                                44M                            371M
5.1             5.1.72    23M                                59M                            485M
5.5             5.5.50    15M                                49M                            690M
5.6             5.6.31    18M                                61M                            1.1G
5.7             5.7.13    33M                                108M                           2.5G

The sizes of the tarballs mentioned in the table above are much smaller than the original packages. The binaries have been stripped of debug info, compressed whenever possible, and purged of all binaries that are not needed for sandbox operations. This means that:

- You can download the needed tarball very fast;
- The storage needed for the binaries is reduced immensely.

Here is an example of the script in action. We download and install MySQL 5.0.96 in one go:

$ make_sandbox_from_url 5.0 -- --no_show
wget -O 5.0.96.tar.gz 'http://github.com/datacharmer/mysql-docker-minimal/blob/master/dbdata/5.0.96.tar.gz?raw=true'
URL transformed to HTTPS due to an HSTS policy
--2016-07-10 17:59:33--  https://github.com/datacharmer/mysql-docker-minimal/blob/master/dbdata/5.0.96.tar.gz?raw=true
Resolving github.com (github.com)... 192.30.253.112
Connecting to github.com (github.com)|192.30.253.112|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://github.com/datacharmer/mysql-docker-minimal/raw/master/dbdata/5.0.96.tar.gz [following]
--2016-07-10 17:59:33--  https://github.com/datacharmer/mysql-docker-minimal/raw/master/dbdata/5.0.96.tar.gz
Reusing existing connection to github.com:443.
HTTP request sent, awaiting response... 302 Found
Location: https://raw.githubusercontent.com/datacharmer/mysql-docker-minimal/master/dbdata/5.0.96.tar.gz [following]
--2016-07-10 17:59:34--  https://raw.githubusercontent.com/datacharmer/mysql-docker-minimal/master/dbdata/5.0.96.tar.gz
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.12.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.12.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 20052235 (19M) [application/octet-stream]
Saving to: '5.0.96.tar.gz'

5.0.96.tar.gz  100%[=================================================================================>]  19.12M  15.2MB/s  in 1.3s

2016-07-10 17:59:37 (15.2 MB/s) - '5.0.96.tar.gz' saved [20052235/20052235]

The MySQL Sandbox,  version 3.1.11
(C) 2006-2016 Giuseppe Maxia
# Starting server
. sandbox server started
# Loading grants
Your sandbox server was installed in $HOME/sandboxes/msb_5_0_96

If you call the same command twice, you will get a message saying that you can now use make_sandbox x.x.xx to install your sandbox. The script does what I should probably have done by default from the beginning: it expands the tarball in $SANDBOX_BINARY (by default $HOME/opt/mysql), from where it is easy to reuse with minimum typing.

As of today, the binaries are Linux ONLY. I made this repository to use it with Docker (I will write about it soon), and that means using Linux. This is still part of an experiment that so far is working well. The project can either evolve in smarter directions or merge with clever containers; it is too early to say. For now, enjoy the fastest set-up that MySQL Sandbox can offer!

MySQL auto update date_created and date_updated columns

It’s usually a good practice to have two columns called date_created and date_updated in every table. One can always use them in the application, and they help in debugging too, showing when a record was created and last updated in various circumstances and contexts. This responsibility can be given to MySQL, which can automatically assign current time values to these columns.

From MySQL 5.6 onwards, this can be done by declaring the columns as either DATETIME or TIMESTAMP, defining date_created as NOT NULL DEFAULT CURRENT_TIMESTAMP and date_updated as NOT NULL DEFAULT '0000-00-00 00:00:00' with the attribute ON UPDATE CURRENT_TIMESTAMP. Below is a sample schema of a table containing date_created and date_updated columns:

CREATE TABLE `time_stamp` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `date_created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `date_updated` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00' ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

But there is a catch here, since this is not applicable to all MySQL versions. Before 5.6, MySQL allows a table to have only one TIMESTAMP column with an automatic TIMESTAMP value; that is, you can have either date_created or date_updated auto-set to CURRENT_TIMESTAMP, but not both. The MySQL documentation has some very good info on this page: https://dev.mysql.com/doc/refman/5.5/en/timestamp-initialization.html

Excerpt: “By default, TIMESTAMP columns are NOT NULL, cannot contain NULL values, and assigning NULL assigns the current timestamp.”

The workaround for this is to define date_created as timestamp NOT NULL DEFAULT '0000-00-00 00:00:00' and date_updated as timestamp NOT NULL DEFAULT '0000-00-00 00:00:00' ON UPDATE CURRENT_TIMESTAMP. With the above, the date_updated field is correctly set by MySQL automatically whenever there is an update; to fill date_created with the current timestamp, we have to explicitly pass a NULL value for it, which then stores CURRENT_TIMESTAMP in the field. Below is a sample schema with these changes:

CREATE TABLE `time_stamp` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `date_created` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
  `date_updated` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00' ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

For a structure like the above, the insert query would be:

INSERT INTO `test`.`time_stamp` (`id`, `date_created`) VALUES (NULL, NULL);

And an update query would be:

UPDATE `test`.`time_stamp` SET `id` = '4' WHERE `time_stamp`.`id` = 1;

The update above sets date_updated to CURRENT_TIMESTAMP, while the date_created value remains the same as before. Caution is needed when updating records so as not to overwrite the date_created field with a NULL value.

One could also reverse the roles and have MySQL automatically maintain date_created instead of date_updated. That is just the reverse case and can be used as well; in that case a NULL value has to be passed for date_updated whenever a record is updated. But since there will be many updates and only a single insert, passing NULL once at insert time and letting MySQL handle every subsequent update is less of a burden on us. Still, this is debatable, and there are cases where the reverse structure makes sense.

Another workaround is to have the date_created field set to the current timestamp by a trigger when a record is inserted (a sketch of such a trigger follows below). Hope this helps 🙂
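As a rough illustration of that trigger-based workaround, here is a minimal sketch. The trigger name time_stamp_bi is just an example; the table is the time_stamp table from the second schema above. A BEFORE INSERT trigger fills date_created, so the INSERT no longer needs to pass NULL for it explicitly:

-- Set date_created automatically on insert
-- (single-statement trigger, so no DELIMITER change is needed).
CREATE TRIGGER time_stamp_bi BEFORE INSERT ON `time_stamp`
FOR EACH ROW
  SET NEW.date_created = NOW();

With this in place, a plain INSERT INTO `test`.`time_stamp` (`id`) VALUES (NULL); stores the current timestamp in date_created, while date_updated is still maintained by its ON UPDATE CURRENT_TIMESTAMP attribute.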
