Month: October 2016

Playing with MySQL Source code; Adding “STOP ALL SLAVES” command

This blog post is the result of reading a book.
To be clear -> “Expert MySQL” by Charles Bell -> http://www.apress.com/gp/book/9781430246596
This book explains the internals of MySQL based on the 5.6.x version, so it is a relatively recent and really good resource to use.
So the task is:
Chapter 8, “Extending MySQL High Availability” -> page 310, “Extending Replication”.
There we should add a new command -> “SQLCOM_STOP_SLAVES” -> executed on the master and causing the IO_THREAD to stop on all slaves.
The basic idea is to run “STOP ALL SLAVES” on the master; the statement will be written to the binary log and then applied on the slaves.
Great
MySQL version is 5.6.32.
So, as the book advises, go to sql/lex.h: “We are adding the new symbol for the command. This file contains the symbol array stored in alphabetical order.”
Added:

{ "SLAVE", SYM(SLAVE)},
/* BEGIN CAB MODIFICATION */
/* Reason for Modification */
/* Add SLAVES command */
{ "SLAVES", SYM(SLAVES)},
/* END CAB MODIFICATION*/
{ “SLOW”, SYM(SLOW)},

Then go to sql/sql_cmd.h to add a new enumeration for the big switch. Locate enum enum_sql_command definition near the top of the file.
Added:

SQLCOM_COMMIT, SQLCOM_SAVEPOINT, SQLCOM_RELEASE_SAVEPOINT,
/* BEGIN CAB MODIFICATION */
/* Reason for Modification */
/*Add SQLCOM_STOP_SLAVES enum */
SQLCOM_SLAVE_START, SQLCOM_SLAVE_STOP, SQLCOM_STOP_SLAVES,
/*END CAB MODIFICATION*/
SQLCOM_BEGIN, SQLCOM_CHANGE_MASTER,

“Next we need to add a new token to be used in the new rule. Once again, the list of tokens is arranged in alphabetical order. Open the sql/sql_yacc.yy and locate the section where new tokens are defined. In this case, we need to add a definition for a token for the new command. We will name it SLAVES.”
Added near line 1539/1540:

%token SLAVE
/* BEGIN CAB MODIFICATION */
/* Reason for Modification */
/*Add SLAVES token */
%token SLAVES
/*END CAB MODIFICATION*/
%token SLOW

“Next modify the section where the “%type” definition resides. We need to add the new token to this definition.”
Added near line 1855/1856:

slave master_def master_defs master_file_def slave_until_opts
/* BEGIN CAB MODIFICATION */
/* Reason for Modification */
/* Add stop to list of NONE types */
repair analyze check start stop checksum
/*END CAB MODIFICATION*/
field_list field_list_item field_spec kill column_def key_def

“Next, we add a new command definition to the list of commands so that the parser can direct control to the new rule. Notice we add a new ‘or’ condition mapping to a new rule to be evaluated.”
Added near line 2079/2080:

| start
/* BEGIN CAB MODIFICATION */
/* Reason for Modification */
/* Add stop to list of statement targets */
| stop
/*END CAB MODIFICATION*/
| truncate

“Last, we will add the new rule to process the STOP ALL SLAVES command. The rule simply saves the new enumeration to the lex->sql_command attribute. This is how the code maps the result of the rule (and the processing of the command) to the big switch to a case equal to the enumeration value.”
Added near line 8114/8115:

/* BEGIN CAB MODIFICATION */
/* Reason for Modification */
/* Add rule for stop ALL SLAVES command*/
stop:
STOP_SYM ALL SLAVES
{
LEX *lex= Lex;
lex->sql_command= SQLCOM_STOP_SLAVES;
}
;
/*END CAB MODIFICATION*/

“With changes to the YACC file complete, we can add a new case for the big switch to ensure the command, once captured by the parser, is directed to code to write the event to the binary log. Normally, the STOP SLAVE command is not replicated. Our code would also override this restriction. Let us add that case statement. Open the sql/sql_parse.cc file.”
Added near line 3159/3160:

mysql_mutex_unlock(&LOCK_active_mi);
break;
}

/* BEGIN CAB MODIFICATION */
/* Reason for Modification */
/* Add case statement for STOP ALL SLAVES command */
case SQLCOM_STOP_SLAVES:
{
if (!lex->no_write_to_binlog)
{
res = write_bin_log(thd, TRUE, "STOP SLAVE IO_THREAD", 20);
}

break;
}
/*END CAB MODIFICATION*/

#endif /* HAVE_REPLICATION */

case SQLCOM_RENAME_TABLE:

“The first statement is designed to check to see if the server is able to write to the binary log. If so, we add a new log event, passing it the STOP SLAVE SQL command. Notice that we use a specific version of the STOP SLAVE command. In this case, we are stopping only IO_THREAD.”
I assume that you are familiar with compiling MySQL from source code. If not then read -> installing-mysql-from-source-cmake-issues
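If you already have a configured build directory, re-running make is enough; for reference, a minimal in-source build looks roughly like this (a sketch; the debug option is an assumption, any configuration that already builds cleanly for you is fine):

cd /home/sh/MySQLPakcages/mysql-5.6.32
cmake . -DWITH_DEBUG=1   # assumed option, matching the -debug-log build shown later
make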
Go ahead and run “make” again. It will likely fail with:

Scanning dependencies of target sql
[ 54%] Building CXX object sql/CMakeFiles/sql.dir/sql_parse.cc.o
[ 54%] Building CXX object sql/CMakeFiles/sql.dir/mysqld.cc.o
In file included from /home/sh/MySQLPakcages/mysql-5.6.32/include/my_compiler.h:151:0,
from /home/sh/MySQLPakcages/mysql-5.6.32/include/my_global.h:473,
from /home/sh/MySQLPakcages/mysql-5.6.32/sql/mysqld.cc:17:
/home/sh/MySQLPakcages/mysql-5.6.32/sql/mysqld.cc: In function ‘int init_common_variables()’:
/home/sh/MySQLPakcages/mysql-5.6.32/include/my_attribute.h:38:42: error: size of array ‘compile_time_assert’ is negative
# define MY_ATTRIBUTE(A) __attribute__(A)
^
/home/sh/MySQLPakcages/mysql-5.6.32/include/my_global.h:377:52: note: in expansion of macro ‘MY_ATTRIBUTE’
typedef char compile_time_assert[(X) ? 1 : -1] MY_ATTRIBUTE((unused)); \
^
/home/sh/MySQLPakcages/mysql-5.6.32/sql/mysqld.cc:3871:3: note: in expansion of macro ‘compile_time_assert’
compile_time_assert(sizeof(com_status_vars)/sizeof(com_status_vars[0]) - 1 ==
^
sql/CMakeFiles/sql.dir/build.make:3581: recipe for target ‘sql/CMakeFiles/sql.dir/mysqld.cc.o’ failed
make[2]: *** [sql/CMakeFiles/sql.dir/mysqld.cc.o] Error 1
CMakeFiles/Makefile2:5192: recipe for target ‘sql/CMakeFiles/sql.dir/all’ failed
make[1]: *** [sql/CMakeFiles/sql.dir/all] Error 2
Makefile:160: recipe for target ‘all’ failed
make: *** [all] Error 2

As a beginner in the source code world, I had unexpectedly found a BUG in the book.
After reading the comments, I found something inside sql/sql_cmd.h:

/*
When a command is added here, be sure it's also added in mysqld.cc
in "struct show_var_st status_vars[]= {" …
*/

So the new command should also be added inside sql/mysqld.cc. But there is no struct show_var_st status_vars[] there anymore. It took me some time to figure out that, instead of status_vars[], the command should be added to the SHOW_VAR com_status_vars[] array: the compile_time_assert that broke the build checks that the number of entries in com_status_vars stays in sync with the number of commands in enum_sql_command, so adding SQLCOM_STOP_SLAVES without a matching entry fails the build.
See related report -> #83613
So I edited the mysqld.cc file near line 3571/3572:

{"slave_stop", (char*) offsetof(STATUS_VAR, com_stat[(uint) SQLCOM_SLAVE_STOP]), SHOW_LONG_STATUS},
/* BEGIN CAB MODIFICATION */
/* Reason for Modification */
{"stop_slaves", (char*) offsetof(STATUS_VAR, com_stat[(uint) SQLCOM_STOP_SLAVES]), SHOW_LONG_STATUS},
/*END CAB MODIFICATION*/

Save and recompile; this time it will succeed. Next is to find an easy way to set up replication and test the newly added command. In the book, the author used tools from “MySQL Utilities”, but I used “MySQL Sandbox”:

make_sandbox_from_source /home/sh/MySQLPakcages/mysql-5.6.32 replication

That’s it: 1 master and 2 slaves are ready for testing.
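Testing then boils down to running the new command on the master and checking the replication threads on the slaves, roughly like this (a sketch; the sandbox directory name and the m/s1/s2 shortcut scripts are assumptions based on the default MySQL Sandbox layout):

cd ~/sandboxes/rsandbox_mysql-5_6_32   # assumed default sandbox location
./m  -e "STOP ALL SLAVES"              # run the new command on the master
./s1 -e "SHOW SLAVE STATUS\G" | grep Running
./s2 -e "SHOW SLAVE STATUS\G" | grep Running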
Unfortunately, the “STOP ALL SLAVES” command failed with the following assertion on the master:

Version: '5.6.32-debug-log' socket: '/tmp/mysql_sandbox23097.sock' port: 23097 Shahriyar Rzayev's Ubuntu 16.04 MySQL-5.6.32
mysqld: /home/sh/MySQLPakcages/mysql-5.6.32/sql/protocol.cc:515: void Protocol::end_statement(): Assertion `0' failed.
14:49:07 UTC - mysqld got signal 6 ;

So ideally my statement should return something like “Query OK”. After asking Weixiang Zhai, the issue was fixed by adding my_ok(thd): every command handled in the big switch must send either an OK or an error back to the client, otherwise the debug assertion in Protocol::end_statement() fires.
So the final code for sql/sql_parse.cc:

mysql_mutex_unlock(&LOCK_active_mi);
break;
}

/* BEGIN CAB MODIFICATION */
/* Reason for Modification */
/* Add case statement for STOP ALL SLAVES command */
case SQLCOM_STOP_SLAVES:
{
if (!lex->no_write_to_binlog)
{
res = write_bin_log(thd, TRUE, "STOP SLAVE IO_THREAD", 20);
}

if(!res)
{
my_ok(thd);
}

break;
}
/*END CAB MODIFICATION*/

#endif /* HAVE_REPLICATION */

case SQLCOM_RENAME_TABLE:

Recompiled and reinstalled with the sandbox; here is the result of running the new command on the master:

master [localhost] {msandbox} ((none)) > stop all slaves;
Query OK, 0 rows affected (0.00 sec)

Result on slaves:

Slave_IO_Running: No
Slave_SQL_Running: Yes

So it works like a charm
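To resume replication afterwards, the IO threads can be started again on each slave (using the same assumed sandbox shortcuts as above):

./s1 -e "START SLAVE IO_THREAD"
./s2 -e "START SLAVE IO_THREAD"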

Percona Server for MongoDB 3.2.10-3.0 is now available

Percona announces the release of Percona Server for MongoDB 3.2.10-3.0 on October 31, 2016. Download the latest version from the Percona web site or the Percona Software Repositories. Percona Server for MongoDB 3.2.10-3.0 is an enhanced, open-source, fully compatible, highly scalable, zero-maintenance downtime database supporting the MongoDB v3.2 protocol and drivers. It extends MongoDB with MongoRocks, Percona Memory Engine, and PerconaFT storage […]

Webinar Wednesday November 2: MongoDB Backups, All Grown Up!

Please join us on Wednesday, November 2, 2016 at 10:00 am PDT / 1:00 pm EDT (UTC-7) for the webinar MongoDB Backups, All Grown Up, featuring David Murphy, Percona’s Mongo Practice Manager. It has been a long road to stable and dependable backups in the MongoDB space. This webinar covers the current types of backups and their limitations […]

MyRocks: migrating a large MySQL dataset from InnoDB to RocksDB to reduce footprint

I have been following Facebook’s MyRocks project (and Mark Callaghan’s blog) for a long time. An LSM-based engine for MySQL is actually a great idea.

We all know that InnoDB sucks at INSERTs. BTrees in general suck when it comes to insertion speed, and the more rows you insert, the more they suck at it. There are many blog posts on the web that show the insert speed degradation in InnoDB when the amount of rows in the table grows. Things get much worse faster if your primary key is a random key, for example a UUID.

We hit this problem with our caching servers (yes, we do caching with MySQL!), and in order to be able to scale these servers up we moved a couple of years ago to the TokuDB engine, with great success. TokuDB is based on fractal tree technology and guarantees the same insert speed no matter the number of rows you have in the table; furthermore, it has no problems with random INSERTs (inserting rows with random PK values) and it also achieves great compression when compared to InnoDB.

Enter RocksDB and the LSM (Log Structured Merge) technology: we are entering a new planet as far as data storage goes. The idea of an “append only” type of approach for writing database rows is something I had never heard of before. Databases usually are the most random writers ever! And although SSDs will not suffer from write randomness, there are other factors that need to be taken into consideration. I’ll not go into more detail about how LSM or RocksDB work, or why LSM is good for your SSD and your workload, as there is plenty of information about it on Mark’s blog (see above) or on the official pages for RocksDB and for MyRocks (the RocksDB engine for MySQL) on GitHub; just check the Wiki there.

For quite some time I have dreamed of giving MyRocks a try with our giant (at least by our standards) write-mostly tracking database, currently occupying about 2.1T of disk space and growing about 200G per month (and yes, we DO have data retention in place). This cluster has a very write-intensive workload and the written data is very seldom read, usually to extract statistical reports, or for application or incident post-mortem analysis. At the time of this post, it contains approximately 830 tables.

So I was really excited to hear Yoshinori Matsunobu announce at the recent Percona Live AMS that MyRocks was ready for production use. I explicitly asked him if partitioning was supported by MyRocks (as we use range partitioning to prune old data) and got a positive response. Yay!!

Even if production ready, MyRocks is still in its early stages, and therefore I needed to clone the git repository and compile the server from source code. I also had to compile the GCC compiler first, as it is a good idea to use one of the GCC versions tested by the Facebook team, so that I can report issues that may happen later. Peter Z announced that MyRocks will be included in Percona Server soon, but if you want to give it a try now, this is the only way.

The next step was to obtain a consistent dump of the cluster, along with the master position, in order to add the MyRocks slave to the cluster after the data import. I knew this dump was going to take a long time, so instead of using mysqldump I decided to use mydumper, the parallel dump-and-load tool that I have used many times in the past, especially to migrate out of Amazon RDS (anyone?).
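For reference, such a mydumper run might look roughly like the following (a sketch; host, credentials and thread count are made-up placeholders, not the author’s actual settings). mydumper also writes the master binary log coordinates into a “metadata” file in the output directory, which is used later to set up replication:

mydumper --host=source-host --user=dumper --password=secret \
         --threads=8 --outputdir=bigdump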
Also, I knew that some editing of the dump was required in order to accomplish the following:

- remove references to InnoDB and/or replace them with ROCKSDB
- introduce the binary collation required by MyRocks (insert “COLLATE latin1_bin” or “COLLATE utf8_bin” in the table definitions)
- remove InnoDB compression where enabled (many of the larger tables are compressed)

So I downloaded, compiled and installed mydumper, and after a whopping 44 hours my dump was complete (occupying about 2T of disk space uncompressed). MyDumper creates two files for each table: one contains the table creation command and the other one contains the actual dump of the table’s data. I crafted a simple script that goes through all the table creation files and applies the changes above to them:

#!/bin/bash
#
for f in bigdump/*schema*sql
do
  fn=$(basename $f)
  sed -e "s/InnoDB/ROCKSDB/g" -e "s/ DEFAULT CHARSET=latin1/ DEFAULT CHARSET=latin1 COLLATE=latin1_bin/g" -e "s/ DEFAULT CHARSET=utf8/ DEFAULT CHARSET=utf8 COLLATE=utf8_bin/g" -e "s/ ROW_FORMAT=COMPRESSED/ /g" < bigdump/$fn > bigdump/$fn.mangled
  mv bigdump/$fn tmp/$fn.orig
  mv bigdump/$fn.mangled bigdump/$fn
done

So I was ready to import these 2T of stuff into MyRocks. Of course, this is also a very time consuming operation when you have more than a few G’s to import, so it is really important to use all the available optimizations in order to speed up the bulk load. It turns out that MyRocks contains quite a few tweaks to help with this, the most notable ones being rocksdb_skip_unique_check, which disables checking on unique constraints, and rocksdb_commit_in_the_middle, which only commits every 1,000 rows or so. More info about options suitable for bulk loading data can be found on the Wiki page here. Beware that these settings cannot be used in regular production mode: they can cause corruption of your data! Therefore it is best to create a dedicated config for bulk data loading purposes only, and use the standard “production” config after the import is completed.

Side note: before you attempt to load large datasets into MyRocks, make sure you set your open file limit high enough! I learned the hard way that MyRocks keeps one file descriptor open for each SST file in your database, and the number of SST files can grow very high when you are importing lots of data. In my case, I topped 34,000 open file descriptors when the load was completed, and they are kept open even during normal use. I had originally reported this issue, and following that a better default was recommended to me for the variable target_file_size_base in rocksdb_default_cf_options. Changing the recommended value from 32m to 64m made a big difference in the number of file descriptors required, dropping the above number to 12,500.

After a couple of failed attempts at importing the data (mostly due to me being too lazy to properly read the available documentation) I finally managed to complete the import with satisfying results, using the configuration below. You will notice by reading the config that MyRocks cannot at this time coexist with InnoDB. I have been inquiring with Yoshinori about this, and the reason is that, at this time, the crash recovery mechanisms of the two engines would conflict.
I can’t wait for the day when we will be able to use RocksDB and InnoDB in the same server!

core-file
rocksdb
skip-innodb
default-storage-engine=rocksdb
default-tmp-storage-engine=MyISAM
collation-server=latin1_bin
# config for bulk load
rocksdb_max_open_files=-1
rocksdb_base_background_compactions=1
rocksdb_max_total_wal_size=4G
rocksdb_block_size=16384
rocksdb_block_cache_size=16G
rocksdb_table_cache_numshardbits=6
rocksdb_default_cf_options=write_buffer_size=128m;target_file_size_base=64m;max_bytes_for_level_base=512m;level0_file_num_compaction_trigger=4;level0_slowdown_writes_trigger=256;level0_stop_writes_trigger=256;max_write_buffer_number=16;compression_per_level=kNoCompression:kNoCompression:kNoCompression:kZlibCompression:kZlibCompression:kZlibCompression;bottommost_compression=kZlibCompression;compression_opts=-14:1:0;block_based_table_factory={cache_index_and_filter_blocks=1;filter_policy=bloomfilter:10:false;whole_key_filtering=1};level_compaction_dynamic_level_bytes=true;optimize_filters_for_hits=true
rocksdb_override_cf_options=system={memtable=skip_list:16}
rocksdb_skip_unique_check=1
rocksdb_commit_in_the_middle=1
rocksdb_write_disable_wal=1
rocksdb_max_background_flushes=40
rocksdb_max_background_compactions=40

During the import I was watching the disk space (footprint) occupied by MyRocks’ data directory. MyRocks creates a .rocksdb subfolder in your datadir and stores all its files in there, so I ran a script to monitor both the number of file descriptors in use (with lsof) and the footprint (with du) every 60 seconds:

3888 223G /storage/rockstat/data/.rocksdb
3959 229G /storage/rockstat/data/.rocksdb
3901 219G /storage/rockstat/data/.rocksdb
3924 220G /storage/rockstat/data/.rocksdb
3916 226G /storage/rockstat/data/.rocksdb
3815 211G /storage/rockstat/data/.rocksdb

You can clearly see the compaction of data taking place while the data loads. Compaction pushes data to lower levels, packing and compressing it in the process, and therefore the footprint decreases while the import progresses. Cool!!

But what was really cool was finding out that MyRocks was able to pack 2.1T worth of data into a mere 611G, reducing an already compressed dataset by more than 3 times! This is even more amazing if you consider that about a dozen of the larger tables in this particular dataset contain BLOBs which have compressed data in them. In the steps above I had disabled compression at the engine level where it was configured in the source dataset, but of course compressed data in BLOBs will remain compressed, and MyRocks did a very good job at compressing regardless.

Importing the dump took about 17 hours. During the import I ran into some stalls that I reported here. Once the import completed I shut the server down and switched the my.cnf file used for bulk load with the “production” one below. Please note that I had to keep rocksdb_block_cache_size low due to other stuff using memory on the test server at the same time; you should raise this according to the amount of memory available, keeping in mind that MyRocks, similarly to TokuDB, also uses the filesystem cache extensively. I didn’t find a recommendation on the MyRocks wiki, so I decided to set this parameter to 50% of the available server memory, which is the recommended value for Toku.
core-file
rocksdb
skip-innodb
default-storage-engine=rocksdb
default-tmp-storage-engine=MyISAM
collation-server=latin1_bin
rocksdb_max_open_files=-1
rocksdb_base_background_compactions=1
rocksdb_max_background_compactions=8
rocksdb_max_total_wal_size=4G
rocksdb_max_background_flushes=4
rocksdb_block_size=16384
rocksdb_block_cache_size=16G
rocksdb_table_cache_numshardbits=6
rocksdb_default_cf_options=write_buffer_size=128m;target_file_size_base=64m;max_bytes_for_level_base=512m;level0_file_num_compaction_trigger=4;level0_slowdown_writes_trigger=10;level0_stop_writes_trigger=15;max_write_buffer_number=4;compression_per_level=kNoCompression:kNoCompression:kNoCompression:kZlibCompression:kZlibCompression:kZlibCompression;bottommost_compression=kZlibCompression;compression_opts=-14:1:0;block_based_table_factory={cache_index_and_filter_blocks=1;filter_policy=bloomfilter:10:false;whole_key_filtering=1};level_compaction_dynamic_level_bytes=true;optimize_filters_for_hits=true

Using the information saved by mydumper in the “metadata” file, I set up replication from the actual InnoDB-based slave to the MyRocks slave and let it catch up. I used MTS (Multi Threaded Slave) to speed up the catch-up (parallel replication happens to work very well for this cluster’s workload) and the slave caught up pretty quickly. After the catch-up, I stopped the slave and disabled parallel replication, as it is not compatible with relay_log_recovery in case of a slave crash, and the latter option is needed by MyRocks to guarantee a crash-safe slave.

I quickly found out that having a slave which uses a different engine than the master also needs some other tweaks in the configuration. Remember that I had to disable InnoDB in order to make MyRocks work properly. So what happens when some DDL that explicitly references ENGINE=InnoDB is executed on the master? Here’s the answer:

Last_SQL_Errno: 1286
Last_SQL_Error: Worker 7 failed executing transaction '' at master log dbstat02.034383, end_log_pos 99530607; Error 'Unknown storage engine 'innodb'' on query. Default database: 'rick_rocks'. Query: 'create table ttable2 (id int not null auto_increment primary key) engine=innodb'

D’oh! Of course, this is not what I wanted. I quickly remembered a SQL_MODE option that I had always wondered what it could be used for: NO_ENGINE_SUBSTITUTION. From the manual page:

With NO_ENGINE_SUBSTITUTION disabled, for CREATE TABLE the default engine is used and a warning occurs if the desired engine is unavailable. For ALTER TABLE, a warning occurs and the table is not altered.

Hey, sounds like exactly what I want!! We never use ALTER TABLE anyway, as we do everything via pt-online-schema-change, so this option should be all we need on the slave. This option (NO_ENGINE_SUBSTITUTION) is enabled by default since MySQL 5.6.6, so I needed to revert the sql_mode to a blank string by setting the following in my.cnf:

sql_mode=''

To my dismay, after restarting the MyRocks slave, the behaviour was unchanged. A bit of digging in the MySQL source code and I discovered that the slave uses the master’s SQL_MODE, and not its own! The mode gets written to the binary log for each and every transaction, and is used by the slave when applying events. So, I went to the master and changed the SQL_MODE there.
After restarting replication, the behaviour is now correct and a table created with Engine=INNODB on the master is created with engine ROCKSDB on the slave.

Finally, here’s a comparison of the disk usage between a classic “InnoDB” slave and our new “RocksDB” one, over the last 48 hours:

[disk usage graphs: MyRocks, InnoDB]

Can you spot, without using the caption, which one of the two is our MyRocks slave? The compaction algorithm can be clearly spotted in the graph. The two big drops around 6am are when the older partitions are dropped every night to apply retention.

The graphs above show that the MyRocks slave grew from 47.75% to 48%, while the InnoDB slave grew from 77% to 78%. What they don’t show is that the MyRocks storage is a mere 1.4T, where the InnoDB one is double that, at 2.8T! So, the bottom line is, MyRocks grew 0.25% of 1.4T = 3.5G while InnoDB grew 1% of 2.8T = 28G. That is an 8x improvement!

We will be monitoring this new slave for some time, trying to compare CPU metrics and to observe general behaviour, like lag and general reliability. But for now, I would say that MyRocks really … rocks!!! Well done, FB Team!

MySQL Support Engineer’s Chronicles, Issue #2

It’s time to continue the new series that I started 2 weeks ago. I’d like to start with a reminder that it’s time to send your talks for the “MySQL and Friends Devroom” at FOSDEM 2017, the only MySQL-related event next year that I plan to attend in any case. It seems we have one more week to submit, but I’ve already filled in all the details for the “main” talk, “Understanding MyRocks locks and deadlocks”. I’d like to apply my usual source code reading and gdb breakpoints approach in case the official documentation still misses important details by the end of January 2017. The official MySQL manual is still in the process of improving with regards to explaining InnoDB locks, and I am happy to admit that this week yet another of my related documentation requests, Bug #71735, “Manual does not explain locks set by SELECT … FOR UPDATE properly”, was closed.

I am really excited by the community and developer activity around MyRocks these days, so it was easy to decide to spend more time on MyRocks while preparing for FOSDEM. Just take a look at recent issues reported by Justin Swanhart, for example: Issue #365 and Issue #369. How often do you see such active, almost real-time discussion in bug reports about anything else MySQL-related these days?

I had to spend a lot of time recently trying to understand and prove the root cause of a performance problem that happens once in a while on a node of a Galera cluster, and for this I used Performance_Schema, maybe for the first time on a real-life case (since reporting Bug #68079). It turned out that properly sizing the tables there, so that under high load and concurrency we still get relevant data a few seconds after the problem happened, is not a trivial task. Moreover, it seems the Galera code is not instrumented in enough detail, so it is hard to measure the impact it may have. While trying to find a proper trade-off between the amount of data collected, memory usage and performance impact, I decided to use profilers, from PMP to ideas of replacing gdb there with quickstack, to perf (which I’ve used to make some really good observations). I’d like to summarize some recent (and upcoming) profiling experience in a talk that I am still trying to submit for FOSDEM, titled “Applying profilers to MySQL”. In the meantime, check the comments on my Facebook post for many useful details.

I’d also like to submit a talk on how a single bug report may influence MySQL development and the development of the bug reporter’s skills. It’s just an idea for now, and I plan to concentrate on a story around Bug #68079, which will be 4 years old by the time of FOSDEM 2017. I have new interesting results to share that I got while trying to check how the fix helped and how this same use case performs and scales in MyRocks now. I am not sure if this is a good idea and whether anything but a blog post may come out of it; some of my previous attempts to build a talk around this bug were not really well accepted…

This week I had to explain how to get rid of huge ibdata* file(s) while moving each InnoDB table to a separate .ibd file. I found a great summary of the options in this old blog post by Shlomi Noach. Very useful in case you cannot just dump everything and reload into a new instance, for whatever reason.

Another problem I had to work on was related to a bad query plan on a slave where the table was partitioned. We tried to fight bad cardinality estimations (see Bug #67351 etc.), but even after getting them closer to reality the optimizer still uses the wrong index sometimes. My next hope is the engine-independent table statistics of MariaDB.

I’ve noted that customers have recently been actively trying to use PAM and LDAP with MySQL. I consider this blog post by my colleague Geoff Montee very useful for them (and myself).

Finally, I found this great blog post by Jervin Real very useful while working with colleagues on continuous InnoDB crashes caused by a corrupted partition .ibd file. The idea was to restore the partition data from backup on a separate instance and also recover the data for all non-corrupted partitions of a huge table there, while the problematic instance is starting up in forced recovery mode to drop the problematic table and check for any further problems.

As I have already mentioned several bugs in this post, I’d like to share a link to the list of bugs recently reported by my colleague from the MariaDB Foundation, Sergey Vojtovich. Dear Oracle MySQL engineers, please pay proper attention to them, especially those with “bs” in the “Summary”!

FromDual.en: FromDual Backup and Recovery Manager for MySQL 1.2.3 has been released

FromDual has the pleasure to announce the release of the new version 1.2.3 of the popular Backup and Recovery Manager for MySQL and MariaDB (fromdual_brman).

You can download the FromDual Backup and Recovery Manager from here.

In the inconceivable case that you find a bug in the Backup and Recovery Manager please report it to our Bugtracker or just send us an email.

Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.

Upgrade from 1.2.x to 1.2.3

# cd ${HOME}/product
# tar xf /download/fromdual_brman-1.2.3.tar.gz
# rm -f fromdual_brman
# ln -s fromdual_brman-1.2.3 fromdual_brman

Changes in FromDual Backup Manager 1.2.3

This release contains mainly fixes related to MySQL 5.7 and various minor fixes.

You can verify your current FromDual Backup Manager version with the following command:

shell> fromdual_bman --version

FromDual Backup Manager

New tests added to test-suite.
MyEnv library myEnv.inc merged into project.
Output of overview made nicer for ARCHIVE tables.
Specified but missing configuration file is caught correctly now (bug #163).
Return and error codes fixed and cleaned up.
1M table creation script added to utl/create_1M_tables.php.
ORDER BY added to all GROUP BY statements to be also correct in future MySQL releases.
Added short options -h and -V for version and help to fromdual_bman.php.
Removed redundant error number in fromdual_bman.inc.
Function parseConnectString moved from fromdual_bman.inc to myEnv.inc.
FromDual Backup Manager and MySQL 5.7

Bugs related to MySQL 5.7 version detection fixed.
FromDual Backup Manager Privilege Backup

Privilege backup was fixed for new MySQL 5.7 SHOW GRANTS behaviour.
Semicolon (;) was added to privilege backup output.
FromDual Backup Manager and Xtrabackup

New behaviour of MySQL 5.7 for physical backup fixed.
Backup manager works with new xtrabackup v2.3.x again. Typo bug fixed.
FromDual Backup Manager and LVM Snapshot Backup

Started to implement LVM snapshot backup functionality.
FromDual Backup and Recovery Manager Catalog

Catalog version mismatch error made more clear.
FromDual Backup Manager Binary Log Backup

Error catch implemented for binary logs removed manually.
FromDual Recovery Manager

Changed wrong rc in progress_bar.php
