MySQL 5.7.12 was released more than two weeks ago. The new features introduced there in the form of “rapid plugins” are widely discussed, but I am more interested in the bugs reported by MySQL Community users that were fixed in this release. Unfortunately, I have not seen MySQL Community Release Notes by Morgan (like this) for quite some time, so I have to continue describing key bug fixes and naming the people who reported and verified bugs in my “Fun with Bugs” series.

As usual, let’s start with the InnoDB bugs fixed:

Bug #80070 – “allocated_size and file_size differ if create general tablespace outside datadir”. It was reported by my former colleague from Percona, Shahriyar Rzayev, and verified by Bogdan Kecman. Nice to see more people from Oracle involved in processing community bug reports!

Bug #79185 – “Innodb freeze running REPLACE statements”. This bug (which affected many users, also on versions 5.5.x and 5.6.x, and was a kind of regression) was reported by Will Bryant and verified (probably) and fixed by Shaohua Wang. The fix is also included in versions 5.5.49 and 5.6.30.

Bug #73816 – “MySQL instance stalling “doing SYNC index””. It was reported by Denis Jedig, and a lot of additional evidence was provided by my former colleague Aurimas Mikalauskas. This bug was fixed (and probably verified) by Shaohua Wang.

Bug #79200 – “InnoDB: “data directory” option of create table fails with pwrite() OS error 22” is a widely noted regression (I’ve seen a customer issue with a potentially related MariaDB problem this week). This bug was reported by Frank Ullrich and verified by Bogdan Kecman. It is also fixed in MySQL 5.6.30.

Bug #79725 – “Check algorithm=innodb on crc32 checksum mismatch before crc32(big endian)”. This bug was created to track the patch contributed by Daniel Black at GitHub. It was verified by Umesh.

Next, let’s review the replication bugs fixed in 5.7.12:

Bug #79504 – “STOP SLAVE IO THREAD prints wrong LOST CONNECTION message in error log file”.
It was reported by Venkatesh Duggirala.

Bug #78722 – “Relay log info currently_executing_gtid is not properly initialized or protected”. This bug was reported by Pedro Gomes. It contains a nice simple test case and a suggested fix.

Bug #78445 is private, so I can only quote the release notes: “RESET SLAVE ALL could delete a channel even when master_pos_wait and wait_until_sql_thread_after_gtid were still waiting for binlog to be applied. This could cause a MySQL server exit when the functions tried to access the channel that was deleted. Now, a channel reference counter was added that is increased if the channel should not be deleted when the functions are running. RESET SLAVE ALL will wait for no reference, and then it will delete the channel.” I am not sure this crash is a “security” bug of any kind, but what do I know…

Bug #78352 – “Slow startup of 5.7.x slave with relay_log_recovery = ON and many relay logs”. I reported it based on a regression, compared to 5.6.x, hit by a customer of Percona, and it was verified by Umesh. Nice to see it fixed, as it was really annoying for almost anyone who upgraded a production replication setup to 5.7.

Bug #78133 – “Slave_worker::write_info() incorrect DBUG_ENTER (contribution)”. This bug was created to track the patch contributed by Stewart Smith at GitHub. It was verified by Umesh.

Bug #77740 – “silent failure to start if mysql.gtids_executed gets HA_ERR_LOCK_WAIT_TIMEOUT”. It was reported and verified by Shane Bester.

Bug #77237 – “Multi-threaded slave log spamming on failure”. This bug was reported by Davi Arnaut and verified by Umesh. The fix is also included in MySQL 5.6.30.

Bug #78963 – “super_read_only aborts STOP SLAVE if relay_log_info_repository=TABLE, dbg crash”. It was reported by my former colleague at Percona, Laurynas Biveinis, and verified by Umesh. Check also the related Bug #79328 – “super_read_only broken as a server option”.

Bug #77684 – “DROP TABLE IF EXISTS may brake replication if slave has replication filters”.
This bug was reported by my former colleague at Percona, Fernando Laudares Camargos, for MySQL 5.6.x and verified by Umesh. MySQL 5.6.30 also got this fixed.

We all remember that Performance Schema is perfect and the greatest thing since sliced bread, but sometimes bugs are noted even there. Check Bug #79934 – “i_perfschema.table_leak random result failure”, reported and verified by Magnus Blåudd. Another example is Bug #79784 – “update setup_instruments do not affect the global mutex/rwlock/cond”, reported by Zhang Yingqiang and verified by Umesh. The latter, IMHO, is related to (or a superset of) my good old report, Bug #68097 – “Manual does not explain that some P_S instruments must be enabled at startup”, which has remained open as a feature request (after some changes in the manual) for more than 3 years already. I truly hope 5.7.12 fixed this for the general case – it’s truly important to be able to enable instruments dynamically if we expect Performance Schema to be used as a main tool for troubleshooting.

I’d also like to highlight a couple of fixes related to the optimizer:

Bug #77209 – “Update may use index merge without any reason (increasing chances for deadlock)”. It was reported and verified by my former colleague from Oracle, Andrii Nikitin. MySQL 5.6.30 also includes the fix.

Bug #72858 – “EXPLAIN .. SELECT .. FOR UPDATE takes locks”. This bug was reported by my former colleague at Percona (and, I hope, my colleague again soon) Justin Swanhart, who has a birthday today. Happy Birthday to you, Justin! The bug was verified by Umesh and is also fixed in MySQL 5.6.30.
Justin had reported another bug fixed in 5.7.12, Bug #69375 – “LOAD DATA INFILE claims to be holding ‘System Lock’ in processlist”.

Several more bugs reported by the community were also fixed, but they were in areas (or for platforms) I am not particularly interested in.

To summarize, MySQL 5.7.12 contains important bug fixes in replication and InnoDB, and it makes sense to consider an upgrade even if you do not care about any “rapid plugins”, the X protocol, encryption of data at rest, MySQL Keyring, or other “cool” new shiny features.
The MariaDB project is pleased to announce the immediate availability of MariaDB 10.0.25. See the release notes and changelog for details on this release.

Download MariaDB 10.0.25
Release Notes
Changelog
What is MariaDB 10.0?
MariaDB APT and YUM Repository Configuration Generator

Thanks, and enjoy MariaDB!
The post MariaDB 10.0.25 now available appeared first on MariaDB.org.
MariaDB MaxScale is mentioned in many blog posts recently. It’s Application of the Year 2016 after all! I’d like to test it, follow posts like this one, etc., all that on my favorite and readily available testing platforms, which are now Ubuntu of all kinds and, surely, Fedora 23 (on my wife’s workstation, the most powerful hardware at hand).

My old habits force me to build the open source software I test from source, and I do not even want to discuss the topic of “MaxScale binaries availability” that was quite “popular” some time ago. So, after building MaxScale 1.4.1 on CentOS 6.7 back on March 31, 2016 (mostly just following the MariaDB KB article on the topic) using libmysqld.a from MariaDB 10.0.23, this morning I decided to check out the new branch, 1.4.2, and build it on Fedora 23, following that same KB article (which unfortunately does not even mention Fedora after the fix to MXS-248). The thing is, Fedora is not officially supported as a platform for MaxScale 1.4.x, but why should we, those who can build things from source for testing purposes, care about this?

I started with cloning MaxScale:

git clone https://github.com/mariadb-corporation/MaxScale.git
cd MaxScale

and then:

[openxs@fc23 MaxScale]$ git branch -r
…
  origin/HEAD -> origin/develop
…
  origin/release-1.4.2
…

I remember spending enough time fighting with the develop branch while building on CentOS 6.7, mostly with the sqlite-related things it contained, so this time I proceeded immediately to the branch I want to build:

[openxs@fc23 MaxScale]$ git checkout release-1.4.2
Branch release-1.4.2 set up to track remote branch release-1.4.2 from origin.
Switched to a new branch ‘release-1.4.2’
[openxs@fc23 MaxScale]$ git branch
  develop
* release-1.4.2
[openxs@fc23 MaxScale]$ mkdir build
[openxs@fc23 MaxScale]$ cd build

The last two steps originate from the KB article. We are almost ready for building, but what about the prerequisites?
I’ve collected all the packages required for CentOS in that article and tried to install them all:

[openxs@fc23 build]$ sudo yum install mariadb-devel mariadb-embedded-devel libedit-devel gcc gcc-c++ ncurses-devel bison flex glibc-devel cmake libgcc perl make libtool openssl-devel libaio libaio-devel librabbitmq-devel libcurl-devel pcre-devel rpm-build
[sudo] password for openxs:
Yum command has been deprecated, redirecting to ‘/usr/bin/dnf install …’.
See ‘man dnf’ and ‘man yum2dnf’ for more information.
To transfer transaction metadata from yum to DNF, run:
‘dnf install python-dnf-plugins-extras-migrate && dnf-2 migrate’
Last metadata expiration check: 0:26:04 ago on Wed Apr 27 10:43:24 2016.
Package gcc-5.3.1-6.fc23.x86_64 is already installed, skipping.
…
Package pcre-devel-8.38-7.fc23.x86_64 is already installed, skipping.
Dependencies resolved.
================================================================================
 Package                  Arch    Version                  Repository     Size
================================================================================
Installing:
 autoconf                 noarch  2.69-21.fc23             fedora        709 k
 automake                 noarch  1.15-4.fc23              fedora        695 k
 …
 mariadb-devel            x86_64  1:10.0.23-1.fc23         updates       869 k
 mariadb-embedded         x86_64  1:10.0.23-1.fc23         updates       4.0 M
 mariadb-embedded-devel   x86_64  1:10.0.23-1.fc23         updates       8.3 M
 …
 rpm-build                x86_64  4.13.0-0.rc1.13.fc23     updates       137 k

Transaction Summary
================================================================================
Install  27 Packages

Total download size: 18 M
Installed size: 64 M
Is this ok [y/N]: Y
…
Complete!

Now, let’s try the simple approach:

[openxs@fc23 build]$ cmake ..
…
-- MySQL version: 10.0.23
-- MySQL provider: MariaDB
-- Looking for pcre_stack_guard in MYSQL_EMBEDDED_LIBRARIES_STATIC-NOTFOUND
CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
MYSQL_EMBEDDED_LIBRARIES_STATIC
    linked by target "cmTC_2494a" in directory /home/openxs/git/MaxScale/build/CMakeFiles/CMakeTmp
CMake Error: Internal CMake error, TryCompile configure of cmake failed
-- Looking for pcre_stack_guard in MYSQL_EMBEDDED_LIBRARIES_STATIC-NOTFOUND - not found
-- PCRE libs: /usr/lib64/libpcre.so
-- PCRE include directory: /usr/include
-- Embedded mysqld does not have pcre_stack_guard, linking with system pcre.
CMake Error at cmake/FindMySQL.cmake:115 (message):
  Library not found: libmysqld.
  If your install of MySQL is in a non-default location, please provide the
  location with -DMYSQL_EMBEDDED_LIBRARIES=<path to library>
Call Stack (most recent call first):
  CMakeLists.txt:37 (find_package)

-- Configuring incomplete, errors occurred!
See also "/home/openxs/git/MaxScale/build/CMakeFiles/CMakeOutput.log".
See also "/home/openxs/git/MaxScale/build/CMakeFiles/CMakeError.log".

Failure: it seems cmake can not find libmysqld.a. Let me try to find it:

[openxs@fc23 build]$ sudo find / -name libmysqld.a 2>/dev/null
/home/openxs/git/percona-xtrabackup/libmysqld/libmysqld.a
/home/openxs/dbs/5.7/lib/libmysqld.a
/home/openxs/dbs/p5.6/lib/libmysqld.a
/home/openxs/dbs/fb56/lib/libmysqld.a
/home/openxs/10.1.12/lib/libmysqld.a

That’s all, even though I installed all the packages that looked required based on the article! I have the library in many places (in my own builds and even in a sandbox with MariaDB 10.1.12), but it’s not installed where expected. Some more desperate tries (installing the MariaDB server with sudo yum install mariadb-server, searching for a package that provides libmysqld.a, etc.), a chat with engineers of MariaDB, and I ended up with the fact that my packages are from Fedora (not MariaDB) and they just do not include the static library. Looks like a bug in Fedora packaging, if you ask me.

I was not ready to add MariaDB’s repository at the moment (to get MariaDB-devel etc., something the KB article also suggests for supported platforms), so I decided that it would be fair just to build the current MariaDB 10.1.13 from source and use everything needed from there. Last time I built the 10.2 branch, so I had to check out 10.1 first:

[openxs@fc23 server]$ git checkout 10.1
Switched to branch ‘10.1’
Your branch is behind ‘origin/10.1’ by 2 commits, and can be fast-forwarded.
  (use “git pull” to update your local branch)
[openxs@fc23 server]$ git pull
Updating 1cf852d..071ae30
Fast-forward
 client/mysqlbinlog.cc                    | 523 ++++++++++++++++++++++---------
 mysql-test/r/mysqlbinlog_raw_mode.result | 274 ++++++++++++++++
 mysql-test/t/mysqlbinlog_raw_mode.test   | 387 +++++++++++++++++++++++
 sql/sql_priv.h                           |   3 +-
 storage/innobase/dict/dict0boot.cc       |  20 +-
 storage/xtradb/dict/dict0boot.cc         |  20 +-
 6 files changed, 1062 insertions(+), 165 deletions(-)
 create mode 100644 mysql-test/r/mysqlbinlog_raw_mode.result
 create mode 100644 mysql-test/t/mysqlbinlog_raw_mode.test

Then I executed the following while in the server directory:

make clean
rm CMakeCache.txt
cmake . -DCMAKE_BUILD_TYPE=RelWithDebInfo -DWITH_SSL=system -DWITH_ZLIB=bundled -DMYSQL_MAINTAINER_MODE=0 -DENABLED_LOCAL_INFILE=1 -DWITH_JEMALLOC=system -DWITH_WSREP=ON -DWITH_INNODB_DISALLOW_WRITES=ON -DWITH_EMBEDDED_SERVER=ON -DCMAKE_INSTALL_PREFIX=/home/openxs/dbs/maria10.1
make
make install && make clean

Note that I explicitly asked to build the embedded server. I checked that the library is in the location I need:

[openxs@fc23 server]$ sudo find / -name libmysqld.a 2>/dev/null
/home/openxs/git/percona-xtrabackup/libmysqld/libmysqld.a
/home/openxs/dbs/maria10.1/lib/libmysqld.a
/home/openxs/dbs/5.7/lib/libmysqld.a
/home/openxs/dbs/p5.6/lib/libmysqld.a
/home/openxs/dbs/fb56/lib/libmysqld.a
/home/openxs/10.1.12/lib/libmysqld.a

Then I moved back to the MaxScale/build directory and explicitly pointed out the location of the headers, library and messages that I want to use with MaxScale:

[openxs@fc23 build]$ cmake ..
  -DMYSQL_EMBEDDED_INCLUDE_DIR=/home/openxs/dbs/maria10.1/include/mysql
  -DMYSQL_EMBEDDED_LIBRARIES=/home/openxs/dbs/maria10.1/lib/libmysqld.a
  -DERRMSG=/home/openxs/dbs/maria10.1/share/english/errmsg.sys
  -DCMAKE_INSTALL_PREFIX=/home/openxs/maxscale -DWITH_MAXSCALE_CNF=N
…
-- Build files have been written to: /home/openxs/git/MaxScale/build

[openxs@fc23 build]$ make
…
[ 95%] [BISON][ruleparser] Building parser with bison 3.0.4
ruleparser.y:34.1-13: warning: deprecated directive, use ‘%name-prefix’ [-Wdeprecated]
 %name-prefix="dbfw_yy"
 ^^^^^^^^^^^^^
[ 96%] Building C object server/modules/filter/dbfwfilter/CMakeFiles/dbfwfilter.dir/ruleparser.c.o
[ 96%] Building C object server/modules/filter/dbfwfilter/CMakeFiles/dbfwfilter.dir/token.c.o
[ 97%] Linking C shared library libdbfwfilter.so
[ 97%] Built target dbfwfilter
Scanning dependencies of target maxadmin
[ 98%] Building C object client/CMakeFiles/maxadmin.dir/maxadmin.c.o
[ 98%] Linking C executable maxadmin
[100%] Built target maxadmin

It seems the build completed without problems this time. We can try to test it (some tests do fail):

[openxs@fc23 build]$ make testcore
…
 1/22 Test  #1: Internal-TestQueryClassifier .....***Exception: Other  0.35 sec
      Start  2: Internal-CanonicalQuery
 2/22 Test  #2: Internal-CanonicalQuery ..........***Failed  0.25 sec
      Start  3: Internal-CanonicalQuerySelect
 3/22 Test  #3: Internal-CanonicalQuerySelect ....***Failed  0.04 sec
      Start  4: Internal-CanonicalQueryAlter
 4/22 Test  #4: Internal-CanonicalQueryAlter .....***Failed  0.04 sec
      Start  5: Internal-CanonicalQueryComment
 5/22 Test  #5: Internal-CanonicalQueryComment ...***Failed  0.04 sec
      Start  6: Internal-TestAdminUsers
 6/22 Test  #6: Internal-TestAdminUsers ..........   Passed  0.44 sec
      Start  7: Internal-TestBuffer
 7/22 Test  #7: Internal-TestBuffer ..............   Passed  0.01 sec
      Start  8: Internal-TestDCB
 8/22 Test  #8: Internal-TestDCB .................   Passed  0.01 sec
      Start  9: Internal-TestFilter
 9/22 Test  #9: Internal-TestFilter ..............
   Passed  0.03 sec
…

(As a side note, make install in my case had NOT installed anything to /home/openxs/maxscale, something to deal with later, as it worked on CentOS 6.7…)

In any case, I now have binaries to work with, of version 1.4.2:

[openxs@fc23 build]$ ls bin/
maxadmin  maxbinlogcheck  maxkeys  maxpasswd  maxscale
[openxs@fc23 build]$ bin/maxscale --version
MariaDB Corporation MaxScale 1.4.2  Wed Apr 27 13:24:01 2016
------------------------------------------------------
MaxScale 1.4.2
[openxs@fc23 build]$ bin/maxadmin --version
bin/maxadmin Version 1.4.2

To be continued one day… Stay tuned!
Recently I worked on a feature that involved displaying a list of folders using an auto-complete field. A simple enough task, but one with a bit of complexity behind it. For starters, each organization in our system has its own “folder hierarchy”, a tree structure of folders that could be one to many layers deep. Second, the UI we wanted (auto-complete) meant that we would need to “search” through the folders to find a match based on the user’s input, as well as display the path information (where the folder lives) so that the user can differentiate folders that happen to have the same or a similar name. Since it was auto-complete, the searching also needed to be fast, or else the user experience would be annoying.
The folder tree structure was stored in a MySQL DB table in a very typical self-referencing schema for representing hierarchical data. Each row of data has a column that references the “parent” folder id, and if there isn’t one, then it is a “root” folder. Each row in the table could be represented by a Java object called Folder. Getting folders by ID, and getting them in a tree-like structure, was pretty straightforward. However, the path string wasn’t part of the deal.
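The self-referencing layout described above can be sketched as follows. This is a minimal, hypothetical example using Python's sqlite3 for illustration; the table, column and function names are my own, not the actual production schema:

```python
import sqlite3

# Adjacency-list ("parent id") folder table: NULL parent_id marks a root.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE folder (
        id        INTEGER PRIMARY KEY,
        name      TEXT NOT NULL,
        parent_id INTEGER REFERENCES folder(id)  -- NULL means a root folder
    )
""")
conn.executemany(
    "INSERT INTO folder (id, name, parent_id) VALUES (?, ?, ?)",
    [(1, "Reports", None), (2, "2016", 1), (3, "Q1", 2)],
)

def path(conn, folder_id):
    """Walk parent links up to the root and build a '/a/b/c' path string."""
    parts = []
    while folder_id is not None:
        name, folder_id = conn.execute(
            "SELECT name, parent_id FROM folder WHERE id = ?", (folder_id,)
        ).fetchone()
        parts.append(name)
    return "/" + "/".join(reversed(parts))

print(path(conn, 3))  # -> /Reports/2016/Q1
```

Note how the path needs one query per tree level; that per-keystroke cost is exactly the problem discussed next.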
My first thought was that the paths would need to be pre-computed ahead of time so that they wouldn’t need to be recomputed every time a user typed a letter, an expensive and slow operation. One way to do it would be to have some application code insert into a table containing the folder paths whenever a folder was saved. Our resident MySQL expert, Ike Walker, suggested I look into the closure table pattern, discussed in Chapter 3, “Naive Trees”, of Bill Karwin’s book, “SQL Antipatterns”. Yes, this was exactly what I was thinking of! However, as I thought about it more, there were some things I didn’t quite like about it. First, there was already a lot of existing data. Using closure tables would mean that I would need to script something to populate the table storing the folder paths, and run it at least once. The fact that I would need to script something like that also made me think about how this table would get updated. The book “SQL Antipatterns” uses a discussion thread as its example. Individual comments in a thread have the same kind of tree structure that folders might have, but comments and replies typically don’t move around, whereas folders do. The folder paths would have to be kept in sync with the underlying tree, including whole sections of a particular folder’s branch. The thought of having to write this updating code made me think twice about my initial approach.
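For reference, here is a minimal sketch of the closure-table idea (my own illustrative names and data, using SQLite rather than MySQL): alongside the folder table, we store one row per ancestor/descendant pair, including a self-pair for every folder.

```python
import sqlite3

# Closure-table pattern from "SQL Antipatterns": every ancestor/descendant
# pair is materialized, so ancestry queries need no recursion.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE folder (id INTEGER PRIMARY KEY, name TEXT, parent_id INTEGER);
    CREATE TABLE folder_closure (
        ancestor   INTEGER NOT NULL,
        descendant INTEGER NOT NULL,
        PRIMARY KEY (ancestor, descendant)
    );
""")

def insert_folder(conn, fid, name, parent_id):
    conn.execute("INSERT INTO folder VALUES (?, ?, ?)", (fid, name, parent_id))
    # Self-pair, plus one row per ancestor of the parent.
    conn.execute("INSERT INTO folder_closure VALUES (?, ?)", (fid, fid))
    if parent_id is not None:
        conn.execute("""
            INSERT INTO folder_closure (ancestor, descendant)
            SELECT ancestor, ? FROM folder_closure WHERE descendant = ?
        """, (fid, parent_id))

insert_folder(conn, 1, "Reports", None)
insert_folder(conn, 2, "2016", 1)
insert_folder(conn, 3, "Q1", 2)

# All ancestors of Q1 in one indexed query (ids here were assigned root-first).
rows = conn.execute("""
    SELECT f.name FROM folder_closure c JOIN folder f ON f.id = c.ancestor
    WHERE c.descendant = 3 ORDER BY c.ancestor
""").fetchall()
print([r[0] for r in rows])  # -> ['Reports', '2016', 'Q1']
```

Reads become a single query, but the insert logic already hints at the maintenance burden: moving a folder means rewriting closure rows for its whole subtree, which is exactly the updating code worried about above.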
Caching to the rescue
So what does a developer do when there’s a problem with performance? Cache. So I thought: how can caching help me here? As I pointed out earlier, there was already a Java object, Folder, and a way to get the folders in a tree structure. If the Folder had a field to store the path, then I’d be halfway there, right? But wait, doesn’t that mean I’d be storing the path in the DB again? Well, not if I set it on a transient field, and set it only while the tree structure is getting built up. Hold on, doesn’t that mean that the paths still need to be calculated every time we search the tree? Here’s where the caching bit comes in.

The folder tree is traversed once, and as it recurses, the paths get computed and stored on a transient field of each Folder object. At the same time, the Folders are stored in a flat List that can be used for iterating and searching. Next, the method which retrieves the folder list is cached, with the organization’s ID in the cache key so that each organization’s list is kept separately. To make sure we don’t cache large lists of heavy Folder objects, I cached only lists of Folder IDs (not objects) and made the Folder objects also cached by their ID. Okay, then what about the updating part I talked about? The cached list of Folders is invalidated whenever a Folder update occurs. It means that the paths need to be re-computed, but only once until the next update, and that seemed reasonable enough.
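A rough Python sketch of this approach (the real code is Java with a proper cache layer; the names, the sample data, and the lru_cache stand-in here are invented for illustration):

```python
from functools import lru_cache

class Folder:
    def __init__(self, fid, name, parent_id):
        self.id, self.name, self.parent_id = fid, name, parent_id
        self.path = None  # transient: computed during traversal, never stored in the DB

# Pretend DB rows: (id, name, parent_id); None marks a root folder.
ROWS = [(1, "Reports", None), (2, "2016", 1), (3, "Q1", 2), (4, "Q1", 1)]

@lru_cache(maxsize=None)          # stand-in for the per-organization cache
def folder_list(org_id):
    folders = {fid: Folder(fid, name, pid) for fid, name, pid in ROWS}
    flat = []

    def walk(folder, prefix):     # one traversal computes every path
        folder.path = prefix + "/" + folder.name
        flat.append(folder)
        for child in folders.values():
            if child.parent_id == folder.id:
                walk(child, folder.path)

    for f in folders.values():
        if f.parent_id is None:
            walk(f, "")
    return flat

def search(org_id, term):
    """Autocomplete: iterate the cached flat list, match names, show paths."""
    return [f.path for f in folder_list(org_id) if term.lower() in f.name.lower()]

print(search(42, "q1"))  # -> ['/Reports/2016/Q1', '/Reports/Q1']
```

On a folder update, the real code evicts the organization's cache entry; here `folder_list.cache_clear()` would play that role, forcing a one-time recomputation of paths. Note how the two same-named "Q1" folders are disambiguated by their paths.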
In the end, I didn’t use the closure table, although I’m keen on using it if the opportunity arises. One of the keys to coming up with the solution was knowing not only the data structure but also the nature of the data: how it’s used and how often it tends to change. In all fairness, Bill Karwin also points out frequent updating as a weakness/pain-point of the closure table pattern. So there you have it, my friends: use those closure tables if you can, and if you can’t, then hopefully the approach I used here will give you some ideas!
This tutorial shows how to prepare a Debian Jessie server (with Apache2, BIND, Dovecot) for the installation of ISPConfig 3.1. The web hosting control panel ISPConfig 3 allows you to configure the following services through a web browser: Apache web server, Postfix mail server, Dovecot IMAP/POP3 server, MySQL, BIND nameserver, PureFTPd, SpamAssassin, ClamAV, and many more.
Elastic announced their Graph tool at ElastiCON 2016 (see presentation here). It’s part of the forthcoming X-Pack which bundles Graph along with other helper tools.
SQL Developer has a DBMS Output panel you can open and add to your desktop. You then add a connection you want to poll, and then you enable polling. It’s about 4 clicks. If you want to increase the default buffer size from 20,000 characters, then that’s a few more clicks and keystrokes. A friend […]
MySQL Replication has become an essential component of scale-out architectures in LAMP environments. When there is a need to scale out, MySQL offers a multitude of solutions, the most common being to add read replicas. The major bottleneck is generally not so much writing our data as reading it back. Therefore the easiest way to scale MySQL is to add replicas for reading.
And with today’s cloud environments, where resources are dynamically allocated and deallocated, systems need the ability to automatically adapt to sudden changes. For MySQL Replication, this includes tasks like detecting failures, promoting a slave to master, failing over slaves, and so on. A load balancer with a Virtual IP can also help mask topology changes from the application and dispatch read and write traffic appropriately.
With that in mind, we’d like to introduce the Severalnines Blueprint for MySQL Replication, a new whitepaper which discusses all aspects of a MySQL Replication topology: the ins and outs of deployment, setting up replication, monitoring, upgrades, performing backups and managing high availability using proxies such as ProxySQL, MaxScale and HAProxy.
The MySQL Replication Blueprint is about having a complete ops-ready solution from end to end including:
Installation and configuration of master/slave MySQL servers, load balancers, Virtual IP and failover rules
Management of the topology, including failure detection, failover, repair and subsequent reconfiguration of components
Managing topology changes when adding, removing or maintaining servers
Managing configuration changes
Monitoring of all components from one single point
This whitepaper discusses the following core topics in depth:
Why the need for a Replication Blueprint
Introducing the MySQL Replication Blueprint
Get your free copy of our new whitepaper
Tags: MySQL, mysql replication, mysql gtid, master failover, haproxy, MaxScale, proxysql, whitepaper
Fri, 2016-04-29 08:22
nishantvyas

Relational databases store data in rows because a typical SQL query looks for multiple fields within a record. For example, if you ask for the name, zip code and email address of all your customers in New York, the result is presented in rows, with each row containing several fields from a single record. Row structures are also well optimized to handle a lot of inserts and updates.
But analytic queries are better handled with a column structure, because they tend to go deep on a column of data: most queries touch only a tiny subset of the available columns, and they’re also mostly read-only. For example, retrieving daily sales data for all your stores in California for the past two years is a columnar operation, because it cuts across many records to retrieve data from a specific field. A typical ad-hoc aggregation query doesn’t care about most fields, just the trends in one field.
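The difference can be illustrated with a toy sketch (pure Python with invented data; a real column store operates on compressed on-disk extents, not in-memory lists):

```python
# Row stores keep whole records together; column stores keep each field's
# values together, so an aggregate touches only the columns it needs.
rows = [  # row-oriented: one dict per record
    {"store": "SF", "state": "CA", "day": "2016-04-01", "sales": 100.0},
    {"store": "LA", "state": "CA", "day": "2016-04-01", "sales": 80.0},
    {"store": "NY", "state": "NY", "day": "2016-04-01", "sales": 120.0},
]

cols = {  # column-oriented: one list per field
    "store": ["SF", "LA", "NY"],
    "state": ["CA", "CA", "NY"],
    "day":   ["2016-04-01"] * 3,
    "sales": [100.0, 80.0, 120.0],
}

# OLTP-style lookup (all fields of one record) is natural on rows:
record = next(r for r in rows if r["store"] == "LA")

# Analytic aggregate (one field across many records) reads just two columns:
ca_sales = sum(s for s, st in zip(cols["sales"], cols["state"]) if st == "CA")
print(record["sales"], ca_sales)  # -> 80.0 180.0
```

The aggregate never touches the store or day fields at all, which is why columnar layouts shine for queries that scan one field across millions of records.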
MariaDB ColumnStore is not only optimized for columnar operations, but also simplifies management. There is no need for indexing; metadata is stored in memory. That eliminates a cumbersome tuning process. ColumnStore, when paired with MariaDB, supports just about any query you want to throw at it. You can even join a MariaDB ColumnStore table with an InnoDB or remote MySQL table, a feature that makes for unified simplicity. But there is much more.
In the last decade, we kept hearing that SQL is not needed for data processing or analytics, yet in the last few years every single OLTP and analytics solution is building a SQL layer. SQL is the most proven way of processing data. So, MariaDB ColumnStore is compatible with standard SQL using the MariaDB interface. Full SQL compliance means MariaDB ColumnStore works out-of-the-box with your existing business intelligence tools and SQL queries. In fact, it would work with most popular business intelligence tools, like Tableau and Business Objects, as well as anything that supports ODBC/JDBC. For data scientists, it works with R for advanced statistical analysis.
At the same time, we realize that SQL is not the best choice for machine learning and data discovery use cases. We want to integrate Apache Spark libraries like MLlib into ColumnStore to complete the picture.
Most importantly, MariaDB ColumnStore is based on an open-source GPLv2 fork of the InfiniDB community project. We believe community-driven software development is the new mandate of our time. We want to leverage our community’s strength in building MariaDB ColumnStore.
Tags: Big Data, Business, Cloud, Clustering, ColumnStore
About the Author
Nishant joins MariaDB as Head of Product and Strategy from LinkedIn, where he was one of the early employees. During his almost nine-year tenure at LinkedIn, he contributed to building, scaling and operating production data stores using technologies like Oracle, MySQL, NoSQL and more. Nishant has extensive experience as a database engineer, database architect and DBA, and has held various leadership roles. He holds a bachelor’s degree in engineering from Gujarat University and a master’s degree in computer science from the University of Bridgeport. Based in the San Francisco bay area, Nishant is co-author of a patent in waterwheel sharding.
While much less common than 10 or 15 (err… even 20) years ago, you still sometimes hear MySQL being called a “toy” database. The good news is, when somebody says that, they’re admitting ignorance and you can help educate them!
Recently, a fellow IBMer submitted a pull request (and bug) to start having MySQL support on Z Series (s390x).
Generally speaking, when there’s effort being spent on getting something to run on Z, it is in no way considered a toy by those who’ll end up using it.