Month: December 2015

New Year Wishes for Providers of MySQL Support Services

Three years ago I shared my wishes for customers of Oracle’s MySQL Support Services. There I basically asked them to report any problem that they suspect to be caused by a bug in the MySQL software at http://bugs.mysql.com. This year I want to share wishes mostly for myself (and other providers of MySQL Support services).

I have been working as a MySQL Support Engineer for almost 10.5 years, at MySQL AB, Sun, Oracle and Percona. I have had enough opportunities to see all kinds of approaches to, and qualities of, this service. But I still have some dreams in this area that I’d like to see fulfilled, both for myself as a provider of the service and for customers of such a service:

1. I wish to see MySQL Support done mostly in an asynchronous way, via emails and (when absolutely needed and possible) remote login sessions. In most cases it’s enough for the customer to know that she will get a detailed, best possible answer to her initial question (or problem statement) and to any follow-up question or request in a predictable, well-defined time. There is no need for the engineer and the customer to always work in sync, by talking on the phone, chatting or doing shared screen sessions. Support should work the same way the UNIX operating system does: by sharing all available resources (engineers) among all tasks (support requests) at hand, allocating a resource to a task for some small amount of time and then forcing the resource to switch to another task, either when the allocated time slice ends or immediately when we have to wait for something to complete. Surely this mode is beneficial for support providers (because of the ability to work for more customers concurrently than they have engineers online), but customers also get clear benefits. They can move on and work on something else until they get an email back (or the time to get a reply passes), and they may get a reply based on the concurrent (but asynchronous) work of several engineers (“fan-out”).

2. At the same time, I wish each support provider to have a well-defined SLA (the time to get a guaranteed useful technical reply: either a solution, a suggestion or a further question) not only for the initial reply (as we can see here and, honestly, almost everywhere), but also for the follow-ups, for each and every customer email. Ideally both sides should be able to negotiate the date (and time) of the next reply (even if it’s different from the formal official SLA), and then make sure to meet this deadline in 100% of cases. Some steps towards this goal are visible here, but so far no well-known Support provider is perfect with follow-ups in time, based on my knowledge.

3. I wish Support engineers to never be involved in phone conferences with customers without a clearly defined agenda related to MySQL and a limited time to be spent on the phone (see item 1 above for the reasons). Sometimes somebody from the “services” side should be “there”, in case of questions during some long discussion. I think this is a job for the customer’s TAM (technical assistance manager), a Sales Engineer (if the topic is related to purchasing some service or software) or anyone who is paid per hour (like a Consultant).

4. I wish Support engineers, no matter what Support provider they work for, to always report upstream MySQL bugs at http://bugs.mysql.com/ and fork-specific bugs at their public bug trackers, openly available (public) to all MySQL users. Some bugs may be repeatable only with customer-specific and confidential data, and some bugs may have security implications. Ideally, Support engineers should always work on a repeatable test case or an otherwise well-grounded bug report NOT containing customer data. As for security problems, there is always a way to explain in public the important details of the possible security attack vector and list the versions affected, without giving enough details for “script kiddies” to just blindly copy-paste the test case to get unauthorized access to, or crash, a well-managed public MySQL server.

5. I wish Support engineers to present their work and share their experience in public. We all should try to share the knowledge we have and gain while working with customers, not only internally with our colleagues in services or via internal knowledge bases, but also in our own blogs and articles, on public MySQL forums and at MySQL-related conferences. MySQL Support providers should encourage support engineers to make the results of their work public whenever possible. Not only bugs, but problem-solving approaches, code written (if any) and experience gained should be shared with the MySQL community. This will give us all customers who know more about MySQL and will help us not to re-invent the wheel.

To summarize, I wish our customers in the New Year of 2016 a simple, but well-defined, responsible and reliable 24×7 Support service provided by engineers who are well known to the Community based on their public work on MySQL (via blog posts, bug reports and conference presentations). I wish all MySQL Support Service providers to deliver what they promise (or more) in 100% of cases. I wish myself to work for a MySQL Support provider that cares about my wishes and tries to help me see my dreams expressed here come true.

Happy New Year, MySQL Community!

MySQL Benchmark in the Cloud

Testing functionalities and options for a database can be challenging at times, as a live production environment might be required. As I was looking for different options, I was directed by Derek Downey to this post in the Percona blog.
The blog discussed an interesting and fun tool from Percona, tpcc-mysql. I was interested in testing the tool, so I decided to play around with it on an AWS EC2 server. In this post I will expand on the Percona blog post, since the tool lacks documentation, and explain how I used it to create a MySQL benchmark in AWS.

Why tpcc-mysql?
There are various reasons why tpcc-mysql could be a good option for a benchmarking project. The following points highlight most of them:
Pros:
- Mimics the full DB structure of a real warehouse.
- Simulates a real-life load on the server.
- Options and flexibility.
- Very light footprint on the system.
Cons:
- No documentation.

Getting the Server Started
You’ll probably need to launch a new EC2 server from the AWS Console, or use an existing one that you already have up and running. Either way, you had better save the current state of your database. Luckily, AWS EBS offers a really good and convenient solution to achieve this: it is possible to create and manage snapshots of EBS volumes in the AWS Dashboard with some very basic steps. I personally prefer to set up the MySQL base and data directories together in a volume separate from the root volume. This allows me to swap between different versions and data-sets without having to reconfigure my tools every time I load a snapshot. Writing a good description helps when creating new volumes, since suggestions based on those descriptions come up as you start typing.
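
For reference, here is roughly how a dedicated data volume can be attached and pointed at by MySQL. This is only a sketch: the device name, mount point and paths below are hypothetical and will vary with your instance and AMI:
# Assuming the EBS volume (new or created from a snapshot) appears as /dev/xvdf.
sudo mkfs.ext4 /dev/xvdf    # only for a brand new, empty volume -- never on a snapshot restore
sudo mkdir -p /mysql
sudo mount /dev/xvdf /mysql
echo '/dev/xvdf /mysql ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab
# Point MySQL at the new location (e.g. datadir = /mysql/data in my.cnf) and restart it.
sudo service mysql restart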

Setting up the Benchmark
Once you have taken your snapshot and configured your MySQL, move on to the setup. First we’ll need to install the prerequisites. tpcc-mysql uses mysql_config, which is part of the libmysqlclient-dev package. We also need Bazaar. So we’ll go ahead and install both:
sudo apt-get install libmysqlclient-dev
sudo apt-get install bzr

Install & Compile tpcc-mysql
Use the following commands to download the tpcc-mysql source code and compile it:
bzr branch lp:~percona-dev/perconatools/tpcc-mysql
cd tpcc-mysql/src
make all

Prepare the Database & Create Required Tables
Once tpcc-mysql has been compiled, we need to prepare the database for the benchmark. This consists of running a few scripts to create the required database and tables, and to generate random data to use during the testing process. Following these steps will create the database and the tables for us; the scripts are all part of the tpcc-mysql package:
cd ~/tpcc-mysql
# 1. Create the database to load data into
mysql -u root -p -e "CREATE DATABASE tpcc1000;"
# 2. Create the required table definitions
mysql -u root -p tpcc1000 < create_table.sql
# 3. Add foreign keys and indexes 
mysql -u root -p tpcc1000 < add_fkey_idx.sql

The following tables are created from the previous steps:
$ mysql -u root -p tpcc1000 -e "SHOW TABLES;"
Enter password:
+--------------------+
| Tables_in_tpcc1000 |
+--------------------+
| customer           |
| district           |
| history            |
| item               |
| new_orders         |
| order_line         |
| orders             |
| stock              |
| warehouse          |
+--------------------+

As you can see, tpcc-mysql mimics a warehouse’s database that tracks clients, items, orders, stock, etc.

Load Data into the Tables
The last step remaining before we can start our test is to populate the tables with some data. For that, tpcc-mysql has a script, tpcc_load, that does the job. The tpcc_load script generates random dummy data in the tables created in the previous steps. The script also has a parameter that allows you to specify how many warehouses you want to simulate. The script usage is as follows:
tpcc_load [server] [DB] [user] [pass] [warehouse]
In our example, we’ll use the following:
./tpcc-mysql/tpcc_load 127.0.0.1 tpcc1000 root "$pw" 2

Beginning the Benchmarking Process
This would be a good time to take a snapshot of your server/dataset, so you can come back to it. Also, before we get started, let’s get familiar with the script we need to use for starting the benchmarking process, tpcc_start. The script creates transactions that execute various statements like SELECT, UPDATE, DELETE, and INSERT. It also generates a detailed output of the progress and a summary at the end. You can redirect this output to a file so you can compare it later on or run some analysis on it.
The script comes with various parameters to give you the flexibility to configure it as you desire:
tpcc_start -h[server] -P[port] -d[DB] -u[mysql_user] -p[mysql_password] -w[# of warehouses] -c[# of connections] -r[warmup_time] -l[running_time]
Now let’s get to the fun part! The following command will start a simulation of warehouse transactions and record the output in the file tpcc-output-01.log:
./tpcc_start -h127.0.0.1 -dtpcc1000 -uroot -p -w2 -c16 -r10 -l1200 > ~/tpcc-output-01.log

Analyzing the Output
tpcc-mysql comes with different scripts that can be used for analysis; check the tpcc-mysql/scripts folder. For example:
$ ls ~/tpcc-mysql/scripts/
analyze_min.sh
analyze.sh          
anal.full.sh    
analyze_modified.sh 
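
If you just want a quick look at the raw numbers without the helper scripts, the log itself can be inspected directly. The commands below are only a sketch and assume the tpcc_start output format of the version used here, where per-interval lines carry transaction counts and the final summary line mentions TpmC:
# Scroll through the per-interval measurements collected during the run.
less ~/tpcc-output-01.log
# Pull out the final throughput summary (assumes the summary line contains "TpmC").
grep -i "tpmc" ~/tpcc-output-01.log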

Visual Analysis of the Output
We can always take these tests a step further in many different directions. Since plotted data is a lot of fun, why not do a quick experiment with it? The same blog post I used as my reference for this post also has a modified version of the analyze.sh script that comes with tpcc-mysql, named tpcc-output-analyze.sh. What this script does is extract the time and the number of transactions for each time block, in a format that gnuplot can read for plotting the data. So let’s use the script on the output file:
./tpcc-output-analyze.sh tpcc-logs/tpcc-output-01.log tpcc-analyzed/time_tr_data_01.txt
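
Before running the packaged plotting script, it may help to see what that step boils down to. The snippet below is only a rough sketch of the idea (not the actual tpcc-graph-build.sh); it assumes gnuplot is installed (next step) and that the file generated above contains two columns, elapsed time and transaction count:
# Hypothetical, minimal equivalent of the graph-building step.
mkdir -p tpcc-graphs
gnuplot <<'EOF'
set terminal jpeg size 800,600
set output "tpcc-graphs/graph01.jpg"
set xlabel "Time (seconds)"
set ylabel "Transactions per interval"
plot "tpcc-analyzed/time_tr_data_01.txt" using 1:2 with lines title "tpcc throughput"
EOF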

To install gnuplot you simply run:
sudo apt-get install gnuplot
Then, we can create the plot using the tpcc-graph-build.sh script (from here as well) as follows:
./tpcc-graph-build.sh tpcc-analyzed/time_tr_data_01.txt tpcc-graphs/graph01.jpg
And this generated a plot of transactions over time for me.

Conclusion
I hope this was helpful. As you can see, there is a lot of potential in what can be done using tpcc-mysql. If there is anything that you come up with or experiment with, I would love to hear about it. Discover more about our expertise in MySQL and the Cloud.

Happy New Year 2016!

Another year has passed. I take the opportunity to thank you for visiting and to wish you a Happy New Year 2016! In case you didn’t recognize it: that is supposed to look like fireworks, The Oracle Instructor style 😉 2015 was a great year for uhesse.com with 345,000+ views and the crossing of the one […]

Database Performance Webinar: Tired of MySQL Making You Wait?

Too often developers and DBAs struggle to pinpoint the root cause of MySQL database performance issues, and then spend too much time trying to fix them. Wouldn’t it be great to bypass wasted guesswork and get right to the issue?
In our upcoming webinar Tired of MySQL Making You Wait? we’re going to help you discover how to significantly increase the performance of your applications and reduce database response time.
In this webinar, Principal Architect Alexander Rubin and Database Evangelist Janis Griffin will provide the key steps needed to identify, prioritize, and improve query performance.
They will discuss the following topics:

Wait time analytics using Performance / Information schemas (see the example query after this list)
Monitoring for performance using DPA
Explaining plan operations focusing on temporary tables and filesort
Using indexes to optimize your queries
Using loose and tight index scans in MySQL
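
As a small taste of the first topic above, wait-time analysis can start with a query along these lines. This is a minimal sketch, assuming MySQL 5.6+ with the Performance Schema enabled; it lists the wait events the server spends the most time on:
# Top wait events server-wide, from the Performance Schema (illustrative query).
mysql -u root -p -e "
  SELECT EVENT_NAME, COUNT_STAR,
         SUM_TIMER_WAIT/1000000000000 AS total_wait_seconds
  FROM performance_schema.events_waits_summary_global_by_event_name
  WHERE COUNT_STAR > 0
  ORDER BY SUM_TIMER_WAIT DESC
  LIMIT 10;"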

WHEN:
Thursday, January 7, 2016 10:00am Pacific Standard Time (UTC-8)
PRESENTERS:
Alexander Rubin, Principal Consultant, Percona

Janis Griffin, Database Evangelist, SolarWinds
Register now!

Percona is the only company that delivers enterprise-class software, support, consulting and managed services solutions for both MySQL and MongoDB® across traditional and cloud-based platforms that maximize application performance while streamlining database efficiencies.
Percona’s industry-recognized performance experts can maximize your database, server and application performance, lower infrastructure costs, and provide capacity and scalability planning for future growth.

Log Buffer #455: A Carnival of the Vanities for DBAs

What better to do during the holiday season than to read the Log Buffer? This Log Buffer edition is here to add some sparkle to Oracle, MySQL and SQL Server on your days off.

Oracle:
Ops Center version 12.3.1 has just been released. There are a number of enhancements here.
Oracle R Enterprise (ORE) 1.5 is now available for download on all supported platforms with Oracle R Distribution 3.2.0 / R-3.2.0. ORE 1.5 introduces parallel distributed implementations of Random Forest, Singular Value Decomposition (SVD), and Principal Component Analysis (PCA) that operate on ore.frame objects.
Create a SOA Application in JDeveloper 12c Using Maven SOA Plug-In, by Daniel Rodriguez.
How reliable are the memory advisors?
Oracle Enterprise Manager offers a complete cloud solution including self-service provisioning balanced against centralized, policy-based resource management, integrated chargeback and capacity planning, and complete visibility of the physical and virtual environments from applications to disk.

SQL Server:
SQL Server Data Tools (SSDT) and Database References.
Stairway to SQL Server Extended Events Level 1: From SQL Trace to Extended Events.
Advanced Mathematical Formulas using the M Language.
Liberating the DBA from SQL Authentication with AD Groups.
Enterprise Edition customers enjoy the manageability and performance benefits offered by table partitioning, but this feature is not available in Standard Edition.

MySQL:
Is MySQL X faster than MySQL Y? – Ask query profiler.
Usually when one says “SSL” or “TLS” it means not a specific protocol but a family of protocols.
The MariaDB project is pleased to announce the immediate availability of MariaDB 10.1.10, MariaDB Galera Cluster 5.5.47, and MariaDB Galera Cluster 10.0.23.
EXPLAIN FORMAT=JSON: everything about attached_subqueries, optimized_away_subqueries, materialized_from_subquery.
Use MySQL to store data from Amazon’s API via Perl scripts.

Learn more about Pythian’s expertise in Oracle, SQL Server & MySQL.

s9s Tools and Resources: Momentum Highlights for MySQL, PostgreSQL, MongoDB and more!

Check Out Our Latest Technical Resources for MySQL, MariaDB, PostgreSQL and MongoDB
This is our last s9s Tools & Resources communication in 2015 and as we prepare to kick off 2016, we’d like to take this opportunity to thank you for your support in the year gone by and to wish you a successful start to the new year!
This is a summary of all the resources we recently published. Please do check it out and let us know if you have any comments or feedback.
Momentum Highlights
Severalnines breaks records on MongoDB, MySQL & PostgreSQL
Over 100% sales growth achieved early in first half of 2015
150+ enterprise customers, 8,000+ community users – thank you for joining us!
New enterprise account wins such as the European Broadcast Union, European Gravitational Observatory, BT Expedite and the French national scientific research centre, CNRS
Hired Gerry Treacy, former MongoDB executive, as Vice President of Sales
Added support for PostgreSQL to ClusterControl alongside MySQL and MongoDB
Read the full momentum release here
Technical Webinar Replay
Polyglot Persistence for the MongoDB, MySQL & PostgreSQL DBA
During our last webinar of the year, Art van Scheppingen discussed the four major operational challenges for MySQL, MongoDB & PostgreSQL and demonstrated, using ClusterControl, how Polyglot Persistence for datastores can be managed from one single control centre.
View the replay and read the slides here

Customer Case Studies
From small businesses to Fortune 500 companies, customers have chosen Severalnines to deploy and manage MySQL, MongoDB and PostgreSQL.  
Severalnines adds silver lining to database management for cloudstats.me
View our Customer page to discover companies like yours who have found success with ClusterControl.

Partnership Announcement
Percona & Severalnines expand partnership to include MongoDB
Peter Zaitsev, Co-founder and CEO of Percona, had this to say about this new announcement with our long-term partner: “We are very pleased to expand our relationship with Severalnines to bring enhanced management, scalability, and industry-leading expertise to Percona Server for MongoDB deployments. With ClusterControl by Severalnines, organizations can now truly afford to monitor, manage and scale the highly available database infrastructures they need to stay competitive in an information-driven economy.”
Read the full announcement here
ClusterControl Blogs
Our series of blogs focussing on how to use ClusterControl continues. Do check them out!
Managing your Database Configurations
Managing your logfiles
ClusterControl Tips & Tricks: Monitoring multiple MySQL instances on one machine
View all ClusterControl blogs here
The MySQL DBA Blog Series
We’re on the 18th installment of our popular ‘Become a MySQL DBA’ series and you can view all of these blogs here. Here are the latest ones in the series:
Troubleshooting with pt-stalk
Troubleshooting with pt-stalk – part 2
View all the ‘Become a MySQL DBA’ blogs here
Additional Technical Blogs & Resources

Latest Updates on Severalnines Tools: Docker, Puppet, Chef, Vagrant and more
Events
The Percona Live Data Performance Conference (for MySQL and MongoDB users and more) is coming up in just a few months and we’ve been busy with talk submissions for the conference. Two of our talks have already been selected and you can find the full list of the talks we submitted here. We hope to see you in Santa Clara!
We trust these resources are useful. If you have any questions on them or on related topics, please do contact us!
All our best wishes for the new year,
Your Severalnines Team

On ProxySQL, MaxScale, Persistent Connection, response time, and bugs

A few days ago I came across the announcement that MaxScale 1.3 finally supports Persistent Connection. ProxySQL has supported persistent connections since it was a prototype (sometime back in 2013), therefore I am very happy that the MaxScale Team finally introduced a long-awaited feature. However, MaxScale’s implementation of persistent connection has a serious drawback, and I would consider it a serious bug (more details at the end of the article): a bug so serious that it shouldn’t reach any GA release.
Since I like running benchmarks, and due to the new feature in MaxScale, I thought it was a good time to compare ProxySQL vs MaxScale again, around 6 months after a previous benchmark.

Benchmark: ProxySQL vs MaxScale

Benchmark setup
The benchmark setup is very simple:
- a physical server (20 CPU cores) running sysbench, proxysql and maxscale
- a physical server (20 CPU cores) running 3 mysqld instances (1 master and 2 slaves) launched using MySQL Sandbox
Software versions used:
- MySQL 5.6.28
- sysbench 0.4.12
- ProxySQL v1.1.0 stable
- MaxScale 1.3.0-beta
Configuration files can be found at the following URLs: ProxySQL, MaxScale.
ProxySQL configuration was completed by running these commands through the admin interface:
INSERT INTO mysql_query_rules (rule_id, active, match_pattern, destination_hostgroup, apply) VALUES(1,1,'^SELECT.*FOR UPDATE$', 1, 1);
INSERT INTO mysql_query_rules (rule_id, active, match_pattern, destination_hostgroup, apply) VALUES(2,1,'^SELECT', 2, 1);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
Preliminary benchmarks confirmed the conclusion of my previous blog post: MaxScale is very CPU intensive, therefore to make a fair comparison between the two proxies I ran both of them with only 1 worker thread.
Benchmarks were executed by running sysbench against the local proxy (either ProxySQL or MaxScale), using this command:
sysbench --max-requests=0 --test=oltp --mysql-user=rcannao --mysql-password=rcannao \
--mysql-db=test --oltp-table-size=1000000 --oltp-read-only=on --oltp-point-selects=1 \
--oltp-simple-ranges=0 --oltp-sum-ranges=0 --oltp-order-ranges=0 --oltp-distinct-ranges=0 \
--oltp-skip-trx=off --db-ps-mode=disable --max-time=60 \
--oltp-reconnect-mode=transaction --mysql-host=10.1.1.164 \
--num-threads=$thr --mysql-port=$port run
What is important to note is that the workload is read-only, it performs only point selects, and it reconnects at the end of each transaction: this is a workload meant to check the performance benefit of Persistent Connection.
The benchmark compares:
- ProxySQL configured with read/write split
- MaxScale with the readwritesplit module (RW)
- MaxScale with the readconnroute module (RR)
- MaxScale with the readwritesplit module and persistent connection (RW-PC)
- MaxScale with the readconnroute module and persistent connection (RR-PC)

Benchmark result
Here is the graph of the benchmark result for throughput. There is a lot of information, but also some unanswered questions.
At very low concurrency, ProxySQL is slightly slower.
At 64 connections, ProxySQL and MaxScale RW-PC have very similar throughput, and that is great since these two configurations have similar behaviors.
Still at 64 connections, it seems that MaxScale without Persistent Connection has reached its maximum throughput: as throughput with Persistent Connection is higher, we can already conclude that this feature is indeed useful and improves performance.
MaxScale RR-PC continues giving more throughput than the others, but this is expected as this routing module is very simple.
At 256 connections, ProxySQL and MaxScale RR-PC are the only two whose throughput continues growing. That means that the other configurations have saturated 1 core and are unable to scale anymore, while ProxySQL continues providing all its features and scales with just 1 core.
At 1024 connections, all proxy configurations have a drop in performance. While the drop in performance in ProxySQL is marginal, the drop in performance in MaxScale is severe. This confirms ProxySQL’s ability to scale.

What about response time?
From the graph of response time we can note that at high concurrency ProxySQL is able to provide the best response time. Let’s remove the response time for 1024 connections and compare at lower concurrency.
What about maximum response time? This is really interesting: no blue columns ... did I forget to add the response time of ProxySQL? No, I didn’t forget, but the max response time of MaxScale is too high for a proper comparison.
Starting at 64 connections, the maximum response time of MaxScale becomes so high that it reaches 60 seconds: this matches the max time in sysbench, therefore we should conclude that at least one (or perhaps more) connection created by sysbench isn’t able to complete a transaction until the end of the benchmark.
That needs to be further validated by its developers, but it seems that at high concurrency (even if 64 connections shouldn’t be considered “high”) MaxScale is only processing a subset of connections while completely ignoring others. If that’s correct, this should be considered a serious bug.
For further testing, I reran sysbench with 256 connections (not a lot, at all!) against MaxScale for 600 seconds, and the max response time was 600 seconds: that is, at least one transaction (maybe more) wasn’t processed until all the other transactions were completed. I am sure nobody wants a transaction stuck for an undefined amount of time because the proxy is busy processing other transactions.
For reference, here is the graph of max response times without the off-the-charts values.

Severe bug in Persistent Connection
The release notes of MaxScale 1.3.0 hide a very important detail that is available only in the Administration Tutorial, which I report here for reference:
Please note that because persistent connections have previously been in use, they may give a different environment from a fresh connection. For example, if the previous use of the connection issued "use mydatabase" then this setting will be carried over into the reuse of the same connection.
[…] In exceptional cases this feature could be a problem.
If I read it correctly, this is not a feature but a severe series of bugs. More details below.

MaxScale doesn’t track the current schema
$ mysql -u rcannao -prcannao -h 10.1.1.164 -P3307 -e "SELECT DATABASE()" test
+------------+
| DATABASE() |
+------------+
| test       |
+------------+
$ mysql -u rcannao -prcannao -h 10.1.1.164 -P3307 -e "SELECT DATABASE()" mysql
+------------+
| DATABASE() |
+------------+
| test       |
+------------+
I assume nobody wants this to happen: the second client believes it is connected to schema "mysql", but in reality it is connected to schema "test". Unless your application uses only one schema, I strongly discourage the use of persistent connection.

MaxScale doesn’t track the charset and returns incorrect encoding
Let’s try to identify another possible problem:
$ mysql -u rcannao -prcannao -h 10.1.1.164 -P3307 -e "SELECT DATABASE(), @@session.character_set_client" mysql --default-character-set=utf8
+------------+---------------------------------+
| DATABASE() | @@session.character_set_client  |
+------------+---------------------------------+
| mysql      | utf8                            |
+------------+---------------------------------+
$ mysql -u rcannao -prcannao -h 10.1.1.164 -P3307 -e "SET NAMES latin1" mysql
$ mysql -u rcannao -prcannao -h 10.1.1.164 -P3307 -e "SELECT DATABASE(), @@session.character_set_client" mysql --default-character-set=utf8
+------------+---------------------------------+
| DATABASE() | @@session.character_set_client  |
+------------+---------------------------------+
| mysql      | latin1                          |
+------------+---------------------------------+
In this example the current database is always incorrect (as already pointed out previously), but the character set is also compromised. This can be a serious issue for many applications, as MaxScale is ignoring the charset specified by the client. That is, MaxScale is ignoring the schema name and charset specified during the initial handshake.

MaxScale doesn’t track autocommit
The same applies to autocommit:
$ mysql -u rcannao -prcannao -h 10.1.1.164 -P3307 -e "SHOW VARIABLES LIKE 'autocommit'"
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| autocommit    | ON    |
+---------------+-------+
$ mysql -u rcannao -prcannao -h 10.1.1.164 -P3307 -e "SET autocommit=0"
$ mysql -u rcannao -prcannao -h 10.1.1.164 -P3307 -e "SHOW VARIABLES LIKE 'autocommit'"
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| autocommit    | OFF   |
+---------------+-------+
Here we can see another major issue/bug: an application could issue statements assuming autocommit=ON (the default) while in reality another client could have changed it.

MaxScale doesn’t track transactions
I think this is perhaps the most serious bug in how Persistent Connection is implemented in MaxScale. Without MaxScale, when a client disconnects its transaction should be rolled back. Let’s see what happens with MaxScale and Persistent Connection.
First, we create a transaction the way many applications do: SET autocommit=0, followed by any DML:
$ mysql -u rcannao -prcannao -h 10.1.1.164 -P3307 -e "SHOW VARIABLES LIKE 'autocommit'"
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| autocommit    | ON    |
+---------------+-------+
$ mysql -u rcannao -prcannao -h 10.1.1.164 -P3307 -e "SET autocommit=0"
$ mysql -u rcannao -prcannao -h 10.1.1.164 -P3307 -e "SHOW VARIABLES LIKE 'autocommit'"
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| autocommit    | OFF   |
+---------------+-------+
So far, we have modified autocommit in a persistent connection. Second, let’s run some SELECT statements:
$ mysql -u rcannao -prcannao -h 10.1.1.164 -P3307 -e "SELECT COUNT(*) FROM sbtest" test
ERROR 1046 (3D000) at line 1: No database selected
Oops, error ... I forgot that MaxScale ignores my request for a default schema, so now I must specify it in the query itself!
$ mysql -u rcannao -prcannao -h 10.1.1.164 -P3307 \
  -e "SELECT COUNT(*) FROM test.sbtest" test
+----------+
| COUNT(*) |
+----------+
|  1000000 |
+----------+
$ mysql -u rcannao -prcannao -h 10.1.1.164 -P3307 -e "SELECT COUNT(*) FROM test.sbtest WHERE id < 1000" test
+----------+
| COUNT(*) |
+----------+
|      999 |
+----------+
All looks good so far. Let me check if there are active transactions:
$ mysql -u rcannao -prcannao -h 10.1.1.164 -P3307 -e "SHOW ENGINE INNODB STATUS\G" test | grep ACTIVE
$
No active transactions, that’s good. Now, let’s run a DML statement:
$ mysql -u rcannao -prcannao -h 10.1.1.164 -P3307 -e "DELETE FROM test.sbtest WHERE id < 1000"
$
Let me check again if there are active transactions:
$ mysql -u rcannao -prcannao -h 10.1.1.164 -P3307 -e "SHOW ENGINE INNODB STATUS\G" test | grep ACTIVE
---TRANSACTION 2253315, ACTIVE 29 sec
$
Here is the bug! The client that issued the DML statement and started the transaction is gone/disconnected, yet MaxScale is holding a transaction open. The bad news is that MaxScale doesn’t track transactions no matter whether they are started due to autocommit or an explicit START TRANSACTION. Here is an example (after restarting maxscale):
$ mysql -u rcannao -prcannao -h 10.1.1.164 -P3307 -e "SHOW ENGINE INNODB STATUS\G" test | grep ACTIVE
$ mysql -u rcannao -prcannao -h 10.1.1.164 -P3307 -e "START TRANSACTION; SELECT 1 FROM test.sbtest LIMIT 1" test
+---+
| 1 |
+---+
| 1 |
+---+
$ mysql -u rcannao -prcannao -h 10.1.1.164 -P3307 -e "SHOW ENGINE INNODB STATUS\G" test | grep ACTIVE
---TRANSACTION 2253317, ACTIVE 2 sec
$

ProxySQL provides a safe environment with Persistent Connection
By comparison, ProxySQL has a more mature implementation of Persistent Connection, and keeps track of the environment set by the client, ensuring that the environments of backend and frontend match.

ProxySQL tracks the current schema
ProxySQL isn’t affected by the same bug as MaxScale, and correctly tracks the schema specified by the client:
$ mysql -u rcannao -prcannao -h 10.1.1.164 -P6033 -e "SELECT DATABASE()" test
+------------+
| DATABASE() |
+------------+
| test       |
+------------+
$ mysql -u rcannao -prcannao -h 10.1.1.164 -P6033 -e "SELECT DATABASE()" mysql
+------------+
| DATABASE() |
+------------+
| mysql      |
+------------+

ProxySQL tracks the character set
ProxySQL isn’t affected by the same bug as MaxScale, and correctly tracks the character set specified by the client:
$ mysql -u rcannao -prcannao -h 10.1.1.164 -P6033 -e "SELECT DATABASE(), @@session.character_set_client" mysql --default-character-set=utf8
+------------+---------------------------------+
| DATABASE() | @@session.character_set_client  |
+------------+---------------------------------+
| mysql      | utf8                            |
+------------+---------------------------------+
$ mysql -u rcannao -prcannao -h 10.1.1.164 -P6033 -e "SET NAMES latin1" mysql
$ mysql -u rcannao -prcannao -h 10.1.1.164 -P6033 -e "SELECT DATABASE(), @@session.character_set_client" mysql --default-character-set=utf8
+------------+---------------------------------+
| DATABASE() | @@session.character_set_client  |
+------------+---------------------------------+
| mysql      | utf8                            |
+------------+---------------------------------+

ProxySQL tracks autocommit
Also in this case, ProxySQL isn’t affected by the same bug as MaxScale, and correctly tracks the value of autocommit specified by the client:
$ mysql -u rcannao -prcannao -h 10.22.20.164 -P6033 -e "SHOW VARIABLES LIKE 'autocommit'"
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| autocommit    | ON    |
+---------------+-------+
$ mysql -u rcannao -prcannao -h 10.1.1.164 -P6033 -e "SET autocommit=0"
$ mysql -u rcannao -prcannao -h 10.1.1.164 -P6033 -e "SHOW VARIABLES LIKE 'autocommit'"
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| autocommit    | ON    |
+---------------+-------+

ProxySQL tracks transactions
Also in this case, ProxySQL isn’t affected by the same bug as MaxScale, and correctly tracks transactions, terminating
them if required:
$ mysql -u rcannao -prcannao -h 10.1.1.164 -P6033 -e "SHOW ENGINE INNODB STATUS\G" test | grep ACTIVE
$ mysql -u rcannao -prcannao -h 10.1.1.164 -P6033 -e "START TRANSACTION; SELECT 1 FROM test.sbtest LIMIT 1" test
+---+
| 1 |
+---+
| 1 |
+---+
$ mysql -u rcannao -prcannao -h 10.1.1.164 -P6033 -e "SHOW ENGINE INNODB STATUS\G" test | grep ACTIVE
$

Why does ProxySQL implement Persistent Connection better?
Since its initial implementation 2 years ago, ProxySQL was designed to handle frontends (clients) and backends (servers) as different entities, to connect them only when needed, and to remove the link between them as soon as possible.
When a client connects to ProxySQL, no connection to any backend is established. When a client issues a request, ProxySQL determines whether the request needs a connection to a backend, and only if required does it forward the request to one. As soon as the request is completed, ProxySQL determines if the connection to the backend is still required, and if not it returns it to a connection pool.
That is: the connection to the backend is returned to the connection pool not when the client disconnects, but when the request is completed, assuming that it can be reused by other clients (for example, if there are no active transactions). Similarly, when a backend connection is linked to a client connection, ProxySQL ensures that the environment is set correctly: schema, charset, autocommit, etc.
In other words, ProxySQL doesn’t just implement Persistent Connection, but also Connection Multiplexing. In fact, ProxySQL can handle hundreds of thousands of clients, yet forward all their traffic to a few connections to the backend.
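
A quick way to see this multiplexing in action is to compare client connections with backend connections while a workload is running. The commands below are a sketch only: they assume ProxySQL’s admin interface is listening on 127.0.0.1:6032 with the default admin/admin credentials, and that the stats_mysql_connection_pool and stats_mysql_global tables are available in the version you run:
# Backend connections actually used or pooled by ProxySQL, per server.
mysql -u admin -padmin -h 127.0.0.1 -P6032 \
  -e "SELECT srv_host, srv_port, status, ConnUsed, ConnFree FROM stats_mysql_connection_pool;"
# Client connections ProxySQL is serving at the same moment.
mysql -u admin -padmin -h 127.0.0.1 -P6032 \
  -e "SELECT * FROM stats_mysql_global WHERE variable_name LIKE 'Client_Connections%';"
If multiplexing is doing its job, the number of connected clients will typically be far higher than ConnUsed plus ConnFree on the backends.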

As a final note, I invite everybody to try ProxySQL, now GA, and feel free to contact me for any question.