Percona’s Technical Director of Quality Assurance Roel Van de Paar shares his findings on the quality of MySQL 8.0.3 RC.
On 21 September 2017, our upstream friends at Oracle released MySQL 8.0.3 RC as the first MySQL 8.0 Release Candidate.
I tested the MySQL 8.0.3 Release Candidate branch, with selected Percona bug fixes applied, built in debug mode, using the pquery QA framework (freely available here). Percona releases Percona Server for MySQL – an improved fork of the MySQL server with additional features and bug fixes.
Any QA engineer would enjoy seeing the many bugs discovered:
================ [Run: 079328] Sorted unique issue strings (11258 trials executed, 246 remaining reducer scripts)
err == DB_SUCCESS .. err == DB_IO_NO_PUNCH_HOLE (Seen 1 times: reducers 5336)
error == DB_SUCCESS (Seen 7 times: reducers 3896,3897,3902,5330,5331,5334,5425)
.field->prefix_len .. field->fixed_len == field->prefix_len(Seen 8 times: reducers 442,1323,3990,7492,7586,8940,10101,10454)
has_prelocking_list .. thd->mdl_context.owns_equal_or_stronger_lock. MDL_key..TABLE, rt->db(Seen 10 times: reducers 49,242,264,1251,4294,5170,6257,6261,7704,7732)
length == 3 (Seen 1 times: reducers 9603)
max_length >= length (Seen 10 times: reducers 418,2100,2304,4094,5171,5603,5695,6530,7526,8335)
.m_fatal (Seen 6 times: reducers 4540,4878,6091,6143,6824,7157)
mon > 0 .. mon < 13 (Seen 10 times: reducers 4,135,185,230,261,266,343,353,367,412)
m_thd->get_transaction (Seen 2 times: reducers 4468,5674)
MYSQL_BIN_LOG..handle_binlog_flush_or_sync_error (Seen 1 times: reducers 3114)
MYSQL_BIN_LOG..new_file_impl (Seen 10 times: reducers 559,563,695,1202,1729,2046,4029,5578,6938,7212)
new_table .= nullptr (Seen 1 times: reducers 5033)
new_value >= 0 (Seen 1 times: reducers 7913)
next_insert_id == 0 (Seen 4 times: reducers 3227,7917,9935,10945)
num_codepoints >= scanner.get_char_index (Seen 10 times: reducers 962,1992,4829,5032,6835,7112,7117,7984,8209,9788)
purge_sys->n_stop == 0 (Seen 10 times: reducers 337,512,606,709,856,1028,1106,1187,1257,1270)
recv_recovery_is_on.. .. Log_DDL..is_in_recovery.. (Seen 10 times: reducers 688,1372,1499,4183,5233,5964,6478,7386,8556,8705)
Sql_cmd_call..prepare_inner (Seen 5 times: reducers 3665,6965,7777,9775,10064)
static_cast<ulint>.slot->n_bytes. < slot->original_len – n_bytes(Seen 10 times: reducers 3749,3892,3893,3898,3900,5333,5335,5423,5424,5897)
String..ptr (Seen 10 times: reducers 1946,2487,2619,2965,3559,4533,5696,6007,6457,6806)
sys_var_pluginvar..global_update (Seen 1 times: reducers 9359)
ticket->m_lock->m_obtrusive_locks_granted_waiting_count .= 0(Seen 10 times: reducers 1696,1942,2253,2398,2501,2752,3343,3449,3679,7039)
trans_table .. .changed .. thd->get_transaction (Seen 10 times: reducers 14,424,484,529,694,1694,2125,2993,3106,3569)
trn_ctx->is_active.Transaction_ctx..SESSION (Seen 6 times: reducers 18,415,4512,6310,6773,9989)
type (Seen 2 times: reducers 5522,10692)
val <= 4294967295u (Seen 3 times: reducers 2298,3837)
* TRIALS TO CHECK MANUALLY (NO TEXT SET: MODE=4) * (Seen 10 times: reducers 11,15,27,45,60,66,69,72,94,96)
Unfortunately, these results point to quality problems in the MySQL 8.0.3 RC release.
It looks like there are many new regressions: I logged no fewer than 40 crashing, asserting and other bugs (security bugs are hidden from this list) in the MySQL bug tracker at Oracle. For each bug found, the test case is reduced where possible (using reducer.sh, also available in the pquery QA framework) and the bug is verified against upstream. If the bug also reproduces upstream, we log it there. No bugs were logged against Percona Server for MySQL.
Upstream quality is very important to us for several reasons. Firstly, any bug present upstream means we see the bug too, and we then need to figure out whether it is in upstream code or in our own. Secondly, the better upstream quality is, the better our eventual product is as well. We care about quality.
We have seen the upstream developers do some great things in the past when it comes to dealing with a large influx of bugs, and we hope it will be no different this time. We hope MySQL 8.0 becomes another successful release from the great team at Oracle!
As soon as Oracle releases 8.0.4 RC (the second release candidate) and our team merges the Percona Server for MySQL patches (and perhaps features by then) into it, I will test it again!
If you'd like to stay tuned for more on MySQL 8.0 tested and reviewed by the world's leading database experts, just leave us a comment below and tick the "Notify me of new posts via email" subscription box.
Do you need to get to grips with MySQL proxies? Or maybe you could do with discovering the latest developments and plans for Percona’s software?
Well, wait no more because …
on Wednesday November 15, 2017, we bring you a webinar double bill.
Join Percona’s Chief Evangelist, Colin Charles as he presents “The Proxy Wars – MySQL Router, ProxySQL, MariaDB MaxScale” at 7:00 am PST / 10:00 am EST (UTC-8).
Reflecting on his past experience with MySQL proxies, Colin will provide a short review of three open source solutions. He’ll run through a comparison of MySQL Router, MariaDB MaxScale and ProxySQL and talk about the reasons for using the right tool for an application.
Register for Colin’s Webinar
Meanwhile, return a little later in the day at 10:00 am PST / 1:00 pm EST (UTC-8) to hear Percona CEO Peter Zaitsev discuss what’s new in Percona open source software. In “Percona Software News and Roadmap Update – Q4 2017”, Peter will talk about new features in Percona software, show some quick demos and share highlights from the Percona open source software roadmap. He will also talk about new developments in Percona commercial services and finish with a Q&A.
Join Peter’s Webinar
You are, of course, very welcome to register for either one or both webinars. Please register for your place soon!
Peter Zaitsev, Percona CEO and Co-Founder
Peter Zaitsev co-founded Percona and assumed the role of CEO in 2006. As one of the foremost experts on MySQL strategy and optimization, Peter leveraged both his technical vision and entrepreneurial skills to grow Percona from a two-person shop to one of the most respected open source companies in the business. With over 150 professionals in 30+ countries, Peter’s venture now serves over 3000 customers – including the “who’s who” of internet giants, large enterprises and many exciting startups. Percona was named to the Inc. 5000 in 2013, 2014, 2015 and 2016. Peter was an early employee at MySQL AB, eventually leading the company’s High Performance Group. A serial entrepreneur, Peter co-founded his first startup while attending Moscow State University, where he majored in Computer Science. Peter is a co-author of High Performance MySQL: Optimization, Backups, and Replication, one of the most popular books on MySQL performance. Peter frequently speaks as an expert lecturer at MySQL and related conferences, and regularly posts on the Percona Database Performance Blog. Fortune and DZone have both tapped Peter as a contributor, and his recent ebook Practical MySQL Performance Optimization is one of percona.com’s most popular downloads.
Colin Charles, Chief Evangelist
Colin Charles is the Chief Evangelist at Percona. He was previously on the founding team for MariaDB Server in 2009, has worked on MySQL since 2005 and has been a MySQL user since 2000. Before joining MySQL, he worked actively on the Fedora and OpenOffice.org projects. He's well known within many open source communities and has spoken on the conference circuit.
In this blog post, I’ll walk through some of the improvements to the Percona Monitoring and Management (PMM) MySQL dashboard in release 1.4.0.
As part of Percona Monitoring and Management development, we're constantly looking for better ways to visualize information and help you spot and resolve problems faster. We've made some updates to the MySQL dashboard in the 1.4.0 release. You can see those improvements in action in our Percona Monitoring and Management Demo Site: check out the MySQL Overview and MySQL InnoDB Metrics dashboards.
MySQL Client Thread Activity
One of the best ways to characterize a MySQL workload is to look at the number of MySQL server-client connections (Threads Connected). You should compare this number to how many of those threads are actually doing something on the server side (Threads Running), rather than just sitting idle waiting for a client to send the next request.
MySQL can handle thousands of connected threads quite well. However, many threads (hundreds) running concurrently often increases query latency, and increased internal contention can make the situation much worse.
The problem with these metrics is that they are extremely volatile – one second you might have a lot of threads connected and running, and the next none. This is especially true when stalls at the MySQL level (or higher) cause pile-ups.
To provide better insight, we now show Peak Threads Connected and Peak Threads Running, to help you easily spot such potential pile-ups, as well as Avg Threads Running. These stats allow you to look at a high number of threads connected and running and see whether there are just minor spikes (which tend to happen in many systems on a regular basis), or something more prolonged that warrants deeper investigation.
To simplify it even further: Threads Running spiking for a few seconds is OK, but spikes persisting for 5-10 seconds or more are often signs of problems that are impacting users (or problems about to happen).
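As a back-of-the-envelope illustration of why peaks matter, here is a small Python sketch (with invented sample data) showing how a short Threads Running spike barely moves the average while standing out clearly in the peak:

```python
# Hypothetical per-second samples of Threads_Running over one minute (invented data).
samples = [2, 3, 2, 2, 80, 75, 3, 2, 2, 3] + [2] * 50

avg_running = sum(samples) / len(samples)   # the short spike barely moves the average
peak_running = max(samples)                 # but it is obvious in the peak

print(f"avg: {avg_running:.1f}, peak: {peak_running}")
```

This is exactly why the dashboard shows both Avg Threads Running and Peak Threads Running: either number alone can hide a pile-up or overstate a blip.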
InnoDB Logging Performance
Since I wrote a blog post about Choosing MySQL InnoDB Log File Size, I thought it would be great to check out how long the log file space would last (instead of just looking at how much log space is written per hour). Knowing how long the innodb_log_buffer_size lasts is also helpful for tuning this variable, in general.
This graph shows you how much data is written to the InnoDB Log Files, which helps to understand your disk bandwidth consumption. It also tells you how long it will take to go through your combined Redo Log Space and InnoDB Log Buffer Size (at this rate).
As I wrote in the blog post, there are a lot of considerations for choosing the InnoDB log file size, but having enough log space to accommodate all the changes for an hour is a good rule of thumb. As we can see, this system is close to full at around 50 minutes.
When it comes to innodb_log_buffer_size, even if InnoDB is not configured to flush the log at every transaction commit, it is going to be flushed every second by default. This means 10-15 seconds is usually good enough to accommodate the spikes. This system has it set at about 40 seconds (which is more than enough).
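To sketch the arithmetic behind that statement, here is a small Python example with assumed numbers (a 64MB log buffer and roughly 1.6MB/s of redo writes) estimating how long the log buffer lasts at the current write rate:

```python
# Hypothetical values, mirroring the kind of system described above (both assumed).
log_buffer_bytes = 64 * 1024**2    # innodb_log_buffer_size of 64MB
redo_write_rate = 1.6 * 1024**2    # ~1.6MB/s of redo log written

buffer_seconds = log_buffer_bytes / redo_write_rate
print(f"log buffer covers ~{buffer_seconds:.0f} seconds of writes")
```

At these assumed rates the buffer covers about 40 seconds of writes – comfortably above the 10-15 second guideline.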
This is a pretty advanced graph that helps you understand how InnoDB Read-Ahead is working out.
In general, Innodb Read-Ahead is not very well understood. I think in most cases it is hard to tell if it is helping or hurting the current workload in its current configuration.
The idea behind Read-Ahead in any system (not just InnoDB) is to pre-fetch data before it is really needed, in order to reduce latency and improve performance. The risk, however, is pre-fetching data that isn't needed – which is wasteful.
InnoDB has two Read-Ahead options: Linear Read-Ahead (designed to speed up workloads that have physically sequential data access) and Random Read-Ahead (designed to help workloads that tend to access the data in the same vicinity but not in a linear order).
Due to potential overhead, only Linear Read-Ahead is enabled by default. You need to enable Random Read-Ahead separately if you want to determine its impact on your workload.
Back to the graph in question: we show the number of pages pre-fetched by Linear and Random Read-Aheads to confirm whether these are even in use with your workload. We also show the Number of Pages Fetched but Never Accessed (evicted without access) – both as a number of pages and as a percentage of pages. If Fetched but Never Accessed is more than 30% or so, Read-Ahead might be producing more waste than benefit for your workload, and might need tuning.
We also show the portion of IO requests that InnoDB Read-Ahead served, which can help you understand the portion of resources spent on InnoDB Read-Ahead.
Due to the timing of how InnoDB increments counters, the percentages of IO used for Read-Ahead and of pages evicted without access show up better on larger-scale graphs.
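As a rough sketch of how you could compute this waste percentage yourself from the Innodb_buffer_pool_read_ahead and Innodb_buffer_pool_read_ahead_evicted status counters, here is a small Python example with invented counter values:

```python
# Hypothetical counter values, as sampled from SHOW GLOBAL STATUS (numbers invented).
read_ahead_pages = 120_000   # Innodb_buffer_pool_read_ahead: pages pre-fetched
evicted_pages = 48_000       # Innodb_buffer_pool_read_ahead_evicted: never accessed

waste_pct = 100.0 * evicted_pages / read_ahead_pages
print(f"fetched but never accessed: {waste_pct:.0f}%")
```

With these invented numbers the waste is 40%, which by the 30% rule of thumb above would suggest Read-Ahead needs tuning on such a system.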
I hope you find these graphs helpful. We’ll continue making Percona Monitoring and Management more helpful for troubleshooting database systems and getting better performance!
In this blog post, we’ll look at Percona Monitoring and Management’s pmm-admin list command.
The pmm-admin list command shows all monitoring services you have added using the pmm-admin add command. Starting with version 1.4.0, Percona Monitoring and Management (PMM) also lists external monitoring services when you run pmm-admin list, i.e., those services that monitor the backends not supported out of the box (such as PostgreSQL databases).
In the output, the external monitoring services appear at the bottom:
The tabular output of the pmm-admin list command
JSON Output for Automatic Verification
But there is another feature of pmm-admin list. If you run this command with the --json parameter, it gives you a JSON document as output. Thanks to JSON's strict syntax rules, this makes the monitoring services inspectable by programs. JSON has become a de-facto standard for exchanging data between tools, and the JSON output of the pmm-admin list command can be consumed by configuration management tools such as Ansible or Chef.
The output is captured as keys and values. The general information about the computer where this pmm-client is installed is given as top-level elements:
You can quickly determine if there are any errors in built-in monitoring services by inspecting the Err top level element in the JSON output. Similarly, the ExternalErr element reports errors on external services:
The JSON-parsing-friendly version produced by the pmm-admin list command
Representing Monitoring Services
Two elements contain lists as their values. The Services top-level element contains a list of documents that represent enabled monitoring services. The ExternalServices element contains a list of documents that represent enabled external monitoring services. Each attribute in the Services and ExternalServices elements provides the same information as a column in the tabular output.
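As a sketch of how a script might consume this output, here is a minimal Python example. The top-level element names (Err, ExternalErr, Services, ExternalServices) come from the description above; the document contents and the per-service attributes are invented for illustration:

```python
import json

# Illustrative document shaped like the described pmm-admin list --json output.
# Top-level element names are from the post; service attributes are invented.
raw = """
{
  "Err": "",
  "ExternalErr": "",
  "Services": [{"Name": "mysql:metrics"}],
  "ExternalServices": [{"Name": "postgresql:metrics"}]
}
"""
doc = json.loads(raw)

# Any non-empty Err/ExternalErr string signals a problem with a service.
has_errors = bool(doc.get("Err")) or bool(doc.get("ExternalErr"))
names = [s["Name"] for s in doc["Services"] + doc["ExternalServices"]]
print("errors:", has_errors, "| services:", ", ".join(names))
```

A configuration management tool could run the same check after provisioning to verify that all expected monitoring services came up without errors.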
Hope this brief post provides some valuable information regarding new Percona Monitoring and Management 1.4.0 functionality. Let me know about any questions in the comments.
Building on community
Percona is very committed to open source database software. We think of ourselves as unbiased champions of open source database solutions. With that, we also carry a responsibility to the open source database community – whether MySQL®, MongoDB®, ProxySQL or other open source database technology. We've recognized that responsibility, and taken action by hiring a Community Manager.
That’s me. Which is great… For me!
And my job, in a nutshell, is to help to make our community great for you. By building on the good stuff that’s been done in the past and finding ways to do more.
The common thread tying the community together is the sharing of information, experience, and knowledge. Hundreds of you have taken part in Percona Live or Percona Live Europe — thank you for that! Props if you’ve done both. If you’ve proposed a paper (selected or not), presented a session, given a tutorial, staffed a booth or sponsored the event – kudos!
Maybe you’ve benefited from or run sessions at a Percona University (the next one is in Kiev in November and it’s FREE). Or caught up with Percona staff at one of the many tech conferences we attend during the year.
You might have used our code, added to our code, spotted and logged bugs, given feedback or requested new features. Helped out other users in forums, written to question-and-answer sites like Stack Overflow. Maybe you’ve blogged about using Percona software on your own blog, or looked for help on the Percona Database Performance Blog. You might have recommended our software to your company, or a colleague, or a client or a friend. Or even a stranger. Mentioned us in passing in conversation. Read our e-books, watched our webinars, shared a link or reached out to Percona via social media.
All excellent, valuable and much-appreciated contributions to the community.
Ways you can join in
Have a think about these opportunities to shine, share and make the Percona community best-in-class.
Take part in our forum: we really try to keep up, but there are always more questions than we can address. It’s easy to think of the forums as a support queue but honestly, we are MORE than delighted when we have help from you.
Do you have a passion for a particular subject, or maybe an interesting project to share? How about proposing a webinar or blog post? Contact me if you are interested.
If you haven't yet done it, make 2018 the year you attend Percona Live. If you've done it before, do it again – network with old friends and make some new ones. Get a new t-shirt. Enjoy the company. The warmth of the welcome and the generosity of the knowledge shared made a big impression on me in Dublin; I'm convinced you'll find the same.
In-depth knowledge or hardcore learning on-the-job? Don’t forget that the call for papers for Percona Live is opening soon and that speakers get free attendance at the conference. It’s a competitive call, but you’re up for that right? Right!
Don’t want to “do stuff” on the Percona site? Maybe contributing to code or working on the question-and-answer sites is more for you. Or maybe you have a blog already and write about our software and how to use it. If so – thanks again, and please let me have the link!
If you haven't already, don't forget to subscribe to our newsletters to get early warning of upcoming webinars, and the latest tech and community news.
Have you thought about joining Percona? We’re hiring! Don’t forget, too, that all the contributions you make to online communities – Percona or not – really pay off when you want to demonstrate your knowledge and commitment to future employers or clients. A link is worth a thousand words.
What do you think?
Interested? Ideas or comments? Things you think we should do better? Things that you think are great? Things we used to do that were great and you miss? Things that others do and you wished we did? Things that … well, you get the idea!
Get in touch, or just get stuck in. You might find it rewarding*…
Feel free to email me or message me on Skype.
*I have keys to the swag box …
Percona's latest blog poll asks how you currently host applications and databases. Select an option below, or leave a comment to clarify your deployment!
With the increased need for environments that respond more quickly to changing business demands, many enterprises are moving to the cloud and hosted deployments for applications and software in order to offload development and maintenance overhead to a third party. The database is no exception. Businesses are turning to using database as a service (DBaaS) to handle their data needs.
DBaaS provides some obvious benefits:
Offload physical infrastructure to another vendor. It is the responsibility of whoever is providing the DBaaS service to maintain the physical environment – including hardware, software and best practices.
Scalability. You can add or subtract capacity as needed by just contacting your vendor. Have a big event on the horizon? Order more servers!
Expense. Since you no longer have to shell out for operational costs or infrastructure upgrades (all handled by the vendor now), you can reduce capital and operational expenses – or at least reasonably plan what they are going to be.
There are some potential disadvantages to a DBaaS as well:
Network performance issues. If your database is located off-premises, then it can be subject to network issues (or outages) that are beyond your control. These can translate into performance problems that impact the customer experience.
Loss of visibility. It’s harder (though not impossible) to always know what is happening with your data. Decisions around provisioning, storage and architecture are now in the hands of a third party.
Security and compliance. You are no longer totally in control of how secure or compliant your data is when using a DBaaS. This can be crucial if your business requires certain standards to operate in your market (healthcare, for example).
How are you hosting your database? On-premises? In the cloud? Which cloud? Is it co-located? Please answer using the poll below. Choose up to three answers. If you don’t see your solutions, use the comments to explain.
Note: there is a poll embedded within this post; please visit the site to participate in it.
Thanks in advance for your responses – they will help the open source community determine how databases are being hosted.
In this blog post, I’ll provide some guidance on how to choose the MySQL innodb_log_file_size.
Like many database management systems, MySQL uses logs to achieve data durability (when using the default InnoDB storage engine). This ensures that when a transaction is committed, data is not lost in the event of crash or power loss.
MySQL's InnoDB storage engine uses a fixed-size (circular) Redo log space. The size is controlled by innodb_log_file_size and innodb_log_files_in_group (default 2). You multiply those values to get the Redo log space that is available to use. While technically it shouldn't matter whether you change the innodb_log_file_size or the innodb_log_files_in_group variable to control the Redo space size, most people just work with innodb_log_file_size and leave innodb_log_files_in_group alone.
Configuring InnoDB’s Redo space size is one of the most important configuration options for write-intensive workloads. However, it comes with trade-offs. The more Redo space you have configured, the better InnoDB can optimize write IO. However, increasing the Redo space also means longer recovery times when the system loses power or crashes for other reasons.
It is not easy or straightforward to predict how much time a system crash recovery takes for a specific innodb_log_file_size value – it depends on the hardware, the MySQL version and the workload, and can vary widely (a ten-times difference or more, depending on the circumstances). However, around five minutes per 1GB of innodb_log_file_size is a decent ballpark number. If this is really important for your environment, I would recommend testing it by simulating a system crash under full load (after the database has completely warmed up).
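The arithmetic above can be sketched in a few lines of Python. The settings here are assumed example values (not recommendations), and the recovery estimate is only the rough rule of thumb from the paragraph above:

```python
# Assumed example settings, not recommendations.
innodb_log_file_size_gb = 2
innodb_log_files_in_group = 2          # the MySQL default

# Total redo space is the product of the two settings.
redo_space_gb = innodb_log_file_size_gb * innodb_log_files_in_group

# Very rough ballpark: ~5 minutes of crash recovery per 1GB of log file size.
est_recovery_min = 5 * innodb_log_file_size_gb

print(f"redo space: {redo_space_gb}GB, crash recovery: roughly {est_recovery_min} minutes")
```

Again, actual recovery time varies widely with hardware, MySQL version and workload, so treat this only as a starting point for your own crash-simulation testing.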
While recovery time can be a guideline for the limit of the InnoDB Log File size, there are a couple of other ways you can look at this number – especially if you have Percona Monitoring and Management installed.
Check Percona Monitoring and Management’s “MySQL InnoDB Metrics” Dashboard. If you see a graph like this:
where Uncheckpointed Bytes is pushing very close to the Max Checkpoint Age, you can almost be sure your current innodb_log_file_size is limiting your system’s performance. Increasing it can provide substantial performance improvements.
If you see something like this instead:
where the number of Uncheckpointed Bytes is well below the Max Checkpoint Age, then increasing the log file size won’t give you a significant improvement.
Note: many MySQL settings are interconnected. While a specific log file size might be good enough for smaller innodb_buffer_pool_size, larger InnoDB Buffer Pool values might warrant larger log files for optimal performance.
Another thing to keep in mind: the recovery time we spoke about earlier really depends on the Uncheckpointed Bytes rather than the total log file size. If you do not see recovery time increasing with a larger innodb_log_file_size, check out the InnoDB Checkpoint Age graph – it might be that you just can't fully utilize large log files with your workload and configuration.
Another way to look at the log file size is in context of log space usage:
This graph shows the amount of Data Written to the InnoDB log files per hour, as well as the total size of the InnoDB log files. In the graph above, we have 2GB of log space and some 12GB written to the Log files per hour. This means we cycle through logs every ten minutes.
InnoDB has to flush every dirty page in the buffer pool at least once per log file cycle time.
InnoDB gets better performance when it does that less frequently, and there is less wear and tear on SSD devices. I like to see this number at no less than 15 minutes. One hour is even better.
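The cycle-time arithmetic from the example above can be written out as a short Python sketch:

```python
# Numbers taken from the graph described above.
redo_space_gb = 2.0           # total InnoDB log file space
written_gb_per_hour = 12.0    # data written to the log files per hour

# Time to cycle through the full redo space at this write rate.
cycle_minutes = redo_space_gb / written_gb_per_hour * 60
print(f"log cycle time: {cycle_minutes:.0f} minutes")
```

Ten minutes is below the 15-minute guideline above, suggesting this system would benefit from a larger log file size.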
Getting the innodb_log_file_size right is important to achieve the balance between a reasonably fast crash recovery time and good system performance. Remember, determining your recovery time objective is not as trivial as you might imagine. I hope the techniques described in this post help you find the optimal value for your situation!
Percona announces the release of Percona Monitoring and Management 1.3.2. This release only contains bug fixes related to usability.
For install and upgrade instructions, see Deploying Percona Monitoring and Management.
PMM-1529: When the user selected “Today”, “This week”, “This month” or “This year” range in Metrics Monitor and clicked the Query Analytics button, the QAN page opened reporting no data for the selected range even if the data were available.
PMM-1528: In some cases, the page not found error could appear instead of the QAN page after upgrading by using the Upgrade button.
PMM-1498: In some cases, it was not possible to shut down the virtual machine containing the PMM Server imported as an OVA image.
Other bug fixes in this release: PMM-913, PMM-1215, PMM-1481, PMM-1483, PMM-1507
The Metrics Monitor of Percona Monitoring and Management 1.3.0 (PMM) provides graph descriptions to display more information about the monitored data without cluttering the interface.
Percona Monitoring and Management 1.3.0 is a free and open-source platform for managing and monitoring MySQL®, MariaDB® and MongoDB® performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL, MariaDB® and MongoDB servers to ensure that your data works as efficiently as possible.
Each dashboard graph in PMM contains a lot of information. Sometimes, it is not easy to understand what the plotted line represents. The metric labels and the plotted data are constrained by the space available in dashboards, so it is simply not possible to provide additional information there that might be helpful when interpreting the monitored metrics.
The new version of the PMM dashboards introduces on-demand descriptions with more details about the metrics in the given graph and about the data.
These on-demand descriptions are available when you hover the mouse pointer over the icon at the top left corner of each graph. The descriptions do not use the valuable space of your dashboard. The graph descriptions appear in small boxes. If more information exists about the monitored metrics, the description contains a link to the associated documentation.
Release 1.3.0 of PMM is just the start for this convenient tool in Metrics Monitor. In subsequent releases, graph descriptions are going to become a standard attribute of each Metrics Monitor dashboard.
One for the road…
The social events at Percona Live Europe provide the community with more time to catch up with old friends and make new contacts. The formal sessions provided lots of opportunities for exchanging notes, experiences and ideas. Lunches and coffee breaks proved to be busy too. Even so, what’s better than chilling out over a beer or two (we were in Dublin after all) and enjoying the city nightlife in good company?
Percona Live Europe made it easy for us to get together each evening. A welcome reception (after tutorials) at Sinnott's Pub in the heart of the city hosted a lively crowd. The Community Dinner at the Mercantile Bar, another lively city center hostelry, was a sell-out. Our closing reception was held at the conference venue, which had proven to be an excellent base.
Many delegates took the chance to enjoy the best of Dublin’s hospitality late into the night. It’s credit to their stamina – and the fantastic conference agenda – that opening keynotes on both Tuesday and Wednesday were very well attended.
In case you think we might have been prioritizing the Guinness, though, there was the little matter of the lightning talks at the Community Dinner. Seven community-minded generous souls gave up some of their valuable socializing time to share insights into matters open source. Thank you again to Renato Losio of Funambol, Anirban Rahut of Facebook, Federico Razzoli of Catawiki, Dana Van Aken of Carnegie Mellon University, Toshaan Bharvani of VanTosh, Balys Kriksciunas of Hostinger International and Vishal Loel of Lazada.
More about the lightning talks can be seen on the Percona Live Europe website.
Many of the conference treats – coffee, cakes, community dinner – are sponsored and thanks are due once more to our sponsors who helped make Percona Live Europe the worthwhile, enjoyable event that it was.
And so Percona Live Europe drew to a close. Delegates from 43 countries headed home armed with new knowledge, new ideas and new friends. I've put together this video to give a taste of the Percona Live social meetups. Tempted to join us in 2018?