Month: December 2017

Overview of fragmented MySQL InnoDB tables

The default storage engine for MySQL is InnoDB, which complies with the ACID properties expected of a transactional database engine. Even though InnoDB is the recommended engine for MySQL, it also has some caveats. The biggest criticism concerns the amount of disk space it uses: even when we delete data, the tablespaces don't shrink.
This post is about those tables that are fragmented.
To find the top 10 tables with free space (free space means gaps that can appear, for example, after large delete operations or when many pages had to be moved around), a simple query can be run:
SELECT CONCAT(table_schema, '.', table_name) as 'TABLE',
ENGINE, CONCAT(ROUND(table_rows / 1000000, 2), 'M') ROWS,
CONCAT(ROUND(data_length / ( 1024 * 1024 * 1024 ), 2), 'G') DATA,
CONCAT(ROUND(index_length / ( 1024 * 1024 * 1024 ), 2), 'G') IDX,
CONCAT(ROUND(( data_length + index_length ) / ( 1024 * 1024 * 1024 ), 2), 'G') 'TOTAL SIZE',
ROUND(index_length / data_length, 2) IDXFRAC, CONCAT(ROUND(( data_free / 1024 / 1024),2), 'MB') AS data_free
FROM information_schema.TABLES
ORDER BY data_length + index_length desc LIMIT 10;
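Depending on the server, it can also help to limit the query to InnoDB user tables and to sort by the free space itself. A minimal variation of the query above (the list of schemas to exclude is just an example):

SELECT CONCAT(table_schema, '.', table_name) AS 'TABLE',
       CONCAT(ROUND(data_free / 1024 / 1024, 2), 'MB') AS data_free
FROM information_schema.TABLES
WHERE engine = 'InnoDB'
  AND table_schema NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys')
ORDER BY data_free DESC
LIMIT 10;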

2M records table
Now, let's experiment to better understand this. I've created a table (with sysbench) of 2M records. Let's see the output of the previous query for this new table:
mysql> SELECT CONCAT(table_schema, '.', table_name) as 'TABLE',
ENGINE, CONCAT(ROUND(table_rows / 1000000, 2), 'M') ROWS,
CONCAT(ROUND(data_length / ( 1024 * 1024 * 1024 ), 2), 'G') DATA,
CONCAT(ROUND(index_length / ( 1024 * 1024 * 1024 ), 2), 'G') IDX,
CONCAT(ROUND(( data_length + index_length ) / ( 1024 * 1024 * 1024 ), 2), 'G') 'TOTAL SIZE',
ROUND(index_length / data_length, 2) IDXFRAC, CONCAT(ROUND(( data_free / 1024 / 1024),2), 'MB') AS data_free
FROM information_schema.TABLES WHERE table_name='sbtest1';
+----------------+--------+-------+-------+-------+------------+---------+-----------+
| TABLE          | ENGINE | ROWS  | DATA  | IDX   | TOTAL SIZE | IDXFRAC | data_free |
+----------------+--------+-------+-------+-------+------------+---------+-----------+
| sbtest.sbtest1 | InnoDB | 1.92M | 0.41G | 0.00G | 0.41G      | 0.00    | 5.00MB    |
+----------------+--------+-------+-------+-------+------------+---------+-----------+

We can see that we have 5MB of free space, and that the statistics show an estimate of the size and the number of rows.
We can also verify on the filesystem:
[root@mysql1 innodb_ruby-master]# ls -lh /var/lib/mysql/sbtest/sbtest1.ibd
-rw-r-----. 1 mysql mysql 472M Dec 30 12:16 /var/lib/mysql/sbtest/sbtest1.ibd

Our table’s size is indeed 472MB.
I hacked innodb_ruby from Jeremy Cole (a tool that I really appreciate and find very useful, especially for showing the LSN heat map) to add the ability to output a map of the table showing the used and the free pages.
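For reference, this is roughly how innodb_space is invoked against the tablespace file; the illustrate sub-commands below exist in stock innodb_ruby, while the page-usage map shown in this post comes from the patched version, so treat these invocations only as an approximation:

[root@mysql1 innodb_ruby-master]# innodb_space -f /var/lib/mysql/sbtest/sbtest1.ibd space-extents-illustrate
[root@mysql1 innodb_ruby-master]# innodb_space -f /var/lib/mysql/sbtest/sbtest1.ibd space-lsn-age-illustrate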
This is the output for our table (sbtest.sbtest1):

As you can see, the 5MB of free space is mostly at the end (which happens when extra extents are allocated), but there are also some small gaps (I haven't checked why yet).
Deleting the first 100k records
Now let’s delete the first 100k records and see what happens:
mysql> delete from sbtest1 order by id limit 100000;
Query OK, 100000 rows affected (4.10 sec)

As innodb_ruby reads from the tablespace file, it's mandatory to flush all pages from the buffer pool to disk (all the changes must be applied to the data file). The best way to achieve this is to stop mysqld with a slow shutdown (not the default fast shutdown) and restart it:
mysql> set global innodb_fast_shutdown=0;
Query OK, 0 rows affected (0.08 sec)
[root@mysql1 mysql]# systemctl restart mysqld
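As an extra sanity check (optional, and not part of the original procedure), you can confirm after the restart that the buffer pool holds no dirty pages before reading the .ibd file:

mysql> SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';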

We can now check if there are some free pages:
+----------------+--------+-------+-------+-------+------------+---------+-----------+
| TABLE          | ENGINE | ROWS  | DATA  | IDX   | TOTAL SIZE | IDXFRAC | data_free |
+----------------+--------+-------+-------+-------+------------+---------+-----------+
| sbtest.sbtest1 | InnoDB | 1.92M | 0.41G | 0.00G | 0.41G      | 0.00    | 25.00MB   |
+----------------+--------+-------+-------+-------+------------+---------+-----------+

And verify this with the output of the page map:

We can definitely see where the allocated but unused space is located.
Deleting 200k random records
Let's check by deleting more records, but this time completely at random:
mysql> delete from sbtest1 order by rand() limit 200000;
Query OK, 200000 rows affected (4 min 3.42 sec)

Now let’s have a look at the unused space:
+----------------+--------+-------+-------+-------+------------+---------+-----------+
| TABLE          | ENGINE | ROWS  | DATA  | IDX   | TOTAL SIZE | IDXFRAC | data_free |
+----------------+--------+-------+-------+-------+------------+---------+-----------+
| sbtest.sbtest1 | InnoDB | 1.67M | 0.40G | 0.03G | 0.43G      | 0.07    | 25.00MB   |
+----------------+--------+-------+-------+-------+------------+---------+-----------+

Hmm!? Fewer rows, but data_free is still the same (25MB). What does the output show?

We don't see any difference!?
In fact, this is because the deleted records were most likely stored on pages together with other records that were not deleted. So those pages are still in use: not completely full, but still in use, and therefore not yet counted as free.
Deleting 100k records in the middle
OK… let’s then delete 100k sequential records in the middle of the table:
mysql> delete from sbtest1 where id> 1500000 order by id limit 100000;
Query OK, 100000 rows affected (5.36 sec)

Let’s verify the free space:
+----------------+--------+-------+-------+-------+------------+---------+-----------+
| TABLE          | ENGINE | ROWS  | DATA  | IDX   | TOTAL SIZE | IDXFRAC | data_free |
+----------------+--------+-------+-------+-------+------------+---------+-----------+
| sbtest.sbtest1 | InnoDB | 1.67M | 0.40G | 0.03G | 0.43G      | 0.07    | 48.00MB   |
+----------------+--------+-------+-------+-------+------------+---------+-----------+

This time we have more free space and we can see it:

However, the size on disk is still the same (we didn't get that free space back on the filesystem):
[root@mysql1 mysql]# ls -lh /var/lib/mysql/sbtest/sbtest1.ibd
-rw-r-----. 1 mysql mysql 472M Dec 30 13:03 /var/lib/mysql/sbtest/sbtest1.ibd

Deleting all the records
Now let’s see what happens if we delete all records from our table:
mysql> delete from sbtest1 ;
Query OK, 1600000 rows affected (1 min 36.80 sec)

We can check the free space using the query in Information_Schema:
+----------------+--------+-------+-------+-------+------------+---------+-----------+
| TABLE          | ENGINE | ROWS  | DATA  | IDX   | TOTAL SIZE | IDXFRAC | data_free |
+----------------+--------+-------+-------+-------+------------+---------+-----------+
| sbtest.sbtest1 | InnoDB | 0.00M | 0.38G | 0.03G | 0.41G      | 0.08    | 463.00MB  |
+----------------+--------+-------+-------+-------+------------+---------+-----------+

No more rows and a lot of free space.
Still no change on the filesystem (as expected):
[root@mysql1 mysql]# ls -lh /var/lib/mysql/sbtest/sbtest1.ibd
-rw-r-----. 1 mysql mysql 472M Dec 30 13:09 /var/lib/mysql/sbtest/sbtest1.ibd

Let’s see the table’s illustration:

Optimizing
So we don't have any records and we use a “lot” of disk space, which is completely wasted. Can we do something?
Yes, we can optimize the table (InnoDB doesn't support OPTIMIZE in place, so it performs a recreate + analyze instead). We can run one of these commands:
mysql> optimize table sbtest1;

or
mysql> alter table sbtest1 engine=innodb;
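If you choose OPTIMIZE TABLE, InnoDB typically reports a note along these lines (the exact output can vary slightly between versions), confirming that the table is rebuilt rather than optimized in place:

+----------------+----------+----------+--------------------------------------------------------------------+
| Table          | Op       | Msg_type | Msg_text                                                           |
+----------------+----------+----------+--------------------------------------------------------------------+
| sbtest.sbtest1 | optimize | note     | Table does not support optimize, doing recreate + analyze instead  |
| sbtest.sbtest1 | optimize | status   | OK                                                                 |
+----------------+----------+----------+--------------------------------------------------------------------+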

After such an operation, we can finally see the result on the filesystem and in the page illustration:
[root@mysql1 mysql]# ls -lh /var/lib/mysql/sbtest/sbtest1.ibd
-rw-r-----. 1 mysql mysql 112K Dec 30 13:14 /var/lib/mysql/sbtest/sbtest1.ibd
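Note that the file can shrink like this only because the table is stored in its own .ibd file (innodb_file_per_table, the default since MySQL 5.6); tables living in the shared system tablespace would not give the space back to the filesystem. You can verify the setting with:

mysql> SHOW VARIABLES LIKE 'innodb_file_per_table';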

Conclusion
In conclusion, if disk space is a concern for you (or if you take physical backups regularly), it can be worth checking the fragmentation of your tables (when you use InnoDB). And if, like me (and many others), you like InnoDB, please take a look at innodb_ruby if you haven't already.

Percona Database Performance Blog Year in Review: Top Blog Posts

Let’s look at some of the most popular Percona Database Performance Blog posts in 2017.
The closing of a year lends itself to looking back. And making lists. With the Percona Database Performance Blog, Percona staff and leadership work hard to provide the open source community with insights, technical support, predictions and metrics around multiple open source database software technologies. We’ve had over three and a half million visits to the blog in 2017: thank you! We look forward to providing you with even better articles, news and information in 2018.
As 2017 moves into 2018, let’s take a quick look back at some of the most popular posts on the blog this year.
Top 10 Most Read
These posts had the most views (working down from the highest):
Millions of Queries per Second: PostgreSQL and MySQL’s Peaceful Battle at Today’s Demanding Workloads
Straight-up comparisons of different open source databases’ ability to handle today’s workload levels are always popular. Can open source databases cope with millions of queries per second? Many open source advocates would answer “yes.” However, assertions aren’t enough for well-grounded proof. That’s why in this blog post, Sveta Smirnova (Principal Technical  Services Engineer) shared the benchmark testing results from Alexander Korotkov (CEO of Development, Postgres Professional) and herself. The comparative research of PostgreSQL 9.6 and MySQL 5.7 performance is especially valuable for environments with multiple databases.
MySQL vs. MariaDB: Reality Check
Another highly viewed blog is Colin Charles' (Chief Technology Evangelist) comparison of MariaDB vs. MySQL (including Percona Server for MySQL). The goal of this blog post is to evaluate, at a high level, MySQL, MariaDB and Percona Server for MySQL side by side to better inform the adoption decision-making process. It is largely an unofficial response to published comments from the MariaDB Corporation.
Column Store Database Benchmarks: MariaDB ColumnStore vs. Clickhouse vs. Apache Spark
Alexander Rubin's (Principal Consultant in Architecture & Projects) blog shares some column store database benchmark results, and compares the query performance of MariaDB ColumnStore v. 1.0.7 (based on InfiniDB), Clickhouse and Apache Spark. The purpose of the benchmark is to see how these three solutions work on a single big server, with many CPU cores and large amounts of RAM. All three are massively parallel (MPP) systems, so they should use many cores for SELECT queries.
Dealing with MySQL Error Code 1215: “Cannot add foreign key constraint”
Our Support customers often come to us with things like “My database deployment fails with error 1215”, “Am trying to create a foreign key and can’t get it working” or “Why am I unable to create a constraint?” Marcos Albe (Principal Support Engineer) looks at how to resolve MySQL error code 1215: “Cannot add foreign key constraint.”
ZFS from a MySQL perspective
This post (first of a series) from Yves Trudeau (Principal Consultant) explores and investigates the use of ZFS with MySQL to see when ZFS can help solve real problems. He examines ZFS and tries to draw parallels with the architecture of MySQL to better understand how ZFS works and behaves.
Overview of Different MySQL Replication Solutions
Although information is plentiful, replication is often incompletely understood, or completely misunderstood. This blog post by Dimitri Vanoverbeke (Solution Engineer) reviews some of the MySQL replication concepts that are part of the MySQL environment (and Percona Server for MySQL specifically). He clarifies some of the misconceptions people have about replication.
Quest for Better Replication in MySQL: Galera vs. Group Replication
Thanks to the Galera plugin, founded by the Codership team, we’ve had the choice between asynchronous and virtually synchronous replication in the MySQL ecosystem for quite a few years already. Moreover, we can choose between at least three software providers: Codership, MariaDB and Percona, each with its own Galera implementation. The situation recently became much more interesting when MySQL Group Replication went into GA (stable) stage in December 2016.
In this post, Przemysław Malkowski (Principal Support Engineer) points out a couple of interesting differences between Group Replication and Galera, which should be helpful to those considering switching from one to another (or if they are planning to test them).
Using NVMe Command Line Tools to Check NVMe Flash Health
Checking the NVMe flash health from the command line for both older and newer drives is fairly straightforward. But what does the health information mean? In this blog post, Peter Zaitsev (CEO) investigates how to check NVMe flash health on both older and newer hardware, and provides a breakdown of the collected information.
ClickHouse: New Open Source Columnar Database
In this blog post, Alexander Rubin (Principal Consultant in Architecture & Projects) decided to try ClickHouse: an open source column-oriented database management system developed by Yandex (it currently powers Yandex.Metrica, the world’s second-largest web analytics platform). He runs the database using Wikipedia page statistics as a data source and compares the results to a previous test of Apache Spark and MySQL.
Setup ProxySQL for High Availability (not a Single Point of Failure)
Percona has had a lot of opportunities to present and discuss a very powerful tool that is more and more used in the architectures supporting MySQL: ProxySQL. In this blog, Marco Tusa (High Availability Practice Manager) looks at how to set up ProxySQL for high availability.
Honorable Mention
MongoDB 3.4 Views
Views are often used in relational databases to achieve both data security and a high level of abstraction, making it easier to retrieve data. In this blog post, Adamo Tonete (Senior Support Engineer) looks at how to set up views in MongoDB 3.4.
Top 10 Most Commented
These posts generated some healthy discussions (not surprisingly, this list overlaps with the first):

Quest for Better Replication in MySQL: Galera vs. Group Replication
MySQL vs. MariaDB: Reality Check
Millions of Queries per Second: PostgreSQL and MySQL’s Peaceful Battle at Today’s Demanding Workloads
ZFS from a MySQL perspective
Three Methods of Installing Percona Monitoring and Management
Hands-On Look at ZFS with MySQL
Open Source Databases on Big Machines: Disk Speed and innodb_io_capacity
MySQL Bug 72804 Workaround: “BINLOG statement can no longer be used to apply query events”
Reinstall MySQL and Preserve All MySQL Grants and Users
InnoDB Page Compression: the Good, the Bad and the Ugly

Posts Worth Revisiting
Don’t miss these great posts that have excellent information on important topics:

Migrating Data from an Encrypted Amazon MySQL RDS Instance to an Encrypted Amazon Aurora Instance
MySQL and MongoDB Alerting with PMM and Grafana
MongoDB 3.6 Sessions Explained
MongoDB Indexing Types: How, When and Where Should They Be Used?
How to Save and Load Docker Images to Offline Servers

Have a great end of the year celebration, and we look forward to providing more great blog posts in 2018.

This Week in Data with Colin Charles 21: Looking Back on 2017 in the Open Source Database Community

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community.
The end of the year is upon us, and this column is being penned a little earlier than usual as I enjoy a quick bout of wrapping up for the year.
We’ve had a lot of things happen in the database world. It’s clear that open source is everywhere, the cloud is large, and we’ve seen a lot of interesting movements in our MySQL world, from the standpoint of corporate contributions too. GitHub continues to enhance Orchestrator, and gh-ost keeps on getting better; Facebook has ensured MyRocks is now mainstream in both MariaDB Server and Percona Server for MySQL distributions; Alibaba Cloud and Tencent have been contributing to make MariaDB Server better.
We’ve seen the year end with MySQL 8.0 release candidate ready, as well as MariaDB Server 10.3 Beta. We had MariaDB Server 10.2 go GA in May 2017, and we’ve seen a steady stream of improvements in MySQL & Percona Server 5.7 releases.
Vitess I think is still a bigger deal than it’s made out to be. Slack talking about their use case at Percona Live Dublin was good. ProxySQL is a shining star for proxies.
We've seen some database vendors pass on. RethinkDB, with very promising technology, started in 2009 and raised $12.2m, but finally ended up at Stripe, with the code open and relicensed thanks to the Cloud Native Computing Foundation. We've seen Basho (makers of Riak), which started in 2008 and raised $61.2m, end up at Bet365, and Riak is now fully open source.
We’ve seen MongoDB IPO, MariaDB Corporation raise more money (some reported $98.2m in total), and more money still streams into the database world, like for BlazingDB in the GPU space and so on.
What do I look forward to in 2018? Oracle compatibility and PL/SQL in MariaDB Server 10.3 should be interesting. I’ve heard a lot of buzz around Apache Kafka. I expect we’ll see more manageability in MySQL. And also, fun to note that MMAPv1 in MongoDB has been deprecated before MyISAM in MySQL.
Right before the New Year, it is interesting to look at some proxy statistics of database usage, via the WordPress Statistics. Head down to the MySQL versions, and you’ll note that 5.5 gets 41.9% of users (this could be MySQL/Percona Server for MySQL or MariaDB Server), but you’ll also notice at a close second comes 5.6 at 39.8% of the users (this is only MySQL or Percona Server for MySQL). 5.7 gets 6.4% of the users as the 3rd most popular option, followed by 5.1 at 4.3% of users of WordPress (these folk desperately need to upgrade). 10.0 gets 1.6%, while 10.1 gets 3.8% (these are all MariaDB Server versions only). So the death of MySQL has greatly been exaggerated — people like it, people use it, and I can only imagine if more distributions ship 5.7 or 8.0, this could be a win for MySQL.
Releases

MariaDB Server 10.3 Beta – pay particular attention to the database compatibility enhancements, and a few other enhancements that you may have already seen in other servers. The changes and improvements list is also a good one.

Link List

Jsonb: few more stories about the performance – great benchmark, good methodology, and also here’s the PostgreSQL, MySQL and MongoDB shootout

Mark Callaghan tweets about the Learned Index paper – I’d encourage you to read the thread

Upcoming appearances

FOSDEM 2018 – Brussels, Belgium – February 3-4 2018

SCALE16x – Pasadena, California, USA – March 8-11 2018

Feedback
I look forward to feedback/tips via e-mail at colin.charles@percona.com or on Twitter @bytebot.
 

Monitoring Amazon Aurora with PMM

In this blog, we’ll look at how to monitor Amazon Aurora using Percona Monitoring and Management (PMM).
One impact of the growth of web services such as AWS on DBAs is the need to understand and report on database and application resource usage. DBAs have always had to monitor for performance-related usability issues, which may have led to investment in larger servers. However, services such as Aurora operate pricing models based on usage. Performance issues can be more closely attributed to rising costs, so badly performing systems more visibly affect the bottom line. DBA has become a cost center!
With that in mind, from Percona Monitoring and Management (PMM) 1.5 onwards, we delivered features that provide enhanced support for the performance monitoring of Amazon Aurora instances. “[Aurora] is a fully managed, MySQL- and PostgreSQL-compatible, relational database engine.” (Amazon docs).
Here’s a walkthrough of some of the key features in PMM that could benefit DBAs managing Aurora, and eliminate the need for additional monitoring nodes. The images here, pulled up from our demo system, display a seven-day overview of one of our test instances. You can focus tightly on the past five minutes or pan out to the last five years, depending on your analysis requirements.
Starting with this dashboard summary, which appears at the top of the screen, PMM displays an instant summary on the “health” of your instance.

Engine uptime and CPU Utilization are helpfully highlighted in green, reinforcing a healthy status. Amber or red alerts mean you should pay attention to the instance and check whether there have been issues in the recent past.
In this case, we can see that Aurora was last restarted 2.1 weeks ago.
The next three statistics in this section cover the current status of the instance at the time you loaded the dashboard. Aurora is using ~15% of the available CPU resources. Your application is issuing approximately 400 queries per second. There is 370MiB of unused RAM at the current time.
Aurora Costs
The next four graphs are closely aligned with running costs. In creating these graphs, PMM calls on Amazon’s CloudWatch service. This provides up to one million API calls per month as part of your service, without additional charge (December 2017). A refresh of these graphs uses 15 calls, and you can configure the refresh frequency.
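As a rough back-of-the-envelope calculation (the only official figure here is the one-million-call quota): at 15 calls per refresh, one million calls cover roughly 66,000 refreshes per month, so for a single Aurora dashboard a refresh interval of about 40 seconds or longer stays within the free allowance.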

Three of these graphs represent CPU Utilization, Network Traffic and Memory Usage. As a DBA you are likely to be familiar with these terms. PMM displays these as a metric series over time.
The fourth graph of CPU Credit Usage, top right, covers usage credit and applies specifically to Aurora MySQL db.t2.small and db.t2.medium instances. There are two parameters in play here. CPUCreditBalance represents the number of CPU credits accumulated during the specified time, allocated to the instance’s virtual CPU.  CPUCreditUsage represents the actual CPU used during the same period. In essence, the gap between the two indicates the capacity of the instance to operate beyond its baseline performance level at a given rate.
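As a rough worked example (the earn rate is taken from AWS documentation for t2.small instances at the time and is only illustrative): one CPU credit equals one vCPU running at 100% utilization for one minute, and a t2.small earns 12 credits per hour, so it can sustain about 20% of one vCPU indefinitely. Usage below that baseline accumulates in CPUCreditBalance, while short bursts above it draw the balance down.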
Our demo instance, which runs a synthetic workload based on sysbench, is showing a regular usage pattern and accumulating credit. It’s no surprise that we are not bursting our CPU requests. This pattern of use is less likely to apply to a real-life application.
By the way, clicking on any graph in PMM displays it in full screen, so we don’t restrict your view to a quadrant of the screen.
Aurora MySQL
Next up, there’s a set of four graphs representing MySQL specific metrics. These show information that’s important to DBAs as a matter of course. They take on more significance if you’re seeking opportunities to tune applications to minimise costs.
The Percona Database Performance blog has lots of blog posts that cover these metrics in more detail, and we may well revisit them in due course in the context of Aurora MySQL. For this review of PMM, it’s enough to say that each of these would have its place in tuning with a view to improving your overall efficiency.

Summary
Since charges for web services are based on usage, the role of the DBA is in the spotlight. You need to optimize usage, and PMM 1.5 can help you achieve that for Amazon Aurora thanks to these features:

Dedicated Aurora dashboard which offers maximum visibility into key database characteristics
Simple configuration of key Aurora settings via a web interface.
We renamed Amazon RDS OS Metrics to Amazon RDS / Amazon MySQL Metrics. This better reflects the Amazon Data Services that we support.

And did you know that beyond providing PMM, Percona has built a reputation for MySQL and now Amazon Aurora proficiency? We recently worked with Lookout to increase efficiency and more effectively use their Amazon Aurora deployment to save Lookout hundreds of thousands of dollars! If you think you could benefit from this kind of Amazon Aurora or other MySQL assistance, please contact us!

Our Most Popular Database Blog Posts in 2017

As we wrap up our last blog of 2017 we wanted to reflect on what content we have been creating that’s been resonating and generating the most interest with our readers. We will continue to deliver the best technical content we can for MySQL, Galera Cluster, PostgreSQL, MariaDB, and MongoDB in 2018.

Here is some of our most popular content from 2017…

Top Database Blogs for 2017

MySQL on Docker: There are four blogs on this topic which consistently show up in top-ten reports, but “MySQL Docker Containers: Understanding the Basics” takes the crown. This is the first in our MySQL on Docker blog series and it discusses what a database container is and some tips for getting started with this booming new technology.

Installing Kubernetes Cluster with 3 Minions on CentOS 7 to Manage Pods and Services: This “how-to” blog provides step-by-step instructions on how to deploy on CentOS 7 and also provides a basic understanding of the components. It goes through the process of setting up a Kubernetes Master and Minions (nodes) and concludes with creating pods and services.

How to Bootstrap MySQL or MariaDB Galera Cluster: This blog details how to start MySQL Galera Cluster or MariaDB Cluster, which is different than the way you do it for MySQL server or MySQL Cluster. Called a “cluster bootstrap” this process is an initial setup to introduce a database node as a primary component.

9 Tips for Going in Production with Galera Cluster for MySQL: A blog that was later turned into a webinar, this “how to” post covers nine tips to make sure you’re ready for production before you go live.

How to Set Up Asynchronous Replication from Galera Cluster to Standalone MySQL server with GTID: This blog shows how to utilize GTID to enable hybrid replication (combining Galera and Asynchronous MySQL Replication in the same setup).

Scaling WordPress and MySQL on Multiple Servers for Performance: This blog details how to combine technologies like Percona XtraDB Cluster, GlusterFS and Apache with load balancing to ensure a highly available WordPress setup.

Avoiding Deadlocks in Galera – Setup HAProxy for Single-Node Writes and Multi-node Reads: While HAProxy is not fully deployed and configurable inside of ClusterControl this blog details how to avoid common challenges when using the technology with Galera Cluster.

Understanding What Has Changed Between MySQL 5.6 and 5.7 Before Upgrading: MySQL 5.7 was released back in 2015, but many systems are still being upgraded to it, and this blog details all the changes that occurred. It is also an excerpt from our very popular Migrating to MySQL 5.7 whitepaper.

How to Avoid SST When Adding a New Node to Galera Cluster for MySQL or MariaDB: This blog details how to avoid State Snapshot Transfer (SST) for MySQL Galera Setups when adding additional nodes to your setup.

MySQL Load Balancing with ProxySQL Overview: ProxySQL was one of our major focuses for ClusterControl this year, so it's exciting to see an overview of this new technology making our top list. While this blog explains manual deployment, ProxySQL is now completely integrated into the ClusterControl GUI.


Top Blogs by Technology

While MySQL and MySQL Galera Cluster dominate our most popular content we blog about a lot of different technologies and methodologies on the Severalnines blog. Here are some of the most popular blogs in 2017 for non-MySQL topics.

Top MongoDB Blog – High Availability Log Processing with Graylog, MongoDB and Elasticsearch: Graylog is an open-source log management tool and this blog details how you can use it in conjunction with MongoDB, Elasticsearch and ClusterControl.

Top PostgreSQL Blog – How to Setup Streaming Replication for High Availability: This “how to” blog lets you know step-by-step how to deploy streaming replication for PostgreSQL including using ClusterControl to add nodes.

Top DevOps Blog – DevOps Considerations for Production-ready Database Deployments: If you are a DevOps professional and have suddenly found yourself responsible for the database environments in your setup this blog is key to detailing what you need to know to effectively achieve high availability and ensure your environment operates correctly.

Top ClusterControl Blog – How to Deploy and Manage Multi-Master Replication Setups with ClusterControl 1.4: This blog details the new deployment wizard that was released in ClusterControl 1.4. With this new wizard you can easily deploy and configure a variety of open source database technologies including Master-Master replication topologies.

Top NinesControl Blog – Deploying and Monitoring MySQL and MongoDB Clusters in the Cloud with NinesControl: In this blog we detail the deployment and monitoring features found in NinesControl, a system to easily deploy open source databases to the cloud of your choice.

If there are some blog topics you would like us to cover in 2018 please list them in the comments below.


MariaDB Holiday Reflections and Gratitude

MariaDB Team | Wed, 12/27/2017 – 19:44

As we wrap up 2017, we want to take a moment to reflect on the past year. We've seen tremendous support from our community, customers and partners – and for that, we are grateful. Our customers across the globe have shared heartwarming stories of their business success and we appreciate the opportunity to be your valued partner.
For us, 2017 was a milestone year. Our progress this year reflects a global theme that brings us forward to new opportunities.

Solutions to global problems: To make it easier for our customers to find solutions specific to their types of workloads, we introduced two complete solutions – MariaDB TX for transactional workloads and MariaDB AX for analytical workloads. The solutions comprise our technology products, as well as tools, services and support for an optimal production deployment environment. This year was also highlighted by the GA releases of MariaDB Server 10.2, MariaDB MaxScale 2.1 and MariaDB ColumnStore 1.1.

Global event: In April, we held our first annual MariaDB global user conference, M|17. Customers from as far away as Asia and Europe traveled to NYC to share success stories, best practices and technical deep dives with the MariaDB team.

Growing reach worldwide: This year, Debian followed the trend of Linux distributions replacing MySQL with MariaDB as the default. MariaDB reaches more than 60 million developers worldwide through its inclusion in every major Linux distribution, as well as a growing presence in the world’s major cloud providers.

Global team to support global customers: We’ve expanded our diverse team with employees in 20+ countries, plus Jon Bakke joined the leadership ranks as Chief Revenue Officer.

Investment from all corners of the world: Additional investment from Alibaba, the European Investment Bank and existing investors continues to fund global growth and product innovation.

We are excited to keep up the momentum in 2018, starting with our second annual MariaDB M|18 user conference in February. We look forward to seeing you in NYC!
Wishing you a happy new year from the team at MariaDB!
 

Happy New Year | Onnellista uutta vuotta | Feliz año nuevo 

Bonne année | Gott Nytt År | Felice Anno Nuovo

Frohes neues Jahr | С Новым годом | 新年快乐 | 良いお年を | 새해 복 많이 받으세요

 


OSX and case-sensitive file system

I am really angry now.
A few weeks ago, when I had just finished my MySQL backend checker, I lost about two hours of work because I hadn't committed anything to git and I overwrote the working file with one of my doodle files, which had the same name but in camel case. I had the default APFS filesystem (on High Sierra), which is not case sensitive. This was a real amateur mistake, I admit it, but the damage was done and I had to recreate everything (actually, the second time I was way faster; it took around an hour).

Anyway, I had to reinstall my computer anyhow, so when I got to the point of deciding what kind of filesystem I wanted to use, I haughtily selected ‘CASE SENSITIVE’. “Yes, please give me a good, old-fashioned Linux-like case-sensitive file system.”
I haven't used this laptop much so far (that's a different story; maybe I'll talk about it later), so it is still not too comfy, but before we went to the in-laws I decided to install Steam on it to bring a few games with me.
And Steam is not starting. It just launches but doesn't display the login form.
After a few minutes of googling, it seems Steam does not work with a case-sensitive file system under OSX, and now I am contemplating whether I should reinstall this laptop again. Not because of Steam, I can live without it (I do have a desktop PC, so if I want to play a game I can do it on that machine), but I am thinking that if Steam is not working, maybe other software won't work either, and right now a reinstall hurts far less than it would later. Actually, the installer did warn me about this, so I have only myself to blame for the extra work.
But I am still angry about it.
 

The hidepid Options for /proc and Percona XtraDB Cluster Don’t Play Well Together

In this blog, we’ll look at how the hidepid options for /proc and Percona XtraDB Cluster can fight with one another.
One of the things I like about consulting at Percona is the opportunity to be exposed to unusual problems. I recently worked with a customer having issues getting SST to work with Percona XtraDB Cluster. A simple problem you would think. After four hours of debugging, my general feeling was that nothing made sense.
I added a bash trace to the SST script and it claimed MySQL died prematurely:
+ [[ -n '' ]]
+ ps -p 11244
+ wsrep_log_error 'Parent mysqld process (PID:11244) terminated unexpectedly.'
+ wsrep_log '[ERROR] Parent mysqld process (PID:11244) terminated unexpectedly.'
++ date '+%Y-%m-%d %H:%M:%S'
+ local readonly 'tst=2017-11-28 22:02:46'
At the same time, in the MySQL error log, MySQL was complaining that the SST script died:
2017-11-28 22:02:46 11244 [ERROR] WSREP: Process completed with error: wsrep_sst_xtrabackup-v2 --role 'joiner' --address '172.31.4.179' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --parent '11244' '' : 32 (Broken pipe)
2017-11-28 22:02:46 11244 [ERROR] WSREP: Failed to read uuid:seqno from joiner script.
2017-11-28 22:02:46 11244 [ERROR] WSREP: SST script aborted with error 32 (Broken pipe)
2017-11-28 22:02:46 11244 [ERROR] WSREP: SST failed: 32 (Broken pipe)
2017-11-28 22:02:46 11244 [ERROR] Aborting

The Solution
Clearly, something odd was at play. But what? At that point, I decided to try a few operations with the mysql user. Finally, I stumbled onto something:
[root@db-01 mysql]# su mysql -
bash-4.2$ ps fax
  PID TTY      STAT   TIME COMMAND
11901 pts/0    S      0:00 bash -
11902 pts/0    R+     0:00  \_ ps fax
bash-4.2$
There are way more than 100 processes on these servers, so, why can’t the mysql user see them? Of course, the SST script monitors the state of its parent process using “ps”. Look at the bash trace above: 11244 is the mysqld pid. After a little Googling exercise, I found this blog post about the /proc hidepid mount option. Of course, the customer was using this option:
[root@db-02 lib]# mount | grep '^proc'
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime,hidepid=2)
I removed the hidepid option using remount, and set hidepid=0 on all the nodes:
mount -o remount,rw,nosuid,nodev,noexec,relatime,hidepid=0 /proc
This simple command solved the issue. The SST scripts started to work normally. A good lesson learned: do not overlook security settings!
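If you still want hidepid=2 for security, a middle ground worth testing (not what we did for this customer) is the gid= option of proc, which exempts the members of one group from hidepid. A sketch, assuming a hypothetical group named procview and using its numeric gid, since some mount versions do not resolve group names here:
groupadd procview
usermod -a -G procview mysql
mount -o remount,rw,nosuid,nodev,noexec,relatime,hidepid=2,gid=$(getent group procview | cut -d: -f3) /proc
Remember to restart mysqld afterwards so the mysql user picks up the new group membership, and to mirror the change in /etc/fstab to make it persistent.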
