Month: June 2016

Rescuing a crashed pt-online-schema-change with pt-archiver

This article discusses how to salvage a crashed pt-online-schema-change by leveraging pt-archiver and executing queries to ensure that the data gets accurately migrated. I will show you how to continue the data copy process, and how to safely close out the pt-online-schema-change via manual operations such as RENAME TABLE and DROP TRIGGER commands. The normal process to recover from a crashed pt-online-schema-change is to drop the triggers on your original table, drop the new table created by the script, and then restart pt-online-schema-change. In this case, that wasn't possible.
A customer recently needed to add a primary key column to a very busy table (with around 200 million rows). The table only had a unique key on one column (called our_id below). The customer had concerns about slave lag, and wanted to ensure there was little or no lag. This, as well as the fact that you can’t add a primary key as an online DDL in MySQL and Percona Server 5.6, meant the obvious answer was using pt-online-schema-change.
Due to the sensitivity of their environment, they could only afford one short window for the initial metadata locks, and needed to manually do the drop swap that pt-online-schema-change normally does automatically. This is where --no-drop-triggers and --no-swap-tables come in. The triggers will theoretically run indefinitely to keep the new and old tables in sync once pt-online-schema-change is complete. We crafted the following command:

pt-online-schema-change
--max-load Threads_running=30
--critical-load Threads_running=55
--check-slave-lag mysql-slave1,mysql-slave2,mysql-slave3
--chunk-index "our_id"
--nocheck-plan

You can see the specifics of these and other flags, and why we used them, in the Percona Toolkit manual.
Once we ran the command the customer got concerned, as their monitoring tools weren’t showing any work done (which is by design, pt-online-schema-change doesn’t want to hurt your running environment). The customer ran strace -p to verify it was working. This wasn’t a great choice as it crashed pt-online-schema-change.
At this point, we knew that the application (and management) would not allow us to take new metadata locks to create triggers on the table, as we had passed our metadata lock window.
So how do we recover?
First, let's start with a clean slate. We issued the following commands to create a new table, where __largetable_new is the table created by pt-online-schema-change:

CREATE TABLE mynewlargetable LIKE __largetable_new;
RENAME TABLE __largetable_new TO __largetable_old, mynewlargetable TO __largetable_new;
DROP TABLE __largetable_old;

Now the triggers on the original table, largetable, are updating the new empty table that has our new schema.
Now let's address the issue of actually moving the data that's already in largetable to __largetable_new. This is where pt-archiver comes in. We crafted the following command:

pt-archiver
--source D=website,t=largetable,i=our_id
--dest D=website,t=__largetable_new
--where "1=1"
--statistics

We use pt-archiver to slowly copy records non-destructively to the new table, based on our_id and WHERE 1=1 (all records). At this point, we periodically checked the MySQL data directory over the course of a day with ls -l to compare table sizes.
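The copy loop pt-archiver performs here is simple in principle: walk the unique index in small chunks and insert each chunk into the destination, leaving the source rows in place. Below is a minimal sketch of that chunked, non-destructive copy, using SQLite in place of MySQL and the table/column names from the post; it is an illustration of the technique, not pt-archiver itself:

```python
import sqlite3

def chunked_copy(conn, src, dest, key, chunk_size=1000):
    """Copy rows from src to dest in small batches, keyed on a unique
    integer column, without deleting anything from the source."""
    last_seen = -1  # assumes an integer key; start below the minimum
    copied = 0
    while True:
        rows = conn.execute(
            f"SELECT * FROM {src} WHERE {key} > ? ORDER BY {key} LIMIT ?",
            (last_seen, chunk_size),
        ).fetchall()
        if not rows:
            break
        placeholders = ",".join("?" * len(rows[0]))
        # INSERT OR IGNORE mimics copying alongside live triggers: a row
        # already written by the INSERT trigger is simply skipped.
        conn.executemany(
            f"INSERT OR IGNORE INTO {dest} VALUES ({placeholders})", rows
        )
        last_seen = rows[-1][0]  # key is assumed to be the first column
        copied += len(rows)
    conn.commit()
    return copied

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE largetable (our_id INTEGER PRIMARY KEY, payload TEXT)")
conn.execute("CREATE TABLE __largetable_new (our_id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO largetable VALUES (?, ?)",
                 [(i, f"row-{i}") for i in range(1, 2501)])
print(chunked_copy(conn, "largetable", "__largetable_new", "our_id"))  # 2500
```

Copying in small, index-ordered chunks is what keeps the load (and slave lag) low: each batch is a short transaction rather than one giant INSERT ... SELECT.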
Once the table files were close to the same size, we ran counts on the tables. We noticed something interesting: the new table had thousands more records than the original table.
This concerned us. We wondered if our "hack" was a mistake. At this point we ran some verification queries:

SELECT MIN(our_id) FROM __largetable_new;
SELECT MAX(our_id) FROM __largetable_new;
SELECT MIN(our_id) FROM largetable;
SELECT MAX(our_id) FROM largetable;

We learned that there were older records that didn't exist in the live table. This means that pt-archiver and the DELETE trigger may have missed each other (i.e., pt-archiver was already in a transaction but hadn't written records to the new table until after the DELETE trigger had already fired).
We verified with more queries:

SELECT COUNT(*) FROM largetable l WHERE NOT EXISTS (SELECT our_id FROM __largetable_new n WHERE n.our_id=l.our_id);

This returned nothing: no rows were missing from the new table.

SELECT COUNT(*) FROM __largetable_new n WHERE NOT EXISTS (SELECT our_id FROM largetable l WHERE n.our_id=l.our_id);

Our result showed 4,000 extra records in the new table. This shows that we ended up with extra records that were deleted from the original table. We ran other queries based on their data to verify as well.
This wasn’t a huge issue for our application, and it could have been easily dealt with using a simple DELETE query based on the unique index (i.e., if it doesn’t exist in the original table, delete it from the new one).
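The reconciliation described above (count rows missing from the new table, count extras, then delete the extras via the unique index) can be sketched end-to-end. This is a toy illustration with SQLite standing in for MySQL and made-up row IDs:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE largetable (our_id INTEGER PRIMARY KEY);
CREATE TABLE __largetable_new (our_id INTEGER PRIMARY KEY);
INSERT INTO largetable VALUES (1), (2), (3);
-- the new table holds everything the old one does, plus rows 90 and 91,
-- which were deleted from the live table while the copy was running
INSERT INTO __largetable_new VALUES (1), (2), (3), (90), (91);
""")

# Rows missing from the new table -- should be zero, or the copy is incomplete.
missing = conn.execute("""
    SELECT COUNT(*) FROM largetable l
    WHERE NOT EXISTS (SELECT 1 FROM __largetable_new n WHERE n.our_id = l.our_id)
""").fetchone()[0]

# Extra rows in the new table: deletes the copy raced past.
extra = conn.execute("""
    SELECT COUNT(*) FROM __largetable_new n
    WHERE NOT EXISTS (SELECT 1 FROM largetable l WHERE n.our_id = l.our_id)
""").fetchone()[0]
print(missing, extra)  # 0 2

# Remove the stragglers via the unique index.
conn.execute("""
    DELETE FROM __largetable_new
    WHERE our_id NOT IN (SELECT our_id FROM largetable)
""")
print(conn.execute("SELECT COUNT(*) FROM __largetable_new").fetchone()[0])  # 3
```

On a real 200-million-row table you would run that DELETE in small batches, for the same reason the copy itself was chunked: to avoid one long transaction.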
Now to complete the pt-online-schema-change actions. All we need to do is the atomic rename, or drop swap. This should be done as soon as possible to avoid running in a degraded state, where all writes to the old table are duplicated on the new one.

RENAME TABLE largetable TO __largetable_old, __largetable_new TO largetable;

Then drop the triggers for safety:

DROP TRIGGER pt_osc_website_largetable_ins;
DROP TRIGGER pt_osc_website_largetable_upd;
DROP TRIGGER pt_osc_website_largetable_del;

At this point it is safer to wait for the old table to clear out of the buffer pool before dropping it, just to ensure there is no impact on the server (maybe a week to be safe). You can check information_schema for a more accurate reading on this:

SELECT COUNT(*) FROM INFORMATION_SCHEMA.INNODB_BUFFER_PAGE WHERE TABLE_NAME = '`website`.`__largetable_old`';
+----------+
| COUNT(*) |
+----------+
|   279175 |
+----------+
1 row in set (8.94 sec)

Once this count reaches 0, you can issue:

DROP TABLE __largetable_old;

What is SID in Oracle ?

In the docs you'll find that it's the Oracle system identifier. Okay, let's imagine the following environment:

ORACLE_SID=ORA001

init.ora:
DB_NAME=DB001
DB_UNIQUE_NAME=UNI001
INSTANCE_NAME=INS001
SERVICE_NAMES=SVC001,SVC002
DB_DOMAIN=EXAMPLE.COM
GLOBAL_NAMES=false

database:
SQL> select * from GLOBAL_NAME;

listener.ora:
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_NAME=GLO001.EXAMPLE.COM)
      (SID_NAME=ORA001)
    )
  )

What is my SID? Actually there is more than one correct answer. In …

Continue reading “What is SID in Oracle ?”

Planets9s – MySQL on Docker: Building the Container Images, Monitoring MongoDB and more

Welcome to this week’s Planets9s, covering all the latest resources and technologies we create around automation and management of open source database infrastructures.

MySQL on Docker: Building the Container Image
Building a Docker image for MySQL is essential if you'd like to customize MySQL to suit your needs. In this second post of our 'MySQL on Docker' series, we show you two ways to build your own MySQL Docker image: changing a base image and committing it, or using a Dockerfile. We show you how to extend the Docker team's MySQL image and add Percona XtraBackup to it.
Read the blog

Sign up for our webinar on Monitoring MongoDB – Tuesday July 12th
MongoDB offers many metrics through various status overviews and commands, and as a MySQL DBA, you might find it unfamiliar ground to get started on. In this webinar on July 12th, we'll discuss the most important ones and describe them in plain MySQL DBA language. We'll have a look at the open source tools available for MongoDB monitoring and trending, and we'll show you how to leverage ClusterControl's MongoDB metrics, dashboards, custom alerting and other features to track and optimize the performance of your system.
Sign up for the webinar

StreamAMG chooses ClusterControl to support its online European football streaming
This week we’re delighted to announce a new ClusterControl customer, StreamAMG (Advanced Media Group), Europe’s largest player in online video solutions, helping football teams such as Liverpool FC, Aston Villa, Sunderland AFC and the BBC keep fans watching from across the world. StreamAMG replaced its previous environment, based on a master-slave replication topology, with a multi-master Galera Cluster; and Severalnines’ ClusterControl platform was applied to automate operational tasks and provide visibility of uptime and performance through monitoring capabilities.
Read the story

That’s it for this week! Feel free to share these resources with your colleagues and follow us in our social media channels.
Have a good end of the week,
Jean-Jérôme Schmidt
Planets9s Editor
Severalnines AB
Tags: MySQL, Docker, containers, Galera Cluster, MongoDB, monitoring


%a: the activation ID of the Oracle database, i.e. the RESETLOG_ID.
%F: a unique name based on "DBID + time", in the form c-IIIIIIIIII-YYYYMMDD-QQ, where IIIIIIIIII is the database's DBID, YYYYMMDD is the date, and QQ is a sequence number from 1 to 256.
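As a quick illustration of how an %F-style name is assembled from its parts, here is a small sketch; the DBID and sequence values are made up, and the two-digit hexadecimal rendering of the sequence follows Oracle's documented format for control file autobackup names:

```python
from datetime import date

def autobackup_name(dbid: int, day: date, seq: int) -> str:
    """Build an RMAN %F-style name: c-IIIIIIIIII-YYYYMMDD-QQ.
    The sequence is rendered as a two-digit hexadecimal number."""
    return f"c-{dbid}-{day:%Y%m%d}-{seq:02x}"

print(autobackup_name(1234567890, date(2016, 6, 29), 0))  # c-1234567890-20160629-00
```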


MySQL Password Security Changes for PHP Developers

MySQL 5.7 introduced many new facets to password security. The first thing most people notice is that you are assigned a random root password at installation time. You then have to search the log file for this random password, use it to log in, and then change it. For the examples in this post I am using a fresh install of 5.7.13 on Oracle Linux 7.1, and was provided with the easy-to-remember password of nLvQRk7wq-NY, which to me looked like I forgot to hit escape when trying to get out of vim. A quick ALTER USER to change the password and you are on your way.

Default Password Lifetime and Complexity

5.7.13 now has the default password lifetime set to 0, or 'never expire'. My fresh install shows that the value of mysql.user.password_lifetime is set to NULL, which means use the server default value. The lifetime is measured in days; the time of the last change is stored in the password_last_changed column of the mysql.user table. If the password is expired, you are put into sandbox mode, where the only command you can execute is to change the password. That works great for interactive users. But what about your application? It uses a username/password pair to talk to the database, but it is very unlikely that anyone planned on changing passwords upon expiration. I seriously doubt anyone has set up an exception routine to handle an expired password properly. And if so, how do you notify all involved about this new password, securely?

What to do

The best thing would be to set the default password lifetime for accounts used by applications to zero. It simply does not expire. QED & out. But what if your company wants ALL passwords changed on a regular basis? And they do mean ALL. Earlier there was a listing of the defaults.
The test system is set to a password length of eight characters minimum, at MEDIUM complexity. MEDIUM complexity means that passwords need one numeric, one lower case, one upper case, and one special (nonalphanumeric) character. LOW tests the password length only. And STRONG adds a condition that substrings of four characters or longer do not match entries in a specified password file (used to make sure swear words, common names, etcetera are not part of a password).

Let's create a dummy account:

CREATE USER 'foobar'@'localhost' IDENTIFIED BY 'Foo@Localhost1' PASSWORD EXPIRE;

Checking the entry in the user table, you will find that the account's password is expired. For extra credit, notice what the authentication string is set to. We can't have just a password string, as some authentication tokens or hashes are not really passwords. So log in as foobar and you will get a notice that the password must be reset before we can do anything else:

ALTER USER 'foobar'@'localhost' IDENTIFIED BY '1NewP@assword';

Corporate Standard

Your corporate rules may require you to rotate passwords every N days and set the corresponding complexity. With MySQL 5.7 you can follow whatever that model is. If you do not have a standard and want to create one, be sure to DOCUMENT well what your standard is and make sure that standard is well known. There are ways to use packages like PAM or LDAP for authentication, but that is for another day.
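The MEDIUM rules described above (minimum length, plus at least one digit, one lower case letter, one upper case letter, and one special character) can be sketched as a quick check. This is an illustration of the policy's rules only, not the actual validate_password plugin code:

```python
def meets_medium_policy(password: str, min_length: int = 8) -> bool:
    """Approximation of MySQL's validate_password MEDIUM policy:
    minimum length, plus at least one digit, one lower case letter,
    one upper case letter, and one special (nonalphanumeric) character."""
    return (
        len(password) >= min_length
        and any(c.isdigit() for c in password)
        and any(c.islower() for c in password)
        and any(c.isupper() for c in password)
        and any(not c.isalnum() for c in password)
    )

print(meets_medium_policy("Foo@Localhost1"))  # True
print(meets_medium_policy("nopunct1A"))       # False: no special character
print(meets_medium_policy("Sh0rt!"))          # False: under 8 characters
```

Such a pre-check on the application side can give users a friendlier error than the server's ER_NOT_VALID_PASSWORD, but the server-side plugin remains the authority.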

2016 MySQL User Group Leaders Summit

In this post, I’ll share my experience attending the annual MySQL User Group Leaders Summit in Bucharest, Romania.
The MySQL User Group Leaders Summit gathers together as many of the global MySQL user group leaders as possible. At the summit, we discuss further actions on how we can better act for their local communities. This year, it focused primarily on cloud technologies.
As the Azerbaijan MySQL User Group leader, I felt a keen responsibility to go. I wanted to represent our group and learn as much as possible to take back with me. Mingling and having conversations with other group leaders helps give me more ideas about how to spread the MySQL word!
The Conference
I attended three MySQL presentations:

Guided tour on the MySQL source code. In this session, we reviewed the layout of the MySQL code base, roughly following the query execution path. We also covered how to extend MySQL with both built-in and pluggable add-ons.
How profiling SQL works in MySQL. This session gave an overview of the performance monitoring tools in MySQL: performance counters, performance schema and SYS schema. It also covered some of the details in analyzing MySQL performance with performance_schema.
What’s New in MySQL 5.7 Security. This session presented an overview of the new MySQL Server security-related features, as well as the MySQL 5.6 Enterprise edition tools. This session detailed the shifting big picture of secure deployments, along with all of the security-related MySQL changes.

I thought that the conference was very well organized, with uniformly great discussions. We also participated in some city activities and personal interactions. I even got to see Le Fred!
I learned a lot from the informative sessions I attended. The MySQL source code overview showed me the general paths of the MySQL source code, including the most important directories, functions and classes. The session about MySQL profiling instrumentation informed us of the great MySQL profiling improvements, and reviewed some useful tools and metrics that you can use to get information from the server. The last session about MySQL security covered improved defaults, tablespace encryption and authentication plugins.
In conclusion, my time was well spent. Meeting and communicating with other MySQL user group leaders gives me insight into the MySQL community. Consequently, I highly recommend everyone gets involved in your local user groups and attend get-togethers like the MySQL User Group Leaders Summit when you can find the time.
Below you can see some of the pics from the trip. Enjoy!





Percona Server for MongoDB 3.2.7-1.1 is now available

Percona announces the release of Percona Server for MongoDB 3.2.7-1.1 on June 29, 2016. Download the latest version from the Percona web site or the Percona Software Repositories. Percona Server for MongoDB 3.2.7-1.1 is an enhanced, open-source, fully compatible, highly scalable, zero-maintenance downtime database supporting the MongoDB v3.2 protocol and drivers. Based on MongoDB 3.2.7, it extends MongoDB with MongoRocks and PerconaFT storage […]
