Month: February 2015

Essential Oracle Support Webcasts (March 2015)

Topic / Description / Date & Time / Registration

What's New in My Oracle Support Release 15.1: the changes the new My Oracle Support release 15.1 brings, to help you work more effectively with Oracle Support.
This session is aimed at regular My Oracle Support users who want a quick look at the new features and improvements. It also provides reference material for further study of Oracle Support resources.
March 10, 11:00 / Register

My Oracle Support Basics

This session targets users who want to learn the basic functions of the My Oracle Support portal. The demonstration covers a feature overview, dashboard customization, searching for content and patches, using My Oracle Support Communities, leveraging the proactive portfolio, setting up Hot Topics alerts, and creating service requests.

March 11, 11:00 / Register

Finding Answers in My Oracle Support

The "Finding Answers in My Oracle Support" session is designed for customers who deal with the day-to-day issues and questions that arise from using Oracle products. Even experienced My Oracle Support users may pick up some useful tips and techniques in this session.

The session helps customers build expertise in searching My Oracle Support, shows how to choose the best search technique for a specific information need, and covers the recommended best practices for asking questions and finding solutions in the My Oracle Support Communities. Duration: 30 minutes.

March 18, 11:00 / Register

Service Request Flow and Best Practices

The Service Request Essentials session targets customers who are new to the Oracle Support environment, as well as those who need a refresher on the service request creation process, Oracle Support best practices, and SR management.

This 30-minute webcast provides an overview of the My Oracle Support portal, focusing on the service request experience. Attendees will learn how to file a good service request, what the service request severity levels and status codes are and what they mean, and best practices for collaborating with Oracle Support. How to escalate a service request for management attention will also be discussed.

March 24, 11:00 / Register

Working Effectively with Oracle Support

The Working Effectively with Support session targets customers who want to learn more about the Oracle customer support environment, as well as those looking to get the most value from their Oracle Support contracts. During the session you will learn Oracle Support best practices, how to use the full set of features and functionality of the My Oracle Support (MOS) portal, and how to combine tools and processes to keep your systems and software running stably and healthily.

March 25, 11:00 / Register

Australian March Training Offer

Autumn is almost upon us here in Australia, so why not hold off going into hibernation and head into the classroom instead? For March and April only, Rittman Mead courses in Australia* are being offered at significantly discounted prices. Heading up this promotion is the popular TRN202 OBIEE 11g Bootcamp course, which will be held in Melbourne, […]

Log Buffer #412, A Carnival of the Vanities for DBAs

This Log Buffer Edition makes its way through the realms of Oracle, SQL Server and MySQL and brings you a selection of blog posts.
Oracle:
Introducing Oracle Big Data Discovery Part 3: Data Exploration and Visualization
FULL and NO_INDEX Hints
Base64 Encode / Decode with Python (or WebLogic Scripting Tool) by Frank Munz
Why I’m Excited About Oracle Integration Cloud Service – New Video
Reminder: Upgrade Database 12.1.0.1 to 12.1.0.2 by July 2015
SQL Server:
An article about how we underestimate the power of joins and degrade our query performance by not using proper joins
Most large organizations have implemented one or more big data applications. As more data accumulates, internal users and analysts execute more reports and forecasts, which leads to additional queries and analysis, and more reporting.
How do you develop and deploy your database?
A database must be able to maintain and enforce the business rules and relationships in data in order to maintain the data model.
Error handling with try-catch-finally in PowerShell for SQL Server
MySQL:
MySQL Enterprise Monitor 3.0.20 has been released
MySQL Cluster 7.4 is GA!
Connector/Python 2.1.1 Alpha released with C Extension
Worrying about the ‘InnoDB: detected cycle in LRU for buffer pool (…)’ message?
MySQL Cluster 7.4 GA: 200 Million QPS, Active-Active Geographic Replication and more

MySQL Replication and GTID-based failover – A Deep Dive into Errant Transactions

For years, MySQL replication used to be based on binary log events – all a slave knew was the exact event and the exact position it had just read from the master. Any single transaction from a master may have ended up in different binary logs, and at different positions within those logs. It was a simple solution that came with limitations – more complex topology changes could require an admin to stop replication on the hosts involved. Or these changes could cause some other issues, e.g., a slave couldn't be moved down the replication chain without a time-consuming rebuild process (we couldn't easily change replication from A -> B -> C to A -> C -> B without stopping replication on both B and C). We've all had to work around these limitations while dreaming about a global transaction identifier.
GTID was introduced along with MySQL 5.6, and brought along some major changes in the way MySQL operates. First of all, every transaction has a unique identifier which identifies it in the same way on every server. It no longer matters in which binary log position a transaction was recorded; all you need to know is the GTID: '966073f3-b6a4-11e4-af2c-080027880ca6:4'. GTID is built from two parts – the unique identifier of the server where a transaction was first executed, and a sequence number. In the above example, we can see that the transaction was executed by the server with a server_uuid of '966073f3-b6a4-11e4-af2c-080027880ca6' and it was the 4th transaction executed there. This information is enough to perform complex topology changes – MySQL knows which transactions have been executed and therefore it knows which transactions need to be executed next. Forget about binary logs, it's all in the GTID.
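A minimal sketch of how this changes the way a slave is pointed at its master (the host and credentials below are made-up placeholders, and the file/position values are only for illustration). With position-based replication you must supply the exact binlog coordinates:

mysql> CHANGE MASTER TO MASTER_HOST='master1', -- placeholder host/credentials
    ->   MASTER_USER='rpl', MASTER_PASSWORD='rplpass',
    ->   MASTER_LOG_FILE='binlog.000002', MASTER_LOG_POS=160205927;

With GTID, the slave figures out which transactions it is missing on its own:

mysql> CHANGE MASTER TO MASTER_HOST='master1', -- placeholder host/credentials
    ->   MASTER_USER='rpl', MASTER_PASSWORD='rplpass',
    ->   MASTER_AUTO_POSITION=1;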
So, where can you find GTIDs? You'll find them in two places. On a slave, in 'show slave status;' you'll find two columns: Retrieved_Gtid_Set and Executed_Gtid_Set. The first covers GTIDs which were retrieved from the master via replication; the second lists all transactions which were executed on the given host, whether via replication or locally.
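On any MySQL 5.6+ server, master or slave, the executed set is also exposed through the gtid_executed global variable, so a quick check might look like this (reusing the example UUID from above):

mysql> SELECT @@GLOBAL.gtid_executed\G
*************************** 1. row ***************************
@@GLOBAL.gtid_executed: 966073f3-b6a4-11e4-af2c-080027880ca6:1-4
1 row in set (0.00 sec)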
 
Setting up a Replication Cluster the easy way

We’ll use the Severalnines Configurator to automatically deploy our replication setup. First, you need to point your browser at:
http://www.severalnines.com/replication-configurator/
The first page doesn't give you too many options, so you may as well click the 'Next' button.

The next screen contains options covering the operating system, where the infrastructure will be created, and so forth. All of the options are explained in detail; the one worth commenting on is 'Number of MySQL Slaves'. By default, the deploy scripts create a master-master pair connected by semi-synchronous replication. This is the smallest possible block, created when you set the number of slaves to 0. Every slave that you add will be connected to this master-master pair.

The third screen covers the MySQL configuration of the database nodes – you can define how 'large' the nodes will be in terms of CPU and memory, and you can also set the InnoDB buffer pool size and the predicted workload pattern.
The last screen lets you fill in the IP addresses of the ClusterControl server and the nodes in the replication setup. 
Finally, you need to fill in the email address to which the file with the deployment scripts will be sent.
 
Deployment
For this blog post, let's assume we want to create the infrastructure on premises. We'll use a couple of Vagrant nodes. Deployment on EC2 may work in a slightly different way than what you see below.
When all nodes are up and running, you need to copy the tarball that you received via email to the ClusterControl node. Next, untar it, go to the install directory and execute the deploy.sh script:

$ tar zxf s9s-mysql-56.tar.gz
$ cd s9s-mysql-56/mysql/scripts/install/
$ ./deploy.sh

At the beginning of the deploy process you'll be asked the following:

Can you SSH from this host to all other hosts without password?
Choosing 'n' will allow you to setup shared keys. (y/n):

If you already have passwordless SSH set up, you can choose 'y' here. Otherwise you'll be asked to provide the root password for all nodes so that SSH keys can be generated and distributed.
The deployment will continue, and you should have your replication setup up and running after 15 to 20 minutes. We can now take a look at how GTID works.

Errant transactions – what is the issue?
As we mentioned at the beginning of this post, GTIDs brought a significant change in the way people should think about MySQL replication. It's all about habits. Let's say, for some reason, an application performed a write on one of the slaves. It shouldn't have happened, but surprisingly, it happens all the time. As a result, replication stops with a duplicate key error. There are a couple of ways to deal with such a problem. One of them is to delete the offending row and restart replication. Another is to skip the binary log event and then restart replication.

mysql> STOP SLAVE SQL_THREAD; SET GLOBAL sql_slave_skip_counter = 1; START SLAVE SQL_THREAD;

Both ways should bring replication back to work, but they may introduce data drift, so remember that slave consistency should be checked after such an event (pt-table-checksum and pt-table-sync work well here).
If a similar problem happens while using GTID, you'll notice some differences. Deleting the offending row may seem to fix the issue, and replication should be able to resume. The other method, using sql_slave_skip_counter, won't work at all – it'll return an error. Remember, it's no longer about binlog events; it's all about whether a GTID has been executed or not.
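A quick sketch of what that looks like in practice; the exact error text below may differ slightly between MySQL versions, but the gist is always the same – with GTID mode on, the server refuses the skip counter and points you towards empty transactions instead:

mysql> SET GLOBAL sql_slave_skip_counter = 1;
ERROR 1858 (HY000): sql_slave_skip_counter can not be set when the server is running with @@GLOBAL.GTID_MODE = ON. Instead, for each transaction that you want to skip, generate an empty transaction with the same GTID as the transaction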
Why does deleting the row only 'seem' to fix the issue? One of the most important things to keep in mind about GTID is that a slave, when connecting to the master, checks if it is missing any transactions which were executed on the master, and executes any it finds. Transactions that were executed locally on a slave, and thus do not exist on its master, are called errant transactions, and they cause exactly this behavior after a failover. Let's assume we ran the following SQL to clear the offending row:

mysql> DELETE FROM mytable WHERE id=100;

Let's check SHOW SLAVE STATUS:

Master_UUID: 966073f3-b6a4-11e4-af2c-080027880ca6
Retrieved_Gtid_Set: 966073f3-b6a4-11e4-af2c-080027880ca6:1-29
Executed_Gtid_Set: 84d15910-b6a4-11e4-af2c-080027880ca6:1,
966073f3-b6a4-11e4-af2c-080027880ca6:1-29
Let's see where 84d15910-b6a4-11e4-af2c-080027880ca6:1 comes from:

mysql> SHOW VARIABLES LIKE 'server_uuid'\G
*************************** 1. row ***************************
Variable_name: server_uuid
Value: 84d15910-b6a4-11e4-af2c-080027880ca6
1 row in set (0.00 sec)

As you can see, we have 29 transactions that came from the master with a UUID of 966073f3-b6a4-11e4-af2c-080027880ca6, and one that was executed locally. Let's say that at some point we fail over and the old master (966073f3-b6a4-11e4-af2c-080027880ca6) becomes a slave. It will check its list of executed GTIDs and will not find this one: 84d15910-b6a4-11e4-af2c-080027880ca6:1. As a result, the related SQL will be executed on it:

mysql> DELETE FROM mytable WHERE id=100;

This is not something we expected… If, in the meantime, the binlog containing this transaction had been purged on the old slave, then the new slave will complain after failover:

Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.'
 
How to detect errant transactions?
MySQL provides two functions which come in very handy when you want to compare GTID sets on different hosts.
GTID_SUBSET() takes two GTID sets and checks if the first set is a subset of the second one.
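As a minimal illustration of the semantics (the UUID below is just a placeholder), the function returns 1 when the first set is contained in the second, and 0 otherwise:

mysql> SELECT GTID_SUBSET('966073f3-b6a4-11e4-af2c-080027880ca6:1-3',
    ->                    '966073f3-b6a4-11e4-af2c-080027880ca6:1-5') AS fwd,
    ->        GTID_SUBSET('966073f3-b6a4-11e4-af2c-080027880ca6:1-5',
    ->                    '966073f3-b6a4-11e4-af2c-080027880ca6:1-3') AS rev;
+-----+-----+
| fwd | rev |
+-----+-----+
|   1 |   0 |
+-----+-----+
1 row in set (0.00 sec)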
Let's say we have the following state.
Master:

mysql> show master status\G
*************************** 1. row ***************************
File: binlog.000002
Position: 160205927
Binlog_Do_DB:
Binlog_Ignore_DB:
Executed_Gtid_Set: 8a6962d2-b907-11e4-bebc-080027880ca6:1-153,
9b09b44a-b907-11e4-bebd-080027880ca6:1,
ab8f5793-b907-11e4-bebd-080027880ca6:1-2
1 row in set (0.00 sec)

Slave:

mysql> show slave status\G
[…]
Retrieved_Gtid_Set: 8a6962d2-b907-11e4-bebc-080027880ca6:1-153,
9b09b44a-b907-11e4-bebd-080027880ca6:1
Executed_Gtid_Set: 8a6962d2-b907-11e4-bebc-080027880ca6:1-153,
9b09b44a-b907-11e4-bebd-080027880ca6:1,
ab8f5793-b907-11e4-bebd-080027880ca6:1-4

We can check if the slave has any errant transactions by executing the following SQL:

mysql> SELECT GTID_SUBSET('8a6962d2-b907-11e4-bebc-080027880ca6:1-153,ab8f5793-b907-11e4-bebd-080027880ca6:1-4', '8a6962d2-b907-11e4-bebc-080027880ca6:1-153, 9b09b44a-b907-11e4-bebd-080027880ca6:1, ab8f5793-b907-11e4-bebd-080027880ca6:1-2') as is_subset\G
*************************** 1. row ***************************
is_subset: 0
1 row in set (0.00 sec)

Looks like there are errant transactions. How do we identify them? We can use another function, GTID_SUBTRACT():

mysql> SELECT GTID_SUBTRACT('8a6962d2-b907-11e4-bebc-080027880ca6:1-153,ab8f5793-b907-11e4-bebd-080027880ca6:1-4', '8a6962d2-b907-11e4-bebc-080027880ca6:1-153, 9b09b44a-b907-11e4-bebd-080027880ca6:1, ab8f5793-b907-11e4-bebd-080027880ca6:1-2') as missing\G
*************************** 1. row ***************************
missing: ab8f5793-b907-11e4-bebd-080027880ca6:3-4
1 row in set (0.01 sec)

Our missing GTIDs are ab8f5793-b907-11e4-bebd-080027880ca6:3-4 – those transactions were executed on the slave but not on the master.
 
How to solve issues caused by errant transactions?
There are two ways – inject empty transactions or exclude transactions from GTID history.
To inject empty transactions we can use the following SQL:

mysql> SET gtid_next='ab8f5793-b907-11e4-bebd-080027880ca6:3';
Query OK, 0 rows affected (0.01 sec)

mysql> begin ; commit;
Query OK, 0 rows affected (0.00 sec)

Query OK, 0 rows affected (0.01 sec)

mysql> SET gtid_next='ab8f5793-b907-11e4-bebd-080027880ca6:4';
Query OK, 0 rows affected (0.00 sec)

mysql> begin ; commit;
Query OK, 0 rows affected (0.00 sec)

Query OK, 0 rows affected (0.01 sec)

mysql> SET gtid_next=automatic;
Query OK, 0 rows affected (0.00 sec)

This has to be executed on every host in the replication topology that does not have those GTIDs executed. If the master is available, you can inject those transactions there and let them replicate down the chain. If the master is not available (for example, it crashed), those empty transactions have to be executed on every slave. Oracle developed a tool called mysqlslavetrx which is designed to automate this process.
Another approach is to remove the GTIDs from history:

Stop the slave:

mysql> STOP SLAVE;

Print Executed_Gtid_Set on the slave:

mysql> SHOW MASTER STATUS\G

Reset the GTID info:

mysql> RESET MASTER;

Set GTID_PURGED to a correct GTID set, based on the data from SHOW MASTER STATUS, excluding the errant transactions:

mysql> SET GLOBAL GTID_PURGED='8a6962d2-b907-11e4-bebc-080027880ca6:1-153, 9b09b44a-b907-11e4-bebd-080027880ca6:1, ab8f5793-b907-11e4-bebd-080027880ca6:1-2';

Start the slave:

mysql> START SLAVE\G

In either case, you should verify the consistency of your slaves using pt-table-checksum and pt-table-sync (if needed) – errant transactions may result in data drift.

3 handy tools to remove problematic MySQL processes

DBAs often encounter situations where they need to kill long-running queries on a MySQL server before they impact performance. Long-running queries can be the result of many factors. Fortunately, Percona Server contains some handy tools to remove problematic MySQL processes. I will highlight all of the tools via some […]

The post 3 handy tools to remove problematic MySQL processes appeared first on MySQL Performance Blog.

Baron Schwartz Presents on Time-Series Data at Scale13X

This past weekend we had the pleasure of attending Scale13X in Los Angeles. It was full of great conversations with solid people, and it was nice to escape the winter cold for a couple days.

Baron Schwartz presented on VividCortex's approach to building a time-series database in MySQL. The talk is titled Scaling VividCortex's Big Data Systems on MySQL, and the recording can be seen here. If you are not automatically directed, Baron's presentation begins at 5:49:00.

Below are the slides and a brief description to pique your interest.

In this talk, Baron Schwartz discusses VividCortex’s unique time-series data requirements and implementation, including:

How we built a solution using MySQL and additional components such as Redis
Why we needed more than just MySQL to meet the requirements
The good and bad aspects of our architecture
Performance statistics
Thoughts for the future of our time-series data architecture

You will leave the talk with a greater understanding of MySQL’s capabilities related to time-series data.

Baron has presented a webinar on this before, and you can register for a copy of it here.
