Month: December 2016

Percona Live Featured Tutorial with Øystein Grøvlen — How to Analyze and Tune MySQL Queries for Better Performance

Welcome to another post in our series of Percona Live featured tutorial speaker blogs! In these blogs, we'll highlight some of the tutorial speakers who will be at this year's Percona Live conference. We'll also discuss how these tutorials can help you improve your database environment. Make sure to read to the end to get a special Percona Live 2017 registration bonus!
In this Percona Live featured tutorial, we'll meet Øystein Grøvlen, Senior Principal Software Engineer at Oracle. His tutorial is on How to Analyze and Tune MySQL Queries for Better Performance. SQL query performance plays a big role in application performance. If some queries execute slowly, these queries or the database schema may need tuning. I had a chance to speak with Øystein and learn a bit more about MySQL query tuning:
Percona: How did you get into database technology? What do you love about it?
Øystein: I got into database technology during my Ph.D. studies. I got in touch with a research group in Trondheim, Norway, that did research on highly available distributed database systems. I ended up writing a thesis on query processing in such database systems.
What I love most about my job on the MySQL Optimizer Team is that it involves a lot of problem-solving. Why is a query so slow? What can we do to improve it? I have always been very interested in sports results and statistics. Working with query execution times gives me much of the same feeling. Searching for information is another interest of mine, and that is really what query execution is about.
Percona: What impacts database performance the most?
Øystein: From my point of view – mainly concerned with the performance of read-only queries – the most important performance metric is how much data needs to be accessed in order to answer a query. For update-intensive workloads, it is often about concurrency issues. For SELECT statements, the main thing is to not access more data than necessary.
Users should make sure to design their database schema so that the database system can efficiently access the needed data. This includes creating the right indexes. As MySQL developers, we need to develop the right algorithms to support efficient retrieval. We also need to provide a query optimizer that can pick the best query execution plan.
Of course, there are other performance aspects that are important. Especially if your data cannot fit in your database buffer pool. In that case, the order in which you access the data becomes more important. The best query plan when your data is disk-bound is not necessarily the same as when all data is in memory.
Percona: Your tutorial is called “How to Analyze and Tune MySQL Queries for Better Performance.” What are the most recent MySQL updates that help with tuning queries?
Øystein: I think the biggest improvements came in MySQL 5.6, with increased observability through performance schema and new variants of EXPLAIN (structured EXPLAIN in JSON format, and visual EXPLAIN in MySQL Workbench). We also added Optimizer Trace, which gives insight into how the optimizer arrived at a certain query plan. All this made it easier to identify queries that need tuning, understand how a query is executed, and see what might be done to improve it.
In MySQL 5.7, we added a new syntax for optimizer hints, and provided a lot of new hints that can be used to influence the optimizer to change a non-optimal query plan. We also provided a query rewrite plugin that makes it possible to tune queries even when it is not possible to change the application.
MySQL 5.7 also came with improvements to EXPLAIN. It is now possible to get the query plan for a running query, and Structured EXPLAIN shows both estimated total query cost and the cost per table. A more experimental feature allows you to provide your own cost constants to the optimizer.  This way, you can configure the optimizer to better suit your particular system.
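As a brief, hedged illustration of these features (the table t1, column col1, and connection id 42 below are placeholders, not examples from the interview):

-- MySQL 5.6+: structured EXPLAIN in JSON format
EXPLAIN FORMAT=JSON SELECT * FROM t1 WHERE col1 = 10;

-- MySQL 5.6+: trace how the optimizer arrived at the plan for the next statement
SET optimizer_trace="enabled=on";
SELECT * FROM t1 WHERE col1 = 10;
SELECT TRACE FROM information_schema.OPTIMIZER_TRACE;

-- MySQL 5.7: optimizer hint syntax (here, disabling index condition pushdown for t1)
SELECT /*+ NO_ICP(t1) */ * FROM t1 WHERE col1 = 10;

-- MySQL 5.7: get the query plan of a statement running in another connection
EXPLAIN FOR CONNECTION 42;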
For MySQL 8.0 we are continuing to improve tunability by adding more optimizer hints.  At the same time, we are working hard on features that will reduce the need for tuning. Histograms and awareness of whether data is in memory or on disk make the optimizer able to pick better query plans.
Percona: What do you want attendees to take away from your tutorial session? Why should they attend?
Øystein: While the query optimizer in most cases will come up with a good query plan, there are some cases where it won't generate an optimal one. This tutorial will show how you can identify which queries need tuning, how you can further investigate the issues, and what types of tuning options you have for different types of queries. By attending this tutorial, you will learn how to improve the performance of applications through query tuning.
Percona: What are you most looking forward to at Percona Live?
Øystein: I am looking forward to interacting with MySQL users, discussing the query performance issues they might have, and learning how I can help with their issues.
You can find out more about Øystein Grøvlen and his work with databases at his blog, or follow him on Twitter: @ogrovlen. Want to find out more about Øystein and MySQL query optimization? Register for Percona Live Data Performance Conference 2017, and see his tutorial How to Analyze and Tune MySQL Queries for Better Performance. Use the code FeaturedTalk and receive $30 off the current registration price!
Percona Live Data Performance Conference 2017 is the premier open source event for the data performance ecosystem. It is the place to be for the open source community as well as businesses that thrive in the MySQL, NoSQL, cloud, big data and Internet of Things (IoT) marketplaces. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.
The Percona Live Data Performance Conference will be April 24-27, 2017 at the Hyatt Regency Santa Clara & The Santa Clara Convention Center.

Checking disk activity using iostat

I often use a monitoring system to look at disk activity,
but sometimes it’s nice to have a CLI tool to get stats in a different format. I use
iostat(1) for that. iostat has a bunch of different options, but I usually stick to
iostat -dxy.

Here’s the description for each option (from the man page):

-d Display the device utilization report.
-x Display extended statistics.
-y Omit first report with statistics since system boot, if displaying multiple records at given interval.

Example output:

$ iostat -dxy 1
Linux 4.4.0-53-generic 12/28/2016 _x86_64_ (1 CPU)

Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
vda 0.00 0.00 0.00 9.90 0.00 39.60 8.00 0.00 0.00 0.00 0.00 0.00 0.00

Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
vda 0.00 0.00 0.00 9.90 0.00 39.60 8.00 0.00 0.00 0.00 0.00 0.00 0.00

Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
vda 0.00 0.00 0.00 10.10 0.00 40.40 8.00 0.00 0.00 0.00 0.00 0.00 0.00

Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
vda 0.00 20.79 0.00 11.88 0.00 130.69 22.00 0.00 0.00 0.00 0.00 0.00 0.00

Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
vda 0.00 0.00 0.00 83.84 0.00 420.20 10.02 0.23 2.80 0.00 2.80 0.10 0.81

Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
vda 0.00 0.00 0.00 10.00 0.00 40.00 8.00 0.00 0.00 0.00 0.00 0.00 0.00

The 1 in the example is the interval. It’s optional, but if you don’t provide it, iostat will
show you stats since system boot, which isn’t very useful. You can also pass an optional count
after the interval. For example, iostat -dxy 3 2 will print 3-second averages twice and exit.

By the way, if you want to read up on %util (the last column), check out this blog post:
Why %util number from iostat is meaningless for MySQL capacity planning.
The comments are good too.

FOSDEM’17 MySQL & Friends Devroom, the schedule

This year, the MySQL & Friends Devroom will be held on Saturday 4th February 2017.
After having received a lot of submissions (the highest number ever!), together with the MySQL Community Team we decided to organize an extra MySQL day that will take place at ICAB Incubator, 4 Rue des Pères Blancs in Brussels (same location as the MySQL Community Dinner). More to come on this day in a future blog post.
So, back to the MySQL & Friends Devroom: the Committee rated the following talks highest:
Saturday

10:00 - 10:30  MySQL & Friends Devroom: Welcome (Frédéric Descamps)
10:30 - 10:50  Optimizing MySQL without SQL or touching my.cnf (Maxim Bublis)
11:00 - 11:20  Applying profilers to MySQL: From PMP to perf, and why performance_schema is not a replacement in all cases (Valerii Kravchuk)
11:30 - 11:50  Instrumenting plugins for Performance Schema (markleith)
12:05 - 12:25  sysbench 1.0: teaching an old dog new tricks (Alexey Kopytov)
12:35 - 12:55  Introducing gh-ost: GitHub’s triggerless, painless schema migrations for MySQL (Shlomi Noach)
13:05 - 13:25  Autopsy of an automation disaster (Jean-François Gagné)
13:35 - 13:55  Honeypot your database: an easy method to detect if you’ve been hacked (Georgi Kodinov (Joro))
14:05 - 14:25  The Proxy Wars – MySQL Router, ProxySQL, MariaDB MaxScale (Colin Charles)
14:35 - 14:55  Painless MySQL HA, Scalability and Flexibility with Ansible, MHA and ProxySQL (Miklos Mukka Szel)
15:05 - 15:25  Inexpensive Datamasking for MySQL with ProxySQL: data anonymization for developers (Frédéric Descamps, René Cannaò)
15:35 - 15:55  Speeding Up Analytical Queries in MariaDB: CTEs and Window Functions Highlight (Vicențiu Ciorbaru)
16:05 - 16:25  Data Analytics with MySQL, Apache Spark and Apache Drill (Alexander Rubin)
16:35 - 16:55  Group Replication: A Journey to the Group Communication Core (Alfranio)
17:05 - 17:25  Fine tuning Group Replication for performance (Vitor Oliveira)
17:35 - 17:55  Mix ‘n’ Match Async and Group Replication for Advanced Replication Setups (Pedro Gomes)
18:05 - 18:25  MyRocks: the RocksDB storage engine for MySQL (Mark Callaghan)
18:30 - 18:50  MyRocks in production at Facebook (Yoshinori Matsunobu)
 
Update: After each talk, there are 5 minutes for questions, plus a 5-minute buffer to swap speakers and handle the video recording.

Using Percona XtraBackup on a MySQL Instance with a Large Number of Tables

In this blog post, we’ll find out how to use Percona XtraBackup on a MySQL instance with a large number of tables.
As of Percona XtraBackup 2.4.5, you are required to have enough open files to open every single InnoDB tablespace in the instance you're trying to back up. So if you're running innodb_file_per_table=1 and have a large number of tables, you're very likely to see Percona XtraBackup fail with the following error message:
InnoDB: Operating system error number 24 in a file operation.
InnoDB: Error number 24 means 'Too many open files'
InnoDB: Some operating system error numbers are described at http://dev.mysql.com/doc/refman/5.7/en/operating-system-error-codes.html
InnoDB: File ./sbtest/sbtest132841.ibd: 'open' returned OS error 124. Cannot continue operation
InnoDB: Cannot continue operation.
If you run into this issue, here is what you need to do:
Find out how many files you need:
root@ts140i:~# find /var/lib/mysql/ -name "*.ibd" | wc -l
1000005
I would add at least another 1000 to this number for the system tablespace and other miscellaneous open file needs. You might want to go even higher to accommodate a growing number of tables.
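As a rough cross-check, here is a sketch assuming MySQL 5.6/5.7, where InnoDB exposes its tablespace list through information_schema; it counts the file-per-table tablespaces from inside the server:

-- requires the PROCESS privilege
SELECT COUNT(*) FROM information_schema.INNODB_SYS_TABLESPACES;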
Check the maximum number of files that can be kept open system-wide. If this number is too small, Percona XtraBackup might monopolize the open files in the system, causing other processes to fail when they try to open files. This can even cause MySQL Server to crash.
root@ts140i:/mnt/data/backup# cat /proc/sys/fs/file-max
3262006
If you need to, here is how to increase the number:
sysctl -w fs.file-max=5000000
echo "fs.file-max=5000000" >> /etc/sysctl.conf
Increase the limit on the number of files the Percona XtraBackup process can open:
The best way to do this is with the --open-files-limit option. For example, you can specify the following in your my.cnf:
[xtrabackup]
open-files-limit=2000000
Alternatively, you can pass it as a command-line option, or run ulimit -n 2000000 before running the backup command.
You need to be sure your user account has permission to set the open files limit this high. If you are doing backups under the "root" user, it shouldn't be a problem. Otherwise, you might need to adjust the limits in /etc/security/limits.conf:
mysql hard nofile 2000000
mysql soft nofile 2000000
Specifying a "soft" limit in this file eliminates the need to run ulimit before Percona XtraBackup, or to specify it in the configuration.
There is one more possible limit to overcome. Even running as a root user, you might get the following error message:
root@ts140i:/mnt/data/backup# ulimit -n 2000000
-su: ulimit: open files: cannot modify limit: Operation not permitted
If this happens, you might need to increase the kernel limit on the number of open files any single process can have:
pz@ts140i:~$ cat /proc/sys/fs/nr_open
1048576
The limit I have on this system is slightly above 1 million. You can increase it using the following:
sysctl -w fs.nr_open=2000000
echo "fs.nr_open=2000000" >> /etc/sysctl.conf
With these configuration adjustments, you should be able to use Percona XtraBackup to back up MySQL instances containing millions of tables without problems.
What if you can't allow Percona XtraBackup to open that many files? Then there is the --close-files option, which normally does not require increasing the limit on the number of open files. Using this option, however, might cause backup corruption if you're doing DDL operations during the backup.
Where does this strange limitation, requiring you to keep all tablespaces open, come from? It comes from this issue. In some cases, DDL operations such as RENAME TABLE might cause the wrong file to be copied, and the mistake cannot be corrected by replaying the InnoDB redo logs. Keeping the file open clearly shows which file corresponds to a given tablespace at the start of the backup process, so it gets handled correctly.
This problem is not unique to Percona XtraBackup. If anything, Percona XtraBackup goes the extra mile to ensure database backups are safe. For comparison, MySQL Enterprise Backup 4.0 simply states:
“Do not run the DDL operations ALTER TABLE, TRUNCATE TABLE, OPTIMIZE TABLE, REPAIR TABLE, RESTORE TABLE or CREATE INDEX while a backup operation is going on. The resulting backup might become corrupted.”

MSSQL to MySQL Data Migration using MySQL Workbench 6.3

Recently I was testing data migration from MSSQL to MySQL using MySQL Workbench. My aim was to include data covering all the datatypes available in MSSQL. In the following blog post, we will look at the data migration prerequisites, the migration steps, and a few common errors.
About MySQL Workbench
Prerequisite
DATA SOURCE ODBC Configuration
Required privileges on MS-SQL DB
DB Servers Data Types Mapping
Sample MS-SQL data
Migration using MySQL Workbench
Data and Log analysis

About MySQL Workbench
MySQL Workbench is a unified visual tool for database architects, developers, and DBAs. MySQL Workbench provides data modeling, SQL development, and comprehensive administration tools for server configuration, user administration, backup, Data migration and much more. MySQL Workbench is available on Windows, Linux and Mac OS X.
When a supported RDBMS product is being migrated, the MySQL Workbench Migration Wizard will automatically convert as much information as it can, but you may still be required to manually edit the automatically migrated schema for difficult cases, or when the default mapping is not as desired.
Generally speaking, only table information and its data are automatically converted to MySQL. Code objects such as views, stored procedures, and triggers are not. However, they will be retrieved from supported RDBMS products and displayed in the wizard, so you can convert them manually or save them for converting at a later time.
The following MS-SQL versions are currently tested and supported by the MySQL Workbench Migration Wizard.
Microsoft SQL Server 2000, 2005, 2008, 2012

Prerequisite
Download and install MySQL Workbench GUI tool.
The MySQL Workbench Migration Wizard uses ODBC to connect to a source database, except for MySQL. You will need the ODBC driver installed that corresponds to the database you want to migrate from.
Preparation
To be able to migrate from Microsoft SQL Server, ensure the following:
The source SQL Server instance is running, and accepts TCP connections.
You know the IP and port of the source SQL server instance. If you will be migrating using a Microsoft ODBC driver for SQL Server (the default in Windows), you will need to know the host and the name of the SQL Server instance.
Make sure that the SQL Server is reachable from where you will be running MySQL Workbench. More specifically, check the firewall settings.
Make sure that the user account has proper privileges to the database that will be migrated.

DATA SOURCE ODBC Configuration
In order to set up connectivity between MSSQL and MySQL, we need to configure an ODBC data source with the MS-SQL connection information.
Add a new System data source.

Required privileges on MS-SQL DB
CONNECT SQL
VIEW ANY DATABASE
VIEW ANY DEFINITION
If the proper permissions are not granted, the wizard will throw a warning at migration time.
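A hedged T-SQL sketch of granting these server-level permissions (the login name migration_user is a placeholder; run it in the master database with sysadmin rights):

USE master;
GO
GRANT CONNECT SQL TO [migration_user];
GRANT VIEW ANY DATABASE TO [migration_user];
GRANT VIEW ANY DEFINITION TO [migration_user];
GO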

DB Servers Data Types Mapping
Refer:
https://dev.mysql.com/doc/workbench/en/wb-migration-database-mssql-typemapping.html
Available datatypes in MS-SQL:
https://msdn.microsoft.com/en-us/library/ms187752.aspx

Sample MS-SQL data
Sample tables and data with different MS-SQL datatypes:

-- integer data types

CREATE TABLE dbo.int_table
(
MyBigIntColumn bigint
,MyIntColumn  int
,MySmallIntColumn smallint
,MyTinyIntColumn tinyint
);

ALTER TABLE int_table ADD MyBitColumn bit;

ALTER TABLE int_table ADD CONSTRAINT bit_def DEFAULT 1 FOR MyBitColumn;

-- column list needed: the table now has five columns, but only four values are supplied
INSERT INTO dbo.int_table (MyBigIntColumn, MyIntColumn, MySmallIntColumn, MyTinyIntColumn)
VALUES (9223372036854775807, 214483647, 32767, 255);

UPDATE int_table SET MyBitColumn = 1;

ALTER TABLE int_table ALTER COLUMN MyBitColumn bit NOT NULL;

-- decimal and numeric datatypes

CREATE TABLE dbo.num_table
(
MyDecimalColumn decimal(5,2)
,MyNumericColumn numeric(10,5)
);

INSERT INTO dbo.num_table VALUES (123, 12345.12);

Alter table num_table add MyMoneycolumn money;

Alter table num_table add MysmallMoneycolumn smallmoney;

INSERT INTO num_table (MyMoneyColumn,MysmallMoneycolumn) values(20.098,45.68);

alter table num_table add c_real real,c_float float (32);

INSERT INTO num_table (c_real,c_float) values(2.0,43.67897);

-- datetime datatypes

create table date_table(
ID               VARCHAR(4)         NOT NULL,
First_Name         VARCHAR(20),
Last_Name          VARCHAR(20),
Start_Date         DATE,
End_Date           DATE,
c_time             Time,
Salary             Money,
City               VARCHAR(20),
Description        VARCHAR(80),
c_datetime             datetime,
cs_smalldatetime    smalldatetime,
c_datetime2         datetime2
)

ALTER TABLE date_table ADD CONSTRAINT cc_datetime DEFAULT GETDATE() FOR c_datetime;

ALTER TABLE date_table ADD CONSTRAINT cc_time DEFAULT convert(time, getdate()) FOR c_time;

ALTER TABLE date_table ADD CONSTRAINT cc_startdate DEFAULT convert(date,getdate()) FOR start_date;

ALTER TABLE date_table ADD CONSTRAINT cc_enddate DEFAULT convert(date,getdate()) FOR end_date;

-- prepare data

insert into date_table(ID, First_Name, Last_Name, Salary, City, Description, cs_smalldatetime, c_datetime2)
values ('01','Jason','Martin', 1234.56, 'Toronto', 'Programmer', '2007-05-08 12:35:00', '2007-05-08 12:35:29.1234567');

insert into date_table(ID, First_Name, Last_Name, Salary, City, Description, cs_smalldatetime, c_datetime2)
values ('02','Alison','Mathews', 2234.78, 'Vancouver', 'Tester', '2016-07-08 12:36:00', '2006-07-08 12:36:29.1234567');

-- char, varchar, nvarchar, tinyint, int, text datatypes

CREATE TABLE [dbo].[Employee_2](
[Id] [int] IDENTITY(1,1) NOT NULL,
[Name] [nvarchar](150) NULL,
[empid] [int] NOT NULL,
[age] [tinyint] NOT NULL,
[gender] VARCHAR(10) NOT NULL CHECK (gender IN('male', 'Female', 'Unknown'))
);

Alter table Employee_2 add primary key (Id);

-- Create a nonclustered index on a table or view

CREATE INDEX i1 ON dbo.Employee_2 (Name);

-- Create a clustered index on the table

CREATE CLUSTERED INDEX ci1 ON Employee_2([Id]);

CREATE UNIQUE INDEX pk1 ON dbo.Employee_2 (Id);

-- Create a nonclustered index with a unique constraint on 3 columns and specify the sort order for each column

CREATE UNIQUE INDEX ui1 ON dbo.Employee_2 (name DESC, empid ASC, age DESC);

INSERT INTO Employee_2 (Name,empid,age,gender) values ('lalit',268981,27,'male');

INSERT INTO Employee_2 (Name,empid,age,gender) values ('harsh',268982,28,'male');

INSERT INTO Employee_2 (Name,empid,age,gender) values ('jina',268983,27,'female');

INSERT INTO Employee_2 (Name,empid,age,gender) values ('xyz',268984,32,'Unknown');

ALTER table employee_2 add emp_notes2 text;

update employee_2 SET emp_notes2='test data Migration from mssql- mysql';

CREATE TABLE Persons
(
P_Id int NOT NULL,
Lastname varchar(40),
Firstname varchar(40) NOT NULL,
Address varchar(100),
City char(50),
PRIMARY KEY (P_Id)
);

INSERT INTO Persons values(1,'c','lalit','IT park','PUNE');

CREATE TABLE Orders
(
O_Id int NOT NULL,
OrderNo int NOT NULL,
P_Id int,
PRIMARY KEY (O_Id),
CONSTRAINT fk_PerOrders FOREIGN KEY (P_Id)
REFERENCES Persons(P_Id)
);

INsert INTO orders values(1,2234,1);

create table binary_table (c_binary binary , c_varbinary varbinary (max) ,c_image image);

INSERT INTO binary_table (c_varbinary)
values (convert(VARBINARY(max),44));

Few extra tables MS-SQL Table:
[dbo].[Altiris, Inc_$Item] table with 20686 rows

CREATE TABLE [dbo].[Altiris, Inc_$Item](
[No_] [varchar](20) NOT NULL,
[Description] [varchar](100) NOT NULL,
[Description 2] [varchar](50) NULL,
[Blocked] [tinyint] NOT NULL,
[Last Date Modified] [datetime] NULL,
[Inactive] [tinyint] NOT NULL,
[Fixed Node] [tinyint] NOT NULL,
[Minimum Nodes] [int] NOT NULL,
[Maximum Nodes] [int] NOT NULL,
[Tax Group Code] [varchar](10) NOT NULL,
[Current Price List] [tinyint] NOT NULL,
[PrimeKey] [int] IDENTITY(1,1) NOT NULL,
CONSTRAINT [primeKey_FK1] PRIMARY KEY CLUSTERED
(
[PrimeKey] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

[dbo].[Employee]

CREATE TABLE [dbo].[Employee](
[Id] [int] IDENTITY(1,1) NOT NULL,
[Name] [nvarchar](150) NULL,
CONSTRAINT [PK_Employee] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

[dbo].[CUSTOMERS]

CREATE TABLE [dbo].[CUSTOMERS](
[ID] [int] NOT NULL,
[NAME] [varchar](20) NOT NULL,
[AGE] [int] NOT NULL,
[ADDRESS] [char](25) NULL,
[SALARY] [decimal](18, 2) NULL,
PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

 
[dbo].[Products]

CREATE TABLE [dbo].[Products](
[ProductID] [int] NOT NULL,
[ProductName] [varchar](25) NOT NULL,
[Price] [money] NULL,
[ProductDescription] [text] NULL,
[c_time] [datetime] NOT NULL,
PRIMARY KEY CLUSTERED
(
[ProductID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
ALTER TABLE [dbo].[Products] ADD  CONSTRAINT [DF_YourTable]  DEFAULT (getdate()) FOR [c_time]
GO

Note: Insert appropriate data into the above tables for the respective datatypes.

Migration using MySQL Workbench

MS-SQL Connection test:
NOTE: The connection method 'ODBC Data Source' will cause the following error. To avoid this error during data migration, use the connection method 'ODBC Data Source [FreeTDS]'.

MySQL Connection test:
Fetching schema:
Schema selection and DB naming settings:
Reverse Engineering:
Source object list:
Migration:
Manual editing (if needed):
NOTE: We will fix these warnings in the next steps.
Target creation options:
Create schemas:
Create target result:
As we have seen, there are warnings for date_table such as 'Default value CONVERT(...) is not supported'. In MySQL we cannot use functions as column defaults, so to tackle this problem we will create a trigger on the table for the insert operation, as follows:

-- Emulate the unsupported defaults: only fill in the columns that were not supplied by the INSERT
CREATE TRIGGER before_insert_date_table
BEFORE INSERT ON date_table
FOR EACH ROW
SET NEW.Start_Date = IFNULL(NEW.Start_Date, CURDATE()),
NEW.End_Date = IFNULL(NEW.End_Date, CURDATE()),
NEW.c_time = IFNULL(NEW.c_time, CURTIME());

The DEFAULT value clause in a data type specification indicates a default value for a column. With one exception, the default value must be a constant; it cannot be a function or an expression. This means, for example, that you cannot set the default for a date column to be the value of a function such as NOW() or CURRENT_DATE. The exception is that you can specify CURRENT_TIMESTAMP as the default for TIMESTAMP and DATETIME columns.
https://dev.mysql.com/doc/refman/5.7/en/data-type-defaults.html
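For example, here is a minimal sketch of that documented exception (the table name default_example is purely illustrative):

-- Allowed: CURRENT_TIMESTAMP as the default for a TIMESTAMP or DATETIME column
CREATE TABLE default_example (
id INT NOT NULL,
c_datetime DATETIME DEFAULT CURRENT_TIMESTAMP
);
-- A default such as "Start_Date DATE DEFAULT CURDATE()" is rejected by MySQL 5.7,
-- which is why the BEFORE INSERT trigger above is used for the DATE and TIME columns.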
Another table that we need to fix is  Employee_2
MS-SQL Employee_2 table column definition:

[gender] VARCHAR(10) NOT NULL CHECK (gender IN(‘male’, ‘Female’, ‘Unknown’))

MySQL Employee_2 table column definition:
A more appropriate datatype for the 'gender' column is ENUM. Let's change it (the corresponding ALTER statement is sketched after the new definition):

`gender` enum('Male','Female','Unknown') NOT NULL default 'Unknown',
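A hedged sketch of that change on the migrated MySQL table (existing values such as 'male' match the ENUM case-insensitively under the migrated utf8_general_ci collation):

ALTER TABLE Employee_2
MODIFY COLUMN gender ENUM('Male','Female','Unknown') NOT NULL DEFAULT 'Unknown';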

Data transfer setup:
Bulk data transfer:

Data and Log analysis
Migration Report:

————————————————————————————

MySQL Workbench Migration Wizard Report

Source: Microsoft SQL Server 12.0.4100

Target: MySQL 5.7.15

————————————————————————————
— Migration

— Summary

Number of migrated schemas: 1

— mysql_migration

Source Schema:   mysql_migration

– Tables:             11

– Triggers:           0

– Views:              4

– Stored Procedures:  3

– Functions:          0

— Migration Issues

– mysql_migration

note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci

– CUSTOMERS

note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci

– NAME

note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci

– ADDRESS

note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci

– Products

note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci

– ProductName

note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci

– ProductDescription

note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci

– c_time

note  Default value is getdate(), so type was changed from DATETIME to TIMESTAMP

– Employee

note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci

– Name

note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci

– Employee_2

note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci

– Name

note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci

– gender

note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci

– emp_notes

note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci

– emp_notes2

note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci

– Persons

note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci

– Lastname

note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci

– Firstname

note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci

– Address

note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci

– City

note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci

– Orders

note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci

– int_table

note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci

– MyBitColumn

note  Source column type BIT was migrated to TINYINT(1)

– num_table

note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci

– date_table

note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci

– ID

note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci

– First_Name

note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci

– Last_Name

note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci

– Start_Date

warning  Default value CONVERT([date],getdate(),0) is not supported

– End_Date

warning  Default value CONVERT([date],getdate(),0) is not supported

– c_time

warning  Default value CONVERT([time],getdate(),0) is not supported

– City

note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci

– Description

note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci

– c_datetime

note  Default value is getdate(), so type was changed from DATETIME to TIMESTAMP

– binary_table

note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci

– Altiris, Inc_$Item

note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci

– No_

note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci

– Description

note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci

– Description 2

note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci

– Tax Group Code

note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci

— Object Creation Issues

— Migration Details

4.1. Table mysql_migration.CUSTOMERS (CUSTOMERS)

Columns:

– ID INT

– NAME VARCHAR(20)

– AGE INT

– ADDRESS CHAR(25)

– SALARY DECIMAL(18,2)

Foreign Keys:

Indices:

– PRIMARY (ID)

4.2. Table mysql_migration.Products (Products)

Columns:

– ProductID INT

– ProductName VARCHAR(25)

– Price DECIMAL(19,4)

– ProductDescription LONGTEXT

– c_time TIMESTAMP  CURRENT_TIMESTAMP

Foreign Keys:

Indices:

– PRIMARY (ProductID)

4.3. Table mysql_migration.Employee (Employee)

Columns:

– Id INT

– Name VARCHAR(150)

Foreign Keys:

Indices:

– PRIMARY (Id)

4.4. Table mysql_migration.Employee_2 (Employee_2)

Columns:

– Id INT

– Name VARCHAR(150)

– empid INT

– age TINYINT UNSIGNED

– gender VARCHAR(10)

– emp_notes LONGTEXT

– emp_notes2 LONGTEXT

Foreign Keys:

Indices:

– PRIMARY (Id)

– ci1 (Id)

– i1 (Name)

– pk1 (Id)

– ui1 (Name, empid, age)

4.5. Table mysql_migration.Persons (Persons)

Columns:

– P_Id INT

– Lastname VARCHAR(40)

– Firstname VARCHAR(40)

– Address VARCHAR(100)

– City CHAR(50)

Foreign Keys:

Indices:

– PRIMARY (P_Id)

4.6. Table mysql_migration.Orders (Orders)

Columns:

– O_Id INT

– OrderNo INT

– P_Id INT

Foreign Keys:

– fk_PerOrders (P_Id) ON Persons (P_Id)

Indices:

– PRIMARY (O_Id)

4.7. Table mysql_migration.int_table (int_table)

Columns:

– MyBigIntColumn BIGINT

– MyIntColumn INT

– MySmallIntColumn SMALLINT

– MyTinyIntColumn TINYINT UNSIGNED

– MyBitColumn TINYINT(1)  1

Foreign Keys:

Indices:

4.8. Table mysql_migration.num_table (num_table)

Columns:

– MyDecimalColumn DECIMAL(5,2)

– MyNumericColumn DECIMAL(10,5)

– MyMoneycolumn DECIMAL(19,4)

– MysmallMoneycolumn DECIMAL(10,4)

– c_real FLOAT(24,0)

– c_float DOUBLE

Foreign Keys:

Indices:

4.9. Table mysql_migration.date_table (date_table)

Columns:

– ID VARCHAR(4)

– First_Name VARCHAR(20)

– Last_Name VARCHAR(20)

– Start_Date DATE

– End_Date DATE

– c_time TIME(6)

– Salary DECIMAL(19,4)

– City VARCHAR(20)

– Description VARCHAR(80)

– c_datetime TIMESTAMP  CURRENT_TIMESTAMP

– cs_smalldatetime DATETIME

– c_datetime2 DATETIME(6)

Foreign Keys:

Indices:

4.10. Table mysql_migration.binary_table (binary_table)

Columns:

– c_binary BINARY(1)

– c_varbinary LONGBLOB

– c_image LONGBLOB

Foreign Keys:

Indices:

4.11. Table mysql_migration.Altiris, Inc_$Item (Altiris, Inc_$Item)

Columns:

– No_ VARCHAR(20)

– Description VARCHAR(100)

– Description 2 VARCHAR(50)

– Blocked TINYINT UNSIGNED

– Last Date Modified DATETIME(6)

– Inactive TINYINT UNSIGNED

– Fixed Node TINYINT UNSIGNED

– Minimum Nodes INT

– Maximum Nodes INT

– Tax Group Code VARCHAR(10)

– Current Price List TINYINT UNSIGNED

– PrimeKey INT

Foreign Keys:

Indices:

– PRIMARY (PrimeKey)

— Data Copy

– `mysql_migration`.`Employee`

Succeeded : copied 2 of 2 rows from [mysql_migration].[dbo].[Employee]

– `mysql_migration`.`binary_table`

Succeeded : copied 1 of 1 rows from [mysql_migration].[dbo].[binary_table]

– `mysql_migration`.`num_table`

Succeeded : copied 4 of 4 rows from [mysql_migration].[dbo].[num_table]

– `mysql_migration`.`Products`

error  `mysql_migration`.`Products`:malloc(1073741824) failed for blob transfer buffer
error  `mysql_migration`.`Products`:Failed copying 2 rows
Succeeded : copied 2 of 2 rows from [mysql_migration].[dbo].[Products]

– `mysql_migration`.`int_table`

Succeeded : copied 1 of 1 rows from [mysql_migration].[dbo].[int_table]

– `mysql_migration`.`Orders`

Succeeded : copied 1 of 1 rows from [mysql_migration].[dbo].[Orders]

– `mysql_migration`.`Altiris, Inc_$Item`

Succeeded : copied 20686 of 20686 rows from [mysql_migration].[dbo].[Altiris, Inc_$Item]

– `mysql_migration`.`Employee_2`

Succeeded : copied 4 of 4 rows from [mysql_migration].[dbo].[Employee_2]
error  `mysql_migration`.`Employee_2`:Could not allocate 1073741824 bytes for row buffer column of emp_notes2 ntext 252
error  `mysql_migration`.`Employee_2`:Failed copying 4 rows

– `mysql_migration`.`CUSTOMERS`

Succeeded : copied 6 of 6 rows from [mysql_migration].[dbo].[CUSTOMERS]

– `mysql_migration`.`date_table`

Succeeded : copied 2 of 2 rows from [mysql_migration].[dbo].[date_table]

– `mysql_migration`.`Persons`

Succeeded : copied 1 of 1 rows from [mysql_migration].[dbo].[Persons]

Validate the data in the MySQL database, and you're all set!

GNW05 – Extending Databases with Hadoop video (plus GNW06 dates)

In case you missed this webinar, here’s a 1.5h holiday video about how Gluent “turbocharges” your databases with the power of Hadoop – all this without rewriting your applications 🙂 Also, you can already sign up for the next webinar here: …
