Month: September 2015

Prepping your MySQL indexes for a character set change

When changing a MySQL table or column to a new character set that uses more bytes than the old character set, you first need to check whether any schema changes are needed to accommodate the change. For example, changing character sets from latin1 to utf8 increases the maximum size from 1 to 3 bytes per character, and changing from utf8 to utf8mb4 increases it from 3 to 4 bytes per character. The MySQL reference manual has a helpful page with some details on this, but I want to add some examples to show how this schema prep can be accomplished.
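
These per-character sizes can be confirmed directly from information_schema on the server you are changing; for example:

```
-- maximum bytes per character for the character sets mentioned above
select character_set_name, maxlen
from information_schema.character_sets
where character_set_name in ('latin1', 'utf8', 'utf8mb4');
```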

There are three different types of length limits to take into consideration:

Index
Column
Row

In this post I will focus on index length limits, and I’ll save columns and rows for future posts. Read on for details.

Index Length Limits

The specific limits depend on which MySQL version and storage engine are used, and whether innodb_large_prefix is enabled (assuming MySQL 5.5 or higher). If you want to learn more about innodb_large_prefix you may want to read this post. A common example here is an index on a varchar(255) column in the utf8 character set without innodb_large_prefix. That index uses 765+2=767 bytes, so it conforms to the 767-byte limit per column of an InnoDB index without innodb_large_prefix. If I change that column to utf8mb4 then the index requires 1022 bytes, which causes the DDL to fail.

Here’s an example:

```
mysql> create table if not exists varchar_test (
    -> id int auto_increment primary key,
    -> str varchar(255)
    -> ) engine = InnoDB DEFAULT CHARSET=utf8;
Query OK, 0 rows affected (0.08 sec)

mysql> alter table varchar_test add index str_index (str);
Query OK, 0 rows affected (0.15 sec)
Records: 0 Duplicates: 0 Warnings: 0

mysql> alter table varchar_test DEFAULT CHARSET=utf8mb4;
Query OK, 0 rows affected (0.02 sec)
Records: 0 Duplicates: 0 Warnings: 0

mysql> alter table varchar_test modify column str varchar(255) CHARACTER SET utf8mb4;
ERROR 1709 (HY000): Index column size too large. The maximum column size is 767 bytes.
```
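
As an aside, on MySQL 5.6 and 5.7 the same ALTER can succeed if large index prefixes are allowed, which requires innodb_large_prefix together with a DYNAMIC or COMPRESSED row format. Here is a minimal sketch, assuming innodb_file_per_table is enabled; exact requirements vary by version:

```
-- sketch: allow large index prefixes so the 767-byte per-column limit no longer applies
set global innodb_file_format = Barracuda;  -- needed on 5.5/5.6; deprecated in later versions
set global innodb_large_prefix = ON;
alter table varchar_test row_format = DYNAMIC;
alter table varchar_test modify column str varchar(255) CHARACTER SET utf8mb4;
```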

For the record, here are the index length limits I need to worry about in a character set change:

InnoDB indexes are limited to 3072 bytes in total
MyISAM indexes are limited to 1000 bytes
Without innodb_large_prefix, each InnoDB index column is limited to 767 bytes (see the check below)
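
To see whether the 767-byte or 3072-byte InnoDB per-column limit applies on a given server, a quick look at the variable is enough:

```
-- OFF by default before MySQL 5.7.7, ON from 5.7.7 onward
show variables like 'innodb_large_prefix';
```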

In order to identify all of the indexes that I need to change, I run the following two queries against information_schema. (I tried to add the correct size for all relevant data types, but in case I forgot one or a new data type is added, I put in a large default value that returns false positives so I can go in and fix the query.) These queries can easily be modified for a change to another character set such as utf8, utf16, utf32, etc.

Query #1:

```
-- Show me all index columns that will exceed the length limit if I change to utf8mb4
set @new_character_set = 'utf8mb4';

select table_schema, table_name, engine, index_name, column_name, column_type,
  sub_part, index_column_length_in_bytes
from
(
  select t.table_schema,t.table_name,t.engine,s.index_name,c.column_name,
    c.column_type,c.data_type,c.character_maximum_length,c.character_octet_length,
    s.sub_part,c.character_set_name,cs.maxlen,
    coalesce(
      (
        case
          when s.sub_part is null then (cs.maxlen * c.character_maximum_length)
          else (cs.maxlen * s.sub_part)
        end
      ),
      (case (c.data_type)
        when 'tinyint' then 1
        when 'smallint' then 2
        when 'mediumint' then 3
        when 'int' then 4
        when 'bigint' then 8
        when 'decimal' then 4
        when 'float' then 4
        when 'year' then 1
        when 'date' then 3
        when 'datetime' then 8
        when 'time' then 3
        when 'timestamp' then 4
        else 1000000
      end)) as index_column_length_in_bytes
  from information_schema.statistics s
  inner join information_schema.columns c on c.table_schema = s.table_schema
    and c.table_name = s.table_name
    and c.column_name = s.column_name
  inner join information_schema.tables t on t.table_schema = c.table_schema
    and t.table_name = c.table_name
  inner join information_schema.character_sets cs on cs.character_set_name = @new_character_set
  where s.index_type != 'FULLTEXT'
) sub_query
where index_column_length_in_bytes > (
  select case
    when engine = 'MyISAM' then 1000
    when engine = 'InnoDB' and (select max(innodb_large_prefix) from (
      select variable_value='ON' as innodb_large_prefix
      from information_schema.global_variables
      where variable_name = 'innodb_large_prefix'
      union
      select 0 as innodb_large_prefix
    ) sub_query) = 1 then 3072
    else 767
  end
)
order by table_schema, table_name, index_name;
```

Query #2:

```
-- Show me all indexes that will exceed the length limit if I change to utf8mb4
set @new_character_set = 'utf8mb4';

select table_schema, table_name, engine, index_name, columns_in_index, index_length_in_bytes
from
(
  select t.table_schema,t.table_name,t.engine,s.index_name,
    group_concat(concat(s.column_name,
      case
        when s.sub_part is null
        then ''
        else concat('(',s.sub_part,')')
      end
    ) order by s.seq_in_index) as columns_in_index,
    sum(coalesce(
      (
        case
          when s.sub_part is null
          then (cs.maxlen * c.character_maximum_length)
          else (cs.maxlen * s.sub_part)
        end
      ),
      (
        case (c.data_type)
          when 'tinyint' then 1
          when 'smallint' then 2
          when 'mediumint' then 3
          when 'int' then 4
          when 'bigint' then 8
          when 'decimal' then 4
          when 'float' then 4
          when 'year' then 1
          when 'date' then 3
          when 'datetime' then 8
          when 'time' then 3
          when 'timestamp' then 4
          else 1000000
        end
      )
    )) as index_length_in_bytes
  from information_schema.statistics s
  inner join information_schema.columns c on c.table_schema = s.table_schema
    and c.table_name = s.table_name
    and c.column_name = s.column_name
  inner join information_schema.tables t on t.table_schema = c.table_schema
    and t.table_name = c.table_name
  inner join information_schema.character_sets cs on cs.character_set_name = @new_character_set
  where s.index_type != 'FULLTEXT'
  group by t.table_schema,t.table_name,t.engine,s.index_name
) sub_query
where index_length_in_bytes > (
  select case
    when engine = 'MyISAM' then 1000
    else 3072
  end
)
order by table_schema, table_name, index_name;
```

Once I identify the affected indexes, I need to resolve each one by shortening the index prefix (sub_part) of one or more columns in the index, removing columns from the index, decreasing the length of the column, or deciding not to change the character set of the relevant tables/columns.
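
For example, here is a sketch of the first approach applied to the varchar_test table from above: index only the first 191 characters of str so the indexed prefix stays under 767 bytes after the conversion (191 x 4 = 764 bytes).

```
-- shorten the index prefix so the utf8mb4 conversion fits under the 767-byte limit
alter table varchar_test drop index str_index, add index str_index (str(191));
alter table varchar_test modify column str varchar(255) CHARACTER SET utf8mb4;
```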

Using Docker to Visualize MySQL Performance Schema

Last week, I was pleased to present at Percona Live Amsterdam 2015 regarding the importance of Performance Schema and how to begin approaching visualizing the data that is available to diagnose performance issues and fine-tune your MySQL environment. You can download the slides from the Percona Live conference page.

The session highlighted using the ELK (Elasticsearch+Logstash+Kibana) stack to assist in visualizing event data, in addition to Graphite+Grafana to visualize time-series data.

As many people who attend conferences are aware, the availability of internet access is sketchy at best. To combat this, the visualization stack that I created was strictly local, utilizing Docker, specifically the Docker Toolbox.

The docker-compose.yml file that creates the stack is fairly straightforward:

```
mysql_data:
  container_name: mysql_data
  image: tianon/true
  volumes:
    - /var/lib/mysql
    - /var/lib/mysql-files

mysql:
  container_name: mysql
  environment:
    - MYSQL_ROOT_PASSWORD=plam15
    - MYSQL_DATABASE=world
  build: mysql
  volumes_from:
    - mysql_data
  volumes:
    - "./mysql/config:/etc/mysql/conf.d"
  expose:
    - "3306"
  ports:
    - "3306:3306"

sysbench:
  container_name: sysbench
  build: ./sysbench
  links:
    - "mysql"

elasticsearch_data:
  container_name: elasticsearch_data
  image: tianon/true
  volumes:
    - /usr/share/elasticsearch/data

elasticsearch:
  container_name: elasticsearch
  build: elasticsearch
  volumes_from:
    - elasticsearch_data
  command: -Des.node.name="PLE15"
  ports:
    - "9200:9200"

graphite:
  container_name: graphite
  image: kamon/grafana_graphite
  ports:
    - "8000:80"
    - "8126:8126"

logstash:
  container_name: logstash
  build: logstash
  volumes:
    - "./logstash/scripts:/opt/logstash/scripts"
  environment:
    - ES_PROXY_HOST=$DOCKER_HOST
  links:
    - mysql
    - graphite
    - "elasticsearch:es"
  ports:
    - "9292:9292"
```

Since the session was about the upcoming changes to the Performance Schema in MySQL 5.7, I chose to use MySQL’s official 5.7 Docker image:

```
mysql_data:
  container_name: mysql_data
  image: tianon/true
  volumes:
    - /var/lib/mysql
    - /var/lib/mysql-files

mysql:
  container_name: mysql
  environment:
    - MYSQL_ROOT_PASSWORD=ple15
    - MYSQL_DATABASE=world
  build: mysql
  volumes_from:
    - mysql_data
  volumes:
    - "./mysql/config:/etc/mysql/conf.d"
  expose:
    - "3306"
  ports:
    - "3306:3306"
```

Pay attention to two aspects of this configuration.

First, I am creating a mysql_data container. Using a Data Volume Container is highly recommended when working with any data that should be persisted and not baked into the Docker image.

Second, I am building the container using the `build` command. Throughout the entire docker-compose.yml file, I am adding additional functionality to base containers provided by the Docker Hub. For the MySQL container, the Dockerfile looks like this:

```
FROM mysql:5.7

ADD world_innodb.sql /docker-entrypoint-initdb.d/world.sql

EXPOSE 3306
CMD ["mysqld"]
```

The rest of the docker-compose file follows similar guidelines, and I won’t explain each aspect here. You can find the Dockerfiles for each component of the stack in the dtest/visualize-mysql github repository.

To bring the stack up, the steps are straightforward:

Install Docker Toolbox on your machine.

Create the docker-machine:

```
$ docker-machine create -d virtualbox \
    --virtualbox-memory "2048" --virtualbox-cpu-count "2" \
    visualize
$ eval $(docker-machine env visualize)
```

Clone the dtest/visualize-mysql repository and change into the parent directory:

```
$ git clone git@github.com:dtest/visualize-mysql.git
$ cd visualize-mysql
```

Bring the environment up:

```
$ docker-compose up -d
```

Verify that the containers are running, noting that the data volume containers should have exited with status 0:

```
$ docker-compose ps
       Name                     Command               State                          Ports
---------------------------------------------------------------------------------------------------------------------
elasticsearch        /docker-entrypoint.sh -Des ...   Up       0.0.0.0:9200->9200/tcp, 9300/tcp
elasticsearch_data   /true                            Exit 0
graphite             /usr/bin/supervisord             Up       0.0.0.0:8000->80/tcp, 8125/udp, 0.0.0.0:8126->8126/tcp
logstash             /app/bin/boot                    Up       0.0.0.0:9292->9292/tcp
mysql                /entrypoint.sh mysqld            Up       0.0.0.0:3306->3306/tcp
mysql_data           /true                            Exit 0
sysbench             /etc/entrypoint.sh -d            Up
```

You can then point your browser to the Kibana and Grafana dashboards. The Grafana dashboard requires login, and the default credentials are admin/admin.

The dashboards for the presentation are not automatically loaded, but you can find some basic dashboards in the dashboards directory of the repository. Unfortunately, I did not export them before discarding the machine, so I have lost some of the fine-tuning changes I had made. I do intend to recreate them and even try out the newer versions of the ELK stack available.

The goal here is to show how easy it is to set up a local environment using Docker and existing images on the Docker Hub to get started with your own projects. Discover more about our expertise in MySQL.
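
For reference, a quick way to find the dashboard URLs when using Docker Toolbox is to ask docker-machine for the VM's IP; with the port mappings above, Grafana is mapped to port 8000 and the Kibana interface served by the logstash container to port 9292 (a sketch, assuming the machine name visualize from the steps above):

```
$ docker-machine ip visualize
$ # browse to http://<machine-ip>:8000 for Grafana and http://<machine-ip>:9292 for Kibana
```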

Introduction to User-Defined Aggregate Functions

OK, let’s speed past the easy bits, the parts we all already know: standard aggregate functions. Aggregate functions, unlike regular functions, take values from multiple rows as their input. The category includes those aggregate functions that are so ordinary they’re almost invisible – SUM, COUNT, MAX – and a couple that most of us never use – such as APPROX_COUNT_DISTINCT…. Continue Reading →
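
As a quick illustration of the distinction (a sketch using a hypothetical employees table): a regular function produces one value per input row, while an aggregate collapses many rows into one value per group.

```
-- regular function: one result row per input row
select upper(last_name) from employees;

-- aggregate function: many input rows collapse into one row per department
select department, count(*), sum(salary)
from employees
group by department;
```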

Press Release: Severalnines’ ClusterControl helps BT Expedite global expansion for its retail customers

BT’s retail arm Expedite uses ClusterControl platform to scale MySQL and achieve agility on BT Cloud
 
Stockholm, Sweden and anywhere else in the world – 30 September 2015 – Severalnines, the provider of database automation and management software, today announced its latest customer, BT Expedite, BT’s specialist retail arm hosting its customers’ eCommerce applications and omni-channel marketing operations.

Expedite helps 100 leading retailers in 170 countries worldwide, including some of the largest UK retail chains such as Primark, WHSmith, Warehouse and Jigsaw, deliver effective customer service online by offering end-to-end IT managed services. As customers go online first to search for, select and purchase goods, the modern-day retail store needs an IT system which can manage traffic spikes on eCommerce platforms, especially during big retail events like Cyber Monday and Christmas in the UK.
The challenge was to find a flexible, cloud-based solution for Expedite customers to cope with increased traffic to web, content and blog sites when shoppers review blogs and forums during the buying process. Traditional approaches to MySQL high availability could not manage data transactions at a large enough scale for the Expedite IT team, so they decided to set up an online review process for a new solution.
Expedite found that Severalnines’ ClusterControl platform offered both scalability and high availability (HA) for cloud-based database applications, with favourable user reviews. During the ClusterControl trial, Expedite’s IT team members with access rights could set up a database cluster within 15 minutes. ClusterControl also helped Expedite quickly optimise its eCommerce platform, which in turn showed an increase in conversion rates in a highly competitive retail industry.
Based on customer feedback, Expedite wanted to increase online security and resilience as online shopping becomes more popular. Severalnines provided Expedite with auto-recovery for data along with repair and database failure detection alerts.

ClusterControl currently helps Expedite to manage six MySQL database clusters. A year since deployment, ClusterControl has given Expedite’s IT team a transparent overview of database performance on one platform. Using ClusterControl has also led to greater productivity for the Expedite IT team thanks to the automation of previously manual operations which now take minutes rather than days. 
Dominic Day, Head of Managed Hosting at BT Expedite, stated: “ClusterControl has surpassed our expectations. It was a perfect fix to time-consuming issues, especially the scalability and availability of the blog section of our customers’ websites. We are confident with big retail events around the corner that our systems can withstand the growth of traffic fuelled by consumer decisions. Vinay Joosery and his Severalnines team were superb on giving us advice on how to maximise the potential of ClusterControl and our database platforms. My team can now spend more time on creating and delivering innovative customer services.”
Vinay Joosery, Severalnines CEO, said: “Providing the right omni-channel experience through social, mobile, online and offline is where retailers are heading to, and an agile infrastructure that is cost-efficient is a key part of that. We have enjoyed working with the innovative team at BT to put in place a fully automated and cross data-centre database infrastructure on BT Cloud.”
          
About Severalnines
Severalnines provides automation and management software for database clusters. We help companies deploy their databases in any environment, and manage all operational aspects to achieve high-scale availability.
Severalnines’ products are used by developers and administrators of all skill levels to provide the full ‘deploy, manage, monitor, scale’ database cycle, thus freeing them from the complexity and learning curves that are typically associated with highly available database clusters. The company has enabled over 7,000 deployments to date via its popular online database configurator. Customers currently include BT, Orange, Cisco, CNRS, Technicolour, AVG, Ping Identity and Paytrail. Severalnines is a private company headquartered in Stockholm, Sweden with offices in Singapore and Tokyo, Japan. To see who is using Severalnines today, visit http://www.severalnines.com/company

About BT Expedite
BT Expedite is the retail specialist division within BT. Its mission is to make it easy for retailers to serve their customers. BT Expedite’s solutions deliver customer engagement and operational efficiency, enabling retailers to thrive in today’s competitive UK environment and grow operations internationally. BT Expedite’s services span every area of multi-channel retailing: eCommerce, store, planning, sourcing, merchandising, CRM; and these are underpinned by BT’s world-class hosting and network infrastructure. It offers fully managed solutions so retailers can focus on what they do best.
BT Expedite currently works with some of the UK’s top retailers, including: WHSmith, Primark, Supergroup, Pets at Home and Mothercare.
For more information, visit www.btexpedite.com.

MySQL Group Replication plugin issues with 5.7.8-rc2

New things are cool to test and learn, but the testing process is not always smooth if you can't get the right software in your hands.
There are great articles about this new plugin; you can follow all of them here: Group Replication Topics
I must say that the plugin is available for download as source code and as packages for Oracle Linux 6 x86_64 from -> labs.mysql.com
So if you want to try it with MySQL 5.7.8-rc2 on Ubuntu, you should compile it with MySQL. But you will not be able to compile it due to the well-known 'boost' dependency -> see BUG #78600
UPDATE to BUG 78600:
Pedro Gomes clarified the reason for the failed compilation: there must not be any spaces between the cmake options specified with -DMYSQL_SERVER_CMAKE_ARGS="". So we have a working cmake command as follows:

```
cmake .. -DMYSQL_SERVER_SRC_DIR="/home/sh/Sandboxes/mysql-5.7.8-rc" -DMYSQL_SERVER_CMAKE_ARGS="-DMYSQL_DATADIR=/opt/mysql-5.7.8-grouprepl/datadir;-DSYSCONFDIR=/opt/mysql-5.7.8-grouprepl;-DWITH_SSL=system;-DMYSQL_TCP_PORT=3307;-DMYSQL_UNIX_ADDR=/opt/mysql-5.7.8-grouprepl/mysqld.sock;-DDEFAULT_CHARSET=utf8;-DDEFAULT_COLLATION=utf8_general_ci;-DWITH_DEBUG=1;-DCOMPILATION_COMMENT='Group Replication enabled MySQL-5.7.8';-DOPTIMIZER_TRACE=1;-DWITH_ZLIB=system;-DWITH_VALGRIND=1;-DCMAKE_C_FLAGS=-DHAVE_purify;-DCMAKE_CXX_FLAGS=-DHAVE_purify;-DDOWNLOAD_BOOST=1;-DWITH_BOOST=/home/sh/Sandboxes"
```

But even if you have Oracle Linux 6 x86_64 and follow the topics 1. Corosync Guide and 2. Getting Started with MySQL Group Replication,
you will likely get a "Segmentation fault" after running START GROUP_REPLICATION:
```
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffc8ef2700 (LWP 3294)]
0x00007ffff625d532 in _int_malloc () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install corosynclib-1.4.7-2.el6.x86_64
(gdb) bt
#0 0x00007ffff625d532 in _int_malloc () from /lib64/libc.so.6
#1 0x00007ffff625e991 in malloc () from /lib64/libc.so.6
#2 0x00007ffff6acf0bd in operator new(unsigned long) () from /usr/lib64/libstdc++.so.6
#3 0x00007fffc8f5b2ab in Certifier::Certifier (this=0x7fffbc051d20)
at /export/home/pb2/build/sb_0-16452762-1441960761.59/release-0.5.0-export-8447366/BIN/BUILD/mysql-server/plugin/group_replication/src/certifier.cc:209
#4 0x00007fffc8f6bb36 in Certification_handler::initialize (this=0x7fffbc0507e0)
at /export/home/pb2/build/sb_0-16452762-1441960761.59/release-0.5.0-export-8447366/BIN/BUILD/mysql-server/plugin/group_replication/src/handlers/certification_handler.cc:32
#5 0x00007fffc8f634d7 in configure_pipeline (pipeline=0x7fffbc051a80, handler_list=0x7fffbc0506f0, num_handlers=3)
at /export/home/pb2/build/sb_0-16452762-1441960761.59/release-0.5.0-export-8447366/BIN/BUILD/mysql-server/plugin/group_replication/src/pipeline_factory.cc:143
#6 0x00007fffc8f6364d in get_pipeline (pipeline_type=<value optimized out>, pipeline=0x7fffbc051a80)
at /export/home/pb2/build/sb_0-16452762-1441960761.59/release-0.5.0-export-8447366/BIN/BUILD/mysql-server/plugin/group_replication/src/pipeline_factory.cc:30
#7 0x00007fffc8f554b0 in Applier_module::setup_applier_module (this=0x7fffbc051920, pipeline_type=STANDARD_GROUP_REPLICATION_PIPELINE, reset_logs=false,
stop_timeout=31536000, group_sidno=2)
at /export/home/pb2/build/sb_0-16452762-1441960761.59/release-0.5.0-export-8447366/BIN/BUILD/mysql-server/plugin/group_replication/src/applier.cc:99
#8 0x00007fffc8f63eca in configure_and_start_applier_module ()
at /export/home/pb2/build/sb_0-16452762-1441960761.59/release-0.5.0-export-8447366/BIN/BUILD/mysql-server/plugin/group_replication/src/plugin.cc:761
#9 0x00007fffc8f657df in plugin_group_replication_start ()
at /export/home/pb2/build/sb_0-16452762-1441960761.59/release-0.5.0-export-8447366/BIN/BUILD/mysql-server/plugin/group_replication/src/plugin.cc:289
#10 0x00000000016927c9 in Group_replication_handler::start (this=0x378f2d0) at /home/full_datadir/mysql-5.7.8-rc/sql/rpl_group_replication.cc:56
#11 0x0000000001692b81 in group_replication_start () at /home/full_datadir/mysql-5.7.8-rc/sql/rpl_group_replication.cc:192
#12 0x000000000149739c in mysql_execute_command (thd=0x7fffbc007180) at /home/full_datadir/mysql-5.7.8-rc/sql/sql_parse.cc:3029
#13 0x000000000149e15b in mysql_parse (thd=0x7fffbc007180, parser_state=0x7fffc8ef14a0) at /home/full_datadir/mysql-5.7.8-rc/sql/sql_parse.cc:5255
#14 0x0000000001492dec in dispatch_command (thd=0x7fffbc007180, com_data=0x7fffc8ef1cd0, command=COM_QUERY)
at /home/full_datadir/mysql-5.7.8-rc/sql/sql_parse.cc:1272
#15 0x00000000014917f3 in do_command (thd=0x7fffbc007180) at /home/full_datadir/mysql-5.7.8-rc/sql/sql_parse.cc:852
#16 0x00000000015c9042 in handle_connection (arg=0x36c1a50) at /home/full_datadir/mysql-5.7.8-rc/sql/conn_handler/connection_handler_per_thread.cc:300
#17 0x0000000001c56c41 in pfs_spawn_thread (arg=0x37d1990) at /home/full_datadir/mysql-5.7.8-rc/storage/perfschema/pfs.cc:2178
#18 0x00007ffff7bc79d1 in start_thread () from /lib64/libpthread.so.0
#19 0x00007ffff62ccb6d in clone () from /lib64/libc.so.6
```

For further reading, refer to the related fresh BUG report -> #78627
It is really cool to help the community improve open source projects. So go on, test on your own, report interesting things, and of course get in touch with community members. They are all open to discussion.
The post MySQL Group Replication plugin issues with 5.7.8-rc2 appeared first on Azerbaijan MySQL UG.
