Month: March 2017

Bonanza Cuts Load in Half with VividCortex

Working with our users at Bonanza earlier this week, we saw their team demonstrate a great example of how monitoring insights can lead to a relatively simple — but impactful —  MySQL system tweak. In this case, the adjustment Bonanza made resulted in huge improvements to their total query time.
By looking at the mysql.innodb.queued_queries metric in VividCortex, it became clear to Bonanza’s team that there was an issue within InnoDB that was preventing otherwise runnable threads from executing. When queries begin to queue, it’s often indicative of a problem; it’s a good idea to regularly look for states like queuing, pending, or waiting as signs of potential issues. In this case, the innodb_thread_concurrency parameter had been configured to 8. Once VividCortex revealed the queuing via the mysql.innodb.queued_queries metric, the parameter was changed to 0 (self-governing).
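As a point of reference, here is a minimal sketch of how such a check and change can be made from the mysql command line (assuming SUPER privileges; the metric name above is VividCortex’s, while the server itself reports the queue in SHOW ENGINE INNODB STATUS):

# Current setting and current queue depth ("N queries inside InnoDB, N queries in queue"):
mysql -e "SHOW GLOBAL VARIABLES LIKE 'innodb_thread_concurrency';"
mysql -e "SHOW ENGINE INNODB STATUS\G" | grep "queries in queue"
# The variable is dynamic; 0 removes the limit (self-governing):
mysql -e "SET GLOBAL innodb_thread_concurrency = 0;"
# Also update my.cnf so the change survives a restart.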
The fix was implemented at about 5:35 pm on 3/28/2017. In the VividCortex chart below, you can see where queries cease queuing (because they’ve started executing faster and more efficiently). Note how the orange line drops off almost immediately.

This second chart shows how in the hour after the fix, SELECT total time dropped by over 50% compared to the hour previous. That is 9.59 fewer hours that the system spent executing queries — or 9.59 hours of extra CPU time available. Average latency went from 1.36 ms all the way down to 664.6 μs.

This is a view of the query itself, from 4:35 pm to 6:35 pm. Note the overall decrease at 5:35 pm.

Great work by the Bonanza team, and thank you for sharing!

MySQL DevOps First Step: Revision Control

MySQL environments are notorious for being understaffed – MySQL is everywhere, and an organization is lucky if they have one full-time DBA, as opposed to a developer or sysadmin/SRE responsible for it.
That being said, MySQL is a complex program and it’s useful to have a record of configuration changes made. Not just for compliance and auditing, but sometimes – even if you’re the only person who works on the system – you want to know “when was that variable changed?” In the past, I’ve relied on the timestamp on the file when I was the lone DBA, but that is a terrible idea.
I am going to talk about configuration changes in this post, mostly because change control for configuration (usually /etc/my.cnf) is sorely lacking in many organizations. Having a record of data changes falls under backups and binary logging, and having a record of schema changes is something many organizations integrate with their ORM, so they are out of scope for this blog post.
Back to configuration – it is also helpful for disaster recovery purposes to have a record of what the configuration was. You can restore your backup, but unless you set your configuration properly, there will be problems (for example, an incompatible innodb_log_file_size will cause MySQL not to start).
So, how do you do this? Especially if you have no time?
While configuration management systems like chef, puppet and cfengine are awesome, they take setup time. If you have them, they are gold – use them! If you do not have them, you can still do a little bit at a time and improve incrementally.
If you really are at the basics, get your configuration into a repository system. Whether you use rcs, cvs, subversion or git (or anything else), make a repository and check in your configuration. The configuration management systems give you bells and whistles like templating and automated deployment to machines.
It is up to you what your deployment process is – to start, something like “check in the change, then copy the file to production” might be good enough – remember, we’re taking small steps here. It’s not a great system, but it’s certainly better than not having any revision control at all!
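As a minimal sketch of that first step (assuming the configuration lives in /etc/my.cnf and git is available), something like this is already enough to answer the “when was that variable changed?” question:

cd /etc
sudo git init
sudo git add my.cnf
sudo git commit -m "Baseline MySQL configuration"
# ...later, after editing my.cnf...
sudo git commit -am "Raise innodb_buffer_pool_size to 8G"
sudo git log -- my.cnf   # shows what changed, and when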
A great system will use some kind of automated deployment, as well as monitoring to make sure that your running configuration is the same as your configuration file (using pt-config-diff: https://www.percona.com/doc/percona-toolkit/3.0/pt-config-diff.html). That way, there are no surprises if MySQL restarts.
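For example, a periodic check that the file on disk still matches the running server could be as simple as the following (the file path and DSN credentials are placeholders):

# Prints any variables whose value in /etc/my.cnf differs from the running instance:
pt-config-diff /etc/my.cnf h=localhost,u=root,p=secret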
But having a great system is a blog post for another time.

Webinar Replay and Q&A: Load balancing MySQL & MariaDB with ProxySQL & ClusterControl

Thanks to everyone who participated in our recent webinar on how to load balance MySQL and MariaDB with ClusterControl and ProxySQL!
This joint webinar with ProxySQL creator René Cannaò generated a lot of interest … and a lot of questions!
We covered topics such as ProxySQL concepts (with hostgroups, query rules, connection multiplexing and configuration management), went through a live demo of a ProxySQL setup in ClusterControl (try it free) and discussed upcoming ClusterControl features for ProxySQL.
These topics triggered a lot of related questions, to which you can find our answers below.
If you missed the webinar, would like to watch it again or browse through the slides, it is available for viewing online.
Watch the webinar replay
You can also join us for our follow-up webinar next week on Tuesday, April 4th 2017. We’re again joined by René and will be discussing High Availability in ProxySQL.
Sign up for the webinar on HA in ProxySQL
Webinar Questions & Answers
Q. Thank you for your presentation. I have a question about connection multiplexing: does ProxySQL ensure that all statements from start transaction to commit are sent through the same backend connection?
A. This is configurable.
A small preface first: at any time, each client’s session can have one or more backend connections associated with it. A backend connection is associated with a client when a query needs to be executed, and normally it returns to the connection pool immediately afterwards. “Normally” means that there are circumstances when this doesn’t happen. For example, when a transaction starts, the connection is not returned to the connection pool until the transaction completes (either commits or rolls back). This means that all the queries that should be routed to the hostgroup where the transaction is running are guaranteed to run in the same connection.
Nonetheless, by default, a transaction doesn’t disable query routing. That means that while a transaction is running on one connection to a specific hostgroup, and this connection is associated with only that client, if the client sends a query destined to another hostgroup, that query could be sent to a different connection.
Whether the query can be sent to a different connection or not, based on query rules, is configurable via the value of mysql_users.transaction_persistent:

0 = queries for different hostgroups can be routed to different connections while a transaction is running;
1 = query routing will be disabled while the transaction is running.

The behaviour is configurable because it depends on the application. Some applications require that all the queries are part of the same transaction, other applications don’t.
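As an illustration, here is a minimal sketch of toggling this through ProxySQL’s admin interface (assuming the default admin port 6032 and credentials, and a hypothetical application user app_user):

mysql -h127.0.0.1 -P6032 -uadmin -padmin -e "
UPDATE mysql_users SET transaction_persistent=1 WHERE username='app_user';
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;"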
Q. What is the best way to set up a ProxySQL cluster? The main concern here is configuration of the ProxySQL cascading throughout the cluster.
A. ProxySQL can be deployed in numerous ways.
One typical deployment pattern is to deploy a ProxySQL instance on every application host. The application would then connect to the proxy using very low latency connection via Unix socket. If the number of application hosts increase, you can deploy a middle-layer of 3-5 ProxySQL instances and configure all ProxySQL instances from application servers to connect via this middle-layer. Configuration management, typically, would be handled using Puppet/Chef/Ansible infrastructure orchestration tools. You can also easily use home-grown scripts as ProxySQL’s admin interface is accessible via MySQL command line and ProxySQL reconfiguration can be done by issuing a couple of SQL statements.
Q. How would you recommend to make the ProxySQL layer itself highly available?
A. There are numerous methods to achieve this.
One common method is to deploy a ProxySQL instance on every application host. The application would then connect to the proxy using very low latency connection via Unix socket. In such a deployment there is no single point of failure as every application host connects to the ProxySQL installed locally.
When you implement a middle-layer, you will also maintain HA as 3-5 ProxySQL nodes would be enough to make sure that at least some of them are available for local proxies from application hosts.
Another common method of deploying a highly available ProxySQL setup is to use tools like keepalived along with virtual IP. The application will connect to VIP and this IP will be moved from one ProxySQL instance to another if keepalived detects that something happened to the “main” ProxySQL.
Q. How can ProxySQL use the right hostgroup for each query?
A. ProxySQL routes queries to hostgroups based on query rules – it is up to the user to build a set of rules which make sense in their environment.
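A minimal sketch of a typical read/write split rule set through the admin interface (the hostgroup numbers, 0 for writers and 1 for readers, are arbitrary examples rather than a fixed convention):

mysql -h127.0.0.1 -P6032 -uadmin -padmin -e "
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (10, 1, '^SELECT .* FOR UPDATE', 0, 1);
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (20, 1, '^SELECT', 1, 1);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;"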
Q. Can you tell us more about query mirroring?
A. In general, the implementation of query mirroring in ProxySQL allows you to send traffic to two hostgroups.
Traffic sent to the “main” hostgroup is ensured to reach it (unless there are no hosts in that hostgroup); on the other hand, the mirror hostgroup receives traffic on a “best effort” basis – the query should reach it, but this is not guaranteed.
This limits the usefulness of mirroring as a method to replicate data. It is still an amazing way to do load testing of new hardware or redesigned schema. Of course, mirroring reduces the maximal throughput of the proxy – queries have to be executed twice so the load is also twice as high. The load is not split between the two, but duplicated.
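For example, mirroring the hypothetical SELECT rule from the sketch above to a second hostgroup (again with made-up hostgroup numbers) could look like this:

mysql -h127.0.0.1 -P6032 -uadmin -padmin -e "
UPDATE mysql_query_rules SET mirror_hostgroup=2 WHERE rule_id=20;
LOAD MYSQL QUERY RULES TO RUNTIME;"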
Q. And what about query caching?
A. The query cache in ProxySQL is implemented as a simple key->value memory store with a Time To Live for every entry. What will be cached, and for how long, is decided at the query rules level. The user can define a query rule matching a particular query or a wider spectrum of them. To identify a query result set in the cache, ProxySQL uses the query hash along with information about the user and schema.
How do you set the TTL for a query? The simplest answer is: to the maximum replication lag which is acceptable for that query. If you are ok reading stale data from a slave which is lagging 10 seconds, you should be fine reading stale data from the cache with a TTL of 10000 milliseconds.
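Continuing the hypothetical rule from the earlier sketches, caching its result sets for 10 seconds could look like this (cache_ttl is expressed in milliseconds):

mysql -h127.0.0.1 -P6032 -uadmin -padmin -e "
UPDATE mysql_query_rules SET cache_ttl=10000 WHERE rule_id=20;
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;"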
Q. Connection limit to backends?
A. ProxySQL indeed implements a connection limit to backend servers. The maximum number of connections to any backend instance is defined in mysql_servers table.
Because the same backend server can be present in multiple hostgroups, it is possible to define the maximum number of connections per server per hostgroup.
This is useful, for example, for keeping a small set of connections in which specific long-running queries are queued without affecting the rest of the traffic destined to the same server.
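A minimal sketch of capping connections for one backend in one hostgroup (the hostname and hostgroup number are placeholders):

mysql -h127.0.0.1 -P6032 -uadmin -padmin -e "
UPDATE mysql_servers SET max_connections=20
WHERE hostname='10.0.0.101' AND hostgroup_id=1;
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;"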
Q. Regarding the connection limit from the APP: are connections QUEUED?
A. If you reach the mysql-max_connections limit, further connections will be rejected with the error “Too many connections”.
It is important to remember that there is not a one-to-one mapping between application connections and backend connections.
That means that:

Access to the backends can be queued, but connections from the application are either accepted or rejected.
A large number of application connections can use a small number of backend connections.

Q. I haven’t heard of SHUN before: what does it mean?
A. SHUN means that the backend is temporarily marked as non-available, but ProxySQL will attempt to connect to it again after mysql-shun_recovery_time_sec seconds.
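A quick way to see shunned backends, and to adjust the recovery interval, is through the admin interface; a sketch with the default admin credentials:

mysql -h127.0.0.1 -P6032 -uadmin -padmin -e "
SELECT hostgroup_id, hostname, port, status FROM runtime_mysql_servers;
UPDATE global_variables SET variable_value='30'
WHERE variable_name='mysql-shun_recovery_time_sec';
LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;"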
Q. Is query sharding available across slaves?
A. Depending on the meaning of sharding, ProxySQL can be used to perform sharding across slaves. For example, it is possible to send all traffic for a specific set of tables to a set of slaves (in a hostgroup). Splitting the slaves into multiple hostgroups and sharding queries accordingly can improve performance, as each slave won’t have to read from disk data from tables for which it doesn’t process any query.
Q. How do you sync the configuration of ProxySQL when you have many instances for H.A ?
A. Configuration management, typically, would be handled using Puppet/Chef/Ansible infrastructure orchestration tools. You can also easily use home-grown scripts as ProxySQL’s admin interface is accessible via MySQL command line and ProxySQL reconfiguration can be done by issuing a couple of SQL statements.
Q. How flexible or feasible is it to change the ProxySQL config online, e.g. if one database slave is down, how is that handled in such a scenario?
A. ProxySQL configuration can be changed at any time; it’s been designed with such level of flexibility in mind.
‘Database down’ can be handled in different ways; it depends on how ProxySQL is configured. If you rely on replication hostgroups to define writer and reader hostgroups (this is how ClusterControl deploys ProxySQL), ProxySQL will monitor the state of the read_only variable on hosts in both reader and writer hostgroups, and it will move hosts between them as needed.
If a master is promoted by an external tool (like ClusterControl, for example), read_only values will change, ProxySQL will detect the topology change, and it will act accordingly. For a standard “slave down” scenario there is no action required from the management system – without any change in the read_only value, ProxySQL will simply detect that the host is not available and stop sending queries to it, re-executing on other members of the hostgroup those queries which didn’t complete on the dead slave.
If we are talking about a setup not using replication hostgroups, then it is up to the user and their scripts/tools to implement some sort of logic and reconfigure ProxySQL at runtime using the admin interface. A slave going down, though, most likely wouldn’t require any changes.
Q. Is it somehow possible to SELECT data from one host group into another host group?
A. No, at this point it is not possible to execute cross-hostgroup queries.
Q. What would be RAM/Disk requirements for logs , etc?
A. It basically depends on the number of log entries and on how verbose the ProxySQL log is in your environment. Typically it’s negligible.
Q. Instead of installing ProxySQL on all application servers, could you put a ProxySQL cluster behind a standard load balancer?
A. We see no reason why not. You can put whatever you like in front of ProxySQL – F5, another layer of software proxies – it is up to you. Please keep in mind, though, that every layer of proxies or load balancers adds latency to your network and, as a result, to your queries.
Q. Can you please comment on Reverse Proxy, whether it can be used in SQL or not?
A. ProxySQL is a Reverse Proxy. Contrary to a Forward Proxy (which acts as an intermediary that simply forwards requests), a Reverse Proxy processes clients’ requests and retrieves data from servers: clients send requests to ProxySQL, which will understand each request, analyze it, and decide what to do – rewrite, cache, block, re-execute on failure, etc.
Q. Does the user authentication layer work with non-local database accounts, e.g. with the pam modules available for proxying LDAP users to local users?
A. There is no direct support for LDAP integration but, as configuration management in ProxySQL is child’s play, it is really simple to put together a script which will pull the user details from LDAP and load them into ProxySQL. You can use cron to sync it often. All ProxySQL needs is a username and a password hash in MySQL format – this is enough to add a user to ProxySQL.
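The core of such a sync script would boil down to something like this (the username and hash below are placeholders that an LDAP lookup would fill in):

mysql -h127.0.0.1 -P6032 -uadmin -padmin -e "
INSERT INTO mysql_users (username, password, default_hostgroup)
VALUES ('ldap_user', '*PLACEHOLDER_MYSQL41_PASSWORD_HASH', 1);
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;"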
Q. It seems like the prescribed production deployment includes many proxies – are there any suggestions or upcoming work to address how to make configuration changes across all proxies in a consistent manner?
A. At this point it is recommended to leverage configuration management tools like Chef/Ansible/Puppet to manage ProxySQL’s configuration.
Watch the webinar replay
You can also join us for our follow-up webinar next week on Tuesday, April 4th 2017. We’re again joined by René and will be discussing High Availability in ProxySQL.
Sign up for the webinar on HA in ProxySQL

Performance Evaluation of SST Data Transfer: With Encryption (Part 2)

In this blog post, we’ll look at the performance of SST data transfer using encryption. In my previous post, we reviewed SST data transfer in an unsecured environment. Now let’s take a closer look at a setup with encrypted network connections between the donor and joiner nodes. The base setup is the same as the previous […]

Experiments with MySQL 5.7’s Online Buffer Pool Resize

One of the interesting features introduced in MySQL 5.7 is that innodb_buffer_pool_size is a dynamic variable (since 5.7.5, to be more specific). Yet, past experience tells us that just because a variable is dynamic, that does not mean it is safe to change it on the fly.
To find out how safe this new feature is, I measured throughput on a synthetic workload (sysbench 1.0 running the oltp script) as I made changes to this variable. In this post, I will show the results that came through.
 
The Environment
For my tests, I used a Google Cloud Compute instance of type n1-standard-4 (that is, 4 vCPUs and 15 GB of memory) with 25 GB of persistent SSD. The dataset was about 9.2 GB (on disk, Innodb, no compression, 40M rows), with a smaller version of almost 1 GB (160k rows) for a specific test.
As mentioned earlier, the workload was sysbench 1.0’s oltp script.
 
The Experiments
The goal of the experiment was to measure what impact (if any) changing innodb_buffer_pool_size dynamically has on the workload, measured in terms of throughput (transactions per second).
After some tests to find a suitable run time, I decided to do the following for each test:

restore the data directory from a backup, so all runs had the same data and a cold buffer,
run sysbench for 240 seconds, reporting stats every second, and
change innodb_buffer_pool_size after 120 seconds.

Here’s how the variable was modified:

The ‘normal’ configuration is 4GB
For the ‘increased’ tests, it was modified to 8GB
For the ‘decreased’ tests, to 2GB

Innodb’s log file size was set to 1 GB, and no other changes were made to the default configuration. I was not going for benchmark results here; I simply wanted to find out how safe it would be to do this in production.
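For reference, the change itself is a single statement; a minimal sketch, assuming a client with SUPER privileges (the value is rounded up to a multiple of innodb_buffer_pool_chunk_size * innodb_buffer_pool_instances):

mysql -e "SET GLOBAL innodb_buffer_pool_size = 8 * 1024 * 1024 * 1024;"
# The progress of the resize can be followed here:
mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_resize_status';"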
 
The Results
Let me start by showing what happens when I just left sysbench to run its oltp script for 240 seconds, with no changes made to the variable:

We can see some periodic drops in tps that improve with time, and which go away if sysbench is left to run for about 600 seconds, but, again, I just wanted to get an idea of the safety of changing the Buffer Pool’s size on a live system.  In the end, this baseline was good enough to let me run several tests in a short amount of time.
Let’s see what happens now when, at second 120, the BP’s size is reduced from 4 to 2 GBs:

We see a very clear drop around the time of the change, and then an expected drop in tps. You may be wondering why I tested a change that I knew would result in poor performance, and that’s a valid question. In my experience, people make mistakes when tuning Innodb. I’ve witnessed this several times and know this to be an incredibly realistic scenario. I think it is interesting to know, besides the expected result of less tps, what happens when the change is actually made. Looking at sysbench’s output (you can find all the files, along with the scripts I used, here) we see that the drop started at second 121 and lasted until about 130, where tps started to stabilize again. I think that’s pretty good. Remember, we are talking about a variable that required a service restart in previous versions, and nothing is worse for throughput than mysqld not running at all. With that in mind, a few seconds of reduced performance seems like an improvement to me.
Here is what happens when, given the same start, the BP size is increased to 8GB at second 120:

There is another drop, but it seems shorter, and honestly, I probably wouldn’t have noticed it in the graph if it wasn’t for that lonely dot near the bottom. Looking at the raw output, we can see the impact is seen only on second 121. I think this is very good news. Again, compared with what we had before, this means that, at least on this controlled experiment, we were able to increase the BP’s size with very little production impact.
Another increase example, in this case, from 2 to 8 GB, which I have labelled as ‘increase needed’ in my scripts because titles are a kind of name, and naming things is one of the hardest problems in computing:

The drop is also measured just on second 121, and tps improves significantly starting on second 122, so this makes me even more optimistic about this feature.
Let’s now see what happens when we decrease the BP while running on the small dataset:

My goal here was to try and simulate what may happen when we dynamically reduce an oversized BP because, for example, someone copied the configuration from a standalone production MySQL to a shared hosted test instance with small datasets (which is something I have also seen done). In this case, the drop is right at second 120, and then it goes back to normal.
Finally, what happens when the BP size is reduced, and then this change is rolled back after 20 seconds?

A Quick Look at the Drops in Throughput
We have seen that, in all cases, there is a drop in throughput when this variable is changed, with varying length depending on the circumstances. I took a quick look at what happens during the drops via pt-pmp, and found out that most threads are waiting on the trx_commit_complete_for_mysql function. This function is found in file trx0trx.cc of Innodb and is described by a comment as flushing the log to disk (if required). So what happens if we change this on a read-only workload? It turns out there is still a short drop in throughput, and this time most threads are waiting at the buf_page_make_young function, which moves a page to the start of the BP’s LRU list. In both cases, the ‘root’ wait is for internal mutexes: one protecting writes to the log in the oltp workload’s case, and one protecting the buffer pool in the read-only workload’s case.
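For context, the sampling itself is a one-liner (pt-pmp attaches gdb to the running mysqld and aggregates its stack traces, so it is intrusive and best used sparingly on production systems):

pt-pmp > pmp-during-drop.txt   # run while the drop is happening; defaults to the mysqld binary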
Conclusion
I think this new feature was much needed. It’s a welcome addition to MySQL’s capabilities, and while, yes, it can have some impact on your workload (even if it is read-only), that must be compared to what happened before – a service restart was needed.

How to setup MaxScale with MariaDB Galera Cluster

This post follows up on my previous blog post, which describes how to set up a 3-node MariaDB Galera Cluster with MySQL-Sandbox on a single server.
 

Today, I’ll try to explain how we can set up MariaDB MaxScale over the Galera Cluster. Before I move ahead, I would like to explain a little bit about MaxScale.
MariaDB MaxScale is a database proxy that enables horizontal database scaling while maintaining a fast response to client applications. You can implement MaxScale on either MySQL Replication or a Galera cluster. With MySQL Replication, you can use either Read/Write Splitting or Connection routing, and the same applies to Galera Cluster. You can get more information about this product here:
https://mariadb.com/kb/en/mariadb-enterprise/mariadb-maxscale-14/setting-up-maxscale/
https://mariadb.com/products/mariadb-maxscale
So here, I’m going to set up MaxScale with Read/Write Splitting on a MariaDB Galera Cluster. I’m using Ubuntu and a 3-node Galera Cluster setup which is running on a single server.
MariaDB [(none)]> show global status like 'wsrep_cluster_size%';
+——————–+——-+
| Variable_name | Value |
+——————–+——-+
| wsrep_cluster_size | 3 |
+——————–+——-+
1 row in set (0.00 sec)
Download and Install MaxScale: https://mariadb.com/downloads/maxscale
nilnandan@ubuntu:~/MariaDB$ wget https://downloads.mariadb.com/MaxScale/2.0.5/ubuntu/dists/xenial/main/binary-amd64/maxscale-2.0.5-1.ubuntu.xenial.x86_64.deb
… 
2017-03-27 11:01:29 (1.86 MB/s) – ‘maxscale-2.0.5-1.ubuntu.xenial.x86_64.deb’ saved [3739198/3739198]

nilnandan@ubuntu:~/MariaDB$ sudo dpkg -i maxscale-2.0.5-1.ubuntu.xenial.x86_64.deb 
Selecting previously unselected package maxscale.
(Reading database … 216604 files and directories currently installed.)
Preparing to unpack maxscale-2.0.5-1.ubuntu.xenial.x86_64.deb …
Unpacking maxscale (2.0.5) …
Setting up maxscale (2.0.5) …
Processing triggers for man-db (2.7.5-1) …
Processing triggers for libc-bin (2.23-0ubuntu7) …
nilnandan@ubuntu:~/MariaDB$
Configure Maxscale :  

To make MaxScale work, first we have to configure the maxscale.cnf file. There are 5 sections in it:

1. Global Parameters
2. Service
3. Listener
4. MySQL Monitor
5. Maxadmin Configuration

Here, you can set all the global MaxScale parameters in the 1st section.
“A service represents the database service that MaxScale offers to the clients”
“A listener defines a port and protocol pair that is used to listen for connections to a service.”
“MySQL Monitor modules are used by MaxScale to internally monitor the state of the backend databases in order to set the server flags for each of those servers”
“Maxadmin is a client utility which connects to MaxScale, runs commands and checks the status of the cluster.”
You can get more information about these sections here. The overall flow looks like this:

Client  <-> Listener <-> Service <-> Galera Nodes

For the service and the MySQL Monitor, we have to create separate MySQL users with the required permissions. But here, I’m using one MySQL user, maxscale, with all permissions, as this is a testing server. If you want to check which permissions are needed, you can visit this page: Setting up MaxScale
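A minimal sketch of creating that single test user on one of the sandbox nodes (the root credentials and socket below match typical MySQL-Sandbox defaults and this post’s setup; DDL like CREATE USER and GRANT is replicated to the other Galera nodes):

mysql --socket=/tmp/mysql_sandbox19222.sock -uroot -pmsandbox -e "
CREATE USER 'maxscale'@'%' IDENTIFIED BY 'msandbox';
GRANT ALL PRIVILEGES ON *.* TO 'maxscale'@'%';
FLUSH PRIVILEGES;"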

Here is my configuration file:
root@ubuntu:~# cat /etc/maxscale.cnf
# Global parameters
[maxscale]
threads=4

# Service definitions
[Read-Write Service]
type=service
router=readwritesplit
servers=dbnode1, dbnode2, dbnode3
user=maxscale
passwd=msandbox

# Listener definitions for the services
[Read-Write Listener]
type=listener
service=Read-Write Service
protocol=MySQLClient
port=4006
socket=/tmp/galeramaster.sock

# Server definitions
[dbnode1]
type=server
address=127.0.0.1
port=19222
socket=/tmp/mysql_sandbox19222.sock
protocol=MySQLBackend
priority=1

[dbnode2]
type=server
address=127.0.0.1
port=19223
socket=/tmp/mysql_sandbox19223.sock
protocol=MySQLBackend
priority=2

[dbnode3]
type=server
address=127.0.0.1
port=19224
socket=/tmp/mysql_sandbox19224.sock
protocol=MySQLBackend
priority=3

# Monitor for the servers
[Galera Monitor]
type=monitor
module=galeramon
servers=dbnode1, dbnode2, dbnode3
user=maxscale
passwd=msandbox
monitor_interval=10000
use_priority=true
# This service enables the use of the MaxAdmin interface
# MaxScale administration guide:
# https://github.com/mariadb-corporation/MaxScale/blob/master/Documentation/Reference/MaxAdmin.md

[MaxAdmin Service]
type=service
router=cli

[MaxAdmin Listener]
type=listener
service=MaxAdmin Service
protocol=maxscaled
port=6603
socket=default
root@ubuntu:~#

For more details about this settings, you can visit this page : MaxScale Read/Write Splitting With Galera Cluster

Start MaxScale from root with command : service maxscale start
root@ubuntu:~# ps -ef | grep maxscale
maxscale 10532 1 0 14:57 ? 00:00:00 /usr/bin/maxscale --user=maxscale
root 10547 10405 0 14:57 pts/21 00:00:00 grep --color=auto maxscale
root@ubuntu:~#

If any error occurs, you can check here:  /var/log/maxscale 

When you start maxscale with the service command, by default it will run with --user=maxscale. This is a Linux user which does not have any /home dir. Due to that, the maxadmin command will not work and gives an error like:

root@ubuntu:~# maxadmin
Unable to connect to MaxScale at /tmp/maxadmin.sock: Connection refused
root@ubuntu:~#
To make it simple, I would suggest changing the maxscale.service file and updating the user to root rather than maxscale:
root@ubuntu:~# cat /lib/systemd/system/maxscale.service

[Service]

ExecStartPre=/usr/bin/install -d /var/run/maxscale -o maxscale -g maxscale
#ExecStart=/usr/bin/maxscale --user=maxscale
ExecStart=/usr/bin/maxscale --user=root

root@ubuntu:~#

After that, you’ll be able to log in, run commands with maxadmin, and check the health/status of the cluster nodes.
root@ubuntu:~# maxadmin
MaxScale>
MaxScale> list servers
Servers.
——————-+—————–+——-+————-+——————–
Server | Address | Port | Connections | Status
——————-+—————–+——-+————-+——————–
dbnode1 | 127.0.0.1 | 19222 | 0 | Master, Synced, Running
dbnode2 | 127.0.0.1 | 19223 | 0 | Slave, Synced, Running
dbnode3 | 127.0.0.1 | 19224 | 0 | Slave, Synced, Running
——————-+—————–+——-+————-+——————–
MaxScale>
MaxScale> list services
Services.
————————–+———————-+——–+—————
Service Name | Router Module | #Users | Total Sessions
————————–+———————-+——–+—————
Read-Write Service | readwritesplit | 2 | 2
MaxAdmin Service | cli | 3 | 4
————————–+———————-+——–+—————

MaxScale> list listeners
Listeners.
———————+——————–+—————–+——-+——–
Service Name | Protocol Module | Address | Port | State
———————+——————–+—————–+——-+——–
Read-Write Service | MySQLClient | * | 4006 | Running
Read-Write Service | MySQLClient | /tmp/galeramaster.sock | 0 | Running
MaxAdmin Service | maxscaled | * | 6603 | Running
MaxAdmin Service | maxscaled | default | 0 | Running
———————+——————–+—————–+——-+——–

In the “list servers” output, you can see that MaxAdmin marks servers as Master or Slave in the Status column. You can manage this with priorities: in the configuration file I’ve set “use_priority=true” in the [Galera Monitor] section and “priority=1, 2 or 3” in the [dbnode] sections. The node with priority=1 will be the master here.

How to connect to Galera through MaxScale:
root@ubuntu:# mysql -umaxscale -p --socket=/tmp/galeramaster.sock
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MySQL connection id is 12642
Server version: 10.0.0 2.0.5-maxscale MariaDB Server
..
MySQL [(none)]>
MySQL [(none)]> show global status like 'wsrep_cluster_size%';
+——————–+——-+
| Variable_name | Value |
+——————–+——-+
| wsrep_cluster_size | 3 |
+——————–+——-+
1 row in set (0.01 sec)
So this is how you can set up MaxScale over a Galera Cluster. There are many more things related to MaxScale which need to be explained, like how to monitor its status and how to put a node into maintenance mode, but as this has already become a very long post, I’ll try to explain those things in my next blog post.

Oracle Database 12c Release 2 New Feature – Application Containers

One of the new multitenancy related features in Oracle 12c Release 2 is Application Containers. In 12c Release 1, we could have a Container database (CDB) host a number of optional pluggable databases or PDBs. Now in 12.2.0.1, the multitenancy feature has been enhanced further and we can now have not only CDBs and PDBs […]
