Month: August 2016

How to Stream Change Data through MariaDB MaxScale using CDC API

Wed, 2016-08-31 15:12 | Massimiliano Pinto

In the previous two blog posts, I introduced Data Streaming with MariaDB MaxScale and demonstrated how to configure MariaDB Master and MariaDB MaxScale for Data Streaming. In this post, I will review how to use the Change Data Capture (CDC) protocol to request streaming data from MariaDB MaxScale.

CDC Protocol

CDC is a new protocol introduced in MariaDB MaxScale 2.0 that allows clients to authenticate and register for CDC events. It is used in conjunction with the AVRO router, which converts binlog events into AVRO records. Clients use the CDC protocol to request and receive change data events through MariaDB MaxScale in either AVRO or JSON format.

Protocol Phases

There are three phases of the protocol.

1. Connection and Authentication

The client connects to the MariaDB MaxScale CDC protocol listener.

Authentication starts when the client sends the hexadecimal representation of the username concatenated with a colon (:), followed by the SHA1 of the password in hexadecimal form.

Example: For the user foobar with a password of foopasswd, the client should send the following hexadecimal string:

foobar:SHA1(foopasswd) ->  666f6f6261723a3137336363643535253331

Then MaxScale returns OK on success and ERR on failure.
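As a minimal sketch in Python 3 (the listener address 127.0.0.1:4001 is an assumption; the full example client at the end of this post does the same thing), the authentication message can be built and sent like this:

#!/usr/bin/env python3
# Minimal CDC authentication sketch; 127.0.0.1:4001 is an assumed listener address.
import binascii
import hashlib
import socket

user = "foobar"
password = "foopasswd"

# Hex-encode "user:" and append the hex digest of SHA1(password).
auth = binascii.b2a_hex((user + ":").encode())
auth += hashlib.sha1(password.encode("utf-8")).hexdigest().encode()

sock = socket.create_connection(("127.0.0.1", 4001))
sock.send(auth)
print(sock.recv(1024))  # expect OK on success, ERR on failure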

2. Registration

The client sends a UUID and specifies the output format (AVRO or JSON) for data retrieval.

Example: The following message registers the client to receive change data events in AVRO format:

REGISTER UUID=11ec2300-2e23-11e6-8308-0002a5d5c51b, TYPE=AVRO

Then MaxScale returns OK on success and ERR on failure.
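Continuing the authentication sketch above, registration is a single command on the same socket (the UUID here is illustrative):

# Register on the already-authenticated socket; UUID is illustrative.
sock.send(b"REGISTER UUID=11ec2300-2e23-11e6-8308-0002a5d5c51b, TYPE=JSON")
print(sock.recv(1024))  # expect OK on success, ERR on failure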

3. Change Data Capture Commands

Request Change Data Events

REQUEST-DATA DATABASE.TABLE[.VERSION] [GTID]

This command fetches data from a specified table in a database and returns the output in the requested format (AVRO or JSON). Data records are sent to clients, and if new AVRO versions are found (e.g. mydb.mytable.0000002.avro), the new schema and data are sent as well.

The data will be streamed until the client closes the connection.

Clients should continue reading from the network in order to automatically get new events.

Examples:
REQUEST-DATA db1.table1
REQUEST-DATA db1.table1.000003
REQUEST-DATA db2.table4 0-11-345

Example output in JSON; note the AVRO schema, followed by the database records ("id" is the only column in table1):

{"fields": [{"type": "int", "name": "domain"}, {"type": "int", "name": "server_id"}, {"type": "int", "name": "sequence"}, {"type": "int", "name": "event_number"}, {"type": "int", "name": "timestamp"}, {"type": {"type": "enum", "symbols": ["insert", "update_before", "update_after", "delete"], "name": "EVENT_TYPES"}, "name": "event_type"}, {"type": "int", "name": "id"}], "type": "record", "namespace": "MariaDB MaxScaleChangeDataSchema.avro", "name": "ChangeRecord"}
{"event_number": 1, "event_type": "insert", "domain": 0, "sequence": 225, "server_id": 1, "timestamp": 1463037556, "id": 1111}
{"event_number": 4, "event_type": "insert", "domain": 0, "sequence": 225, "server_id": 1, "timestamp": 1463037563, "id": 4444}

Note: When a table has been altered, the AVRO converter generates a new schema and a new AVRO file is created with that schema.
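Continuing the same sketch, a registered client then requests the stream and simply keeps reading until the connection is closed (the table name is illustrative):

# Request the change stream for db1.table1 and consume it until EOF.
sock.send(b"REQUEST-DATA db1.table1")
while True:
    buf = sock.recv(4096)
    if not buf:
        break  # server closed the connection
    print(buf.decode("utf-8", errors="replace"), end="")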

Request Change Data Event Statistics

QUERY-LAST-TRANSACTION

Returns a JSON object with the last GTID, the commit timestamp and the affected tables.

Example output:

{"GTID": "0-1-178", "events": 2, "timestamp": 1462290084, "tables": ["db1.tb1", "db2.tb2"]}

The last GTID can then be used later in a REQUEST-DATA query.
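For example (the GTID and output shown are illustrative), a client can fetch the last transaction and resume streaming from that position:

QUERY-LAST-TRANSACTION
{"GTID": "0-1-178", "events": 2, "timestamp": 1462290084, "tables": ["db1.tb1"]}
REQUEST-DATA db1.tb1 0-1-178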

QUERY-TRANSACTION 0-14-1245

Returns a JSON summary for the specified GTID, with the number of events, the commit timestamp and the affected tables.

Example output:

{"GTID": "0-14-1245", "events": 23, "timestamp": 1462291124, "tables": ["db1.tb3"]}

 

How to Quickly Connect to the MaxScale CDC Protocol

The avrorouter comes with an example client program, cdc, written in Python 3. This client can connect to MaxScale configured with the CDC protocol and the avrorouter.

Before using this client, you will need to install the Python 3 interpreter and add users to the service with the cdc_users script. For more details about user creation, please refer to the CDC Protocol and CDC Users documentation.

The output of cdc --help provides a full list of supported options and a short usage description of the client program.

-bash-4.1$ python3 cdc --help
usage: cdc [--help] [-h HOST] [-P PORT] [-u USER] [-p PASSWORD]
           [-f {AVRO,JSON}] [-t READ_TIMEOUT]
           FILE [GTID]

CDC Binary consumer

positional arguments:
  FILE                  Requested table name in the following format:
                        DATABASE.TABLE[.VERSION]
  GTID                  Requested GTID position

optional arguments:
  --help                show this help message and exit
  -h HOST, --host HOST  Network address where the connection is made
  -P PORT, --port PORT  Port where the connection is made
  -u USER, --user USER  Username used when connecting
  -p PASSWORD, --password PASSWORD
                        Password used when connecting
  -f {AVRO,JSON}, --format {AVRO,JSON}
                        Data transmission format
  -t READ_TIMEOUT, --timeout READ_TIMEOUT
                        Read timeout

Here is an example of how to query data in JSON using the cdc Python script. It queries the table test.mytable for all change records.

# cdc --user=myuser --password=mypasswd --host=127.0.0.1 --port=4001 test.mytable

If the AVRO format is requested instead (-f AVRO), the binary output will look like this:

Objavro.codenullavro.schema?{"type":"record","name":"ChangeRecord","namespace":"MariaDB MaxScaleChangeDataSchema.avro","fields":[{"name":"domain","type":{"type":"int"}},{"name":"server_id","type":{"type":"int"}},{"name":"sequence","type":{"type":"int"}},{"name":"event_number","type":{"type":"int"}},{"name":"timestamp","type":{"type":"int"}},{"name":"event_type","type":{"type":"enum","name":"EVENT_TYPES","symbols":["insert","update_before","update_after","delete"]}},{"name":"id","type":{"type":"int"}}]}??7)??.?v?r???????
??7)??.?v?r???????
???7)??.?v?r???????
???7)??.?v?r????●?
??䗏?

Note: Each client can request data from only one AVRO file at a time (a file contains a single schema).

In order to get all the events in a transaction that affects N tables, N connections (one per table) are needed.

The application stays in its read loop and will get additional events as they arrive. All clients are notified of new events.

Assume a transaction affects two tables, tbl1 and tbl2 in db1:

BEGIN;
INSERT into db1.tbl1 (id) VALUES (111);
INSERT into db1.tbl2 (id) VALUES (222);
COMMIT;

The user should start two clients:

# cdc --user=myuser --password=mypasswd --host=127.0.0.1 --port=4001 db1.tbl1
# cdc --user=myuser --password=mypasswd --host=127.0.0.1 --port=4001 db1.tbl2

 

Example Client (Python code):

#!/usr/bin/env python3

import time
import json
import re
import sys
import socket
import hashlib
import argparse
import subprocess
import selectors
import binascii
import os

# Read data as JSON
def read_json():
    decoder = json.JSONDecoder()
    rbuf = bytes()
    ep = selectors.EpollSelector()
    ep.register(sock, selectors.EVENT_READ)

    while True:
        pollrc = ep.select(timeout=int(opts.read_timeout) if int(opts.read_timeout) > 0 else None)
        try:
            buf = sock.recv(4096, socket.MSG_DONTWAIT)
            rbuf += buf
            while True:
                rbuf = rbuf.lstrip()
                data = decoder.raw_decode(rbuf.decode('ascii'))
                rbuf = rbuf[data[1]:]
                print(json.dumps(data[0]))
        except ValueError as err:
            # Incomplete JSON document in the buffer; wait for more data
            sys.stdout.flush()
            pass
        except Exception:
            break

# Read data as Avro
def read_avro():
    ep = selectors.EpollSelector()
    ep.register(sock, selectors.EVENT_READ)

    while True:
        pollrc = ep.select(timeout=int(opts.read_timeout) if int(opts.read_timeout) > 0 else None)
        try:
            buf = sock.recv(4096, socket.MSG_DONTWAIT)
            os.write(sys.stdout.fileno(), buf)
            sys.stdout.flush()
        except Exception:
            break

parser = argparse.ArgumentParser(description="CDC Binary consumer", conflict_handler="resolve")
parser.add_argument("-h", "--host", dest="host", help="Network address where the connection is made", default="localhost")
parser.add_argument("-P", "--port", dest="port", help="Port where the connection is made", default="4001")
parser.add_argument("-u", "--user", dest="user", help="Username used when connecting", default="")
parser.add_argument("-p", "--password", dest="password", help="Password used when connecting", default="")
parser.add_argument("-f", "--format", dest="format", help="Data transmission format", default="JSON", choices=["AVRO", "JSON"])
parser.add_argument("-t", "--timeout", dest="read_timeout", help="Read timeout", default=0)
parser.add_argument("FILE", help="Requested table name in the following format: DATABASE.TABLE[.VERSION]")
parser.add_argument("GTID", help="Requested GTID position", default=None, nargs='?')

opts = parser.parse_args(sys.argv[1:])

sock = socket.create_connection([opts.host, opts.port])

# Authentication: hex("user:") followed by the hex digest of SHA1(password)
auth_string = binascii.b2a_hex((opts.user + ":").encode())
auth_string += bytes(hashlib.sha1(opts.password.encode("utf_8")).hexdigest().encode())
sock.send(auth_string)

# Discard the response
response = str(sock.recv(1024)).encode('utf_8')

# Register as a client and request data in the chosen format
sock.send(bytes(("REGISTER UUID=XXX-YYY_YYY, TYPE=" + opts.format).encode()))

# Discard the response again
response = str(sock.recv(1024)).encode('utf_8')

# Request a data stream
sock.send(bytes(("REQUEST-DATA " + opts.FILE + (" " + opts.GTID if opts.GTID else "")).encode()))

if opts.format == "JSON":
    read_json()
elif opts.format == "AVRO":
    read_avro()

Additional example applications can be found here:

https://github.com/mariadb-corporation/MaxScale/tree/2.0/server/modules/routing/avro

Up next in this blog series, Markus Makela will show you how to use the CDC API in a Kafka Producer.

Relevant Links

Data Streaming with MariaDB MaxScale

Configuring MariaDB Master and MariaDB MaxScale for Data Streaming Service

Tags: Big Data, Developer, Howto, MaxScale, MySQL
About the Author

Massimiliano is a Senior Software Solutions Engineer working mainly on MaxScale. Massimiliano has worked for almost 15 years at web companies in the roles of Technical Leader and Software Engineer. Prior to joining MariaDB he worked at Banzai Group and Matrix S.p.A, big players in the Italian web industry. He is still a guy who is very fond of the terminal window on his Mac, and his skills include Apache modules and PHP extensions.

HA with MySQL Group Replication and ProxySQL

After having played with MySQL Group Replication and HAProxy, it's time to show you how easy it is to set up MySQL HA with ProxySQL.
ProxySQL is a high performance open source proxy for MySQL. It has many features that you are invited to discover on proxysql.com and on GitHub.
If you remember, I wrote in my last post that it is recommended to use Group Replication with only one WRITER group member. As it is the preferred architecture, I will show you how to achieve this using ProxySQL. With ProxySQL, you don’t need to have two different interfaces to split reads and writes.
In fact, when you use ProxySQL, you have many more options for routing your queries. In production, the smart DBA will identify the queries that are better moved away from the writer member. ProxySQL lets you route queries using regexps (which is what I use here for demo purposes), but also using query digests, which is what I would recommend in production.
So, in summary, to define the best routing you should configure ProxySQL to send all traffic to only one member, then check stats_mysql_query_digest to determine which read statements are expensive, and create rules that send those queries, matched by their digest, to the READ hostgroup.
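As a sketch of that approach (the digest value below is illustrative; take real values from the stats table on your own system), you would look up the heaviest SELECT statements and then route them by digest:

Admin> SELECT digest, digest_text, count_star, sum_time FROM stats_mysql_query_digest ORDER BY sum_time DESC LIMIT 5;
Admin> INSERT INTO mysql_query_rules (active, digest, destination_hostgroup, apply)
VALUES (1, '0xD69C6B36F32D2EAE', 2, 1);
Admin> LOAD MYSQL QUERY RULES TO RUNTIME;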
Let's go back to our architecture for today. This is what it looks like:

 
So we have two hostgroups:

one (in red in the diagram) for writes and for reads not matching our rule(s), where only one node is "active"; if there is a problem, ProxySQL routes to another one.
one (in green) for all the queries matching our rule(s), where all members are active and receive requests.

Let me show you how to configure those groups in ProxySQL:
[root@mysql1 ~]# mysql -u admin -padmin -h 127.0.0.1 -P 6032
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 0
Server version: 5.5.30 (ProxySQL Admin Module)

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql1 mysql> \R Admin>
PROMPT set to 'Admin> '

Admin> INSERT INTO mysql_servers(hostgroup_id,hostname,port) VALUES (1,'192.168.90.2',3306);
Admin> INSERT INTO mysql_servers(hostgroup_id,hostname,port) VALUES (1,'192.168.90.3',3306);
Admin> INSERT INTO mysql_servers(hostgroup_id,hostname,port) VALUES (1,'192.168.90.4',3306);

Admin> INSERT INTO mysql_servers(hostgroup_id,hostname,port) VALUES (2,'192.168.90.2',3306);
Admin> INSERT INTO mysql_servers(hostgroup_id,hostname,port) VALUES (2,'192.168.90.3',3306);
Admin> INSERT INTO mysql_servers(hostgroup_id,hostname,port) VALUES (2,'192.168.90.4',3306);

Here we can see that both hostgroups use the same group members, which are all part of our "cluster".
Admin> select * from mysql_servers;
+--------------+--------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+
| hostgroup_id | hostname     | port | status       | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms |
+--------------+--------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+
| 1            | 192.168.90.2 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |
| 1            | 192.168.90.3 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |
| 1            | 192.168.90.4 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |
| 2            | 192.168.90.2 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |
| 2            | 192.168.90.3 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |
| 2            | 192.168.90.4 | 3306 | ONLINE       | 1      | 0           | 1000            | 0                   | 0       | 0              |
+--------------+--------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+

Easy, isn't it?
Now we will add a scheduler that runs a script to check our MySQL InnoDB Cluster (Group Replication). The script is available on my GitHub, and you can put it in /var/lib/proxysql/.
Admin> INSERT INTO scheduler(id,interval_ms,filename,arg1,arg2,arg3,arg4,arg5)
VALUES (1,'10000','/var/lib/proxysql/proxysql_groupreplication_checker.sh','1','2','1','0','/var/lib/proxysql/proxysql_groupreplication_checker.log');

Let’s save and load the scheduler:
Admin> SAVE SCHEDULER TO DISK;

Admin> LOAD SCHEDULER TO RUNTIME;

What are those values in arg1 to arg5?

arg1 is the hostgroup_id for write
arg2 is the hostgroup_id for read
arg3 is the number of writers we want active at the same time
arg4 indicates whether the member handling writes should also be a candidate for reads
arg5 is the log file

So now we can see that the script modified the status of the members:
Admin> select * from mysql_servers;
+--------------+--------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+
| hostgroup_id | hostname     | port | status       | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms |
+--------------+--------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+
| 1            | 192.168.90.2 | 3306 | ONLINE       | 1      | 0           | 1000            | 5                   | 0       | 0              |
| 1            | 192.168.90.3 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 5                   | 0       | 0              |
| 1            | 192.168.90.4 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 5                   | 0       | 0              |
| 2            | 192.168.90.2 | 3306 | OFFLINE_SOFT | 1      | 0           | 1000            | 10                  | 0       | 0              |
| 2            | 192.168.90.3 | 3306 | ONLINE       | 1      | 0           | 1000            | 10                  | 0       | 0              |
| 2            | 192.168.90.4 | 3306 | ONLINE       | 1      | 0           | 1000            | 10                  | 0       | 0              |
+--------------+--------------+------+--------------+--------+-------------+-----------------+---------------------+---------+----------------+
It's time to add some routing rules to be able to use those hostgroups. If you don't, only the first hostgroup will be used.
Admin> INSERT INTO mysql_query_rules (active, match_pattern, destination_hostgroup, apply)
VALUES (1,'^SELECT',2,1);

This routes all queries starting with SELECT to hostgroup 2 (this is not a recommendation, of course, as it would also send all SELECT ... FOR UPDATE statements to hostgroup 2, for example).
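One possible refinement (a sketch, not part of this demo setup) is an extra rule that keeps locking reads on the writer hostgroup; give it a rule_id lower than that of the generic ^SELECT rule, since ProxySQL evaluates rules in rule_id order:

Admin> INSERT INTO mysql_query_rules (rule_id, active, match_pattern, destination_hostgroup, apply)
VALUES (1, 1, '^SELECT.*FOR UPDATE$', 1, 1);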
Admin> LOAD MYSQL QUERY RULES TO RUNTIME;
You can now save the setup to disk, as it works as expected and was easy and quick to set up:
Admin> SAVE MYSQL SERVERS TO DISK;

Admin> SAVE MYSQL QUERY RULES TO DISK;
Here is a video demo:

Database Challenges and Innovations. Interview with Jim Starkey

"Isn't it ironic that in 2016 a non-skilled user can find a web page from Google's untold petabytes of data in millisecond time, but a highly trained SQL expert can't do the same thing in a relational database one billionth the size?" – Jim Starkey
I have interviewed Jim Starkey. A database legend, Jim’s career as an entrepreneur, architect, and innovator spans more than three decades of database history.
RVZ
Q1. In your opinion, what are the most significant advances in databases in the last few years?
Jim Starkey: I’d have to say the “atom programming model” where a database is layered on a substrate of peer-to-peer replicating distributed objects rather than disk files. The atom programming model enables scalability, redundancy, high availability, and distribution not available in traditional, disk-based database architectures.
Q2. What was your original motivation to invent the NuoDB Emergent Architecture?
Jim Starkey: It all grew out of a long Sunday morning shower. I knew that the performance limits of single-computer database systems were in sight, so distributing the load was the only possible solution, but existing distributed systems required that a new node copy a complete database or partition before it could do useful work. I started thinking of ways to attack this problem and came up with the idea of peer to peer replicating distributed objects that could be serialized for network delivery and persisted to disk. It was a pretty neat idea. I came out much later with the core architecture nearly complete and very wrinkled (we have an awesome domestic hot water system).
Q3. In your career as an entrepreneur and architect what was the most significant innovation you did?
Jim Starkey: Oh, clearly multi-generational concurrency control (MVCC). The problem I was trying to solve was allowing ad hoc access to a production database for a 4GL product I was working on at the time, but the ramifications go far beyond that. MVCC is the core technology that makes true distributed database systems possible. Transaction serialization is like Newtonian physics – all observers share a single universal reference frame. MVCC is like special relativity, where each observer views the universe from his or her reference frame. The views appear different but are, in fact, consistent.
Q4. Proprietary vs. open source software: what are the pros and cons?
Jim Starkey: It’s complicated. I’ve had feet in both camps for 15 years. But let’s draw a distinction between open source and open development. Open development – where anyone can contribute – is pretty good at delivering implementations of established technologies, but it’s very difficult to push the state of the art in that environment. Innovation, in my experience, requires focus, vision, and consistency that are hard to maintain in open development. If you have a controlled development environment, the question of open source versus propriety is tactics, not philosophy. Yes, there’s an argument that having the source available gives users guarantees they don’t get from proprietary software, but with something as complicated as a database, most users aren’t going to try to master the sources. But having source available lowers the perceived risk of new technologies, which is a big plus.
Q5. You led the Falcon project – a transactional storage engine for the MySQL server – through the acquisition of MySQL by Sun Microsystems. What impact did this project have in the database space?
Jim Starkey: In all honesty, I’d have to say that Falcon’s most important contribution was its competition with InnoDB. In the end, that competition made InnoDB three times faster. Falcon, multi-version in memory using the disk for backfill, was interesting, but no matter how we cut it, it was limited by the performance of the machine it ran on. It was fast, but no single node database can be fast enough.
Q6. What are the most challenging issues in databases right now?
Jim Starkey: I think it's time to step back and reexamine the assumptions that have accreted around database technology – data model, API, access language, data semantics, and implementation architectures. The "relational model", for example, is based on what Codd called relations and we call tables, but otherwise has nothing to do with his mathematical model. That model, based on set theory, requires automatic duplicate elimination. To the best of my knowledge, nobody ever implemented Codd's model, but we still have tables which bear a scary resemblance to decks of punch cards. Are they necessary? Or do they just get in the way?
Isn't it ironic that in 2016 a non-skilled user can find a web page from Google's untold petabytes of data in millisecond time, but a highly trained SQL expert can't do the same thing in a relational database one billionth the size? SQL has no provision for flexible text search, no provision for multi-column, multi-table search, and no mechanics in the APIs to handle the results if it could do them. And this is just one of a dozen problems that SQL databases can't handle. It was a really good technical fit for the computers, memory, and disks of the 1980s, but is it the right answer now?
Q7. How do you see the database market evolving?
Jim Starkey: I’m afraid my crystal ball isn’t that good. Blobs, another of my creations, spread throughout the industry in two years. MVCC took 25 years to become ubiquitous. I have a good idea of where I think it should go, but little expectation of how or when it will.
Qx. Anything else you wish to add?
Jim Starkey: Let me say a few things about my current project, AmorphousDB, an implementation of the Amorphous Data Model (meaning, no data model at all). AmorphousDB is my modest effort to question everything database.
The best way to think about Amorphous is to envision a relational database and mentally erase the boxes around the tables so all records free float in the same space – including data and metadata. Then, if you’re uncomfortable, add back a “record type” attribute and associated syntactic sugar, so table-type semantics are available, but optional. Then abandon punch card data semantics and view all data as abstract and subject to search. Eliminate the fourteen different types of numbers and strings, leaving simply numbers and strings, but add useful types like URL’s, email addresses, and money. Index everything unless told not to. Finally, imagine an API that fits on a single sheet of paper (OK, 9 point font, both sides) and an implementation that can span hundreds of nodes. That’s AmorphousDB.
————
Jim Starkey invented the NuoDB Emergent Architecture, and developed the initial implementation of the product. He founded NuoDB [formerly NimbusDB] in 2008, and retired at the end of 2012, shortly before the NuoDB product launch.
Jim's career as an entrepreneur, architect, and innovator spans more than three decades of database history from the Datacomputer project on the fledgling ARPAnet to his most recent startup, NuoDB, Inc. Through the period, he has been responsible for many database innovations from the date data type to the BLOB to multi-version concurrency control (MVCC). Starkey has extensive experience in proprietary and open source software.
Starkey joined Digital Equipment Corporation in 1975, where he created the Datatrieve family of products, the DEC Standard Relational Interface architecture, and the first of the Rdb products, Rdb/ELN. Starkey was also software architect for DEC’s database machine group.
Leaving DEC in 1984, Starkey founded Interbase Software to develop relational database software for the engineering workstation market. Interbase was a technical leader in the database industry producing the first commercial implementations of heterogeneous networking, blobs, triggers, two phase commit, database events, etc. Ashton-Tate acquired Interbase Software in 1991, and was, in turn, acquired by Borland International a few months later. The Interbase database engine was released open source by Borland in 2000 and became the basis for the Firebird open source database project.
In 2000, Starkey founded Netfrastructure, Inc., to build a unified platform for distributable, high quality Web applications. The Netfrastructure platform included a relational database engine, an integrated search engine, an integrated Java virtual machine, and a high performance page generator.
MySQL AB acquired Netfrastructure, Inc. in 2006 to be the kernel of a wholly owned transactional storage engine for the MySQL server, later known as Falcon. Starkey led the Falcon project through the acquisition of MySQL by Sun Microsystems.
Jim has a degree in Mathematics from the University of Wisconsin.
For amusement, Jim codes on weekends, while sailing, but not while flying his plane.
——————
Resources
– NuoDB Emergent Architecture (.PDF)
– On Database Resilience. Interview with Seth Proctor, ODBMS Industry Watch, March 17, 2015
Related Posts
– Challenges and Opportunities of The Internet of Things. Interview with Steve Cellini, ODBMS Industry Watch, October 7, 2015
– Hands-On with NuoDB and Docker, by MJ Michaels, NuoDB, ODBMS.org, October 27, 2015
– How leading Operational DBMSs rank popularity-wise? by Michael Waclawiczek, ODBMS.org, January 27, 2016
– A Glimpse into U-SQL, by Stephen Dillon, Schneider Electric, ODBMS.org, December 7, 2015
– Gartner Magic Quadrant for Operational DBMS 2015
Follow us on Twitter: @odbmsorg

Webinar Thursday, September 1 – MongoDB Security: A Practical Approach

Please join David Murphy as he presents a webinar Thursday, September 1 at 10 am PDT (UTC-7) on MongoDB Security: A Practical Approach. (Date changed*) This webinar will discuss the many features and options available in the MongoDB community to help secure your database environment. First, we will cover how these features work and how to […]

MySQL Sharding with ProxySQL

This article demonstrates how MySQL sharding with ProxySQL works. Recently a colleague of mine asked me to provide a simple example of how ProxySQL performs sharding. In response, I'm writing this short tutorial in the hope it will illustrate ProxySQL's sharding functionality, and help people out there better understand how to use it. ProxySQL is […]

Percona Live Europe Discounted Pricing and Community Dinner!

Get your Percona Live Europe discounted tickets now, and sign up for the community dinner.
The countdown is on for the annual Percona Live Europe Open Source Database Conference! This year the conference takes place in the great city of Amsterdam, October 3-5. This three-day conference will focus on the latest trends, news and best practices in MySQL, MongoDB, PostgreSQL and other open source databases, while tackling subjects such as analytics, architecture and design, security, operations, scalability and performance. Percona Live provides in-depth discussions for your high-availability, IoT, cloud, big data and other changing business needs.
With breakout sessions, tutorial sessions and keynote speakers, there will certainly be no lack of content.
Advanced Rate Registration ENDS September 5, so make sure to register now to secure the best price possible.
As it is a Percona Live Europe conference, there will certainly be no lack of FUN either!!!!
As tradition holds, there will be a Community Dinner. Tuesday night, October 4, Percona Live Diamond Sponsor Booking.com hosts the Community Dinner at their very own headquarters located in historic Rembrandt Square in the heart of the city. After breakout sessions conclude, attendees are picked up right outside of the venue and taken to booking.com’s headquarters by canal boats! This gives all attendees the opportunity to play “tourist” while viewing the beauty of Amsterdam from the water. Attendees are dropped off right next to Booking.com’s office (return trip isn’t included)! The Lightning Talks for this year’s conference will be featured at the dinner.
Come and show your support for the community while enjoying dinner and drinks! The first 50 people registered for the dinner get in the door for €10 (after that, the price goes to €15). Space is limited, so make sure to sign up ASAP!
So don’t forget, register for the conference and sign up for the community dinner before space is gone! See you in Amsterdam!

MariaDB 10.1.17 and MariaDB Galera Cluster 10.0.27 now available

The MariaDB project is pleased to announce the immediate availability of MariaDB 10.1.17 and MariaDB Galera Cluster 10.0.27. See the release notes and changelogs for details on these releases. Download MariaDB 10.1.17 Release Notes Changelog What is MariaDB 10.1? MariaDB APT and YUM Repository Configuration Generator Download MariaDB Galera Cluster 10.0.27 Release Notes Changelog What […]
The post MariaDB 10.1.17 and MariaDB Galera Cluster 10.0.27 now available appeared first on MariaDB.org.

From Stockholm to Paris via Amsterdam: Polyglot Persistence Meetups across Europe

Thanks to everyone who joined us last week in Stockholm at Spotify HQ: it was great to meet fellow Polyglots and discuss the talks by Radovan Zvoncek (Spotify) on Storage at Spotify and Jim Dowling (Scientist & Lecturer) on Polyglot metadata for Hadoop. We'll be sure to schedule the next meetup soon!

Tomorrow, August 31st, we’ll be back in Amsterdam, hosted again by Booking.com in their HQ. Thank you for having us again.
The agenda includes talks by Art van Scheppingen, Severalnines, on 9 DevOps Tips for going into production with your open source databases, as well as by fellow Polyglot member Yegor Andreenko, who will give us an introduction to ClickHouse – an open-source column-oriented database management system that allows users to generate analytical data reports in real time.

If you're in Amsterdam, you can of course still join tomorrow by signing up on the meetup.com page.
And finally for now, we’ll also be in Paris on September 13th for a joint meetup with Percona and the MySQL User Group there.
The agenda here includes talks by Giuseppe Maxia, VMware, on MySQL Operations in Docker; Sveta Smirnova, Percona, on introducing new SQL syntax and improving performance with pre-parse Query Rewrite Plugins; and Vinay Joosery, Severalnines, on Polyglot Persistence for MongoDB, PostgreSQL and MySQL DBAs.

We’ll be meeting at Logmatic.io in 75015 Paris for an evening of interesting talks, discussions and drinks/nibbles. Simply follow this meetup link to sign up.
We are working on further dates as well as we continue this meetup series, so look out for future updates and locations.
See you there!

Tags: meetup, polyglot persistence, MySQL, MongoDB, PostgreSQL, percona, booking.com, spotify
