Author: Dave Stokes

Finding Values with JSON_CONTAINS

There was an interesting but hard-to-read post on StackOverflow about how to insert, select, and delete data from a MySQL JSON data type column.  The first line of the writer's problem is a little confusing: 'In order to record user mac_address and count mac_address to restrict user login's pc or notebook to control user available max for example (it's work)'. But the examples revealed more about what was desired.
The idea was to track the MAC addresses used by various users, and the author of the question was wondering how to update a JSON array of values with JSON_INSERT.  JSON_INSERT is for inserting new key/value pairs, and the better choice here would be JSON_ARRAY_APPEND or JSON_ARRAY_INSERT.
But what caught my eye was the second question: 'Select sql command for json column ? could be example? If I want to check whether mac value exists 'c84wr8492eda''.

Well, here comes a shameless plug for my book MySQL and JSON – A Practical Programming Guide, as it details how to do this sort of thing.  What is desired is a certain value (c84wr8492eda), and we can find that easily enough.  We know the key that needs to be searched (mac) and the desired MAC address.
mysql> select json_contains(auth_list, json_quote('c84wr8492eda'), '$.mac') from users;
+--------------------------------------------------------------+
| json_contains(auth_list,json_quote('c84wr8492eda'),'$.mac')  |
+--------------------------------------------------------------+
|                                                            1 |
+--------------------------------------------------------------+

A better code snippet would be SELECT id FROM users WHERE JSON_CONTAINS(auth_list, JSON_QUOTE('c84wr8492eda'), '$.mac') = 1; as you will probably be acting on the id field of the row with the matching MAC address.
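To see what that predicate is actually checking, here is a client-side sketch in Python. It is illustrative only; the document shape (a mac key holding an array of MAC addresses) is my assumption about the poster's data, not something spelled out in the question.

```python
import json

# Illustrative sketch of what JSON_CONTAINS(auth_list, JSON_QUOTE(...), '$.mac')
# tests: is the quoted value present under the mac key? The document shape
# (a "mac" key holding an array of addresses) is assumed, not from the post.
def contains_mac(auth_list_json, mac):
    macs = json.loads(auth_list_json).get("mac", [])
    if not isinstance(macs, list):   # a lone scalar value also counts
        macs = [macs]
    return 1 if mac in macs else 0   # JSON_CONTAINS returns 1 or 0

row = '{"mac": ["c84wr8492eda", "aabbccddeeff"]}'
print(contains_mac(row, "c84wr8492eda"))  # 1
```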

You can find answers to problems like this in my handy book, available from many book sellers.

The third question, 'Delete sql command for json column ? Could be example? if I want to delete a item where mac value is 'c84wr8492eda'', was also unclear.  Delete the entire record, or just the MAC address from the JSON column?  Ah, the joys of StackOverflow.

DELETE FROM users WHERE JSON_CONTAINS(auth_list, JSON_QUOTE('c84wr8492eda'), '$.mac') = 1; would remove the entire row.  But what about pruning the one item out of the array?  Well, there is no JSON_ARRAY_REMOVE_ONE_ITEM function.  I would want to get the value of auth_list into a string, remove the desired MAC address, and then use JSON_SET to rewrite the column.  But if you have other options, please let me know!
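As a sketch of that pull-prune-rewrite approach, here is the client-side half in Python. The mac-array document shape is my assumption, and in practice the returned string is what you would hand back to the server in an UPDATE or JSON_SET call.

```python
import json

# Sketch of the approach described above: pull the auth_list value out as a
# string, prune the unwanted MAC address, and produce the string you would
# write back with JSON_SET or an UPDATE. The document shape is assumed.
def prune_mac(auth_list_json, mac):
    doc = json.loads(auth_list_json)
    doc["mac"] = [m for m in doc.get("mac", []) if m != mac]
    return json.dumps(doc)

row = '{"mac": ["c84wr8492eda", "aabbccddeeff"]}'
print(prune_mac(row, "c84wr8492eda"))  # {"mac": ["aabbccddeeff"]}
```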

Not a Fan of Redhat RPMs Today or Why No Follow Up for RH/Centos/Fedora from Last Blog

I received a lot of good feedback on Building the PHP MySQL XDevAPI PECL Extension on MySQL 8.0.11 and PHP 7.2 for the MySQL Document Store, including a few folks asking if I could document what it takes to get the MySQL X DevAPI working with an RPM-based Linux distro. Well, I'd really like to. But I can't.

Redhat Linux 4

I still remember getting my copy of Redhat Linux 4.0 (not RHEL — no enterprise thoughts in those days).  It was January 1997, and I installed it the week before Redhat 4.1 came out.  I thought that RPMs were much better than the old 'unzip the tar file; ./configure; make install' circus. I thought Redhat was pretty cool. Heck, I even became a RHCE.

Then I found the Debian variants easier to work with and more up to date.  My not-so-humble opinion is that Ubuntu is the best all-around Linux distro. But there are a lot of RPM-based systems out there, and I need to be able to show how to get the MySQL X DevAPI working on them.  But it ain't easy.

Step 1 Install CentOS 7

I had not installed CentOS in some time, and it installed fairly nicely, or about as nicely as Ubuntu.  So no problem there.

Step 2 What is that Deb package Called in RPM speak?

Here is where the wagon goes off the rails. To get the developer tools you must run sudo yum -y groupinstall 'Development Tools'.  Then you get to download the OpenSSL tarball, 'cause there is not an RPM for it on CentOS 7 (considering its popularity, I was gobsmacked to discover this lacking). Next was loading the protobuf-devel package.

But when I try to run pecl install mysql_xdevapi, I run into a compiler error.  Well, it is Friday afternoon, and my frustration/inexperience with recent RPM software is telling me to call it a week.

I will try again later.  If you are more up to speed on RPMs and want to provide guidance for me please do so.  If not have a good weekend!

MongoDB versus MySQL Document Store Command Comparisons II

Last time I was stumped by the MongoDB $gt: operator.  I wanted to look for restaurants in a certain Manhattan borough OR in a zip code greater than a certain zip code.  Well, I was getting different results between Mongo and MySQL.

To > or Not To >, That Is the Query

Let's say we have three records with the same key but the values 1, 2, and "3". Yup, you got it: two numerics and one string.  I would expect schema-less data to be free flowing, not typed, and pretty much a free-for-all.  Whoops. Bad assumption on my part for Mongo use. I added three JSON documents into Mongo, as can be seen below:

Our three documents with the values of 1, 2, & “3” in Mongo
And the same data into the MySQL Document Store:

Our test data in the MySQL Document Store with the values of 1, 2, and “3”

The Search

Now search for items with a value greater than 1. MySQL handles this by returning two records:

MySQL says there are two documents where the value of item is greater than 1
However Mongo differs:

Mongo says there is only one document with the value of item greater than 1

What Happened?

Mongo's reference manual, under db.collection.find, informs us that 'comparison operators only perform comparisons on documents where the BSON type of the target field matches the type of the query operand', and silly me was trying to compare a numeric to a string.

Now you can enforce data types in a Mongo collection by using schema validation.  But I thought the beauty of schema-less data was that you did not have to normalize the data.

Am I picking nits? Well, I have been pushing data around in computers for close to four decades now and wish I had a nickel (or other small coin of similar value in another currency) for every time I had to 'wash' my data to switch it from Type X to Type Y, or reshuffle dates, or even add/remove thousands separators.  To me schema-less means that "3" and 3 are equal. At least until they are cast to a type.
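The difference is easy to reproduce outside either server. Here is my own Python illustration (not from either manual) of the two comparison policies side by side:

```python
# My own illustration of the two comparison policies: Mongo-style $gt only
# compares values whose type matches the operand's type, while a looser
# MySQL-style comparison casts the string "3" to a number before comparing.
docs = [{"item": 1}, {"item": 2}, {"item": "3"}]

typed_gt = [d for d in docs if isinstance(d["item"], int) and d["item"] > 1]
cast_gt = [d for d in docs if float(d["item"]) > 1]

print(len(typed_gt))  # 1 document, like Mongo
print(len(cast_gt))   # 2 documents, like MySQL
```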
Next Time
I have been getting some very good feedback on this subject and will delve into this more.

More Porting Data from MongoDB to the MySQL Document Store

Last time we looked at moving a JSON data set from MongoDB to the MySQL Document Store.  Let's move another and then see how to investigate this data.  We will use the primer-dataset.json that contains data on restaurants around New York City.

Loading Data

The loading of the JSON data set was covered last time, but here is the gist. The first step is to fire up the MySQL Shell and log in to the server.

Here a new schema is created and then a new collection
We need a new schema for this data, and the example shows one created as nyeats.  Then within that new schema a collection is created with the name restaurants.

Then we switch to Python mode to load the data with a simple program
Switching to Python mode, a simple program reads the data from a file and loads the collection.

What Types of Restaurants?

We can quickly look at all the restaurants, but it may be easier to start out looking at the types of cuisine. And it would be nice to see the numbers of each type. The result set:

[
    {
        "$.cuisine": "Polynesian",
        "count('*')": 1
    },
    {
        "$.cuisine": "Café/Coffee/Tea",
        "count('*')": 2
    },
    {
        "$.cuisine": "Cajun",
        "count('*')": 7
    },
...
    {
        "$.cuisine": "Latin (Cuban, Dominican, Puerto Rican, South & Central American)",
        "count('*')": 850
    },
    {
        "$.cuisine": "Other",
        "count('*')": 1011
    }
]
85 documents in set (0.7823 sec)

The big surprise for me was the 1,011 'Other' restaurants after seeing a rather inclusive list of cuisine styles.

What cuisine types are available and their numbers?
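The grouping behind that result set is a plain group-and-count. A quick Python sketch with made-up sample rows (the real query runs server side over the restaurants collection):

```python
from collections import Counter

# Group-and-count sketch of the cuisine tally above, run over a few made-up
# documents shaped like the primer-dataset records.
restaurants = [
    {"name": "Bayou Joe's", "cuisine": "Cajun"},
    {"name": "Kona Kai", "cuisine": "Polynesian"},
    {"name": "Mama Roux", "cuisine": "Cajun"},
]

counts = Counter(r["cuisine"] for r in restaurants)
for cuisine, count in sorted(counts.items()):
    print(cuisine, count)
```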

Feel Like Red Beans and Rice?

So let's narrow our search down and look for some Cajun food.  But since we are health conscious, we will want to check the health department ratings on the restaurants.

And we can see the names of the restaurants with their latest health department grades.
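That filter-and-grade lookup can be sketched in Python using the grades array of date/grade entries that the primer-dataset documents carry. The sample rows below are made up, but they mirror that shape:

```python
# Filter for Cajun restaurants and pull each one's most recent health
# department grade. Sample documents are made up but mirror the
# primer-dataset shape: a cuisine field plus a grades array of date/grade pairs.
restaurants = [
    {"name": "Bayou Joe's", "cuisine": "Cajun",
     "grades": [{"date": "2014-06-01", "grade": "B"},
                {"date": "2015-01-10", "grade": "A"}]},
    {"name": "Kona Kai", "cuisine": "Polynesian",
     "grades": [{"date": "2015-03-03", "grade": "A"}]},
]

cajun = [(r["name"], max(r["grades"], key=lambda g: g["date"])["grade"])
         for r in restaurants if r["cuisine"] == "Cajun"]
print(cajun)  # the latest grade wins for each Cajun restaurant
```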

Next time we will dig deeper into our NYC restaurants.

Porting Data From MongoDB to MySQL Document Store in TWO Easy Steps

Porting data from MongoDB to the MySQL Document Store is very easy.  The example I will use is a sample data set from the good folks at Mongo named zips.json, which contains a list of US Postal Codes and is available for your downloading pleasure. I copied the file into the Downloads directory on my Ubuntu laptop and then fired up the new MySQL Shell.  After login, I created a new schema creatively named zips with session.createSchema('zips').  We then set the db object to this new schema with the command \use zips.

Creating a new schema named ‘zips’ and then informing the system that I wish to use this new schema as the db object

Now it is time to populate the schema with a collection for holding documents. The collection is named zip and is created with db.createCollection('zip').  The next step is to read the zips.json file into the collection using Python.

We need to create a new collection named zip in the schema we just created and then switch to Python mode to read in the data line by line and store it as a document in the zip collection.
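The line-by-line read described above can be sketched like this. In the Shell's Python mode you would pass db.zip.add as the add function; here a plain list stands in for the collection, and the sample lines are made up in the zips.json shape:

```python
import json

# Sketch of the line-by-line load: each line of zips.json is one JSON
# document. In the MySQL Shell you would feed db.zip.add; here a plain
# list stands in for the collection so the logic can run anywhere.
def load_collection(lines, add):
    """Feed each non-empty JSON line to the collection's add function."""
    count = 0
    for line in lines:
        line = line.strip()
        if line:
            add(json.loads(line))   # db.zip.add(doc) in the Shell
            count += 1
    return count

zip_collection = []
sample = ['{"_id": "01010", "city": "BRIMFIELD", "state": "MA"}',
          '',
          '{"_id": "01011", "city": "CHESTER", "state": "MA"}']
print(load_collection(sample, zip_collection.append))  # 2
```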

You might want to go back and read the wonderful presentation and scripts by Giuseppe Maxia on loading MongoDB data into MySQL, as he originated this very useful bit of code. One thing that helps is that this data set has an _id field that the MySQL Document Store will grab and use as the InnoDB primary key.  I will have to brush up on my Python to extract another value from data sets to use for the _id field. And we can now perform NoSQL searches on the data.

A NoSQL search of the data in the zip collection

Or SQL. First we have to change to SQL mode with \sql and then search for the same record with SELECT * FROM zip WHERE _id='01010';

A SQL search of the data in the zip table

If you have questions about porting data from MongoDB into the MySQL Document Store or the Document Store in general, please drop me a line.

Zero to DBA Hero Track at Southeast Linuxfest

This year the Zero to DBA Hero track at the Southeast Linuxfest expands to a second day.  The event is free to attend, but it helps if you pre-register.  Here is the agenda of what you will see in Charlotte June 8th, 9th, and 10th.
MySQL Talks at SELF – Zero to DBA Hero Track Linode Ballroom


9am 20 years of MySQL, 20 years of PHP, and 10 Years of SELF — What the heck has been going on?
Dave Stokes

10:15 Introducing the MySQL Document Store Charles Bell, PhD

11:30 Performance Analysis and Troubleshooting Methodologies for Databases
Peter Zaitsev

1:30 MySQL Without the SQL — Oh My!
Dave Stokes

4:00 Introducing MySQL InnoDB Cluster Charles Bell, PhD


9am MySQL/Percona Server/MariaDB Server Security Features Overview
Colin Charles

10:15 ZFS and MySQL on Linux, The Sweet Spots
Jervin Real

11:30 The MySQL Ecosystem – Understanding It, Not Running Away From It!!
Colin Charles

1:30 MySQL Troubleshooting and Performance Optimization with PMM
Peter Zaitsev

2:45 MySQL Windowing Functions
Dave Stokes

4:00 Understanding the Licensing of Your Database Stack
Colin Charles


9am Copyright, Open Source, the Free Market, and the History of MySQL
Alexander Rubin
MySQL is again proud to be a platinum sponsor. Please drop by the MySQL booth to talk about MySQL 8, Group Replication, and the MySQL Document Store.

When Your JSON Key is Numeric

There was an interesting question on extracting values from a JSON data type column in a MySQL database.  What caught my eye was that the keys for the key/value pairs were numeric. In particular, the author of the question only wanted values for the key named 74. The sample data was fairly simple:

{ "70" : "Apple", "71" : "Peach", "74" : "Kiwi" }

I thought SELECT JSON_EXTRACT(column, '$.74') FROM table; should work, but it did not. There was a complaint about an invalid path expression. It turns out that you need to make the second argument in the function '$."74"', or SELECT JSON_EXTRACT(column, '$."74"') FROM table;. File this under something to remember for later. 🙂
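The same rule is easy to see client-side: JSON object keys are always strings, even when they look numeric, which is why the path has to quote them. A small Python illustration:

```python
import json

# JSON object keys are strings even when they look numeric, which is why the
# path expression has to quote them: '$."74"' rather than '$.74'.
doc = json.loads('{ "70" : "Apple", "71" : "Peach", "74" : "Kiwi" }')

print(doc["74"])   # Kiwi; the key is the string "74"
print(74 in doc)   # False; there is no integer key 74
```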

MySQL Document Store Document IDs.

Yesterday I was presenting on the MySQL Document Store and was asked if the _id field created by the server as an InnoDB primary key is a UUID.  I knew that it was not a UUID, but I had to hit the documentation to find out what the document ID really is — a very interesting piece of information.

The Details

If you are inserting a document lacking a _id key, the server generates a value. The _id is made up of a unique prefix (4 bytes), a time stamp (8 bytes), and a serial number (16 bytes). The prefix is assigned by InnoDB Cluster to help ensure uniqueness across a cluster. The timestamp is the encoded startup time of the server.  The serial number uses the auto_increment_offset and auto_increment_increment server variables.  From the manual page:
This document ID format ensures that:
The primary key value monotonically increments for inserts originating from a single server instance, although the interval between values is not uniform within a table.
When using multi-primary Group Replication or InnoDB cluster, inserts to the same table from different instances do not have conflicting primary key values; assuming that the instances have the auto_increment_* system variables configured properly.

Properties

Once set, the _id cannot be changed to another value. Inserting your own value overrides the server assignment. And if you attempt to insert a document with the same _id, the server will generate a duplicate primary key error.

The _id values must be always increasing and sequential for optimal InnoDB performance. The server will keep track of these numbers across restarts.

The generated _id values for each table/collection are unique across instances, to avoid primary key conflicts and minimize transaction certification in multi-primary Group Replication or InnoDB cluster environments.

Required?

So, you are adding a document to a collection and you get an ERROR: 5115!  That means, in the following case, that the _id key/value pair is needed:

JS> db.foobar.add(
 -> {
 ->   Data: "This is a test!"
 -> }
 -> )
ERROR: 5115: Document is missing a required field

JS> db.foobar.add(
 -> {
 ->   Data: "This is a test!",
 ->   _id: "first"
 -> }
 -> )
Query OK, 1 item affected (0.0076 sec)
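The generation scheme can be sketched in Python. This is my own illustration of the prefix + startup timestamp + serial recipe described above, not the server's actual code or exact encoding:

```python
import itertools
import time

# Illustration of the _id recipe described above: a per-instance prefix
# (4 bytes), the server startup time (8 bytes), and an incrementing serial
# number (16 bytes), rendered as hex. Values from one instance sort
# monotonically, which is what InnoDB wants in a primary key.
PREFIX = 0x00000001            # assumed per-instance prefix, not a real one
STARTUP = int(time.time())     # stands in for the encoded startup time
_serial = itertools.count(1)

def next_doc_id():
    return "%08x%016x%032x" % (PREFIX, STARTUP, next(_serial))

first, second = next_doc_id(), next_doc_id()
print(first < second)   # True; later IDs always sort after earlier ones
print(len(first))       # 56 hex digits: 8 + 16 + 32
```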

Caching SHA-2 (or 256) Pluggable Authentication for MySQL 8

If you are like me and you spend chilly spring evenings relaxing by the fire, reading the manual for the upcoming MySQL 8 release, you may have seen Caching SHA-2 Pluggable Authentication in the manual. There are now TWO SHA-256 plugins for MySQL 8 for hashing user account passwords, and no, I do not know why the title of the manual page says SHA-2 when it is SHA-256.  We have sha256_password for basic SHA-256 authentication and caching_sha2_password, which adds caching for better performance.

The default plugin, caching_sha2_password, has three features not found in its non-caching brother. The first is, predictably, a cache for faster authentication for repeat customers to the database. Next is an RSA-based password exchange that is independent of the SSL library your executable is linked against. And it supports Unix socket files and shared-memory protocols — so sorry, named pipe fans.

If you have been testing the release candidate and use older clients or an older libmysqlclient, you may have seen Authentication plugin 'caching_sha2_password' is not supported or some other similar message. You need updated clients to work with the updated server.  Old clients used the old MySQL native password plugin, not the new caching_sha2_password, as the default.

When upgrading from 5.7.21 to the 8 GA version, existing accounts are not upgraded.  But if you are starting with a fresh install, you get caching_sha2_password in your mysql.user entry.  I am sure this will catch someone, so please take care. And this goes for new replication servers too!
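The caching idea can be sketched in a few lines of Python. This is purely illustrative: the real plugin's hashing rounds, salting, and cache format are more involved, and the round count here is an arbitrary stand-in.

```python
import hashlib

# Purely illustrative sketch of the caching idea: the first authentication
# pays for an expensive multi-round SHA-256 hash, and the server then caches
# a digest so repeat connections can take a cheap fast path.
def slow_hash(password, salt, rounds=5000):
    digest = (salt + password).encode()
    for _ in range(rounds):
        digest = hashlib.sha256(digest).digest()
    return digest

cache = {}

def authenticate(user, password, salt, stored):
    entry = (user, hashlib.sha256(password.encode()).digest())
    if entry in cache:               # fast path for repeat customers
        return True
    if slow_hash(password, salt) == stored:
        cache[entry] = True          # remember the successful login
        return True
    return False

stored = slow_hash("s3cret", "pepper")
print(authenticate("dave", "s3cret", "pepper", stored))  # True, slow path
print(authenticate("dave", "s3cret", "pepper", stored))  # True, cached
print(authenticate("dave", "wrong", "pepper", stored))   # False
```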
