Month: February 2017

Percona Monitoring and Management (PMM) Graphs Explained: MongoDB MMAPv1

This post is part of the series of Percona’s MongoDB 3.4 bundle release blogs. In this blog post, I hope to cover some areas to watch with Percona Monitoring and Management (PMM) when running MMAPv1. The graph examples from this article are from the MMAPv1 dashboard that will be released for the first time in PMM […]

SQLskills SQL101: Dealing with SQL Server corruption

As Kimberly mentioned last week, SQLskills is embarking on a new initiative to blog about basic topics, which we’re calling SQL101. We’ll all be blogging about things that we often see done incorrectly, technologies used the wrong way, or where there are many misunderstandings that lead to serious problems. If you want to find all of our SQLskills SQL101 […]


Part 3: sysbench, a large server and small database

This is part 3 of my performance report for sysbench and MyRocks. For this test I use a large server (24 cores, 48 threads, fast NVMe SSD). A small server was used for part 1 and part 2.

This test is done with a small and cached database: 8M rows. InnoDB is a great choice for such a workload, and I am not surprised that MyRocks and TokuDB are not great at it. The value of MyRocks and TokuDB is better compression and less write-amplification; that usually requires a larger database, and the benefit grows when the database doesn’t fit in RAM. But early evaluators might use sysbench, so I want to understand how these engines perform in this setup.

tl;dr

The common pattern was that InnoDB was faster than MyRocks and MyRocks was faster than TokuDB. InnoDB in MySQL 5.7.10 is much faster than in MySQL 5.6.35 on tests that are write-heavy or have long range scans, but InnoDB in 5.7 isn’t strictly faster than in 5.6 even at high concurrency. MyRocks suffers on tests with longer range scans but did pretty well for point queries. TokuDB was the slowest for all of the workloads except read-write with --oltp-range-size=10000 and read-only with --oltp-range-size in 100 and 10000. Percona fixed a bug in TokuDB that I reported. Thank you.

It can be interesting to compare this result with the result from the smaller servers I use at home. The smaller server has a slower CPU (Core i3) and a slower SSD. Unfortunately I need to repeat tests on my work servers using the latest version of sysbench.

Details

I used a version of sysbench that is a few months old with an older version of my scripts. For my home servers I upgraded to modern sysbench and I will soon do that at work. I will also upgrade from MySQL 5.7.10 to a more recent version, and that should make a famous MySQL support guru happy.

The test server has 24 cores, 48 threads, 2 sockets and 256GB of RAM. The storage is from multiple NVMe SSDs. I tested 4 engines: myrocks-5635 is MyRocks from FB MySQL merged to upstream MySQL 5.6.35, innodb-5635 is InnoDB from upstream MySQL 5.6.35, innodb-5710 is InnoDB from upstream MySQL 5.7.10 and tokudb-5717 is TokuDB from Percona Server 5.7.17-11. The my.cnf files are here for MyRocks, InnoDB-5.7 and InnoDB-5.6. In all cases the binlog is enabled and fsync is disabled for both the storage engine redo log and the binlog.

Tests are run for 1 to 128 concurrent clients. When describing the results I divide that into low concurrency (<= 8 clients), medium concurrency (16 to 40 clients) and high concurrency (>= 48 clients). The test was run at each concurrency level for 180 seconds for read-heavy tests and 300 seconds for write-heavy tests. The sysbench process is run on the same host as mysqld.

Tests are run in this order:
* prepare: this isn’t a test, it creates and loads the tables. There were 8 tables with 1M rows per table. The sysbench table has one secondary index.
* update-only where secondary index maintenance is required
* update-only where secondary index maintenance is not required
* read-write with --oltp-range-size=100 and then optionally with --oltp-range-size=10000
* read-only with --oltp-range-size in 10, 100, 1000 and 10000. I don’t report results for --oltp-range-size=1000.
* point queries
* insert-only

(A short JDBC sketch of these statement shapes follows the results below.)

Results

For the performance summaries below I use "X >> Y >> Z" to mean X is faster than Y and Y is faster than Z. I did not include graphs, for one or both of these reasons: 1) that is a lot more work and 2) I want some readers to take the time and think about the numbers.
Unfortunately the formatting is not great for a wide table.

Update-only with secondary index maintenance

The numbers in the table are the QPS for 1 to 128 threads.

       1       2       4       8      16      24      32      40      48      64      80      96     128   threads
    2895   10363   20118   34017   38720   42138   44736   44646   44169   43633   43104   42690   42442   myrocks-5635
    4029    8373   15522   24563   32448   37061   40251   42522   44219   44631   45123   45762   45808   innodb-5635
    4254    8792   17020   30524   48443   60623   67052   68773   69232   70556   72134   72696   74222   innodb-5710
    2145    4026    7344   13575   21222   27807   30039   29541   29106   27964   26493   25435   23210   tokudb-5717

Throughput:
* 1 to 8 clients: MyRocks >> InnoDB-5.7 >> InnoDB-5.6 >> TokuDB. I don’t know why MyRocks was lousy for 1 client but OK for 2+.
* 16 to 40 clients: InnoDB-5.7 >> MyRocks >> InnoDB-5.6 >> TokuDB
* 48+ clients: InnoDB-5.7 >> MyRocks, InnoDB-5.6 >> TokuDB

Update-only without secondary index maintenance

The numbers in the table are the QPS for 1 to 128 threads.

       1       2       4       8      16      24      32      40      48      64      80      96     128   threads
    2901   10696   20983   37078   43756   47549   50627   52753   52318   51690   50853   50003   49074   myrocks-5635
    6384   12625   23847   40501   51505   53024   53587   53523   53316   52483   51632   51276   51117   innodb-5635
    5618   11060   21361   38810   58197   69717   74286   75519   75545   76520   77335   78117   79152   innodb-5710
    2260    4470    8206   15322   26552   33024   37739   41461   44358   44415   41899   39626   35341   tokudb-5717

Throughput:
* 1 to 8 clients: InnoDB-5.6 >> InnoDB-5.7 >> MyRocks >> TokuDB. I don’t know why MyRocks was lousy for 1 client but OK for 2+.
* 16 to 40 clients: InnoDB-5.7 >> InnoDB-5.6 >> MyRocks >> TokuDB
* 48+ clients: InnoDB-5.7 >> InnoDB-5.6 >> MyRocks >> TokuDB

Read-write with --oltp-range-size=100

The numbers in the table are the QPS for 1 to 128 threads.

       1       2       4       8      16      24      32      40      48      64      80      96     128   threads
    4798    9955   20426   46361   98537  132010  153726  165477  167280  166549  168513  166807  166722   myrocks-5635
    6391   12591   27534   61157  112516  146970  172971  193107  196981  191683  190707  191579  190615   innodb-5635
    6143   13051   27216   57447  108786  145127  169957  192548  204655  205408  205278  205629  206721   innodb-5710
    5157    9009   17931   42903   76633   95322  108896  114310  114650  113619  112220  108055  103181   tokudb-5717

Throughput:
* 1 to 8 clients: InnoDB-5.7, InnoDB-5.6 >> MyRocks >> TokuDB
* 16 to 40 clients: InnoDB-5.7, InnoDB-5.6 >> MyRocks >> TokuDB
* 48+ clients: InnoDB-5.7 >> InnoDB-5.6 >> MyRocks >> TokuDB

Read-write with --oltp-range-size=10000

The numbers in the table are the QPS for 1 to 128 threads.

       1       2       4       8      16      24      32      40      48      64      80      96     128   threads
     210     412     774    2091    4077    5288    5568    6409    6885    6235    6675    6543    5889   myrocks-5635
     306     612    1216    2403    4713    6439    7398    8098    8416    8397    8393    8364    8247   innodb-5635
     448     888    1758    3462    6699    8770    9724   10302   10482   10448   10330   10382   10360   innodb-5710
     360     722    1409    2713    5218    6932    7529    7851    7973    7954    7856    7822    7675   tokudb-5717

Throughput:
* 1 to 8 clients: InnoDB-5.7 >> TokuDB >> InnoDB-5.6 >> MyRocks
* 16 to 40 clients: InnoDB-5.7 >> TokuDB >> InnoDB-5.6 >> MyRocks
* 48+ clients: InnoDB-5.7 >> InnoDB-5.6 >> TokuDB >> MyRocks

Read-only with --oltp-range-size=10

The numbers in the table are the QPS for 1 to 128 threads.

       1       2       4       8      16      24      32      40      48      64      80      96     128   threads
    5659   12266   25778   61366  128290  175337  204504  230534  246007  262875  269596  272178  274313   myrocks-5635
    6491   13838   30028   72355  144523  194622  230314  265632  298481  300703  302924  303685  305162   innodb-5635
    6052   12700   27143   63307  129096  177797  209601  243368  273429  272255  272563  275147  276691   innodb-5710
    5391   11261   23935   55313  113932  164953  195290  223643  249579  247840  246926  247897  247045   tokudb-5717

Throughput:
* 1 to 8 clients: InnoDB-5.6 >> InnoDB-5.7 >> MyRocks >> TokuDB
* 16 to 40 clients: InnoDB-5.6 >> InnoDB-5.7 >> MyRocks >> TokuDB
* 48+ clients: InnoDB-5.6 >> InnoDB-5.7 >> MyRocks >> TokuDB

Read-only with --oltp-range-size=100

The numbers in the table are the QPS for 1 to 128 threads.

       1       2       4       8      16      24      32      40      48      64      80      96     128   threads
    4031    8429   17417   38579   82542  116461  134208  150897  165211  166491  164694  166504  166625   myrocks-5635
    5375   11273   23474   54099  110142  151311  177268  202868  225283  228292  229257  229828  231068   innodb-5635
    5383   11292   23629   54050  110015  152462  176974  201112  223618  223737  224161  226272  226484   innodb-5710
    4804   10062   20795   47474   98569  138827  159757  180406  198294  198680  198103  198374  196718   tokudb-5717

Throughput:
* 1 to 8 clients: InnoDB-5.6, InnoDB-5.7 >> TokuDB >> MyRocks
* 16 to 40 clients: InnoDB-5.6, InnoDB-5.7 >> TokuDB >> MyRocks
* 48+ clients: InnoDB-5.6, InnoDB-5.7 >> TokuDB >> MyRocks

Read-only with --oltp-range-size=10000

The numbers in the table are the QPS for 1 to 128 threads.

       1       2       4       8      16      24      32      40      48      64      80      96     128   threads
     161     325     634    1238    2438    3385    3807    4093    4300    4261    4239    4214    4135   myrocks-5635
     242     484     961    1900    3729    5099    5857    6439    6748    6663    6562    6589    6500   innodb-5635
     357     708    1407    2788    5354    6984    7689    8095    8246    8222    8145    8138    8071   innodb-5710
     301     606    1203    2365    4583    6057    6582    6885    6992    6908    6862    6802    6772   tokudb-5717

Throughput:
* 1 to 8 clients: InnoDB-5.7 >> TokuDB >> InnoDB-5.6 >> MyRocks
* 16 to 40 clients: InnoDB-5.7 >> TokuDB >> InnoDB-5.6 >> MyRocks
* 48+ clients: InnoDB-5.7 >> TokuDB >> InnoDB-5.6 >> MyRocks

Point-query

The numbers in the table are the QPS for 1 to 128 threads.

       1       2       4       8      16      24      32      40      48      64      80      96     128   threads
    6784   14328   31277   80458  166882  222750  265885  306227  341095  354327  349912  345317  349403   myrocks-5635
    7236   15236   33569   87991  177791  236463  278423  319764  362604  371093  374380  371092  375698   innodb-5635
    6802   14177   30860   78950  163909  222998  265156  307198  345832  354835  362774  366825  367057   innodb-5710
    6249   12951   27608   67805  142064  194320  230720  264171  294814  306298  308177  309542  310391   tokudb-5717

Throughput:
* 1 to 8 clients: InnoDB-5.6 >> MyRocks >> InnoDB-5.7 >> TokuDB
* 16 to 40 clients: InnoDB-5.6 >> MyRocks, InnoDB-5.7 >> TokuDB
* 48+ clients: InnoDB-5.6 >> InnoDB-5.7 >> MyRocks >> TokuDB

Insert-only

The numbers in the table are the QPS for 1 to 128 threads.

       1       2       4       8      16      24      32      40      48      64      80      96     128   threads
    5254   16950   31561   41074   47720   52063   54066   53899   53900   53725   53343   53098   52278   myrocks-5635
    5753   11506   36083   57235   62591   62469   62577   61899   61131   60592   59080   56895   52913   innodb-5635
    5291   15657   33358   55385   79656   90812   97735   99944  100714  101967  103374  104934  106194   innodb-5710
    2975    5754    9033   17695   29039   37341   38410   35859   33792   30053   27362   25028   21275   tokudb-5717

Throughput:
* 1 to 8 clients: InnoDB-5.6 >> InnoDB-5.7 >> MyRocks >> TokuDB
* 16 to 40 clients: InnoDB-5.7 >> InnoDB-5.6 >> MyRocks >> TokuDB
* 48+ clients: InnoDB-5.7 >> InnoDB-5.6 >> MyRocks >> TokuDB
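
To make the workloads above more concrete, here is a minimal JDBC sketch of the statement shapes behind the sysbench OLTP tests: a point query, a range scan whose width is what --oltp-range-size controls, an update that requires secondary index maintenance, and one that does not. It assumes the standard sbtest schema (integer id primary key, indexed integer column k, character columns c and pad) and uses placeholder connection settings; it illustrates the query patterns rather than reproducing the actual sysbench driver.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Illustration only: statement shapes similar to sysbench OLTP.
// Host, schema, credentials and row ids are placeholders, and the
// MySQL JDBC driver must be on the classpath.
public class SysbenchLikeQueries {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://127.0.0.1:3306/test", "sbuser", "sbpass")) {

            // Point query (the point-query test).
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT c FROM sbtest1 WHERE id = ?")) {
                ps.setInt(1, 500000);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) { rs.getString(1); }
                }
            }

            // Range scan: --oltp-range-size sets how many rows the range
            // covers, which is why range size 100 and 10000 behave so
            // differently in the results above.
            int rangeSize = 10000;
            int start = 1;
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT c FROM sbtest1 WHERE id BETWEEN ? AND ?")) {
                ps.setInt(1, start);
                ps.setInt(2, start + rangeSize - 1);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) { rs.getString(1); }
                }
            }

            // Update that requires secondary index maintenance (k is indexed).
            try (PreparedStatement ps = conn.prepareStatement(
                    "UPDATE sbtest1 SET k = k + 1 WHERE id = ?")) {
                ps.setInt(1, 500000);
                ps.executeUpdate();
            }

            // Update that avoids secondary index maintenance (c is not indexed).
            try (PreparedStatement ps = conn.prepareStatement(
                    "UPDATE sbtest1 SET c = ? WHERE id = ?")) {
                ps.setString(1, "updated-value");
                ps.setInt(2, 500000);
                ps.executeUpdate();
            }
        }
    }
}

The growing range size is what drives the MyRocks pattern above: it does well on point queries but falls further behind as --oltp-range-size grows from 100 to 10000.
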

MySQL Connector/J 5.1.41 has been released

Dear MySQL Users,
MySQL Connector/J 5.1.41, a maintenance release of the production 5.1
branch, has been released. Connector/J is the Type-IV pure-Java JDBC
driver for MySQL.
Version 5.1.41 is suitable for use with MySQL server versions 5.5,
5.6, and 5.7.
MySQL Connector Java is available in source and binary form from the
Connector/J download pages at
http://dev.mysql.com/downloads/connector/j/5.1.html
and mirror sites as well as Maven-2 repositories.
MySQL Connector Java (Commercial) is already available for download on the
My Oracle Support (MOS) website. This release will be available on eDelivery
(OSDC) in next month’s upload cycle.
As always, we recommend that you check the “CHANGES” file in the
download archive to be aware of changes in behavior that might affect
your application.
MySQL Connector/J 5.1.41 includes the following general bug fixes and
improvements, also available in more detail on
http://dev.mysql.com/doc/relnotes/connector-j/en/news-5-1-41.html
Changes in MySQL Connector/J 5.1.41 (2017-02-28)
Version 5.1.41 is a maintenance release of the production 5.1
branch. It is suitable for use with MySQL server versions
5.5, 5.6, and 5.7. It supports the Java Database Connectivity
(JDBC) 4.2 API.
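
As a quick orientation for the fixes below, here is a minimal, hedged sketch of how an application typically obtains a connection through Connector/J 5.1.x, passing driver properties (such as the connectionCollation property mentioned in the first fix) on the JDBC URL. The host, schema, credentials and the useSSL setting are placeholders, not recommendations.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ConnectorJExample {
    public static void main(String[] args) throws Exception {
        // Optional with JDBC 4.x drivers (the driver is auto-registered
        // through the service loader), shown here only for clarity.
        Class.forName("com.mysql.jdbc.Driver");

        // Driver properties can be appended to the URL; connectionCollation
        // is the property involved in the first bug fix below.
        String url = "jdbc:mysql://localhost:3306/test"
                   + "?useSSL=false&connectionCollation=ISO-8859-13";

        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT VERSION()")) {
            while (rs.next()) {
                System.out.println("Connected to MySQL " + rs.getString(1));
            }
        }
    }
}

The only requirement is that the Connector/J jar is on the application's classpath.
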
Bugs Fixed
* Connections failed with MySQLSyntaxErrorException:
Unknown character set when
connectionCollation=ISO-8859-13. This was due to faulty
logic in Connector/J’s internal charset mapping, which
has now been fixed. (Bug #25504578)
* When loading classes through some external class loaders,
com.mysql.jdbc.Util threw a NoClassDefFoundError. This
was caused by Class.getPackage() returning null when some
external class loaders were used. This fix replaces those
calls of Class.getPackage() with calls of Class.getName(),
from which the package names are now extracted. (Bug
#25048543, Bug #83052)
* In the manifest for the Connector/J JAR file, the
Import-Package directive specified version numbers for
the javax.net.ssl package. The specification was
unnecessary, and it caused the configuration of an SSL
connection to a MySQL server to fail in an OSGi
environment. The version requirement has now been
removed. (Bug #24942672, Bug #82826)
* In a Fabric setup, when multiple threads needed to have
hashes computed, an ArrayIndexOutOfBoundsException might
be thrown from inside HashShardMapping. This fix prevents
the issue by making
HashShardMapping.getShardIndexForKey() synchronized. (Bug
#24289730, Bug #82203)
* When the configuration property cacheResultSetMetadata
was set to true, a ping query using a PreparedStatement
failed with a NullPointerException. This fix moves the
ping query to an earlier stage of the statement
execution, which prevents the exception. (Bug #23535001,
Bug #81706)
* The setFabricShardTable() method failed to parse
qualified table names (in the format of
database_name.table_name), which caused SQLExceptions to
be thrown. (Bug #23264511, Bug #81108)
* A race condition occurred when a call of
Connection.setNetworkTimeout() was followed closely by a
call of Connection.close(), and a NullPointerException
might result if the connection was closed before the
executor supplied to setNetworkTimeout() was able to set
the timeout, as the executor would run into a null
mysqlConnection object. This fix removed the race
condition. (Bug #21181249, Bug #75615)
* With the connection properties
cacheServerConfiguration=true and
elideSetAutoCommits=true, any new connection to the
server obtained after the first connection was
established had the variable autoCommit set to false,
even if the value of the variable was true on the server.
That was because the value of autoCommit was not properly
initialized when a new connection was established, and
this fix corrects that.
Also, since release 5.1.41, the functionality of the
property elideSetAutoCommits has been disabled due to
Bug #66884. Any value given for the property is now
ignored by Connector/J. (Bug #17756825, Bug #70785)
* When a web application that used Connector/J was shut
down under Tomcat, Tomcat was unable to stop the
AbandonedConnectionCleanupThread started internally by
Connector/J, leading to multiple instances of the thread
when the web application was restarted; or, Tomcat was
able to stop the thread but unable to restart it on
reload of the web application. Different combinations of
Tomcat’s default settings, usage of Tomcat’s
ServletContextListener feature, and locations of the
Connector/J jar could result in the undesired behaviors,
as well as warning messages in the Tomcat error log
saying the thread could not be stopped and that a memory
leak was likely.
The implementation of AbandonedConnectionCleanupThread
has been improved, so that there are now four ways for
developers to deal with the situation:
+ When the default Tomcat configuration is used and
the Connector/J jar is put into a local library
directory, the new built-in application detector in
Connector/J now detects the stopping of the web
application within 5 seconds and kills
AbandonedConnectionCleanupThread. Any unnecessary
warnings about the thread being unstoppable are also
avoided. If the Connector/J jar is put into a global
library directory, the thread is left running until
the JVM is unloaded.
+ When Tomcat’s context is configured with the
attribute clearReferencesStopThreads=”true”, Tomcat
is going to stop all spawned threads when the
application stops unless Connector/J is being shared
with other web applications, in which case
Connector/J is now protected against an
inappropriate stop by Tomcat; the warning about the
non-stoppable thread is still issued into Tomcat’s
error log.
+ When a ServletContextListener is implemented within
each web application that calls
AbandonedConnectionCleanupThread.checkedShutdown()
on context destruction, Connector/J now, again,
skips this operation if the driver is potentially
shared with other applications. No warning about the
thread being unstoppable is issued to Tomcat’s error
log in this case. (A minimal listener sketch appears
after this bug list.)
+ When
AbandonedConnectionCleanupThread.uncheckedShutdown()
is called, the AbandonedConnectionCleanupThread is
closed even if Connector/J is shared with other
applications. However, it may not be possible to
restart the thread afterwards.
(Bug #17035755, Bug #69526)
References: See also: Bug #14570236, Bug #16443387.
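
For the ServletContextListener option described in the Tomcat-related fix above, a minimal sketch could look like the following. The listener class name and the use of the Servlet 3.0 @WebListener annotation are assumptions; AbandonedConnectionCleanupThread.checkedShutdown() is the method named in the release note.

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

import com.mysql.jdbc.AbandonedConnectionCleanupThread;

// On context shutdown, ask Connector/J to stop its cleanup thread.
// Per the fix above, checkedShutdown() skips the shutdown when the
// driver may be shared with other web applications.
@WebListener
public class ConnectorJCleanupListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // Nothing to do on startup.
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        AbandonedConnectionCleanupThread.checkedShutdown();
    }
}

On a container older than Servlet 3.0 the listener would instead be declared with a <listener> element in web.xml.
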
On Behalf of MySQL/ORACLE RE Team
Gipson Pulla

Webinar Thursday March 2, 2017: MongoDB Query Patterns

Join Percona’s Senior Technical Services Engineer Adamo Tonete on Thursday, March 2, 2017, at 11:00 a.m. PST / 2:00 p.m. EST (UTC-8) as he reviews and discusses MongoDB® query patterns. Register Now MongoDB is a fast and simple-to-query schema-free database. It features a smart query optimizer that tries to use the easiest data retrieval method. […]

MySQL Ransomware: Open Source Database Security Part 3

This blog post examines the recent MySQL® ransomware attacks, and what open source database security best practices could have prevented them. Unless you’ve been living under a rock, you know that there has been an uptick in ransomware for MongoDB and Elasticsearch deployments. Recently, we’re seeing the same for MySQL. Let’s look and see if this is MySQL’s […]
