Month: November 2017

Percona Server for MongoDB 3.4.10-2.10 Is Now Available

Percona announces the release of Percona Server for MongoDB 3.4.10-2.10 on November 30, 2017. Download the latest version from the Percona web site or the Percona Software Repositories. Percona Server for MongoDB is an enhanced, open source, fully compatible, highly-scalable, zero-maintenance downtime database supporting the MongoDB v3.4 protocol and drivers. It extends MongoDB with Percona Memory Engine and MongoRocks storage engine, as well as several enterprise-grade features: External […]

Boost Your Pay and Your Career: Become an Oracle Database Administrator

Written by Barbara Assadi from Virtual Earth

Whether you’re a database administrator just beginning your IT career or have been at it for years, you can boost your pay and increase your job prospects by gaining an Oracle certification. Becoming a certified Oracle Database Administrator enhances both your marketability and your salary.

According to IT Career Finder, Oracle certified IT professionals are among the highest-paid employees in the IT industry, and certification offers some impressive benefits for individuals [1]:

  • Oracle certifications authenticate your proficiency with Oracle’s widely respected database technologies.
  • Earning Oracle certifications proves to IT managers that you’re committed to developing your IT career.
  • Oracle certifications are sought-after indicators of credibility in the IT workforce.
  • Oracle certifications distinguish you from co-workers and competing job candidates.
  • Oracle certifications can increase your job security.
  • Oracle offers specialized upgrade training, allowing Oracle certified IT pros to easily update their credentials to the latest generation of Oracle technologies.

There is additional proof that certification contributes to a higher salary. Certification Magazine conducted a salary survey [2] in 2016 in which they asked participants about the benefits of certification. Over 15% responded that certification helped them gain increased earning power. Additionally, survey respondents reported other benefits of certification:

  • Gain qualifications for a future job — 49.4 percent
  • Improve or confirm my qualification for my current job — 48.2 percent
  • Gain greater confidence in my own skills — 40 percent
  • Become eligible for positions of greater responsibility with my current employer — 34.9 percent
  • Gain prestige and recognition among colleagues — 28.7 percent

Another Certification Magazine article reports that certification makes a difference in salary: “According to the U.S. Bureau of Labor Statistics, the average weekly income for a U.S. ‘information’ worker at the end of 2016 was $1,323.96. Among salary survey respondents who hold just one active certification, however, the comparable figure derived from our data is $1,979.74. The comparison isn’t exact, but it does give a sense of the extent to which certification can increase earning power.” [3]

Discover which Oracle Database Certification is right for you.

Related Articles

You Can Build A Career That Lasts

The Value of IT Certification 

___________________________________

[1] https://www.itcareerfinder.com/it-certifications/oracle-certifications.html
[2] http://certmag.com/salary-survey-extra-what-is-the-value-of-your-certification/
[3] http://certmag.com/salary-survey-2016-efficacy-impact-certification/


In-memory linkbench and a fast server: MyRocks, InnoDB and TokuDB

This post explains MySQL performance for Linkbench on a fast server. It uses a low-concurrency workload to measure response time, IO and CPU efficiency. Tests were run for MyRocks, InnoDB and TokuDB. I wrote a similar report a few months ago. The difference here is that I used an updated compiler toolchain, a more recent version of MyRocks, and MySQL 8.0.3. The results didn’t change much from the previous blog post.

tl;dr:
InnoDB from MySQL 5.6 had the best throughput
CPU efficiency is similar for MyRocks and InnoDB. But to be fair, MyRocks uses ~20% more CPU than InnoDB in MySQL 5.6.35
There is a CPU regression from MySQL 5.6 to 5.7 to 8.x. About 30% of throughput is lost on load and transaction rates from 5.6.35 to 8.0.3. I assume most of this is code above the storage engine layer.
InnoDB writes more than 10X as much to storage per transaction as MyRocks. An SSD will last longer with MyRocks.
Uncompressed InnoDB uses ~1.6X more space than uncompressed MyRocks

Configuration

I used my Linkbench repo and helper scripts to run linkbench with maxid1=10M, loaders=1 and requestors=1, so there will be 2 concurrent connections doing the load and 1 connection running transactions after the load finishes. My linkbench repo has a recent commit that changes the Linkbench workload and this test included that commit. The test pattern is 1) load and 2) transactions. The transactions were run in 12 1-hour loops and I share results from the last hour. The test server has 48 HW threads, a fast SSD and 256gb of RAM.

Tests were run for MyRocks, InnoDB from upstream MySQL, InnoDB from FB MySQL and TokuDB. The binlog was enabled but sync on commit was disabled for the binlog and database log. All engines used jemalloc. Mostly accurate my.cnf files are here, but the database cache was made large enough to cache the ~10gb database.
MyRocks was compiled on October 16 with git hash 1d0132. Compression was not used.
Upstream 5.6.35, 5.7.17, 8.0.1, 8.0.2 and 8.0.3 were used with InnoDB. SSL was disabled and 8.x used the same charset/collation as previous releases.
InnoDB from FB MySQL 5.6.35 was compiled on June 16 with git hash 52e058. The results for it aren’t interesting here but will be interesting for IO-bound linkbench.
TokuDB was from Percona Server 5.7.17. Compression was not used.

The performance schema was enabled for upstream InnoDB and TokuDB. It was disabled at compile time for MyRocks and InnoDB from FB MySQL because FB MySQL still has user & table statistics for monitoring.
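For reference, a minimal sketch of how such a run can be driven from the shell, assuming the stock facebook/linkbench launcher and its MySQL properties file (the exact flags and property names in my forked repo may differ):

# Load phase: create and populate the tables with ~10M ids (maxid1).
./bin/linkbench -c config/LinkConfigMysql.properties \
  -Dmaxid1=10000001 -Dloaders=1 -l

# Transaction (request) phase: a single requester thread.
./bin/linkbench -c config/LinkConfigMysql.properties \
  -Dmaxid1=10000001 -Drequesters=1 -r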
Graphs

The first two graphs show the load and transaction rates relative to InnoDB from upstream MySQL 5.6.35. For this test it has the best rates for load and transactions. There is a big drop in throughput for InnoDB from 5.6.35 to 8.0.3 for both the load and transaction tests.
The chart below has the KB written to storage per transaction. The rate for InnoDB is more than 10X the rate for MyRocks. An SSD will last longer with MyRocks. The rate for MyRocks is also much better than TokuDB. The rate here for TokuDB is worse than what I measured in September and I have yet to debug it.
All engines use a similar amount of space after the load, ~15gb. But MyRocks does much better after 12 hours of transactions — InnoDB is ~1.6X larger and TokuDB is ~1.19X larger. The problem for InnoDB is B-Tree fragmentation. The advantage for MyRocks is leveled compaction which limits garbage to ~10% of the database size.

Load Results
All of the data is here. I adjusted iostat metrics for MyRocks because iostat currently counts bytes trimmed as bytes written, which is an issue for RocksDB, but my adjustment is not exact. The table below has a subset of the results.
InnoDB 5.6 has the best insert rate but there is a regression from 5.6.35 to 5.7.17 to 8.0.3. I assume most of that is from code above the storage engine.
Write efficiency (wkb/i) is similar for all engines
CPU efficiency (Mcpu/i) is similar for MyRocks and InnoDB

ips     wkb/i   Mcpu/i  size    wMB/s   cpu     engine

49986   0.80     98     14      40.1     4.9    MyRocks.Oct16
62224   0.98     72     15      61.1     4.5    FbInno.Jun16
63891   1.03     74     16      65.7     4.7    Inno.5635
56383   1.03     85     16      58.3     4.8    Inno.5717
55173   1.04     78     16      57.6     4.3    Inno.801
41815   1.05    103     16      44.0     4.3    Inno.802
43590   1.06    101     16      46.4     4.4    Inno.803
23664   1.34    160     14      31.7     3.8    Toku.5717

legend:
* ips – inserts/second
* wkb/i – iostat KB written per insert
* Mcpu/i – normalized CPU time per insert
* wMB/s – iostat write MB/s, average
* size – database size in GB at test end
* cpu – average value of vmstat us + sy columns

Transaction Results

These are results from the 12th 1-hour loop of the transaction phase. All of the data is here. I adjusted iostat metrics for MyRocks because iostat currently counts bytes trimmed as bytes written, which is an issue for RocksDB, but my adjustment is not exact.

InnoDB 5.6 has the best transaction rate but there is a regression from 5.6.35 to 5.7.17 to 8.0.3. I assume most of that is from code above the storage engine.
Write efficiency (wkb/t) is better for MyRocks. InnoDB writes more than 10X as much to storage per transaction as MyRocks.
CPU efficiency (Mcpu/t) is similar for MyRocks and InnoDB
Response times are similar for MyRocks and InnoDB
Space efficiency is better for MyRocks. InnoDB is about 1.6X larger.

tps   wkb/t  Mcpu/t  size  un   gn   ul   gl    wMB/s  engine

5753  0.44   677     16    0.3  0.1  0.5  0.5    2.5   MyRocks.Oct16
7065  5.11   624     23    0.3  0.1  0.4  0.3   36.1   FbInno.Jun16
7420  5.17   562     26    0.3  0.1  0.4  0.2   38.4   Inno.5635
6616  5.20   628     26    0.3  0.1  0.5  0.3   34.4   Inno.5717
6313  5.16   654     25    0.3  0.1  0.5  0.3   32.6   Inno.801
5978  5.38   682     25    0.3  0.1  0.6  0.3   32.2   Inno.802
6070  5.39   669     25    0.3  0.1  0.6  0.3   32.7   Inno.803
4234  2.92   814     19    0.5  0.2  1    0.6   12.4   Toku.5717

legend:
* tps – transactions/second
* wkb/t – iostat KB written per transaction
* Mcpu/t – normalized CPU time per transaction
* size – database size in GB at test end

* un, gn, ul, gl – 99th percentile response time in milliseconds for UpdateNode, GetNode, UpdateList and GetLinkedList transactions

* wMB/s – iostat write MB/s, average

How to Transfer a MySQL Database Between Two Servers?

Migrating a MySQL database usually requires only a few simple steps, but it can take quite some time, depending on the amount of data you would like to migrate.
The following steps will guide you through how to export the MySQL database from the old server, secure it, copy it to the new server, import it successfully, and make sure the data is there.
Exporting MySQL database to a dump file
Oracle provides a utility named mysqldump which allows you to easily export the database structure and data to an SQL dump file. Use the following command:
mysqldump -u root -p --single-transaction [database name] > [database name].sql
A few notes:

We’re using the --single-transaction flag to avoid a database lock while exporting the data. It allows you to continue updating data in your old database while exporting the dump file. Please note, though, that data updated after the export has started won’t be included in the exported dump file.
Make sure to replace [database name] with your actual database name before running the command.
Make sure to enter your own user credentials instead of root in the command, and make sure the user has permissions to back up the database (see the filled-in example below).
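For example, with a hypothetical database named mydb and a hypothetical user dbadmin:
mysqldump -u dbadmin -p --single-transaction mydb > mydb.sql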

Secure the backup file
In most cases, an organization’s data is its most critical asset. Therefore, we do not want database backups lying around on our servers unprotected, as they can leak by mistake or, even worse, be stolen by hackers.
So, at the first chance you get, compress and encrypt the file and delete the original. To encrypt the file into a compressed file on Linux operating systems, use this command:
zip --encrypt dump.zip db.sql
You will be prompted to enter a password before the compression starts.
Transfer the backup file
Now that we have an encrypted dump file, let’s transfer it over the network to the new server, using SCP:
scp /path/to/source-file user@host:/path/to/destination-folder/
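For example, with a hypothetical host and paths:
scp /backups/dump.zip dbadmin@new-server.example.com:/backups/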
Import MySQL dump to new server
Now that we have the backup file on the new server, let’s decrypt and extract it:
unzip -P your-password dump.zip
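Note that the target database must already exist on the new server before you import (a dump of a single database does not include a CREATE DATABASE statement). Assuming the hypothetical name newdatabase used below, you can create it with:
mysql -u root -p -e "CREATE DATABASE newdatabase;"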
To import the file, use the following command:
mysql -u root -p newdatabase < /path/to/newdatabase.sql
Once the import is done, remember to delete the dump files, both for storage and security reasons.
Validate imported data in new server
Now that we have the database imported on the new server, we need a way to make sure that the data is actually there and that we didn’t lose anything.
We recommend starting by running this query on both the old and new databases and comparing the results.
The query counts the rows in all tables, which provides an indication of the amount of data in both databases. (Note that for InnoDB tables, TABLE_ROWS is only an approximate count.)

SELECT
TABLE_NAME,
TABLE_ROWS
FROM
`information_schema`.`tables`
WHERE
`table_schema` = 'YOUR_DB_NAME';
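One hypothetical way to compare the two servers is to write each result to a file and diff them:

mysql -u root -p -N -e "SELECT TABLE_NAME, TABLE_ROWS FROM information_schema.tables WHERE table_schema = 'YOUR_DB_NAME' ORDER BY TABLE_NAME;" > old_row_counts.txt
# run the same command on the new server into new_row_counts.txt, then:
diff old_row_counts.txt new_row_counts.txt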

In addition, we recommend checking the MIN and MAX values of columns in the tables, to make sure the data itself is valid and not only the amount of data.
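For example, a hypothetical spot check on a users table with an id column, run on both servers:
mysql -u root -p -e "SELECT MIN(id), MAX(id) FROM YOUR_DB_NAME.users;"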
Also, before migrating the application itself, we recommend redirecting one application instance to the new database and confirming that everything works properly.
Another export & import option
We kept this option for the end, as we do not really recommend it.
It seems a lot easier, as it exports the data, transfers the dump file and imports it into the new database, all in one command.
The downside, though, is that if the network link dies, you need to start over.
Therefore, we believe it’s less advisable to work with this command, especially with large databases.
If you would like to try it anyway, use this command:
mysqldump -u root -pPassword --all-databases | ssh user@new_host.host.com 'cat - | mysql -u root -pPassword'
Important Notes

Make sure both MySQL servers are installed from the same official distribution and are the same version. Otherwise, you’ll need to follow the upgrade instructions from MySQL’s website.
Make sure you have enough space on your old server to hold the dump file and the compressed file (at least 2x the database size free).
Make sure you have enough space on your new server to hold the encrypted dump file, the decrypted dump file and the imported database (at least 3x the database size free).
If you ever considered just moving the datadir from one database to another, please don’t. You do not want to mess with the internal structure of the database, as it’s very likely to be an invitation for trouble in the future.
Do not forget to configure important settings such as innodb_log_file_size in your new server’s configuration (see the sketch below). Forgetting to update the configuration to match the new server’s specifications might result in serious performance issues.
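A minimal sketch of the relevant my.cnf section, with hypothetical values that should be sized for the new server’s RAM and workload:

[mysqld]
innodb_buffer_pool_size = 8G
innodb_log_file_size = 1G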

Enjoy your new server!

MySQL Connector/J 5.1.45 has been released

Dear MySQL Users,
MySQL Connector/J 5.1.45, a maintenance release of the production 5.1
branch, has been released. Connector/J is the Type-IV pure-Java JDBC
driver for MySQL.
MySQL Connector Java is available in source and binary form from the
Connector/J download pages at http://dev.mysql.com/downloads/connector/j/5.1.html
and mirror sites as well as Maven-2 repositories.
MySQL Connector Java (Commercial) is already available for download on the
My Oracle Support (MOS) website. This release will shortly be available on
eDelivery (OSDC).
As always, we recommend that you check the “CHANGES” file in the
download archive to be aware of changes in behavior that might affect
your application.
MySQL Connector/J 5.1.45 includes the following general bug fixes and
improvements, also available in more detail on http://dev.mysql.com/doc/relnotes/connector-j/en/news-5-1-45.html
Changes in MySQL Connector/J 5.1.45 (2017-11-30)
Version 5.1.45 is a maintenance release of the production 5.1
branch. It is suitable for use with MySQL Server versions
5.5, 5.6, and 5.7. It supports the Java Database Connectivity
(JDBC) 4.2 API.
Functionality Added or Changed
* Character set mappings have been added for the following
collations:
+ utf8mb4_ja_0900_as_cs_ks
+ utf8mb4_0900_as_ci
+ utf8mb4_ru_0900_ai_ci
+ utf8mb4_ru_0900_as_cs
(Bug #26724085)
Bugs Fixed
* With the combination of the connection properties
useServerPrepStmts=true, useInformationSchema=true,
useCursorFetch=true, and defaultFetchSize=N, if a warning
was returned for a query during connection
initialization, a NullPointerException would result when
Connector/J tried to get the warning. That was because
the charsets were not yet initialized in the connection
at the time. This fix corrects the problem by preventing
cursors from being used when Connector/J fetches warnings
from the server. (Bug #27131768)
* When a communications exception was thrown by Connector/J
after a socket timeout event, because the current
transaction did not get rolled back automatically, if
autoReconnect=true, the next statement execution by
Connector/J might reuse the old server session and
continued with the previous transaction. This might
confuse the client application on the transaction status
of the statements it was handling. This fix corrects the
issue by forcibly closing the network resources after a
communication or IO exception, causing the server to
rollback the transaction. (Bug #27047676, Bug #88232)
* Normally, when the socketTimeout option has been set and
a socket timeout occurs on the client side, the server
may continue working and returning query results. At the
next query executed after the timeout, Connector/J first
clears the socket input stream and then sends a ping
request to the server.
However, an error occurred if the autoReconnect option
was set to true and, after reconnection, a new query was
executed by Connector/J, and the results from the
previous queries arrived before Connector/J sent its ping
request to the server, in which case the old packages
might be mistaken as results for the new query. This fix
corrects the issue by forcibly closing the network
resources after a communication or IO exception. The next
statement execution recreates the IO stream if
autoReconnect=true; otherwise, the connection stays
closed. (Bug #27040063, Bug #88242)
* High garbage collection pressure was observed when there
were a lot of queries performed using server-side
prepared statements. This patch reduces the pressure by
optimizing the generation process of the cache keys for
the prepared statements. Thanks to Johnathan Crawford for
contributing the patch. (Bug #26939943, Bug #88021)
* A number of regression tests for former bug fixes failed
when they were run against MySQL Server 8.0.3, because
binary logging has been enabled by default on the server.
The tests have now been fixed. (Bug #26794652)
* A number of regression tests for former bug fixes failed
when they were run against MySQL Server 8.0.3 because of
the name changes of some of the INFORMATION_SCHEMA tables
on the server. The tests have now been fixed. (Bug
#26794602)
* When server-side prepared statements and cursor-based
result sets were used, exceptions were thrown when
applications made calls to get output parameters of
INTEGER or BIGINT type from a result set. (Bug #26771560,
Bug #87704)
On Behalf of the MySQL/ORACLE RE Team,
-Sreedhar S

Advanced Oracle Troubleshooting seminar in 2018!

A lot of people have asked me to do another run of my Advanced Oracle Troubleshooting training or at least get access to previous recordings – so I decided to geek out over the holiday period, update the material with … Continue reading

Percona Monitoring and Management 1.5.1 Is Now Available

Percona announces the release of Percona Monitoring and Management 1.5.1. This release contains fixes for bugs found after Percona Monitoring and Management 1.5.0 was released. Bug fixes PMM-1771: When upgrading PMM to 1.5.0 using Docker commands, PMM System Summary, PMM Add Instance, PMM Query Analytics dashboards were not available. PMM-1761: The PMM Query Analytics dashboard did not display the list of hosts correctly. PMM-1769: It was possible to add an Amazon […]

Percona XtraBackup 2.3.10 Is Now Available

Percona announces the release of Percona XtraBackup 2.3.10 on November 29, 2017. Downloads are available from our download site or Percona Software Repositories. Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open […]

Percona XtraBackup 2.4.9 Is Now Available

Percona announces the GA release of Percona XtraBackup 2.4.9 on November 29, 2017. You can download it from our download site and apt and yum repositories. Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free […]
