Month: April 2018

A Look at MyRocks Performance

In this blog post, I’ll look at MyRocks performance through some benchmark testing.
As the MyRocks storage engine (based on the RocksDB key-value store http://rocksdb.org ) is now available as part of Percona Server for MySQL 5.7, I wanted to take a look at how it performs on a relatively high-end server and SSD storage. I wanted to check how it performs for different amounts of available memory for the given database size. This is similar to the benchmark I published a while ago for InnoDB (https://www.percona.com/blog/2010/04/08/fast-ssd-or-more-memory/).
In this case, I plan to use a sysbench-tpcc benchmark (https://www.percona.com/blog/2018/03/05/tpcc-like-workload-sysbench-1-0/) and I will execute it for both MyRocks and InnoDB. We’ll use InnoDB as a baseline.
For the benchmark, I will use 100 TPC-C warehouses, with a set of 10 tables (to shift the bottleneck from row contention). This should give roughly 90GB of data size (when loaded into InnoDB) and is roughly equivalent to a 1000-warehouse data size.
To vary the memory size, I will change innodb_buffer_pool_size from 5GB to 100GB for InnoDB, and rocksdb_block_cache_size for MyRocks.
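For reference, a run could look roughly like the following. This is a sketch based on the linked sysbench-tpcc scripts; the connection settings, thread count, and exact option names are my assumptions rather than the exact commands used for this post:

  # load 100 warehouses into 10 table sets, then run for 3600 seconds
  ./tpcc.lua --mysql-host=127.0.0.1 --mysql-user=sbtest --mysql-db=sbtest \
      --tables=10 --scale=100 --threads=56 --use_fk=0 prepare

  ./tpcc.lua --mysql-host=127.0.0.1 --mysql-user=sbtest --mysql-db=sbtest \
      --tables=10 --scale=100 --threads=56 --use_fk=0 \
      --time=3600 --report-interval=1 run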
For MyRocks, we will use LZ4 as the default compression on disk. The data size in the MyRocks storage engine is 21GB. It is interesting to note that the uncompressed MyRocks data size on storage is 70GB.
For both engines, I did not use FOREIGN KEYS, as MyRocks does not support them at the moment.
In the Percona Server for MySQL implementation, MyRocks does not support SELECT .. FOR UPDATE statements in REPEATABLE-READ mode. However, SELECT .. FOR UPDATE is used in this benchmark, so I had to use READ-COMMITTED mode, which is supported.
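Switching the isolation level is a one-line change; for example (a minimal sketch, assuming the change is made globally before the benchmark connections are opened):

  mysql -e "SET GLOBAL TRANSACTION ISOLATION LEVEL READ COMMITTED;"
  # or set transaction-isolation = READ-COMMITTED in my.cnf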
The most important setting I used was to enable binary logs, for the following reasons:

Any serious production deployment uses binary logs
With binary logs disabled, MyRocks is affected by a suboptimal transaction coordinator

I used the following settings for binary logs:

binlog_format = 'ROW'
binlog_row_image=minimal
sync_binlog=10000 (I am not using 0, as this causes serious stalls during binary log rotations, when the content of the binary log is flushed to storage all at once)

While I am not a full expert in MyRocks tuning yet, I used the recommendations from this page: https://github.com/facebook/mysql-5.6/wiki/my.cnf-tuning. The Facebook MyRocks engineering team also provided me with input on the best settings for MyRocks.
Let’s review the results for different memory sizes.
This first chart shows throughput jitter. This helps to understand the distribution of throughput results. Throughput is measured every 1 second, and on the chart I show all measurements after 2000 seconds of a run (the total length of each run is 3600 seconds). So I show the last 1600 seconds of each run (to remove warm-up phases):

To better quantify the results, let’s take a look at them on a boxplot. The quickest way to understand a boxplot is to look at the middle line: it represents the median of the measurements (see more at https://www.percona.com/blog/2012/02/23/some-fun-with-r-visualization/):

Before we jump to the summary of results, let’s take a look at a variation of the throughput for both InnoDB and MyRocks. We will zoom to a 1-second resolution chart for 100 GB of allocated memory:

We can see that there is a lot of variation with periodical 1-second performance drops with MyRocks. At this moment, I do not know what causes these drops.
So let’s take a look at the average throughput for each engine for different memory settings (the results are in tps, and more is better):

Memory, GB    InnoDB, tps    MyRocks, tps
5             849.0664       4205.714
10            1321.9         4298.217
20            1808.236       4333.424
30            2275.403       4394.413
40            2968.101       4459.578
50            3867.625       4503.215
60            4756.551       4571.163
70            5527.853       4576.867
80            5984.642       4616.538
90            5949.249       4620.87
100           5961.2         4599.143

This is where MyRocks behaves differently from InnoDB. InnoDB benefits greatly from additional memory, up to the size of the working dataset. After that, there is no reason to add more memory.
At the same time, interestingly, MyRocks does not benefit much from additional memory.
Basically, MyRocks performs as expected for a write-optimized engine. You can refer to my article How Three Fundamental Data Structures Impact Storage and Retrieval for more details. 
In conclusion, InnoDB performs better (compared to itself) when the working dataset fits (or almost fits) into available memory, while MyRocks can operate (and outperform InnoDB) on small memory sizes.
IO and CPU usage
It is worth looking at resource utilization for each engine. I took vmstat measurements for each run so that we can analyze IO and CPU usage.
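Collecting the measurements is straightforward; something like this per run is enough (the output file name is just an example):

  # one sample per second for the whole 3600-second run
  vmstat 1 3600 > vmstat-myrocks-100G.log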
First, let’s review writes per second (in KB/sec). Please keep in mind that these writes include binary log writes too, not just writes from the storage engine.

Memory, GB    InnoDB, KB/s    MyRocks, KB/s
5             244754.4        87401.54
10            290602.5        89874.55
20            311726          93387.05
30            313851.7        93429.92
40            316890.6        94044.94
50            318404.5        96602.42
60            276341.5        94898.08
70            217726.9        97015.82
80            184805.3        96231.51
90            187185.1        96193.6
100           184867.5        97998.26

We can also calculate how many writes per transaction each storage engine performs:

This chart shows the essential difference between InnoDB and MyRocks. MyRocks, being a write-optimized engine, uses a constant amount of writes per transaction.
For InnoDB, the amount of writes greatly depends on the memory size. The less memory we have, the more writes it has to perform.
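As a back-of-the-envelope illustration using the 5GB rows from the two tables above (my own calculation, not part of the original charts):

  awk 'BEGIN {
      printf "InnoDB : %.0f KB written per transaction\n", 244754.4 / 849.0664;   # ~288 KB
      printf "MyRocks: %.0f KB written per transaction\n", 87401.54 / 4205.714;   # ~21 KB
  }'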
What about reads?
The following table shows reads in KB per second.

Memory, GB    InnoDB, KB/s    MyRocks, KB/s
5             218343.1        171957.77
10            171634.7        146229.82
20            148395.3        125007.81
30            146829.1        110106.87
40            144707          97887.6
50            132858.1        87035.38
60            98371.2         77562.45
70            42532.15        71830.09
80            3479.852        66702.02
90            3811.371        64240.41
100           1998.137        62894.54

We can translate this to the number of reads per transaction:

This shows MyRocks’ read amplification. Allocating more memory helps to decrease IO reads, but not as much as it does for InnoDB.
CPU usage
Let’s also review CPU usage for each storage engine. Let’s start with InnoDB:

The chart shows that for 5GB memory size, InnoDB spends most of its time in IO waits (green area), and the CPU usage (blue area) increases with more memory.
This is the same chart for MyRocks:

In tabular form:

Memory, GB    Engine     us%    sys%    wa%    id%
5             InnoDB     8      2       57     33
5             MyRocks    56     11      18     15
10            InnoDB     12     3       57     28
10            MyRocks    57     11      18     13
20            InnoDB     16     4       55     25
20            MyRocks    58     11      19     11
30            InnoDB     20     5       50     25
30            MyRocks    59     11      19     10
40            InnoDB     26     7       44     24
40            MyRocks    60     11      20     9
50            InnoDB     35     8       38     19
50            MyRocks    60     11      21     7
60            InnoDB     43     10      36     10
60            MyRocks    61     11      22     6
70            InnoDB     51     12      34     4
70            MyRocks    61     11      23     5
80            InnoDB     55     12      31     1
80            MyRocks    61     11      23     5
90            InnoDB     55     12      32     1
90            MyRocks    61     11      23     4
100           InnoDB     55     12      32     1
100           MyRocks    61     11      24     4

We can see that MyRocks uses a lot of CPU (in us+sys state) no matter how much memory is allocated. This leads to the conclusion that MyRocks performance is limited more by CPU performance than by available memory.
MyRocks directory size
As MyRocks writes all changes and compacts SST files down the road, it would be interesting to see how the data directory size changes during the benchmark, so we can estimate our storage needs. Here is a chart of the data directory size:

We can see that the data directory grows from 20GB at the start to 31GB during the benchmark. It is interesting to observe the data growing until compaction shrinks it.
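Capturing this is as simple as sampling the directory size during the run; for example (a sketch: the path assumes the default .rocksdb subdirectory inside the MySQL datadir):

  # log the MyRocks data directory size (in MB) once a minute
  while sleep 60; do
      echo "$(date +%s) $(du -sm /var/lib/mysql/.rocksdb | cut -f1)" >> datadir-size.log
  done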
Conclusion
In conclusion, I can say that MyRocks’ advantage over InnoDB grows as the ratio of dataset size to memory increases; it outperforms InnoDB by almost five times in the case of a 5GB memory allocation. Throughput variation is something to be concerned about, but I hope this gets improved in the future.
MyRocks does not require a lot of memory and shows constant write IO, while using most of the CPU resources.
I think this potentially makes MyRocks a great choice for cloud database instances, where both memory and IO can cost a lot. MyRocks deployments could be noticeably cheaper to run in the cloud.
I will follow up with further cloud-oriented benchmarks.
Extras
Raw results, scripts and config
My goal is to provide fully repeatable benchmarks. To this end, I’m sharing all the scripts and settings I used in the following GitHub repo:
https://github.com/Percona-Lab-results/201803-sysbench-tpcc-myrocks
MyRocks settings

rocksdb_max_open_files=-1
rocksdb_max_background_jobs=8
rocksdb_max_total_wal_size=4G
rocksdb_block_size=16384
rocksdb_table_cache_numshardbits=6
# rate limiter
rocksdb_bytes_per_sync=16777216
rocksdb_wal_bytes_per_sync=4194304
rocksdb_compaction_sequential_deletes_count_sd=1
rocksdb_compaction_sequential_deletes=199999
rocksdb_compaction_sequential_deletes_window=200000
rocksdb_default_cf_options="write_buffer_size=256m;target_file_size_base=32m;max_bytes_for_level_base=512m;max_write_buffer_number=4;level0_file_num_compaction_trigger=4;level0_slowdown_writes_trigger=20;level0_stop_writes_trigger=30;max_write_buffer_number=4;block_based_table_factory={cache_index_and_filter_blocks=1;filter_policy=bloomfilter:10:false;whole_key_filtering=0};level_compaction_dynamic_level_bytes=true;optimize_filters_for_hits=true;memtable_prefix_bloom_size_ratio=0.05;prefix_extractor=capped:12;compaction_pri=kMinOverlappingRatio;compression=kLZ4Compression;bottommost_compression=kLZ4Compression;compression_opts=-14:4:0"
rocksdb_max_subcompactions=4
rocksdb_compaction_readahead_size=16m
rocksdb_use_direct_reads=ON
rocksdb_use_direct_io_for_flush_and_compaction=ON

InnoDB settings

# files
 innodb_file_per_table
 innodb_log_file_size=15G
 innodb_log_files_in_group=2
 innodb_open_files=4000
# buffers
 innodb_buffer_pool_size= 200G
 innodb_buffer_pool_instances=8
 innodb_log_buffer_size=64M
# tune
 innodb_doublewrite= 1
 innodb_support_xa=0
 innodb_thread_concurrency=0
 innodb_flush_log_at_trx_commit= 1
 innodb_flush_method=O_DIRECT_NO_FSYNC
 innodb_max_dirty_pages_pct=90
 innodb_max_dirty_pages_pct_lwm=10
 innodb_lru_scan_depth=1024
 innodb_page_cleaners=4
 join_buffer_size=256K
 sort_buffer_size=256K
 innodb_use_native_aio=1
 innodb_stats_persistent = 1
 #innodb_spin_wait_delay=96
# perf special
 innodb_adaptive_flushing = 1
 innodb_flush_neighbors = 0
 innodb_read_io_threads = 4
 innodb_write_io_threads = 2
 innodb_io_capacity=2000
 innodb_io_capacity_max=4000
 innodb_purge_threads=4
 innodb_adaptive_hash_index=1

Hardware spec
Supermicro server:

CPU:

Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz
2 sockets / 28 cores / 56 threads

Memory: 256GB of RAM
Storage: SAMSUNG SM863 1.9TB Enterprise SSD
Filesystem: ext4
Percona-Server-5.7.21-20
OS: Ubuntu 16.04.4, kernel 4.13.0-36-generic

The post A Look at MyRocks Performance appeared first on Percona Database Performance Blog.

Keep Sensitive Data Secure in a Replication Setup

Keep sensitive data secure

This blog post describes how to keep sensitive data secure on slave servers in a MySQL async replication setup. Almost every web application has sensitive data: passwords, SSNs, credit cards, emails, etc. Splitting the database into secure and “public” parts allows us to restrict user and application access to the sensitive data. Field encryption This is […]

The post Keep Sensitive Data Secure in a Replication Setup appeared first on Percona Database Performance Blog.

Cloud Database Features Comparison – Amazon RDS vs Google Cloud SQL

As more companies run their workloads in the cloud, cloud database services are increasingly being used to manage data. One of the advantages of using a cloud database service instead of maintaining your database yourself is that it reduces the management overhead. Database services from the leading cloud vendors share many similarities, but they have individual characteristics that may make them well- or ill-suited to your workload. Developers are always looking for convenient ways of running their databases, whether it is to obtain deeper insight into database performance, to perform a migration efficiently, to simplify backup and restore processes, or to handle many other “day to day” tasks. Among the many available cloud services, it may not be easy to figure out which is the best one for your use case. In this article, we’ll compare two of the most popular cloud database services on the market – Google Cloud SQL and Amazon RDS.

Amazon RDS provides a web interface through which you can deploy MySQL. The RDS service manages the provisioning and configuration of the instance. Additionally, it provides a console to monitor and perform basic database administration tasks. Google Cloud SQL similarly provides a predefined MySQL setup that is automatically managed. Predefined services can be a convenient way to manage your databases; however, at the same time, they can limit functionality. Let’s take a closer look at these management features.

Database logs and metrics monitoring

Amazon RDS and Google Cloud SQL don’t provide shell access. Your primary concern here may be access to essential log files. Amazon CloudWatch is a monitoring service for cloud resources that you can use to solve this problem. It collects metrics, collects and monitors log files, and can automatically react to changes in your AWS resources. Using CloudWatch, you can gather and process the error log, audit log, and other logs from RDS into metrics presented in the web console. These statistics are retained for 15 months, so you can maintain a history. CloudWatch can also take actions, such as sending a notification to a recipient or, if needed, triggering autoscaling policies, which in turn may automatically handle an increase in load by adding more resources.
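For example, a CloudWatch alarm on a standard RDS metric can be created from the CLI roughly like this (a sketch; the instance name, threshold, and SNS topic ARN are placeholders):

  aws cloudwatch put-metric-alarm \
      --alarm-name rds-free-storage-low \
      --namespace AWS/RDS --metric-name FreeStorageSpace \
      --dimensions Name=DBInstanceIdentifier,Value=mydbinstance \
      --statistic Average --period 300 --evaluation-periods 1 \
      --threshold 10737418240 --comparison-operator LessThanThreshold \
      --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts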

Amazon CloudWatch

Google Cloud also provides log processing functionality. You can view the Google Cloud SQL logs in the operations panel or through the Google console. The operations panel logs every operation performed on the instance, with pretty basic information. It can be extended with manually added metrics based on data from a file source. Unfortunately, the operations log does not include activities performed using external management tools, such as the mysql client. To extend the basic functionality, Google has another service – Stackdriver. The Stackdriver service can be used to create alerts for metrics defined in the operations panel. Stackdriver covers not only Google Cloud Platform (GCP) but also AWS and on-premises services, so you can use it for cross-platform monitoring, although it requires the installation of an open source, collectd-based agent to access non-cloud metrics.

Google Cloud SQL logging

There are various ways to monitor MySQL instance metrics: you can continuously query the server for metric values, or use predefined services. You can get more in-depth, real-time visibility into the health of your Amazon RDS instances with Enhanced Monitoring for Amazon RDS. It provides metrics so that you can monitor the health of your DB instances and DB clusters, covering both DB instance metrics and operating system (OS) metrics.

It provides a set of over 50 database instance metrics and aggregated process information for your instances, at the granularity of 1 second. You can visualize the metrics on the RDS console.
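Enhanced Monitoring can be switched on for an existing instance from the CLI; roughly like this (the IAM monitoring role ARN is a placeholder for a role you need to create first):

  aws rds modify-db-instance \
      --db-instance-identifier mydbinstance \
      --monitoring-interval 1 \
      --monitoring-role-arn arn:aws:iam::123456789012:role/rds-monitoring-role \
      --apply-immediately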

Both CloudWatch and Stackdriver provide functionality to create alarms based on metrics. Amazon does this with the Amazon Simple Notification Service (SNS) for notifications, while in Stackdriver it’s done directly within the service.

Google Stackdriver monitoring dashboard

Data Migration into Cloud

At the moment, backup-based migration to Google Cloud SQL is quite limited. You can only use a logical dump, which may be a problem for bigger databases. The SQL dump file must not include any triggers, views, or stored procedures. If your database needs these elements, you should recreate them after shipping the data. If you have already created a dump file that holds these components, you need to manually edit the file. The database you are importing into must exist up front. There is no option to migrate to Google Cloud from another RDBMS. All of this makes the process quite limited, not to mention that there is no option for real-time cross-platform migration, as there is with AWS RDS (via the Database Migration Service).
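A typical Cloud SQL import therefore looks something like the following (a sketch; bucket, instance, and database names are placeholders, and any views, triggers, or stored procedures still have to be recreated afterwards):

  # dump without triggers, stage the file in Cloud Storage, then import
  mysqldump --single-transaction --skip-triggers mydb > mydb.sql
  gsutil cp mydb.sql gs://my-migration-bucket/mydb.sql
  gcloud sql import sql my-cloudsql-instance gs://my-migration-bucket/mydb.sql --database=mydb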

AWS Database Migration Service

Amazon Database Migration Service (DMS) supports homogeneous migrations such as MySQL to MySQL, as well as heterogeneous migrations between different database platforms. AWS DMS can help you plan and migrate on-premises relational data stored in Oracle, SQL Server, MySQL, MariaDB, or PostgreSQL databases. DMS works by setting up and then managing a replication instance on AWS. This instance dumps data from the source database and loads it into the target database.

Achieving High Availability

Google uses semisynchronous replicas to make your database highly available. Cloud SQL provides the ability to replicate a master instance to one or more read replicas. If the zone where the master is located experiences an outage and a failover replica is configured, Cloud SQL fails over to the failover replica.

Google Cloud SQL create read replica

The setup is straightforward, and with a couple of clicks, you can achieve a working slave node. Nevertheless, configuration options are limited and may not fit your system requirements. You can choose from the following replica scenarios:

read replica – a one-to-one copy of the master. This is the base model, where you create a replica to offload read requests or analytics traffic from the master,
external read replica – this option configures an instance that replicates to one or more replicas external to Cloud SQL,
external master – set up replication to migrate to Google Cloud SQL.

Amazon RDS provides read replica services. Cross-region read replicas give you the ability to scale, as AWS has services in many regions around the world. RDS asynchronous replication is highly scalable: all read replicas are accessible and can be used for reads, across a maximum of five regions. These nodes are independent and can be used in your upgrade path, or can be promoted to standalone databases.
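Creating a replica is a single command on either platform; for instance (instance identifiers and regions are placeholders):

  # Google Cloud SQL read replica
  gcloud sql instances create my-replica --master-instance-name=my-master

  # Amazon RDS cross-region read replica
  aws rds create-db-instance-read-replica \
      --db-instance-identifier my-replica \
      --source-db-instance-identifier arn:aws:rds:us-east-1:123456789012:db:my-master \
      --region eu-west-1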

In addition to that, Amazon offers Multi-AZ deployments based on DRBD, synchronous disk replication. How is it different from Read Replicas? The main difference is that only the database engine on the primary instance is active, which leads to other architectural variations.

Automated backups are taken from the standby, which significantly reduces the possibility of performance degradation during a backup.

As opposed to read replicas, database engine version upgrades happen on the primary. Another difference is that AWS RDS will fail over automatically, while read replicas require manual operations from you.

Multi-AZ failover on RDS uses a DNS change to point to the standby instance; according to Amazon, this should result in 60-120 seconds of unavailability during the failover. Because the standby uses the same storage data as the primary, there will probably be transaction/log recovery. Bigger databases may spend a significant amount of time on InnoDB recovery, so please consider that in your DR plan.

Encryption

Security compliance is one of the critical concerns for enterprises whose data is in the cloud. When dealing with production databases that hold sensitive and vital data, it is highly recommended to implement encryption to protect the data from unauthorized access.

In Google Cloud SQL, customer data is encrypted when stored in database tables, temporary files, and backups. Outside connections can be encrypted by SSL certificates (especially for intra-zone connections to Cloud SQL), or by using the Cloud SQL Proxy. Google encrypts and authenticates all data in transit and data at rest with AES-256.

With RDS encryption enabled, the data stored on the instance’s underlying storage, as well as its automated backups, read replicas, and snapshots, all become encrypted. The RDS encryption keys implement the AES-256 algorithm. Keys are managed and protected by the AWS key management infrastructure through AWS Key Management Service (AWS KMS). You do not need to make any modifications to your code or operating model to benefit from this critical data protection feature. AWS CloudHSM is a service that helps meet stringent compliance requirements for cryptographic operations and storage of encryption keys by using single-tenant Hardware Security Module (HSM) appliances within the AWS cloud.
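Encryption is essentially a flag (plus an optional KMS key) at instance creation time; for example (a sketch; identifiers, sizes, and the key ARN are placeholders):

  aws rds create-db-instance \
      --db-instance-identifier encrypted-mysql \
      --engine mysql --db-instance-class db.r4.large \
      --allocated-storage 100 \
      --master-username admin --master-user-password '********' \
      --storage-encrypted \
      --kms-key-id arn:aws:kms:us-east-1:123456789012:key/REPLACE-WITH-KEY-ID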

Pricing

Instance pricing for Google Cloud SQL is charged for every minute that the instance is running. The cost depends on the machine type you choose for the instance and the region where it’s located. Read replicas and failover replicas are charged at the same rate as stand-alone instances. Pricing starts at $0.0126 per hour for a micro instance and goes up to around $8k for a db-n1-highmem-64 instance with 64 vCPUs, 416 GB RAM, 10,230 GB of disk, and a limit of 4,000 connections.

Like other AWS products, RDS is pay-as-you-go. But this billing model has a specific construct that can, if left unchecked, yield questions or surprise billing elements if no one is aware of what’s actually in the bill. Your database options can cost from $0.175 per hour up to thousands of dollars paid upfront. Both platforms are quite flexible, but you will see more configuration options in AWS.

Infrastructure

As mentioned in the pricing section, Google Cloud SQL can be scaled up to 64 processor cores and more than 400GB of RAM. The maximum disk size is 10TB per instance, and you can configure your instance settings to increase it automatically. That should be plenty for many project requirements. Nevertheless, if we take a look at what Amazon offers, Google still has a long way to go. RDS not only offers powerful instances but also a long list of other services around them.

RDS supports storage volume snapshots, which you can use for point-in-time recovery or share with other AWS accounts. You can also take advantage of its provisioned IOPS feature to increase I/O. RDS can also be launched in an Amazon VPC; Cloud SQL doesn’t yet support a virtual private network.

Backup

RDS generates automated backups of your DB instance. RDS creates a storage volume snapshot, backing up the entire DB instance and not individual databases. Automated backups occur daily during the preferred backup window. If the backup requires more time than allotted to the backup window, the backup continues after the window ends, until it finishes. Read replicas don’t have backups enabled by default.

When you want to do a restore, the only option is to create a new instance. It can be restored to the last backup or via point-in-time recovery. Binary logs are applied automatically; there is no way to get access to them. The RDS PITR option is quite limited, as it does not allow you to choose an exact time or transaction: you are limited to a 5-minute interval. In most scenarios, these settings may be sufficient; however, if you need to recover your database to a single transaction or an exact time, you need to be ready for manual actions.
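A point-in-time restore on RDS always produces a new instance, roughly like this (instance names and the timestamp are placeholders):

  aws rds restore-db-instance-to-point-in-time \
      --source-db-instance-identifier mydbinstance \
      --target-db-instance-identifier mydbinstance-restored \
      --restore-time 2018-04-15T09:45:00Z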

Google Cloud SQL backup data is stored in separate regions for redundancy. With the automatic backup function enabled, a database copy is created every 4 hours. If needed, you can create on-demand backups (for any Second Generation instance), whether the instance has automatic backups enabled or not. Google’s and Amazon’s approaches to backups are quite similar; however, with Cloud SQL it is possible to perform point-in-time recovery to a specific binary log and position.
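On Cloud SQL, an on-demand backup and a restore to a specific binary log position look roughly like this (a sketch; instance names and binlog coordinates are placeholders):

  gcloud sql backups create --instance=my-cloudsql-instance

  gcloud sql instances clone my-cloudsql-instance my-restored-instance \
      --bin-log-file-name=mysql-bin.000123 --bin-log-position=45678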

db4free.net goes MySQL 8.0

MySQL 8.0 was released as stable (GA) earlier this month. For db4free.net, this means it’s time to make MySQL 8.0 the default version and to deprecate the MySQL 5.7 server instance.
The new MySQL 8.0 server is running on the default port 3306. All new registrations will have the database created on this server. It is fresh and empty and will start from scratch.
The previous MySQL 5.7 server will remain available on port 3308. All users who have data there which they want to keep should migrate it to the new MySQL 8.0 server. This will require you to sign up again.
The previous MySQL 8.0 server will remain on port 3307. Both the old MySQL 5.7 and the old MySQL 8.0 server on port 3307 will be available until June 15, 2018. Data which isn’t migrated to the new server instance by then will be lost.
The new MySQL 8.0 server instance will come with the new utf8mb4 character set and the new utf8mb4_0900_ai_ci collation, which are the new defaults in MySQL 8.0. Since db4free.net already used utf8mb4 on the previous MySQL 5.7 server instance, this should not affect many people, if any at all.
Another long overdue change is that the default timezone (on db4free.net, this is not a change in MySQL 8.0 itself) will be UTC. Previously the servers were set to Central European Time since that’s the home time zone where db4free.net is hosted. But with a large international audience it makes sense to use UTC going forward. The timezone can be changed per connection as described in the MySQL Reference Manual.
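Changing the time zone for a single connection is a one-liner; for example (the user name is a placeholder, and the numeric offset form works without loading the named time zone tables):

  mysql -h db4free.net -P 3306 -u myuser -p \
      -e "SET time_zone = '+02:00'; SELECT @@session.time_zone, NOW();"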
The MySQL 8.0 Reference Manual is the place to go for all general MySQL questions and to find out what’s new in MySQL 8.0 (which are a lot of things).
As always: please backup data which you can’t afford to lose. db4free.net is a testing service and there is always a risk that something goes wrong, like the server doesn’t start up anymore. This has happened before and may happen again, especially with a brand new version. This service comes with no warranties at all.
If you keep that in mind you should have much fun exploring the new (and old) goodies of MySQL 8.0. Consider following db4free.net on Twitter as this is where you get updates and status information the quickest.
Enjoy!

This Week In Data with Colin Charles 37: Percona Live 2018 Wrap Up

Colin Charles

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community. Percona Live Santa Clara 2018 is now over! All things considered, I think it went off quite well; if you have any comments/complaints/etc., please don’t hesitate to drop me a line. I believe a survey […]

The post This Week In Data with Colin Charles 37: Percona Live 2018 Wrap Up appeared first on Percona Database Performance Blog.

MySQL 8.0 GA: Quality or Not?

MySQL 8.0 GA

What does Anton Ego – a fictional restaurant critic from the Pixar movie Ratatouille – have to do with MySQL 8.0 GA? When it comes to being a software critic, a lot. In many ways, the work of a software critic is easy. We risk very little and thrive on negative criticism, which is fun […]

The post MySQL 8.0 GA: Quality or Not? appeared first on Percona Database Performance Blog.

Four New Ways to Deliver Oracle Cloud Platform Success

Oracle Cloud is redefining how companies modernize, innovate, and compete in a digital world.

The Oracle Cloud Platform delivers a comprehensive, standards-based combination of Oracle and open source technologies to enable companies to more efficiently build, deploy, integrate, secure and manage all their enterprise applications.

Oracle customers are getting on board with spectacular results, finding solutions to their business challenges, and improving business processes and services.

  • Exelon used Oracle Mobile Cloud Enterprise Service as the foundation for its chatbot, improving the customer experience and empowering customers to engage and communicate with the company in the way that is most convenient for them.
  • Oracle Management Cloud helps Amis identify potential customer problems even before they are aware of the problems themselves. This allows Amis to proactively provide service before the customer is impacted.
  • Global service provider and manufacturer Ricoh implemented Identity Cloud Services to expand to new businesses quickly and connect everything securely in the cloud. Ryu Taniguchi, System Architect at Ricoh, states, “We have no security concerns in the cloud because Oracle protects security.”

Cloud skills are in demand. A quick glance at Oracle’s Cloud Customer Success Stories page shows just how important these skills are in the industry. Our four new PaaS certifications allow you to quickly ramp up your skills and knowledge to implement these and other Oracle Cloud solutions for your own company.

Pass one of these exams to expand your skills:

Oracle Mobile Cloud Enterprise 2018 Associate Developer | 1Z0-927

Oracle Data Integration Platform Cloud 2018 Associate | 1Z0-935

Oracle Management Cloud 2018 Associate | 1Z0-930

Oracle Cloud Security 2018 Associate | 1Z0-933

Oracle Cloud Learning Subscriptions offer training with integrated certification exams, delivering 12 months access to continually updated digital content that is accessible 24/7.

BAAS: Battle Against Archiver Stuck – ALTERNATE Archive Location

If you are managing volatile systems, you might have encountered a stuck archiver in the past: ORA-00257: archiver error, connect internal only until freed. ORA-16014: log 2 sequence# 1789 not archived, no available destinations. If this is the case, you might be interested in a rarely used feature called “ALTERNATE Archive Locations”. This is a second independent archive log […]
