High CPU during I/O?

Yesterday, I posted on Twitter a trace file that seemed odd to me. This resulted in a very lively discussion, and several very interesting theories were put forward. Even though the issue hasn’t been solved yet, I would like to post a summary of this discussion (plus my findings) so that more people can give their input. Also, some of the theories put forward to explain the observed anomaly are very interesting in their own right.

The issue is very simple (which I think is one of the reasons why it resulted in such a heated debate). It has to do with unreasonably high CPU utilization at low values of the db_file_multiblock_read_count parameter (from this point on, MBRC). Here is the setup I used in my tests:

1) 11.2.0.1 Oracle database on a quad-core Windows 7 machine with 8GB of memory
2) A locally managed tablespace with uniform 1M extents. I started off with automatic segment space management, then switched to manual to make sure it wasn’t an ASSM glitch (it wasn’t), so either type is fine
3) In this tablespace, I created a table with 1M rows and just two columns: a numeric “id” and a 1000-byte “padding” (to make sure that enough blocks are filled):

create table t tablespace test_fts
as
with gen as (select null from dual connect by level <= 1e5)  -- 100K-row generator
select rownum id, rpad('x', 1000, 'x') padding               -- ~1KB of filler per row
from gen g1,
     gen g2           -- cross join of two generators, capped below at 1M rows
where rownum <= 1e6;

This produces a table of about 1.1-1.2 GB in size, neatly packed into 1M extents. Then I simply counted rows to induce a full tablescan (since there are no indexes defined, tablescan is the only option) for several different values of db_file_multiblock_read_count to find the optimal one (flushing buffer cache in between test runs to remove caching effects).
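
A single test iteration can be expressed roughly as follows (a sketch in SQL*Plus; the MBRC value shown is just an example, and the exact timing collection varied):

alter system flush buffer_cache;                      -- remove caching effects
alter session set db_file_multiblock_read_count = 2;  -- MBRC under test

set timing on
select count(*) from t;                               -- forces a full tablescan (no indexes)
set timing off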

When I started with MBRC values of 1 and 2 (not that I expected any good results here, I was just curious), I noticed that CPU usage was unusually high: 7 seconds (as compared to a 22-second execution time). It’s not unreasonable to see some CPU time during full tablescans, as they involve many activities that keep CPUs busy (acquiring/releasing latches, walking hash chains, pinning buffers, copying memory etc.). However, one third of the total time seemed like a lot, considering that it was just an uncompressed, unencrypted table with only 2 columns, and the query didn’t involve any computationally intensive expressions.

That’s why the plan below appeared odd to me, and why I decided to seek advice on it on Twitter.

select count(*) 
from
 t3


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      7.39      19.37     142860     142973          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      7.39      19.37     142860     142973          0           1

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 85  

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  SORT AGGREGATE (cr=142973 pr=142860 pw=0 time=0 us)
1000000   TABLE ACCESS FULL T3 (cr=142973 pr=142860 pw=0 time=20144866 us cost=79430 size=0 card=1000000)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       2        0.00          0.00
  Disk file operations I/O                        1        0.00          0.00
  db file sequential read                         4        0.01          0.02
  db file scattered read                      71428        0.13         14.26
  SQL*Net message from client                     2        0.00          0.00
********************************************************************************

I also noticed that CPU time decreases as MBRC grows. This can be clearly seen in the plotted results for all MBRCs:

[Figure: CPU time plotted against MBRC]

That’s also somewhat counter-intuitive: if the amount of information we put into memory remains the same, why would the CPU time necessary to process it depend on MBRC?
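
For completeness, here is one way the per-run CPU figures behind the plot could be captured (a minimal sketch using the session’s own statistics; it assumes the session does nothing else between the snapshots):

-- run immediately before and after the count(*); the delta in
-- 'CPU used by this session' is the CPU consumed by the scan,
-- reported in centiseconds
select n.name, s.value
from   v$mystat s
join   v$statname n on n.statistic# = s.statistic#
where  n.name = 'CPU used by this session';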

Several very interesting points were made in the discussion that followed. For the sake of compactness, I’m going to summarize them below, theory by theory, along with the counter-arguments raised.

Author(s): Jonathan Lewis, Frits Hoogland
Theory: problems with measuring the time of asynchronous direct path reads
Counter-arguments: the problem persisted after disabling asynchronous I/O (disk_asynch_io=false); switching to buffered reads also didn’t make much difference

Author(s): Jonathan Lewis
Theory: latching
Counter-arguments: doesn’t explain why CPU decreases as MBRC grows

Author(s): Jonathan Lewis
Theory: a constant cost of creating a multiblock object in the buffer cache
Counter-arguments: makes sense, but how can one test it to confirm?

Author(s): Cary Millsap
Theory: CPU is spent on SORT AGGREGATE, and the rowsource stats are wrong
Counter-arguments: why would the cost of SORT AGGREGATE depend on MBRC? Also, increasing the rowsource sampling frequency didn’t make any difference

Author(s): Cary Millsap, Franck Pachot
Theory: overhead from handling blocks etc.
Counter-arguments: without flushing the buffer cache, the query completed in less than 7 seconds, so it’s unlikely for this overhead to be that big

Author(s): Timur Akhmadeev
Theory: a Windows time-accounting glitch
Counter-arguments: similar results were obtained on Linux

I think that one of the factors important for understanding what is going on is that, compared to other databases I have used for similar testing, this particular database has relatively fast physical reads (even when MBRC is much smaller than the optimal value!), probably because of some hardware optimization (e.g. read-ahead). Because of that, the various CPU overheads on top of physical I/O become more noticeable. But that alone is not sufficient to explain anything.
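
A rough sanity check from the trace above supports this: 14.26 seconds of “db file scattered read” waits spread over 71,428 reads works out to about 0.2 ms per two-block read, which is far below typical service times for a rotating disk and is consistent with read-ahead or caching somewhere below Oracle.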

I’m planning to run more tests to look at latching effects, and also to try to profile the CPU activity, and I’m going to post my results if I find anything. If anyone has any ideas that could shed light on this behavior, please share them; I’d be very much interested to know.
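
For the latching tests, one simple approach (a sketch of the idea, not a finished methodology) is to snapshot latch statistics before and after each scan and compare the deltas across MBRC values:

-- snapshot the buffer-cache related latches around the scan;
-- run before and after, then compare the deltas per MBRC value
select name, gets, misses, spin_gets, sleeps
from   v$latch
where  name in ('cache buffers chains', 'cache buffers lru chain')
order  by name;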
