  Slow update statement Performance Analysis
     
  Add Date : 2018-11-21      
         
       
         
Recently I dealt with a problem. First an alarm was raised on DB time, and on checking the DB time figures I found the load had climbed to nearly 1000% of normal.

Out of curiosity, I wanted to see exactly what kind of statement could lead to such a situation.

First I grabbed an AWR report. Since the problem was concentrated in time and lasted a few hours, I took a one-hour snapshot window.

The relevant parts of the AWR report are as follows:

Cache Sizes

                     Begin      End
Buffer Cache:      39,472M  39,472M   Std Block Size:      8K
Shared Pool Size:   1,440M   1,440M   Log Buffer:     14,256K

As can be seen from the Load Profile below, there is actually not much activity inside the database: redo generation is not high and the transaction rate is not high.

Load Profile

                   Per Second  Per Transaction
Redo size:         154,276.41        24,024.13
Logical reads:       4,864.90           757.57
Block changes:         779.75           121.42
Physical reads:        509.53            79.35
Physical writes:       359.90            56.04
User calls:          2,658.46           413.98
Parses:                837.89           130.48
Hard parses:             0.09             0.01
Sorts:                 171.22            26.66
Logons:                  0.47             0.07
Executes:              949.10           147.80
Transactions:            6.42

Looking at the wait events, the top event was db file sequential read, with an average wait of nearly 17 ms.

Single-block read latency should generally be 10 ms or less. On SAN-based storage with the data cached, the sequential read figure can sometimes stay around 2 ms, but that only shows the SAN has the data in its cache; a spinning disk that genuinely delivered such latency would be remarkably fast. A value above 10 ms, as seen here, indicates that I/O is under heavy pressure. Let's keep this issue in mind and continue looking.

Event                      Waits    Time (s)  Avg Wait (ms)  % Total Call Time  Wait Class
db file sequential read  917,810      15,310             17               96.1  User I/O
CPU time                                 596                               3.7
log file sync             16,085         186             12                1.2  Commit
log file parallel write   15,466         140              9                 .9  System I/O
ARCH wait on SENDREQ         374          10             27                 .1  Network

According to the time model, the majority of DB time was spent executing SQL statements, so the SQL statements deserve close attention.

Statistic Name                           Time (s)  % of DB Time
sql execute elapsed time                15,533.43         97.47
DB CPU                                     596.11          3.74
connection management call elapsed time     82.89          0.52
parse time elapsed                          20.22          0.13
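The time-model percentages let us back out the total DB time for the window, which is useful for cross-checking the other sections (a quick sketch using the figures above):

```python
# Time-model figures from the AWR section above
sql_elapsed_s = 15_533.43   # sql execute elapsed time
sql_pct = 97.47             # its share of DB time
db_cpu_s = 596.11

db_time_s = sql_elapsed_s / (sql_pct / 100)
print(f"total DB time ≈ {db_time_s:,.0f} s")          # ≈ 15,937 s
print(f"DB CPU share ≈ {db_cpu_s / db_time_s:.2%}")   # ≈ 3.74%
```

Note that the 15,310 s of db file sequential read waits seen earlier accounts for roughly 96% of this DB time, consistent with the wait-event table.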
The top-1 SQL statement came as a surprise: it turned out to be a very simple update.

Elapsed Time (s)  CPU Time (s)  Executions  Elap per Exec (s)  % Total DB Time  SQL Id         SQL Text
8,659             69            622         13.92              54.34            94p345yuqh3zd  update user_test t set t.login_status = '' where t.CN_TEST = :1
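The "Elap per Exec" figure in that line is simply total elapsed time divided by executions, and it is worth seeing how large it is for what should be a single-row update:

```python
# Top-SQL figures from the table above
elapsed_s = 8_659   # total elapsed time of sql_id 94p345yuqh3zd
executions = 622

per_exec_s = elapsed_s / executions
print(f"elapsed per execution: {per_exec_s:.2f} s")  # ≈ 13.92 s
```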
My first feeling was that this statement must be doing a full table scan, because nearly 14 seconds for a simple update is a long time.
Of course, speculation needs to be verified, so let's look at the awrsqrpt results.
The report turned out to be very interesting. The execution plan uses a unique index scan, so there is nothing wrong with the plan itself.

Id  Operation          Name                    Rows  Bytes  Cost (%CPU)  Time
 0  UPDATE STATEMENT                                        1 (100)
 1  UPDATE             USER_BILLING
 2  INDEX UNIQUE SCAN  IDX_USER_TEST_CNMASTER     1     30  1 (0)        00:00:01
But the SQL execution statistics are a bit strange.

Stat Name                Statement Total  Per Execution  % Snap Total
Elapsed Time (ms)              8,659,180      13,921.51         54.34
CPU Time (ms)                     69,346         111.49         11.63
Executions                           622
Buffer Gets                    3,146,068       5,057.99         35.91
Disk Reads                       645,229       1,037.35         70.31
Parse Calls                          622           1.00          0.04
Rows                             621,827         999.72
User I/O Wait Time (ms)        8,608,075
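The breakdown of where this time went follows directly from the totals above (a quick check, with the figures copied from the table):

```python
# Execution statistics from the awrsqrpt table above (all in ms)
elapsed_ms = 8_659_180
cpu_ms = 69_346
user_io_wait_ms = 8_608_075

io_share = user_io_wait_ms / elapsed_ms
print(f"user I/O wait share: {io_share:.1%}")   # ≈ 99.4%
print(f"CPU share: {cpu_ms / elapsed_ms:.1%}")  # ≈ 0.8%
```

Almost the entire elapsed time of the statement is user I/O wait rather than CPU work.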
The statement ran for a total of about 8,659 s, of which 8,608 s was spent in user I/O wait; spread over only 622 executions, that is not a large execution count at all.
Given this, I began to suspect a problem with the disk I/O itself.
But checking with MegaCli showed that no bad blocks exist:
# MegaCli -CfgDsply -a0 | grep Error
 Media Error Count: 0
 Other Error Count: 0
 Media Error Count: 0
 Other Error Count: 0
 Media Error Count: 0
 Other Error Count: 0
 Media Error Count: 0
 Other Error Count: 0
 Media Error Count: 0
 Other Error Count: 0
 Media Error Count: 0
 Other Error Count: 0
 Media Error Count: 0
My next guess was that the data types of the bind variables might differ from the column type and be causing the performance problem, but some investigation did not bear that out.
Looking at the input parameter types, they are VARCHAR2 as expected, so there is no possibility that the statement is silently falling back to a full table scan through implicit conversion.

select name, datatype_string, value_string, datatype
  from DBA_HIST_SQLBIND
 where sql_id = '94p345yuqh3zd'
   and snap_id between 58711 and 58712;

NAME  DATATYPE_STRING  VALUE_STRING      DATATYPE
----  ---------------  ----------------  --------
:1    VARCHAR2(128)    xxxxxx9@test.com         1
:1    VARCHAR2(128)    23234324324234           1
As for the I/O bottleneck itself, ADDM supplies what we need.
On disk throughput, the ADDM report says the following:

FINDING 6: 39% impact (6136 seconds)
------------------------------------
The throughput of the I/O subsystem was significantly lower than expected.

  RECOMMENDATION 1: Host Configuration, 39% benefit (6136 seconds)
      ACTION: Consider increasing the throughput of the I/O subsystem.
          Oracle's recommended solution is to stripe all data files using
          the SAME methodology. You might also need to increase the number
          of disks for better performance. Alternatively, consider using
          Oracle's Automatic Storage Management solution.
  RATIONALE: During the analysis period, the average data files' I/O
      throughput was 3.9 M per second for reads and 2.7 M per second for
      writes. The average response time for single block reads was 16
      milliseconds.
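The ADDM throughput figures are consistent with the Load Profile at the top of the report: with the 8 KB block size, the physical read and write rates translate into roughly the same MB/s (a rough cross-check; write I/O is not purely single-block, so the write figure is only approximate):

```python
# Figures taken from the Cache Sizes and Load Profile sections above
block_kb = 8                 # Std Block Size
phys_reads_per_s = 509.53    # Physical reads (blocks/s)
phys_writes_per_s = 359.90   # Physical writes (blocks/s)

read_mb_s = phys_reads_per_s * block_kb / 1024
write_mb_s = phys_writes_per_s * block_kb / 1024
print(f"read throughput  ≈ {read_mb_s:.1f} MB/s")   # ≈ 4.0 MB/s (ADDM: 3.9)
print(f"write throughput ≈ {write_mb_s:.1f} MB/s")  # ≈ 2.8 MB/s (ADDM: 2.7)
```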
This section explains the problem: I/O was hitting a serious bottleneck, and the resulting wait delays were the main cause of the sharp increase in DB time.
Of course we cannot simply follow ADDM's recommendation and switch to ASM right away; that is not something that can be implemented immediately.
But the AWR report still offers some clues for auxiliary tuning.
The first is the size of the shared pool. The database has about 1,000 sessions, but because automatic SGA management is in use, the shared pool ended up with only about 1.4 GB while the buffer cache took about 39 GB. For a database supporting over 1,000 sessions, the shared pool is being squeezed too hard; specifying a somewhat larger minimum value would keep it from being drained.
The other issue relates to why the update executes so slowly with such disproportionate user I/O: very frequent queries scan the same table, which causes some contention on hot blocks. Supporting consistent reads also puts heavier pressure on undo; the undo tablespace of this database is set to a relatively small value and can be enlarged.
     
         
       
         
     
           
     
  CopyRight 2002-2016 newfreesoft.com, All Rights Reserved.