Programmer Notes | Step by Step Interpretation of Oracle AWR Performance Analysis Report

  Database, oracle, performance analysis

AWR stands for Automatic Workload Repository. It collects operating statistics and other performance data for a specific database. At fixed intervals (one hour by default), Oracle takes a snapshot of all of its important statistics and workload information and stores it in the AWR. Snapshots are retained for a specified period (one week by default) and then purged; both the snapshot frequency and the retention period can be customized.

The introduction of AWR makes analyzing the database far more convenient (MySQL lags well behind in this respect). There is a well-known metaphor: “a system is like a large dark room, and the statistics the system collects are like candles placed at different positions to illuminate it. With enough properly placed candles (AWR), few corners of the room stay dark and performance bottlenecks are easy to locate. For a system with few or no candles, performance tuning is like dancing in the dark.”

How do you interpret AWR data? Oracle provides several reports for viewing and analysis. The following walks through the AWR Database Report, the most commonly used one. I hope this article helps you make better use of AWR in your own analysis.

I. MAIN

1、Database Information

2、Snapshot Information

(1)Sessions

The number of sessions connected to the instance when the snapshot was taken. It gives a rough idea of the level of concurrency and helps in judging what type of database this is.

(2)Cursors/session

The average number of cursors opened per session.

(3)Elapsed

Comparing Elapsed with DB Time reflects how busy the database is. If DB Time > Elapsed, the database is busy.

(4)DB Time

The time spent on user operations, including CPU time and wait events. This value is usually used to gauge the load on the database during the interval.

Specific meaning

DB Time = CPU time + wait time (excluding idle waits; foreground processes only)

* DB Time records the time the server spends on database operations by foreground (non-background) processes plus their non-idle waits, accumulated per session (see the time model view v$sys_time_model discussed below).

“aggregate data”

It should be noted that AWR data is aggregated. For example, within one minute, if one user waits 30 seconds on an event, then 10 such users accumulate 300 seconds of wait time for that event. The same is true for CPU time: if one CPU is busy for 30 seconds within a minute, four such CPUs accumulate 120 seconds. These times are recorded cumulatively in AWR.

Example

DB CPU is an important indicator of CPU usage. Suppose the system has N CPUs; if all of them are busy, DB CPU within one second is N seconds. Besides CPU, the database also consumes other computing resources such as network, disk, and memory, and their utilization can likewise be measured in time. Suppose M sessions are running; at any moment some may be using CPU while others are accessing disk, so within one second the time of all sessions added together represents how busy the system is during that second, and the maximum of this sum is roughly M. This sum is exactly the other important indicator Oracle provides: DB Time, which measures the total time consumed by foreground processes.

Oracle uses wait events to describe access to computing resources other than CPU. Just as CPU can be split into foreground and background consumption, wait events are also divided into foreground and background wait events. DB Time should generally be close to the sum of DB CPU and the time spent on foreground wait events. Wait time is counted through the v$system_event view, while DB Time and DB CPU are counted through the time model view, v$sys_time_model.
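As a rough cross-check, these quantities can also be read from the live views (a minimal sketch; the values are cumulative since instance startup, whereas AWR works on deltas between snapshots, and v$system_event aggregates background sessions too, so the wait figure only approximates the foreground portion):

select stat_name, round(value / 1e6, 1) as seconds
  from v$sys_time_model
 where stat_name in ('DB time', 'DB CPU');

select round(sum(time_waited_micro) / 1e6, 1) as non_idle_wait_seconds
  from v$system_event
 where wait_class <> 'Idle';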

- Description of DB Time in the Load Profile

* This system has 8 CPUs, so the foreground processes use 7.1/8 = 88.75% of the system CPU. DB Time per second is 11.7, which shows the system is very CPU-busy. DB CPU accounts for 7.1 of it, so the other foreground wait events account for 11.7 - 7.1 = 4.6 seconds of wait time per second. DB CPU as a percentage of DB Time: 7.1/11.7 = 60.68%.

- Description of DB CPU in the Top 5 Timed Events

The Top 5 are listed here by the percentage of DB Time that each CPU/wait event accounts for. If a workload is CPU-bound, you should see DB CPU show up here.

* Note that we just calculated that DB CPU accounts for about 60% of DB Time. The other waits (external table read, direct path write, PX Deq: read credit, PX Deq: Slave Session Stats) make up the remaining roughly 40% as the next four events.

Limitations of the Top 5 Timed Foreground Events

Look at this Top 5 Timed Foreground Events list. Without first looking at the Load Profile, we cannot conclude that the workload is CPU-bound. To know how busy the system CPU is, we also need the time interval between the two snapshots on which this AWR report is based, and the number of CPUs in the system; otherwise, the system could just as well be a very idle one. Remember: CPU utilization = DB CPU / (CPU_COUNT * Elapsed Time). What the Top 5 does tell us is that this workload is probably a parallel query that reads data from external tables and writes it to disk with insert append, and that most of the time is spent on CPU operations.

- Interpreting “DB Time” > “DB CPU” + “time spent on foreground wait events”: process queuing time

As mentioned above, DB Time should generally be equal to the sum of the time spent by DB CPU+foreground wait events. Statistics on these three values are as follows:

DB CPU = 6474.65

DB TIME = 10711.2

FG Wait Time = 1182.63

Obviously DB CPU + FG Wait Time < DB Time: (6474.65 + 1182.63) / 10711.2 accounts for only 71.5%.

* Where did the other 28.5% go? There is a subtle point in how Oracle computes DB CPU and DB Time. When the CPU is very busy and there are many processes on the system, processes queue up waiting for the CPU. DB Time includes the time a process spends queuing for the CPU, while DB CPU does not, and this is an important reason why DB CPU + FG Wait Time < DB Time. If the system CPU is not busy, the two should be fairly close. Do not forget that in this example the CPU is very busy, so 71.5% is a signal that this may be a CPU-bound system.

II. Report Summary

1、Cache Sizes

This section lists the sizes of the data buffer cache and the shared pool at the start and end of the AWR collection window; comparing the two shows how system memory consumption changed.

2、Load Profile

This part is a detailed breakdown of the database resource load, presented as load per second and load per transaction.

  • Redo size

The log size (in bytes) generated per second (per transaction)

  • Logical reads

Logical reads per second (per transaction), in blocks. In many systems the number of selects far exceeds the number of transactions, in which case Logical reads/Executes is the better reference. In a healthy OLTP environment this should not exceed 50 and is usually only around 10; if it is large, some statements need tuning.

  • Block Changes

The number of data blocks changed per second (per transaction).

  • Physical reads

Physical reads per second (per transaction), in blocks. In general a physical read is accompanied by a logical read, unless the read bypasses the cache (direct path read).

  • Physical writes

Physical writes generated per second (per transaction) in block.

  • User calls

Number of user calls per second (per transaction). User calls/Executes roughly represents the number of calls required per statement execution; the closer Executes is to User calls, the better.

  • Parses

The number of parses per second (per transaction), including both soft and hard parses but excluding fast soft parses (session cursor cache hits). More than 300 parses per second suggests your application is parsing inefficiently; consider adjusting session_cached_cursors so that soft parses become fast soft parses.

  • Hard parses

The number of hard parses per second (per transaction). More than 100 per second may indicate that bind variables are not being used effectively.

  • Sorts

Number of sorts per second (per transaction).

  • Logons

Number of logins to the database per second (per transaction).

  • Executes

Number of SQL statements executed per second (per transaction), including both user SQL and SQL executed by the system; it indicates how busy the system is in terms of SQL execution.

  • Transactions

The number of transactions per second, indicating how busy the system is. The busiest system the author knows of is Taobao's online trading system, at around 1,000.

  • % Blocks changed per Read

Indicates the proportion of logical reads that change blocks, as opposed to read-only access. If there is a lot of PL/SQL performing changes, it will be higher.

  • Rollback per transaction %

See if the rollback rate is high, because rollback consumes resources.

  • Recursive Call %

The proportion of recursive SQL calls. SQL executed from within PL/SQL is counted as recursive SQL.

3、Instance Efficiency Percentages (Target 100%)

This section contains memory-efficiency statistics. For OLTP systems these values should be as close to 100% as possible. For OLAP systems they mean much less, because there the speed of large queries is the dominant performance factor.

  • Buffer Nowait %

Percentage of data blocks acquired in a non-wait manner.

A low value indicates that, when SQL accessed a data block, the block was being read into memory by another session and the access had to wait for that operation to complete. This usually happens when certain blocks become hot blocks.

Buffer Nowait < 99% suggests there may be hot blocks (check the TCH column of x$bh and the cache buffers chains children of v$latch_children; a query sketch is given at the end of this subsection).

  • Redo NoWait %

Percentage of redo data obtained in non-wait mode.

  • Buffer Hit %

The data buffer hit rate indicates the hit rate of the data block in the data buffer.

Buffer Hit < 95%: consider increasing db_cache_size, but note that heavy use of non-selective indexes can also push this value very high (a large number of db file sequential reads).

  • In-memory Sort %

The percentage of sorts performed in memory. Total sorts include memory sorts and disk sorts; when there is not enough sort area in memory, the temporary tablespace is used. This is memory sorts as a percentage of total sorts.

A low value indicates a large amount of sorting in the temporary tablespace. In an OLTP environment 100% is ideal; if it is too small, consider adjusting the PGA parameters.

  • Library Hit %

The hit rate of SQL parsing in the shared pool.

Library Hit<95%, consider increasing the shared pool, binding variables, modifying cursor_sharing, etc.

  • Soft Parse %

Soft parses as a percentage of total parses. It can be taken as an approximation of the SQL hit rate in the shared pool.

A low value indicates that some SQL in the system is not being reused; the most likely cause is that bind variables are not used.

< 95%: binding needs to be considered

< 80%: then sql is probably not reused

  • Execute to Parse %

The ratio of executions to parses.

If the value is small, the proportion of parses (hard and soft) is too high and the proportion of fast soft parses is small. Depending on the situation, the session_cached_cursors parameter can be adjusted to improve the cursor hit rate within sessions.

round(100*(1 - :prse/:exe), 2), i.e. (Executes - Parses) / Executes x 100%

prse = select value from v$sysstat where name = 'parse count (total)';

exe = select value from v$sysstat where name = 'execute count';

Not using bind variables is one reason statements cannot be reused; a too-small shared pool is another. Simply increasing session_cached_cursors is not a fundamental fix: statements that are genuinely different still cannot be reused and need to be analyzed. Even soft parses are counted in parse count, so this indicator does not reveal the ratio between fast soft (PGA), soft (shared pool), and hard (newly parsed in the shared pool) parses. A large number of parses can only be avoided by reusing cursors in the program, for example with bind variables in loops in PL/SQL, so this indicator is not strictly tied to whether binds are used. Increasing session_cached_cursors saves resources by turning soft parses into fast soft parses when the parse count is high. (A combined query sketch for this ratio is given at the end of this subsection.)

  • Latch Hit %

Latch’s hit rate.

A low value is usually caused by a shared_pool_size that is too large, or by too many hard parses due to missing bind variables. Keep it above 99%; otherwise there are serious performance problems. Factors such as bind variable usage affect this figure.

  • Parse CPU to Parse Elapsd %

The percentage of parse time that was spent on CPU, i.e. 100 * (parse time cpu / parse time elapsed).

In other words, actual parse CPU time / (actual parse CPU time + time waiting for resources during parsing); the higher the better.

  • % Non-Parse CPU

Non-parse CPU time as a percentage of total CPU time.

Roughly 100 * (1 - parse time cpu / CPU used), i.e. the share of CPU time not spent on parsing.

In other words, actual query run time / (actual query run time + SQL parse time); if it is too low, too much time is being spent on parsing.
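Two query sketches referenced above. The first is the classic hot-block check mentioned under Buffer Nowait % (it must be run as SYS because it uses x$bh; the touch-count threshold 100 is purely illustrative). The second computes Execute to Parse % from v$sysstat as described under that heading; both views hold cumulative values since instance startup.

-- candidate hot blocks: buffers with a high touch count under cache buffers chains child latches
select b.obj, b.file#, b.dbablk, b.tch
  from x$bh b, v$latch_children l
 where b.hladdr = l.addr
   and l.name = 'cache buffers chains'
   and b.tch > 100            -- illustrative threshold
 order by b.tch desc;

-- Execute to Parse % = 100 * (1 - parses / executes)
select round(100 * (1 - p.value / e.value), 2) as execute_to_parse_pct
  from (select value from v$sysstat where name = 'parse count (total)') p,
       (select value from v$sysstat where name = 'execute count') e;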

4、Shared Pool Statistics

  • Memory Usage %

Memory usage of shared pool.

It should stay around 70%-90%; too small wastes memory, while too large means memory is running short.

  • % SQL with executions>1

The proportion of SQL statements executed more than once.

If it is too small, the binding variable may not be used.

  • % Memory for SQL w/exec>1

Memory consumed by SQL executed more than once, as a fraction of the memory consumed by all SQL (i.e. memory for SQL with executions > 1).

5、Top 5 Timed Events

III. RAC Statistics

This part appears only in a RAC environment. It shows performance indicators for sending and receiving blocks in the global cache, along with global lock information. Unless the database has a baseline from normal operation to compare against, it is difficult to see performance problems directly from this data.

Experience

According to Oracle's rule of thumb, among the GCS and GES indicators below, as long as the time-related GCS indicators stay under 10 ms and the GES indicators under 15 ms, inter-node communication is generally working normally. However, normal time indicators do not mean that the application itself, or the way it is deployed on RAC, is free of problems.
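A hedged sketch of how the average block receive times can be checked from v$sysstat on a RAC instance (statistic names as in 10g/11g; the values are cumulative and the times are in centiseconds, so multiplying by 10 yields milliseconds per block):

select round(10 * ct.value / nullif(cb.value, 0), 2) as avg_cr_block_receive_ms,
       round(10 * ut.value / nullif(ub.value, 0), 2) as avg_current_block_receive_ms
  from (select value from v$sysstat where name = 'gc cr block receive time') ct,
       (select value from v$sysstat where name = 'gc cr blocks received') cb,
       (select value from v$sysstat where name = 'gc current block receive time') ut,
       (select value from v$sysstat where name = 'gc current blocks received') ub;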

1、Global Cache Load Profile

2、Global Cache Efficiency Percentages (Target local+remote 100%)

3、Global Cache and Enqueue Services – Workload Characteristics

4、Global Cache and Enqueue Services – Messaging Statistics

5、Global Cache Transfer Stats

* If %Busy of CR is large, there is a large amount of block contention between nodes.

IV. Wait Events Statistics

1、Time Model Statistics

This section of information lists the proportion of database time occupied by various operations.

  • parse time elapsed/hard parse elapsed time

By comparing these two indicators we can see what fraction of total parse time is spent on hard parsing; if it is high, there is a lot of hard parsing.

  • % Not-Parse CPU

The proportion of CPU consumption not spent on parsing, out of total CPU consumption. Conversely, it shows how much CPU parsing occupies; if that share is high, it again points to too much parsing (we can then check whether too much of it is hard parsing).

Example-Calculating CPU Consumption

Total DB CPU = DB CPU + background cpu time = 1305.89 + 35.91 = 1341.8 seconds

Divided by the total BUSY_TIME+IDLE_TIME

% Total CPU = 1341.8/1941.76 = 69.1%, which is just in line with the value of the above Report.

In fact, in the Load Profile section, we can also see DB’s utilization of system CPU resources.

Dividing DB CPU per second by CPU_COUNT gives the percentage of CPU consumed by the database in the foreground.

Here 5.3/8 = 66.25%

It is slightly smaller than 69.1%, indicating that DB also consumes about 3% of CPU in the background.
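The same calculation can be sketched against the live views (an approximation only: v$sys_time_model is in microseconds, v$osstat in centiseconds, and both are cumulative since startup rather than per AWR interval):

select round(100 * ((db.value + bg.value) / 1e6)
                 / ((busy.value + idle.value) / 100), 1) as pct_total_cpu
  from (select value from v$sys_time_model where stat_name = 'DB CPU') db,
       (select value from v$sys_time_model where stat_name = 'background cpu time') bg,
       (select value from v$osstat where stat_name = 'BUSY_TIME') busy,
       (select value from v$osstat where stat_name = 'IDLE_TIME') idle;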

2、Wait Class

This part is the waiting type. It can be seen that the kind of waiting takes the longest time.

3、Wait Events

This part is the details of the waiting events of the whole instance, which contains the information of TOP 5 waiting events.

%Time-outs: the timeout percentage (the basis for the timeout is not entirely clear).

4、Background Wait Events

This part lists the wait events of the instance's background processes. If we suspect that a background process (such as DBWR) cannot respond in time, we can confirm here whether any background event has waited too long.

5、Operating System Statistics

(1) Background knowledge

If you care about database performance, then the first thing you want from an AWR report is probably the utilization of system resources, starting with the CPU. More specifically, CPU may refer to:

  • OS level User%, Sys%, Idle%
  • DB accounts for Busy% of OS CPU resources
  • DB CPU can be divided into CPU consumed by foreground and CPU consumed by background.

(2)11g

If the version of the database is 11g, then fortunately, this information is clear in AWR reports:

At the OS level, %User is 75.4, %Sys is 2.8 and %Idle is 21.2, so %Busy should be 78.8.

The DB accounts for 69.1% of OS CPU resources. %Busy CPU can be derived from the above data: %Busy CPU = %Total CPU / %Busy * 100 = 69.1 / 78.8 * 100 = 87.69, consistent with the reported 87.7.

(3)10g

If it is 10g, some figures in the report need to be calculated manually. The Host CPU result comes from DBA_HIST_OSSTAT, and the AWR report has already compiled the absolute values for this period (the time unit here is the centisecond, i.e. 1/100 second).

Interpretation output

%User = USER_TIME/(BUSY_TIME+IDLE_TIME)*100 = 146355/(152946+41230)*100 = 75.37

%Sys  = SYS_TIME/(BUSY_TIME+IDLE_TIME)*100

%Idle = IDLE_TIME/(BUSY_TIME+IDLE_TIME)*100

ELAPSED_TIME

The length of time between the two snapshots covered by this AWR report is already implied here, via the following formula. Understanding it correctly gives a deeper sense of how system CPU resources are used and how they are measured.

BUSY_TIME + IDLE_TIME = ELAPSED_TIME * CPU_COUNT

Calculated: Elapsed_Time = (152946 + 41230) / 8 / 100 = 242.72 seconds, which checks out.
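For 10g the same OS deltas can be pulled from DBA_HIST_OSSTAT directly (a sketch; :begin_snap and :end_snap are assumed bind variables holding the report's snapshot ids, and on RAC a dbid/instance_number filter should be added):

select stat_name,
       max(case when snap_id = :end_snap   then value end)
     - max(case when snap_id = :begin_snap then value end) as delta_centiseconds
  from dba_hist_osstat
 where snap_id in (:begin_snap, :end_snap)
   and stat_name in ('USER_TIME', 'SYS_TIME', 'BUSY_TIME', 'IDLE_TIME')
 group by stat_name;
-- %User = delta(USER_TIME) / (delta(BUSY_TIME) + delta(IDLE_TIME)) * 100, and so on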

Time statistics view v$sys_time_model

As for how much CPU the DB uses, this involves the time-statistics view v$sys_time_model introduced in 10g. Simply put, Oracle adopts a unified time model to record a number of important time indicators. Specifically, these include:

1) background elapsed time

    2) background cpu time

          3) RMAN cpu time (backup/restore)

1) DB time

    2) DB CPU

    2) connection management call elapsed time

    2) sequence load elapsed time

    2) sql execute elapsed time

    2) parse time elapsed

          3) hard parse elapsed time

                4) hard parse (sharing criteria) elapsed time

                    5) hard parse (bind mismatch) elapsed time

          3) failed parse elapsed time

                4) failed parse (out of shared memory) elapsed time

    2) PL/SQL execution elapsed time

    2) inbound PL/SQL rpc elapsed time

    2) PL/SQL compilation elapsed time

    2) Java execution elapsed time

    2) repeated bind elapsed time

Here we only care about the two CPU-related statistics: background cpu time and DB CPU. Both values are also recorded in AWR.

V. SQL Statistics

1、SQL ordered by Elapsed Time

This part is sorted by SQL execution time from long to short.

  • Elapsed Time(S)

The total elapsed time of the SQL statement's executions; the list is sorted on this field. Note that this is not the time of a single run but the total across all executions within the monitoring window, in seconds. Elapsed Time = CPU Time + Wait Time.

  • CPU Time(s)

The total time taken by CPU for SQL statement execution, which is less than or equal to Elapsed Time. The unit time is seconds.

  • Executions

The total number of executions of the SQL statement within the monitoring window. If Executions = 0, the statement did not complete normally and stopped partway, which deserves attention.

  • Elap per Exec(s)

The average time to execute SQL once. The unit time is seconds.

  • % Total DB Time

Elapsed Time for SQL as a percentage of total database time.

  • SQL ID

The ID of the SQL statement. Clicking it jumps to the detailed SQL text further down; use the browser's Back button to return to the current SQL ID.

  • SQL Module

Shows how the SQL connected to the database to be executed. If it was issued through SQL*Plus or a PL/SQL connection, it is most likely someone debugging. SQL issued by the foreground application usually leaves this field empty.

  • SQL Text

An abbreviated snippet of the SQL text; click the SQL ID for the full statement.

Analysis description

If a SQL statement shows a long elapsed time but low CPU time, it spends most of its time on I/O operations (logical and physical). You can then use the earlier I/O sections and the related wait events to analyze whether there is an I/O problem. Of course, the fact that a statement's wait time is mostly spent on I/O does not by itself prove the system has an I/O bottleneck; it only says the statement does a lot of I/O.

If a SQL statement is executed many times, pay attention to how the records in the corresponding tables change. If the data hardly changes, consider whether most of the operations were rolled back from the front end, resulting in a lot of wasted work. (A sketch for pulling a similar per-interval top list from the AWR history views follows.)
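A sketch of how a per-interval top list can be assembled from the AWR history views (assuming :begin_snap/:end_snap hold the report's snapshot ids; the _DELTA columns already contain the per-snapshot increments):

select *
  from (select sql_id,
               round(sum(elapsed_time_delta) / 1e6, 1) as elapsed_s,
               round(sum(cpu_time_delta) / 1e6, 1)     as cpu_s,
               sum(executions_delta)                   as executions
          from dba_hist_sqlstat
         where snap_id > :begin_snap
           and snap_id <= :end_snap
         group by sql_id
         order by sum(elapsed_time_delta) desc)
 where rownum <= 10;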

2、SQL ordered by CPU Time

This records the TOP SQL by total CPU time (note: the sum of CPU time over all executions of the statement within the monitoring window, not the time of a single execution). The list is ordered by CPU time consumed, descending.

  • CPU Time (s)

CPU time consumed by SQL.

  • Elapsed Time (s)

SQL execution time.

  • Executions

SQL execution times.

  • CPU per Exec (s)

Each execution consumes CPU time.

  • % Total DB Time

SQL execution time as a percentage of total DB time.

3、SQL ordered by Gets

This section lists SQL statements by the number of buffer gets, sorted from largest to smallest. A buffer get is a logical read (consistent read); in a 10046 trace it also appears as a query read. It represents the logical I/O of a statement during execution, in blocks, and the value in the report is cumulative: Buffer Gets = executions * buffer gets per execution. The section records the TOP SQL by share of total buffer gets (logical I/O) across all executions within the monitoring window, not the gets of a single execution.

  • Buffer Gets

Number of memory blocks obtained by SQL execution.

  • Executions

SQL execution times.

  • Gets per Exec

Number of memory blocks obtained per execution.

  • %Total

Percentage of the total.

  • CPU Time (s)

CPU time consumed.

  • Elapsed Time (s)

SQL execution time.

Criteria for filtering SQL

Because statspack/AWR lists the overall top consumers, it focuses on aggregate indicators rather than on statements executed only once. To keep the list from growing too large, the following thresholds are applied. Note that if a statement does not use bind variables, each execution looks like a different SQL to the system, so even a very inefficient statement may not appear in this list.

Only statements whose buffer gets exceed the threshold buffer_gets_th (default 10000) are considered.

At most top_n_sql (default 65) statements are listed.
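Two related sketches. The first shows the current top statements by cumulative buffer gets straight from v$sqlarea (cumulative since the cursor was loaded, not per AWR interval). The second groups statements by force_matching_signature (available from 10gR2) to spot literal-only SQL that, as noted above, may be fragmented across many cursors and therefore missing from the list.

-- current top statements by buffer gets
select *
  from (select sql_id, buffer_gets, executions,
               round(buffer_gets / nullif(executions, 0)) as gets_per_exec
          from v$sqlarea
         order by buffer_gets desc)
 where rownum <= 10;

-- statements that differ only in literals (bind variable candidates)
select force_matching_signature, count(*) as variants
  from v$sqlarea
 where force_matching_signature <> 0
 group by force_matching_signature
having count(*) > 10
 order by variants desc;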

4、SQL ordered by Reads

This section lists SQL by physical reads, sorted from high to low. It records the TOP SQL by share of total physical (disk) reads over all executions within the monitoring window, not the physical reads of a single execution.

  • Physical Reads

Number of SQL physical reads.

  • Executions

SQL execution times.

  • Reads per Exec

Physical reads generated by each SQL execution.

  • %Total

Percentage of total physical reading.

  • CPU Time (s)

CPU time consumed by SQL execution.

  • Elapsed Time (s)

SQL execution time.

5、SQL ordered by Executions

This section lists SQL by number of executions, in descending order. It is most useful for OLTP systems, where SQL statements are executed at very high frequency and that frequency strongly affects performance. It is of little interest for OLAP systems, where statements are repeated at low frequency.

  • Executions

Number of SQL executions.

  • Rows Processed

Number of records processed by SQL.

  • Rows per Exec

Number of records per SQL execution.

  • CPU per Exec (s)

CPU time consumed per execution.

  • Elap per Exec (s)

The duration of each execution.

6、SQL ordered by Parse Calls

This section lists SQL by number of parse calls, from most to fewest. It matters more for OLTP systems. The parse counts listed here do not distinguish hard from soft parses, but even soft parses deserve attention: they consume memory, cause latch waits, and reduce system performance. Excessive soft parsing usually means the application opens and closes cursors too frequently.

  • Parse Calls

Number of SQL analyses.

  • Executions

Number of SQL executions.

  • % Total Parses

Percentage of the total number of parses.

7、SQL ordered by Sharable Memory

Records the TOP SQL by the amount of library cache memory they occupy.

  • Sharable Mem (b)

The amount of library cache memory occupied, in bytes.

8、SQL ordered by Version Count

This section lists SQL with many versions (child cursors); it records the TOP SQL by the number of open child cursors. There are many reasons a SQL statement can produce multiple versions, and the v$sql_shared_cursor view shows the specific cause. For OLTP systems this part is worth watching to understand how well SQL is being reused (a query sketch follows the field list below).

  • Version Count

Number of versions of SQL.

  • Executions

Number of SQL executions.
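A sketch for chasing down high version counts ('&sql_id' is a placeholder substitution variable for one SQL_ID taken from the first query; the Y/N columns of v$sql_shared_cursor indicate why each new child cursor was created):

-- which statements have the most child cursors right now
select *
  from (select sql_id, count(*) as version_count
          from v$sql
         group by sql_id
         order by count(*) desc)
 where rownum <= 10;

-- then inspect the sharing-failure reasons for one of them
select * from v$sql_shared_cursor where sql_id = '&sql_id';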

9、SQL ordered by Cluster Wait Time

This records the TOP SQL by cluster wait time. The section exists only in RAC environments and lists the waits that occur when instances share data blocks in memory. In RAC, a locking mechanism is needed between instances to keep block versions consistent, which introduces a new class of waits: inter-instance data access waits. For a RAC architecture it is better to separate the workload by business area, so that each business uses one instance and the blocks it accesses stay cached in that instance's memory, reducing GC waits between instances. Conversely, if RAC is used in a pure load-balancing mode, sessions from all applications connect to every instance, and large numbers of blocks must be copied and locked in each instance's memory, aggravating GC wait events.

  • Cluster Wait Time (s)

The waiting time of the cluster.

  • CWT % of Elapsd Time

The percentage of waiting time for cluster operations to the total time.

  • Elapsed Time(s)

Total length of SQL execution.

10、Complete List of SQL Text

This section is the complete text of SQL covered in the above sections.

VI. Instance Activity Statistics

1、Instance Activity Stats

This part contains instance-level activity statistics, and there are many of them. For a RAC database, the AWR report of each instance must be analyzed before overall performance can be assessed objectively.

  • CPU used by this session

This statistic shows the number of CPU units Oracle consumed within the collection interval; one CPU unit is 1/100 second. It gives a view of the CPU load.

Case-Analysis of System CPU Busy

In the Top 5 wait events, find “CPU time”: the system consumed 26469 seconds of CPU time.

In the instance statistics section we can see that 1813626 CPU units were consumed over the whole interval, i.e. 21 CPU units per second, which corresponds to 0.21 seconds of CPU processing time per second.

The system has 8 CPUs, so consumption per CPU per second is 21/8 = 2.6 CPU units; in one second, each CPU spends 2.6/100 = 0.026 seconds processing.

* Overall, each CPU in the current database spends only 0.026 seconds per second processing, which is nowhere near a high load. CPU resources are plentiful and far from being a bottleneck.
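The underlying statistic can be read from v$sysstat (a sketch; the value is cumulative across all sessions since instance startup, in 1/100-second CPU units, whereas the case above works with the delta between two snapshots):

select name,
       value                 as cpu_units,
       round(value / 100, 1) as cpu_seconds   -- one CPU unit = 1/100 second
  from v$sysstat
 where name = 'CPU used by this session';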

VII. IO Stats

1、Tablespace IO Stats

I/O performance statistics for tablespaces.

  • Reads

The number of physical reads that occurred.

  • Av Reads/s

Number of physical reads per second.

  • Av Rd(ms)

The average time for one physical read, in milliseconds. On a well-performing disk the response time should be within 10 ms and preferably no more than 20 ms; if it reaches 100 ms, the application will start to have serious problems or even fail to run normally.

  • Av Blks/Rd

The average number of data blocks read per read.

  • Writes

The number of physical writes that occurred.

  • Av Writes/s

Number of writes per second.

  • Buffer Waits

The number of waits encountered while getting blocks in the buffer cache.

  • Av Buf Wt(ms)

The average wait time (ms) for getting blocks in the buffer cache.

2、File IO Stats

I/O statistics at the file level.

VIII. Advisory Statistics

Advisory information. This section contains the output of several advisors, which simulate behavior under different sizing scenarios, covering the data buffer, PGA, shared pool, SGA, streams pool, java pool, and so on.

1、Buffer Pool Advisory

Buffer pool sizing advice. The same figures can be queried live from v$db_cache_advice; a sketch follows the field list below.

  • Size for Est (M)

The candidate buffer pool size being evaluated by Oracle, in MB.

  • Size Factor

The ratio of the candidate size to the actual size; 0.9 means the candidate is 0.9 times the actual size, and 1.0 marks the actual buffer pool size.

  • Buffers for Estimate

The estimated number of buffers at the candidate size.

  • Est Phys Read Factor

The estimated physical-read factor: the ratio of estimated physical reads at the candidate size to the actual physical reads; 1.0 corresponds to the actual physical reads.

  • Estimated Physical Reads

Estimated number of physical reads.
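The same columns can be inspected live in v$db_cache_advice (a sketch for the DEFAULT buffer pool; the advisory must be enabled, which it is by default at typical statistics levels):

select size_for_estimate,
       size_factor,
       buffers_for_estimate,
       estd_physical_read_factor,
       estd_physical_reads
  from v$db_cache_advice
 where name = 'DEFAULT'
 order by size_for_estimate;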

2、PGA Memory Advisory

PGA size recommendations.

  • PGA Target Est (MB)

Estimated size of PGA.

  • Size Factr

The size factor, with the same meaning as in the Buffer Pool Advisory.

  • W/A MB Processed

The estimated amount of work-area data (sorts, hash joins, etc.) processed at this size, in MB.

  • Estd Extra W/A MB Read/ Written to Disk

The amount of work-area data that would have to be read from and written to disk at this size, in MB.

  • Estd PGA Cache Hit %

Estimated PGA hit rate.

  • Estd PGA Overalloc Count

The estimated number of times the PGA would have to be over-allocated at this target size.

3、Shared Pool Advisory

The advisor obtains the corresponding performance indicators by simulating different shared pool sizes.

  • Shared Pool Size(M)

Estimated shared pool size.

  • SP Size Factr

The ratio of the candidate shared pool size to the current size.

  • Est LC Size (M)

Estimated size of the library cache footprint.

  • Est LC Mem Obj

The number of objects in the cache.

  • Est LC Time Saved (s)

The estimated time saved by not having to reload objects into the shared pool at this size.

  • Est LC Time Saved Factr

The factor for the saved time, relative to the current size.

Indicates the effect of each simulated shared pool size on re-reading objects into the shared pool. When the change in this value is small or constant, it makes little sense to increase the size of the shared pool.

  • Est LC Load Time (s)

The estimated time spent loading (parsing) objects into the shared pool.

  • Est LC Load Time Factr

The factor for the load (parse) time, relative to the current size.

  • Est LC Mem Obj Hits

Number of times objects in memory were found.

4、SGA Target Advisory

The advisor's recommendations for overall SGA sizing.

  • SGA Target Size (M)

Estimated SGA size.

  • SGA Size Factor

The ratio of the candidate SGA size to the current size.

  • Est DB Time (s)

DB Time calculated by estimated SGA size.

  • Est Physical Reads

Number of physical reads.

IX. Latch Statistics

1、Latch Activity

  • Get Requests/Pct Get Miss/Avg Slps /Miss

Statistics for latch requests made in willing-to-wait mode.

  • NoWait Requests/Pct NoWait Miss

Statistics for latch requests made in no-wait mode.

  • Pct Misses

The ratio is preferably close to 0.

X. Segment Statistics

1、Segments by Logical Reads

Segments ranked by logical reads.

2、 Segments by Physical Reads

Segments ranked by physical reads.

3、Segments by Buffer Busy Waits

This section shows which objects are accessed most heavily. Buffer busy waits usually point to hot blocks, i.e. data blocks that are accessed too frequently.

4、Segments by Row Lock Waits

The Segments by Row Lock Waits listing in the Segment Statistics section of the AWR report is quite misleading: it includes not only row-level lock waits from transactions but also waits caused by index block splits. I used to complain that v$segment_statistics offered no segment-level index split count; it turns out Oracle does capture it, but is folding it into this statistic really appropriate?
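A sketch for listing the segments with the most row lock waits directly from v$segment_statistics (cumulative since instance startup; as noted above, index block splits may be folded into this count):

select *
  from (select owner, object_name, object_type, value as row_lock_waits
          from v$segment_statistics
         where statistic_name = 'row lock waits'
         order by value desc)
 where rownum <= 10;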

XI. Other Issues

Influence of SQL Running Cycle on Reports

For a SQL statement, its related information (CPU time, elapsed time, and so on) is recorded by Oracle only after the execution finishes. So if a SQL statement is still running, or is terminated, after the second of the two snapshots chosen for the AWR report, its information is not reflected in the report, even though it ran during the sampling window and may have consumed substantial resources.

In other words, the performance report for an interval cannot exactly reflect resource consumption within that sampling window: some SQL started in the interval may finish after it ends, and some SQL may have been running long before the interval started and only finish inside it. All of this means the data for the sampling window is not, strictly speaking, the resource usage of everything the database did during that period.

Author: feng han

Source:Yixin Institute of Technology