
Tuesday, December 20, 2011

Detect Row Chaining, Migrated Row and Avoid it

--------detecting chained row-----

This query will show how many chained (and migrated) rows each table has:

SELECT owner, table_name, chain_cnt FROM dba_tables WHERE chain_cnt > 0;


-------------------

SELECT a.name, b.value
FROM v$statname a, v$mystat b
WHERE a.statistic# = b.statistic#
AND lower(a.name) = 'table fetch continued row';

----------------------------------------------------------------------------------------

SELECT 'Chained or Migrated Rows = '||value FROM v$sysstat WHERE name = 'table fetch continued row';

Result:
Chained or Migrated Rows = 31637

Explanation:
You could have one table with a single chained row that was fetched 31,637 times, or 31,637 tables each with one chained row fetched once, or any combination in between: the statistic counts continued-row fetches, not distinct rows.

---------------------------------------------------------------------------------

How many Rows in a Table are chained?

ANALYZE TABLE row_mig_chain_demo COMPUTE STATISTICS;

SELECT chain_cnt,
round(chain_cnt/num_rows*100,2) pct_chained,
avg_row_len, pct_free , pct_used
FROM user_tables
WHERE table_name = 'ROW_MIG_CHAIN_DEMO';

CHAIN_CNT PCT_CHAINED AVG_ROW_LEN PCT_FREE PCT_USED
---------- ----------- ----------- ---------- ----------
3 100 3691 10 40

PCT_CHAINED shows 100%, which means all rows are chained or migrated.

------------------------------------------------------------------------------------------------


List Chained Rows

You can look at the chained and migrated rows of a table using the ANALYZE statement with the LIST CHAINED ROWS clause. The results of this statement are stored in a specified table created explicitly to accept the information returned by the LIST CHAINED ROWS clause. These results are useful in determining whether you have enough room for updates to rows.

Creating a CHAINED_ROWS Table

To create the table to accept data returned by an ANALYZE ... LIST CHAINED ROWS statement, execute the UTLCHAIN.SQL or UTLCHN1.SQL script in $ORACLE_HOME/rdbms/admin. These scripts are provided by the database. They create a table named CHAINED_ROWS in the schema of the user submitting the script.


D:\oracle\product\10.2.0\db_1\RDBMS\ADMIN\utlchain.sql will create the following table:


create table CHAINED_ROWS (
owner_name varchar2(30),
table_name varchar2(30),
cluster_name varchar2(30),
partition_name varchar2(30),
subpartition_name varchar2(30),
head_rowid rowid,
analyze_timestamp date
);

After a CHAINED_ROWS table is created, you specify it in the INTO clause of the ANALYZE statement.

ANALYZE TABLE row_mig_chain_demo LIST CHAINED ROWS;

SELECT owner_name,table_name, head_rowid FROM chained_rows;


-----------------------------How to avoid Chained and Migrated Rows?--------------------------


Increasing PCTFREE can help to avoid migrated rows. If you leave more free space available in the block, then the row has room to grow. You can also reorganize or re-create tables and indexes that have high deletion rates. If tables frequently have rows deleted, then data blocks can have partially free space in them. If rows are inserted and later expanded, then the inserted rows might land in blocks with deleted rows but still not have enough room to expand. Reorganizing the table ensures that the main free space consists of completely empty blocks.
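
For example, the demo table could be given more room for row growth in each block (the value 30 is illustrative, not a recommendation; note that changing PCTFREE alone only affects blocks used for future inserts, so already-migrated rows are not fixed until the table is reorganized):

ALTER TABLE row_mig_chain_demo PCTFREE 30;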

The ALTER TABLE ... MOVE statement enables you to relocate data of a nonpartitioned table or of a partition of a partitioned table into a new segment, and optionally into a different tablespace for which you have quota. This statement also lets you modify any of the storage attributes of the table or partition, including those which cannot be modified using ALTER TABLE. You can also use the ALTER TABLE ... MOVE statement with the COMPRESS keyword to store the new segment using table compression.
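
A sketch of such a move; the tablespace name new_data is hypothetical, and you need quota on it:

ALTER TABLE row_mig_chain_demo MOVE
TABLESPACE new_data
PCTFREE 20
COMPRESS;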

ALTER TABLE MOVE

First count the number of Rows per Block before the ALTER TABLE MOVE

SELECT dbms_rowid.rowid_block_number(rowid) "Block-Nr", count(*) "Rows"
FROM row_mig_chain_demo
GROUP BY dbms_rowid.rowid_block_number(rowid) order by 1;

Block-Nr Rows
---------- ----------
2066 3

Now de-chain the table: ALTER TABLE ... MOVE rebuilds the row_mig_chain_demo table in a new segment, specifying new storage parameters:

SELECT distinct table_name FROM CHAINED_ROWS;


ALTER TABLE row_mig_chain_demo MOVE
PCTFREE 20
PCTUSED 40
STORAGE (INITIAL 20K
NEXT 40K
MINEXTENTS 2
MAXEXTENTS 20
PCTINCREASE 0);

Table altered.

Again count the number of Rows per Block after the ALTER TABLE MOVE

SELECT dbms_rowid.rowid_block_number(rowid) "Block-Nr", count(*) "Rows"
FROM row_mig_chain_demo
GROUP BY dbms_rowid.rowid_block_number(rowid) order by 1;


Rebuild the Indexes for the Table

Moving a table changes the rowids of the rows in the table. This causes indexes on the table to be marked UNUSABLE, and DML accessing the table using these indexes will receive an ORA-01502 error. The indexes on the table must be dropped or rebuilt. Likewise, any statistics for the table become invalid and new statistics should be collected after moving the table.
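
Before re-analyzing, you can list the affected indexes; a sketch using the standard USER_INDEXES view, where STATUS shows UNUSABLE for indexes invalidated by the move:

SELECT index_name, status
FROM user_indexes
WHERE table_name = 'ROW_MIG_CHAIN_DEMO';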

ANALYZE TABLE row_mig_chain_demo COMPUTE STATISTICS;

ERROR at line 1:
ORA-01502: index 'SCOTT.SYS_C003228' or partition of such index is in unusable
state


This is the primary key index of the table, which must be rebuilt.

ALTER INDEX SYS_C003228 REBUILD;
Index altered.

------------

SELECT 'ALTER INDEX ' ||INDEX_NAME||' REBUILD;' FROM DBA_INDEXES WHERE TABLE_NAME IN ( SELECT distinct table_name FROM CHAINED_ROWS);


-------------

ANALYZE TABLE row_mig_chain_demo COMPUTE STATISTICS;
Table analyzed.


---------------------


SELECT 'ANALYZE TABLE '||table_name||' COMPUTE STATISTICS;' FROM user_tables WHERE TABLE_NAME IN ( SELECT distinct table_name FROM CHAINED_ROWS);



------------------------



SELECT chain_cnt,
round(chain_cnt/num_rows*100,2) pct_chained,
avg_row_len, pct_free , pct_used
FROM user_tables
WHERE TABLE_NAME IN (
SELECT distinct table_name FROM CHAINED_ROWS);



CHAIN_CNT PCT_CHAINED AVG_ROW_LEN PCT_FREE PCT_USED
---------- ----------- ----------- ---------- ----------

If the table includes LOB column(s), this statement can also move the LOB data and LOB index segments associated with the table, but only those that the user explicitly specifies; by default, LOB data and LOB index segments are not moved.
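
A sketch of moving a table together with one of its LOB columns; the table doc_store, column doc_body, and tablespace lob_data are hypothetical:

ALTER TABLE doc_store MOVE
LOB (doc_body) STORE AS (TABLESPACE lob_data);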




---------------

SELECT owner, table_name, chain_cnt FROM dba_tables WHERE chain_cnt > 0;

-----------------


---------------------------Detect all Tables with Chained and Migrated Rows------------------------


1) Analyse all or only your Tables

SELECT 'ANALYZE TABLE '||table_name||' LIST CHAINED ROWS INTO CHAINED_ROWS;'
FROM user_tables
/



SELECT owner, table_name, chain_cnt FROM dba_tables WHERE owner='LDBO' and chain_cnt > 0;

SELECT 'ANALYZE TABLE ' ||table_name||' LIST CHAINED ROWS INTO CHAINED_ROWS;'
FROM dba_tables WHERE owner='LDBO' and chain_cnt > 0
/


SELECT distinct table_name FROM CHAINED_ROWS;

2) Alter Table ... Move

SELECT DISTINCT 'ALTER TABLE ' ||table_name||' MOVE PCTFREE 20 PCTUSED 40 STORAGE (INITIAL 20K NEXT 40K MINEXTENTS 2 MAXEXTENTS 20 PCTINCREASE 0);' FROM CHAINED_ROWS;


3) Rebuild Indexes
SELECT 'ALTER INDEX ' ||INDEX_NAME||' REBUILD;' FROM DBA_INDEXES WHERE TABLE_NAME IN ( SELECT distinct table_name FROM CHAINED_ROWS);


4) Analyze Tables
SELECT 'ANALYZE TABLE '||table_name||' COMPUTE STATISTICS;' FROM user_tables WHERE TABLE_NAME IN ( SELECT distinct table_name FROM CHAINED_ROWS);

5) Show the RowIDs for all chained rows

This will allow you to quickly see how much of a problem chaining is in each table. If chaining is prevalent in a table, then that table should be rebuilt with a higher value for PCTFREE.

SELECT owner_name,
table_name,
count(head_rowid) row_count
FROM chained_rows
GROUP BY owner_name,table_name
/



6) SELECT owner, table_name, chain_cnt FROM dba_tables WHERE chain_cnt > 0;




Conclusion

Migrated rows affect OLTP systems that use indexed reads to fetch single rows: in the worst case, every read incurs an extra I/O, which would be really bad. Truly chained rows affect both index reads and full table scans.

Row migration is typically caused by UPDATE operations.

Row chaining is typically caused by INSERT operations.

SQL statements that create or query chained/migrated rows perform extra I/O and therefore degrade performance.

To diagnose chained/migrated rows, use the ANALYZE command and query the V$SYSSTAT view.

To remove chained/migrated rows, rebuild the table with a higher PCTFREE using ALTER TABLE ... MOVE.

Wednesday, October 6, 2010

When to Rebuild an Index?

It is important to periodically examine your indexes to determine if they have become skewed and might need to be rebuilt.

When an index is skewed, parts of the index are accessed more frequently than others. As a result, disk contention may occur, creating a performance bottleneck. Here is a sample procedure for identifying such indexes:

1. Gather statistics on your indexes. For large indexes (over one hundred thousand records in the underlying table), use ESTIMATE STATISTICS instead of COMPUTE STATISTICS.


For example:

SQL> analyze index emp_empno_pk compute statistics;
Index analyzed.

2. Run the query below to find out how skewed each index is. This query checks on all indexes that are on emp table.


select index_name, blevel,
       decode(blevel, 0,'OK BLEVEL', 1,'OK BLEVEL', 2,'OK BLEVEL',
                      3,'OK BLEVEL', 4,'OK BLEVEL', 'BLEVEL HIGH') OK
from user_indexes;

select * from dba_indexes where blevel > 4;



3. The BLEVEL (or branch level) is part of the B-tree index format and relates to the number of times Oracle has to narrow its search on the index while searching for a particular record. In some cases, a separate disk hit is requested for each BLEVEL. If the BLEVEL were to be more than 4, it is recommended to rebuild the index.

Note: If you do not analyze the index, the index_check.sql script will show "BLEVEL HIGH" for such an index.

4. Gather more index statistics using the VALIDATE STRUCTURE option of the ANALYZE command to populate the INDEX_STATS virtual table.



SQL> analyze index emp_empno_pk validate structure;
Index analyzed.

5. Run the following query to find out PCT_DELETED ratio.
SQL> select DEL_LF_ROWS*100/decode(LF_ROWS, 0, 1, LF_ROWS) PCT_DELETED,
            (LF_ROWS-DISTINCT_KEYS)*100/decode(LF_ROWS, 0, 1, LF_ROWS) DISTINCTIVENESS
     from index_stats
     where NAME = 'EMP_EMPNO_PK';



The PCT_DELETED column shows the percent of leaf entries (i.e. index entries) that have been deleted and remain unfilled. The more deleted entries an index contains, the more unbalanced the index becomes. If PCT_DELETED is 20% or higher, the index is a candidate for rebuilding. If you can afford to rebuild indexes more frequently, then do so if the value is higher than 10%. Leaving indexes with high PCT_DELETED unrebuilt can cause excessive redo allocation on some systems.

The DISTINCTIVENESS column shows how often a value for the column(s) of the index is repeated on average. For example, if a table has 10000 records and 9000 distinct SSN values, the formula would result in (10000-9000) x 100 / 10000 = 10. This shows a good distribution of values. If, however, the table has 10000 records and only 2 distinct SSN values, the formula would result in (10000-2) x 100 /10000 = 99.98. This shows that there are very few distinct values as a percentage of total records in the column. Such columns are not candidates for a rebuild but good candidates for bitmapped indexes.
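
Putting both measures together, here is a sketch that flags a rebuild candidate directly from INDEX_STATS (remember that INDEX_STATS holds only the most recently validated index, so run ANALYZE ... VALIDATE STRUCTURE first; the 20% threshold is the one suggested above):

select name,
       DEL_LF_ROWS*100/decode(LF_ROWS,0,1,LF_ROWS) pct_deleted,
       (LF_ROWS-DISTINCT_KEYS)*100/decode(LF_ROWS,0,1,LF_ROWS) distinctiveness,
       case when DEL_LF_ROWS*100/decode(LF_ROWS,0,1,LF_ROWS) >= 20
            then 'REBUILD CANDIDATE' else 'OK' end action
from index_stats;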




In general, indexes and tables should be rebuilt when they become too fragmented.
In practice, you probably will not need to do this with Oracle 10g. Fragmentation occurs on tables and indexes with many structural changes (adding/removing columns) and many data changes (insert, update, delete).

From 10g onward, Oracle has a number of automated processes that take care of database performance. One of them is the Segment Advisor, which runs automatically.


-----------------------

Thursday, August 5, 2010

SQL Performance Tuning Tips

1. Use EXPLAIN to profile the query execution plan
2. Use Slow Query Log (always have it on!)
3. Don't use DISTINCT when you have or could use GROUP BY
4. Insert performance
1. Batch INSERT and REPLACE (see the sketch after this list)
2. Use LOAD DATA instead of INSERT
5. LIMIT m,n may not be as fast as it sounds.
6. Don't use ORDER BY RAND() if you have > ~2K records
7. Use SQL_NO_CACHE when you are SELECTing frequently updated data or large sets of data
8. Avoid wildcards at the start of LIKE queries
9. Avoid correlated subqueries in SELECT and WHERE clauses (and try to avoid IN)
10. No calculated comparisons -- isolate indexed columns
11. ORDER BY and LIMIT work best with equalities and covered indexes
12. Separate text/blobs from metadata, don't put text/blobs in results if you don't need them
13. Derived tables (subqueries in the FROM clause) can be useful for retrieving BLOBs without sorting them. (A self-join can speed up a query if the first part finds the IDs and then uses them to fetch the rest.)
14. ALTER TABLE...ORDER BY can take data sorted chronologically and re-order it by a different field -- this can make queries on that field run faster (maybe this goes in indexing?)
15. Know when to split a complex query and join smaller ones
16. Delete small amounts at a time if you can
17. Make similar queries consistent so cache is used
18. Have good SQL query standards
19. Don't use deprecated features
20. Turning OR conditions on multiple indexed fields into UNION may speed things up (with LIMIT) before MySQL 5.0; from 5.0 on, index_merge should pick this up.
21. Don't run COUNT(*) on InnoDB tables for every search; do it occasionally and/or use summary tables, or if you need the total number of rows, use SQL_CALC_FOUND_ROWS and SELECT FOUND_ROWS()
22. Use INSERT ... ON DUPLICATE KEY UPDATE (or INSERT IGNORE) to avoid having to SELECT first
23. Use groupwise maximum instead of subqueries
24. Avoid using IN(...) when selecting on indexed fields; it can kill the performance of a SELECT query.
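
As an illustration of tip 4.1, a single multi-row INSERT saves per-statement overhead and round-trips compared with many single-row statements; the page_views table and its columns are hypothetical:

INSERT INTO page_views (page_id, viewed_at)
VALUES (1, NOW()),
       (2, NOW()),
       (3, NOW());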


Scaling Performance Tips:

1. Use benchmarking
2. Isolate workloads: don't let administrative work (e.g. backups) interfere with customer performance
3. Debugging sucks, testing rocks!
4. As your data grows, indexing may change (cardinality and selectivity change). Structuring may want to change. Make your schema as modular as your code. Make your code able to scale. Plan and embrace change, and get developers to do the same.

Network Performance Tips:

1. Minimize traffic by fetching only what you need.
1. Paging/chunked data retrieval to limit result sizes (see the sketch after this list)
2. Don't use SELECT *
3. Be wary of lots of small quick queries if a longer query can be more efficient
2. Use multi_query if appropriate to reduce round-trips
3. Use stored procedures to avoid bandwidth wastage
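
One way to implement tip 1.1 without a growing OFFSET is keyset paging; the events table is hypothetical, and 41000 stands for the last id seen on the previous page:

SELECT id, payload
FROM events
WHERE id > 41000
ORDER BY id
LIMIT 500;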

OS Performance Tips:

1. Use proper data partitions
1. For MySQL Cluster: start thinking about Cluster *before* you need it
2. Keep the database host as clean as possible. Do you really need a windowing system on that server?
3. Utilize the strengths of the OS
4. pare down cron scripts
5. create a test environment
6. source control schema and config files
7. for LVM innodb backups, restore to a different instance of MySQL so Innodb can roll forward
8. partition appropriately
9. partition your database when you have real data -- do not assume you know your dataset until you have real data

MySQL Server Overall Tips:

1. innodb_flush_log_at_trx_commit=0 can help slave lag
2. Optimize for data types, use consistent data types. Use PROCEDURE ANALYSE() to help determine the smallest data type for your needs.
3. use optimistic locking, not pessimistic locking. try to use shared lock, not exclusive lock. share mode vs. FOR UPDATE
4. if you can, compress text/blobs
5. compress static data
6. don't back up static data as often
7. enable and increase the query and buffer caches if appropriate
8. config params -- http://docs.cellblue.nl/2007/03/17/easy-mysql-performance-tweaks/ is a good reference
9. Config variables & tips:
1. use one of the supplied config files
2. key_buffer, unix cache (leave some RAM free), per-connection variables, innodb memory variables
3. be aware of global vs. per-connection variables
4. check SHOW STATUS and SHOW VARIABLES (GLOBAL|SESSION in 5.0 and up); see the sketch after this list
5. be aware of swapping esp. with Linux, "swappiness" (bypass OS filecache for innodb data files, innodb_flush_method=O_DIRECT if possible (this is also OS specific))
6. defragment tables, rebuild indexes, do table maintenance
7. If you use innodb_flush_log_at_trx_commit=1, use a battery-backed hardware cache write controller
8. more RAM is good, as is faster disk speed
9. use 64-bit architectures
10. --skip-name-resolve
11. increase myisam_sort_buffer_size to optimize large inserts (this is a per-connection variable)
12. look up memory tuning parameter for on-insert caching
13. increase temp table size in a data warehousing environment (default is 32Mb) so it doesn't write to disk (also constrained by max_heap_table_size, default 16Mb)
14. Run in SQL_MODE=STRICT to help identify warnings
15. /tmp dir on battery-backed write cache
16. consider battery-backed RAM for innodb logfiles
17. use --safe-updates for client
18. Redundant data is redundant
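
A minimal sketch for tip 9.4; both are standard MySQL commands, and the names in the LIKE patterns are just examples:

SHOW GLOBAL STATUS LIKE 'Threads_connected';
SHOW GLOBAL VARIABLES LIKE 'key_buffer_size';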

Storage Engine Performance Tips:

1. InnoDB ALWAYS keeps the primary key as part of each index, so do not make the primary key very large
2. Utilize different storage engines on master/slave ie, if you need fulltext indexing on a table.
3. BLACKHOLE engine and replication is much faster than FEDERATED tables for things like logs.
4. Know your storage engines and what performs best for your needs, know that different ones exist.
1. i.e., use MERGE tables or ARCHIVE tables for logs
2. Archive old data -- don't be a pack-rat! 2 common engines for this are ARCHIVE tables and MERGE tables
5. use row-level instead of table-level locking for OLTP workloads
6. try out a few schemas and storage engines in your test environment before picking one.

Database Design Performance Tips:

1. Design sane query schemas. Don't be afraid of table joins; often they are faster than denormalization
2. Don't use boolean flags
3. Use Indexes
4. Don't Index Everything
5. Do not duplicate indexes
6. Do not use large columns in indexes if the ratio of SELECTs:INSERTs is low.
7. be careful of redundant columns in an index or across indexes
8. Use a clever key and ORDER BY instead of MAX
9. Normalize first, and denormalize where appropriate.
10. Databases are not spreadsheets, even though Access really really looks like one. Then again, Access isn't a real database
11. use INET_ATON and INET_NTOA for IP addresses, not char or varchar (see the sketch after this list)
12. make it a habit to REVERSE() email addresses, so you can easily search domains (this will help avoid wildcards at the start of LIKE queries if you want to find everyone whose e-mail is in a certain domain)
13. A NULLable column can take more room to store than a NOT NULL column
14. Choose appropriate character sets & collations: UTF16 will store each character in 2 bytes whether it needs it or not, and latin1 is faster than UTF8.
15. Use Triggers wisely
16. use min_rows and max_rows to specify approximate data size so space can be pre-allocated and reference points can be calculated.
17. Use HASH indexing for indexing across columns with similar data prefixes
18. Use myisam_pack_keys for int data
19. be able to change your schema without ruining functionality of your code
20. segregate tables/databases that benefit from different configuration variables
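
A sketch of tips 11 and 12; the users table and its reversed_email column are hypothetical:

SELECT INET_ATON('192.168.1.10');   -- 3232235786, fits in an unsigned INT
SELECT INET_NTOA(3232235786);       -- '192.168.1.10'
-- Matching a domain against REVERSE()d addresses turns a leading wildcard into a prefix match:
SELECT * FROM users
WHERE reversed_email LIKE CONCAT(REVERSE('@example.com'), '%');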

Other:

1. Hire a MySQL (tm) Certified DBA
2. Know that there are many consulting companies out there that can help, as well as MySQL's Professional Services.
3. Read and post to MySQL Planet at http://www.planetmysql.org
4. Attend the yearly MySQL Conference and Expo or other conferences with MySQL tracks
5. Support your local User Group (link to forge page w/user groups here)

Stored Outlines

Oracle preserves the execution plans in objects called “Stored Outlines.” You can create a Stored Outline for one or more SQL statements and group Stored Outlines into categories. Grouping Stored Outlines allows you to control which category of outlines Oracle uses.


select * from v$parameter where name like '%create_stored_outlines%';
select * from dictionary where table_name like '%OUTLINE%';


ALTER SYSTEM SET create_stored_outlines=TRUE;
ALTER SESSION SET create_stored_outlines=TRUE;



GRANT CREATE ANY OUTLINE TO LDBO;
GRANT EXECUTE_CATALOG_ROLE TO LDBO;


-- Create an outline for a specific SQL statement.

CREATE OUTLINE client_email FOR CATEGORY ldbo_outlines
ON select distinct accounts.fibsacct,accountemaildetail.email from accounts,accountemaildetail where accounts.oowncode=accountemaildetail.oowncode;

-- Check the outline has been created correctly.

SELECT name, category, sql_text FROM user_outlines WHERE category = 'LDBO_OUTLINES';

-- List the hints associated with the outline.

SELECT node, stage, join_pos, hint FROM user_outline_hints WHERE name = 'CLIENT_EMAIL';


SELECT hash_value, child_number, sql_text FROM v$sql WHERE sql_text LIKE 'select distinct accounts.fibsacct,accountemaildetail.email from accounts,account%';


-- Create an outline for the statement.
BEGIN
DBMS_OUTLN.create_outline(
hash_value => 3174963110,
child_number => 0,
category => 'LDBO_OUTLINES');
END;
/

-- Check the outline has been created correctly.

SELECT name, category, sql_text FROM user_outlines WHERE category = 'LDBO_OUTLINES';

SELECT node, stage, join_pos, hint FROM user_outline_hints WHERE name = 'SYS_OUTLINE_10080512161704577';


-- Check if the outlines have been used.
SELECT name, category, used FROM user_outlines;


--------In the following example we will enable stored outlines for the current session.

ALTER SESSION SET query_rewrite_enabled=TRUE;
ALTER SESSION SET use_stored_outlines=LDBO_OUTLINES;



--DROPPING OUTLINES
BEGIN
DBMS_OUTLN.drop_by_cat (cat => 'LDBO_OUTLINES');
END;
/



---------------------

Wednesday, August 4, 2010

AutoTrace

Prerequisites

SQL> @?\rdbms\admin\utlxplan.sql

This creates the PLAN_TABLE for the user executing the script.

----------
Setting AUTOTRACE On

There is also an easier method with SQL*Plus for generating an EXPLAIN PLAN and statistics about the performance of a query.

SET AUTOTRACE ON
select * from accounts;


----------------------

set autotrace off
set autotrace on
set autotrace traceonly

set autotrace on explain
set autotrace on statistics
set autotrace on explain statistics

set autotrace traceonly explain
set autotrace traceonly statistics
set autotrace traceonly explain statistics


set autotrace off explain
set autotrace off statistics
set autotrace off explain statistics

----------


set autotrace on: Shows the execution plan as well as statistics of the statement.
set autotrace on explain: Displays the execution plan only.
set autotrace on statistics: Displays the statistics only.
set autotrace traceonly: Displays the execution plan and the statistics (as set autotrace on does), but doesn't print a query's result.
set autotrace off: Disables all autotrace output.
If autotrace is enabled with statistics, then the following statistics are displayed:

* recursive calls
* db block gets
* consistent gets
* physical reads
* redo size
* bytes sent via SQL*Net to client
* bytes received via SQL*Net from client
* SQL*Net roundtrips to/from client
* sorts (memory)
* sorts (disk)

------------

Cost Based Optimizer (CBO) and Database Statistics


The mechanisms and issues relating to maintenance of internal statistics are explained below:

* Analyze Statement
* DBMS_UTILITY
* DBMS_STATS
* Scheduling Stats
* Transferring Stats
* Issues


1) Analyze Statement
select 'ANALYZE TABLE '||Owner||'.'||table_name||' compute statistics;'
from sys.all_tables where table_name!='_default_auditing_options_'
/

select 'ANALYZE INDEX '||Owner||'.'||index_name||' compute statistics;'
from sys.all_indexes
/

2) DBMS_UTILITY

EXEC DBMS_UTILITY.ANALYZE_SCHEMA('LDBO','COMPUTE');

EXEC DBMS_UTILITY.ANALYZE_SCHEMA('LDBO', 'ESTIMATE')

3) DBMS_STATS
EXEC DBMS_STATS.gather_schema_stats('LDBO');

4) Scheduling Stats

SET SERVEROUTPUT ON
DECLARE
l_job NUMBER;
BEGIN

DBMS_JOB.submit(l_job,
'BEGIN DBMS_STATS.gather_schema_stats(''LDBO''); END;',
SYSDATE,
'SYSDATE + 1');
COMMIT;
DBMS_OUTPUT.put_line('Job: ' || l_job);
END;
/


-----EXEC DBMS_JOB.remove(X);
-----COMMIT;


5) Transferring Stats
It is possible to transfer statistics between servers allowing consistent execution plans between servers with varying amounts of data. First the statistics must be collected into a statistics table. In the following examples the statistics for the APPSCHEMA user are collected into a new table, STATS_TABLE, which is owned by DBASCHEMA:

EXEC DBMS_STATS.create_stat_table('DBASCHEMA','STATS_TABLE');
EXEC DBMS_STATS.export_schema_stats('APPSCHEMA','STATS_TABLE',NULL,'DBASCHEMA');
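
On the target server, after transporting the STATS_TABLE (for example with export/import), the statistics can be loaded back with the matching import call; a sketch assuming the same schema and table names as above:

EXEC DBMS_STATS.import_schema_stats('APPSCHEMA','STATS_TABLE',NULL,'DBASCHEMA');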


6) Issues

* Exclude dataload tables from your regular stats gathering, unless you know they will be full at the time that stats are gathered.
* I've found gathering stats for the SYS schema can make the system run slower, not faster.
* Gathering statistics can be very resource intensive for the server, so avoid peak workload times or gather stale stats only (see the sketch after this list).
* Even if scheduled, it may be necessary to gather fresh statistics after database maintenance or large data loads.
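
A minimal sketch of the stale-only approach mentioned above, reusing the LDBO schema from the earlier examples; GATHER STALE re-gathers only statistics that Oracle has flagged as stale, which assumes table monitoring is in effect:

EXEC DBMS_STATS.gather_schema_stats(ownname => 'LDBO', options => 'GATHER STALE');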

7)

select table_name, avg_row_len, chain_cnt, num_rows,last_analyzed from dba_tables where owner ='LDBO' order by last_analyzed desc;

Explain Plan

SQL> @?\rdbms\admin\utlxplan.sql

This creates the PLAN_TABLE for the user executing the script.

Run EXPLAIN PLAN for the query to be optimized:

1) explain plan for
select * from accounts;

2) select * from PLAN_TABLE; -- to check the output


3) explain plan
SET STATEMENT_ID = 'ACC' FOR
select * from accounts;



4)

SELECT cardinality "Rows",
       lpad(' ',level-1)||operation||' '||options||' '||object_name "Plan",
       cost "Cost by CBO", bytes/1024/1024 "MB", time "Time By CBO"
FROM PLAN_TABLE
CONNECT BY prior id = parent_id
AND prior statement_id = statement_id
START WITH id = 0
AND statement_id = 'ACC'
ORDER BY id;



5)
SELECT LPAD(' ', 2 * (level - 1)) ||
DECODE (level,1,NULL,level-1 || '.' || pt.position || ' ') ||
INITCAP(pt.operation) ||
DECODE(pt.options,NULL,'',' (' || INITCAP(pt.options) || ')') plan,
pt.object_name,
pt.object_type,
pt.bytes,
pt.cost,
pt.partition_start,
pt.partition_stop
FROM plan_table pt
START WITH pt.id = 0
AND pt.statement_id = '&1'
CONNECT BY PRIOR pt.id = pt.parent_id
AND pt.statement_id = '&1';


The following outlines the procedure for running TRACE versus EXPLAIN PLAN:



TRACE

It takes four hours to TRACE a query that takes four hours to run.

  • Set up Init.ora Parameters
  • Create PLAN_TABLE table
  • Run Query
  • Statement is executed; PLAN_TABLE is populated
  • Run TKPROF
  • Output shows disk and memory reads in addition to EXPLAIN PLAN output
EXPLAIN PLAN

It takes less than a minute to EXPLAIN PLAN a query that takes four hours to run.

  • Create PLAN_TABLE table
  • Explain Query
  • PLAN_TABLE is populated
  • Query PLAN_TABLE
  • Output shows EXPLAIN PLAN

Tuesday, August 3, 2010

SQL Trace / TKPROF

To start a SQL trace for the current session, execute:

ALTER SESSION SET sql_trace = true;

or, for another session:

DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(sid, serial#, true);


ALTER SESSION SET tracefile_identifier = mysqltrace;

DBAs can use DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION to trace problematic database sessions. Steps:
select sid, serial# from sys.v_$session where .....


SID SERIAL#
---------- ----------
8 13607

--------------------

Enable Timed Statistics – This parameter enables the collection of certain vital statistics such as CPU execution time, wait events, and elapsed times. The resulting trace output is more meaningful with these statistics. The command to enable timed statistics is:

SQL> ALTER SYSTEM SET timed_statistics = true;

--------------------

DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(sid,serial#,true);
SQL> execute dbms_system.set_sql_trace_in_session(8, 13607, true);


Ask the user to run just what is necessary to demonstrate the problem.
Disable tracing for your selected process:

execute dbms_system.set_sql_trace_in_session(8,13607, false);


Look for the trace file in USER_DUMP_DEST.


ALTER SYSTEM SET sql_trace = false SCOPE=MEMORY;


Identifying trace files

Trace output is written to the database's UDUMP directory.

The default name for a trace file is INSTANCE_PID_ora_TRACEID.trc where:

* INSTANCE is the name of the Oracle instance,
* PID is the operating system process ID (V$PROCESS.OSPID); and
* TRACEID is a character string of your choosing.


---http://www.ordba.net/Tutorials/OracleUtilities~TKPROF.htm---

Trace output is quite unreadable. However, Oracle provides a utility, called TKProf, that can be used to format trace output.


SET ORACLE_SID=SNS6


tkprof filename1 filename2 [waits=yes|no] [sort=option] [print=n]
[aggregate=yes|no] [insert=filename3] [sys=yes|no] [table=schema.table]
[explain=user/password] [record=filename4] [width=n]

C:\> tkprof C:\oracle\product\10.2.0\admin\sns1011\udump\sns6_ora_976.trc C:\oracle\product\10.2.0\admin\sns1011\udump\sns6_ora_976.prf explain = ldbo/ldbo sys=no sort = (PRSDSK,EXEDSK,FCHDSK,EXECPU,FCHCPU)




Some of the things to look for in the TKPROF output are listed below:

Problem: High numbers for the parsing.
Solution: The SHARED_POOL_SIZE may need to be increased.

Problem: The disk reads are very high.
Solution: Indexes are not used or may not exist.

Problem: The "query" and/or "current" (memory reads) are very high.
Solution: Indexes may be on columns with high cardinality (columns where an individual value generally makes up a large percentage of the table). Removing or suppressing the index may increase performance.

Problem: The parse elapsed time is high.
Solution: There may be a problem with the number of open cursors.

Problem: The number of rows processed by a row in the EXPLAIN PLAN is high compared to the other rows.
Solution: This could be a sign of an index with a poor distribution of distinct keys (unique values for a column), or of a poorly written statement.

Problem: The number of misses in the library cache during parse is greater than 1.
Solution: The statement had to be reloaded. You may need to increase the SHARED_POOL_SIZE in the init.ora.


----------------
