
Wednesday, May 9, 2012

Analyze Queue Table (locked table)




-----------------------------------------------------------Analyze particular table------------------------------------------

BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'analyze_queue',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN dbms_stats.gather_table_stats(''LDBO'',''TBLRKQUEUE'' ,force=>TRUE); END;',
    start_date      => '01-APR-12 01.00.00 PM ASIA/CALCUTTA',
    repeat_interval=> 'FREQ=HOURLY;INTERVAL=1',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'JOB to gather Queue Table statistics');
END;
/

exec DBMS_SCHEDULER.RUN_JOB('analyze_queue');

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------


create or replace procedure sp_analyzequeue as
BEGIN
  -- gather stats for every LDBO table whose statistics are locked;
  -- force => TRUE overrides the statistics lock
  FOR cur_rec IN (select owner, table_name
                  from dba_tab_statistics
                  where owner = 'LDBO' and stattype_locked is not null) LOOP
    dbms_stats.gather_table_stats(cur_rec.owner, cur_rec.table_name, force => TRUE);
  END LOOP;
END;
/


BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'analyze_queue',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN sp_analyzequeue; END;',
    start_date      => '01-APR-12 02.00.00 AM ASIA/CALCUTTA',
    repeat_interval => 'freq=DAILY',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'Analyze queue tables');
END;
/

exec DBMS_SCHEDULER.RUN_JOB('analyze_queue');

select table_name,num_rows,to_char(last_analyzed,'DD-Mon-YYYY HH24:MI:SS') last_analyze from user_tables where table_name in (select table_name from dba_tab_statistics where owner='LDBO' and stattype_locked is not null);
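
To confirm the job actually fires, the scheduler's run history can be checked (assumes access to the DBA views; job names are stored in uppercase):

SELECT job_name, status, actual_start_date, run_duration
FROM dba_scheduler_job_run_details
WHERE job_name = 'ANALYZE_QUEUE'
ORDER BY actual_start_date DESC;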


Monday, May 7, 2012

Table Reorganization, Rebuild

Tables in an Oracle database become fragmented after mass deletions, or after many delete and/or insert operations.

BEGIN
  FOR cur_rec IN (SELECT owner, table_name
                  FROM   dba_tables
                  WHERE  owner NOT IN ('DBSNMP','ORACLE_OCM','OUTLN','SYS','SYSMAN','SYSTEM','TSMSYS','WMSYS','XDB')) LOOP
    BEGIN
      EXECUTE IMMEDIATE 'ALTER TABLE '|| cur_rec.owner ||'.'|| cur_rec.table_name ||' ENABLE ROW MOVEMENT';
      EXECUTE IMMEDIATE 'ALTER TABLE '|| cur_rec.owner ||'.'|| cur_rec.table_name ||' SHRINK SPACE COMPACT';
      EXECUTE IMMEDIATE 'ALTER TABLE '|| cur_rec.owner ||'.'|| cur_rec.table_name ||' SHRINK SPACE';
      EXECUTE IMMEDIATE 'ALTER TABLE '|| cur_rec.owner ||'.'|| cur_rec.table_name ||' DISABLE ROW MOVEMENT';
      EXECUTE IMMEDIATE 'ANALYZE TABLE '|| cur_rec.owner ||'.'|| cur_rec.table_name ||' COMPUTE STATISTICS';
    EXCEPTION
      WHEN OTHERS THEN
        NULL; -- skip tables that cannot be shrunk (e.g. LONG columns, non-ASSM tablespaces)
    END;
  END LOOP;
  FOR cur_rec IN (SELECT owner, index_name
                  FROM   dba_indexes
                  WHERE  owner NOT IN ('DBSNMP','ORACLE_OCM','OUTLN','SYS','SYSMAN','SYSTEM','TSMSYS','WMSYS','XDB')) LOOP
    BEGIN
      EXECUTE IMMEDIATE 'ALTER INDEX '|| cur_rec.owner ||'.'|| cur_rec.index_name ||' REBUILD';
      EXECUTE IMMEDIATE 'ANALYZE INDEX '|| cur_rec.owner ||'.'|| cur_rec.index_name ||' COMPUTE STATISTICS';
    EXCEPTION
      WHEN OTHERS THEN
        NULL;
    END;
  END LOOP;
  SYS.UTL_RECOMP.recomp_serial('LDBO');
END;
/


Saturday, March 24, 2012

PGA Top Consumer

set pagesize 1000
set lines 100
col sid format 9999
col username format a12
col module format a30
column pga_memory_mb format 9,999.99 heading "PGA MB"
column max_pga_memory_mb format 9,999.99 heading "PGA MAX|MB"
col service_name format a20
col sql_text format a70 heading "Currently executing SQL"
set echo on

WITH pga AS
(SELECT sid,
ROUND(SUM(CASE name WHEN 'session pga memory'
THEN VALUE / 1048576 END),2) pga_memory_mb,
ROUND(SUM(CASE name WHEN 'session pga memory max'
THEN VALUE / 1048576 END),2) max_pga_memory_mb
FROM v$sesstat
JOIN v$statname USING (statistic#)
WHERE name IN ('session pga memory','session pga memory max' )
GROUP BY sid)
SELECT sid, username,s.module,
pga_memory_mb,
max_pga_memory_mb, substr(sql_text,1,70) sql_text
FROM v$session s
JOIN (SELECT sid, pga_memory_mb, max_pga_memory_mb,
RANK() OVER (ORDER BY pga_memory_mb DESC) pga_ranking
FROM pga)
USING (sid)
LEFT OUTER JOIN v$sql sql
ON (s.sql_id=sql.sql_id and s.sql_child_number=sql.child_number)
WHERE pga_ranking <=5
ORDER BY pga_ranking
/

Thursday, March 15, 2012

Index Clustering Factor

The clustering_factor measures how synchronized an index is with the data in a table. A table with a high clustering factor is out-of-sequence with the rows, and large index range scans will consume lots of I/O. Conversely, an index with a low clustering_factor is closely aligned with the table, and related rows reside together in each data block, making the index very desirable for optimal access.

Oracle provides a column called clustering_factor in the dba_indexes view that shows how well the table rows are synchronized with the index. The table rows are synchronized with the index when the clustering factor is close to the number of data blocks; the column values are not row-ordered when the clustering_factor approaches the number of rows in the table.


select a.index_name, a.num_rows, a.clustering_factor, b.blocks,b.avg_row_len from user_indexes a, user_tables b
where a.num_rows !=0 and a.table_name = b.table_name order by 2 desc,1 desc;


Un-Clustered Table Rows
clustering_factor ~= num_rows

Clustered Table Rows
clustering_factor ~= blocks

------------------------------------------------------------------------------------------------------------------------------------------------
- A good CF is equal (or near) to the values of number of blocks of table.

- A bad CF is equal (or near) to the number of rows of table.

- Rebuilding the index cannot improve the CF.

Then how to improve the CF?

- To improve the CF, it's the table that must be rebuilt (and reordered), as sketched below.
- If the table has multiple indexes, careful consideration needs to be given to which index to order the table by.
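
A minimal sketch of that table rebuild (T, COL1, COL2 are hypothetical; the real steps also need constraints, grants, and a maintenance window):

-- recreate the table in index-key order to lower that index's clustering factor
create table t_ordered as select * from t order by col1, col2;
rename t to t_old;
rename t_ordered to t;
-- indexes stay with the old table, so recreate them on the new one and re-gather statistics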

------------------------------------------------------------------------------------------------------------------------------------------------
Four factors work together to help the CBO decide whether to use an index or a full-table scan: the selectivity of a column value, the db_block_size, the avg_row_len, and the cardinality. An index scan is usually faster if a data column has high selectivity and a low clustering_factor.


When a column has high selectivity, a high clustering_factor, and a small avg_row_len, this indicates that the column values are randomly distributed in the table, and additional I/O will be required to obtain the rows. An index range scan would cause a huge amount of unnecessary I/O, thus making a full-table scan more efficient.

---------------------------------------Calculating the Clustering Factor

To calculate the clustering factor of an index during the gathering of index statistics, Oracle does the following.

For each entry in the index Oracle compares the entry's table rowid block with the block of the previous index entry.

If the block is different, Oracle increments the clustering factor by 1.

If the clustering factor is close to the number of entries in the index, then an index range scan of 1000 index entries may require nearly 1000 blocks to be read from the table.

If the clustering factor is close to the number of blocks in the table, then an index range scan of 1000 index entries may require only 50 blocks to be read from the table.
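
That block-transition count can be approximated with a query (a rough sketch; T and COL1 are hypothetical, with an index assumed on COL1):

-- walk the rows in index-key order and count how often the table block changes
SELECT SUM(CASE WHEN blk <> prev_blk THEN 1 ELSE 0 END) + 1 AS approx_clustering_factor
FROM (SELECT blk, LAG(blk) OVER (ORDER BY col1, rid) prev_blk
      FROM (SELECT col1, rowid rid, dbms_rowid.rowid_block_number(rowid) blk
            FROM t
            WHERE col1 IS NOT NULL));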

CTAS (create table as select)

An index hint is the best solution.



----------------------------------CTAS with ORDER BY

create table transactions14 as select * from transactions;
50 SEC

create table transactions15 as select * from transactions ORDER BY FIRMNUMBER,TRANSACTION,SUBTRANS;
90 SEC



--------------------------------------------Parallel CTAS

create table transactions16 parallel (degree 2) as select * from transactions ORDER BY FIRMNUMBER,TRANSACTION,SUBTRANS;
120 SEC


create table transactions17 parallel (degree 2) as select * from transactions;
40 SEC



create table transactions18 parallel (degree 4) as select * from transactions;

50 SEC

create table transactions20 parallel (degree 8) as select * from transactions;

55 SEC



------------------------------------CTAS using INDEX hint---

SELECT * FROM dba_ind_columns WHERE table_name='TRANSACTIONS';

create table transactions22 as select /*+ index(FIRMNUMBER,TRANSACTION,SUBTRANS) */ * from transactions;
8 sec


create table transactions23 as select /*+ index(FIRMNUMBER) */ * from transactions;
8 sec


----------------------CTAS WITH PRIMARY KEY

create table transactions24 as select /*+ index(FIRMNUMBER,TRANSACTION,SUBTRANS) */ * from transactions;

ALTER TABLE transactions24 ADD constraint pk_SAUDA23 PRIMARY KEY(FIRMNUMBER,TRANSACTION,SUBTRANS);


-----------------------------------------------------------------------
create table transactions22 as select /*+ index(FIRMNUMBER,TRANSACTION,SUBTRANS) */ * from transactions where 1=2;

insert into transactions22 (select * from transactions);
30 sec

insert into transactions22 (select /*+ index(FIRMNUMBER,TRANSACTION,SUBTRANS) */ * from transactions);
30sec


insert /*+ parallel(transactions22,2) */ into transactions22 (select /*+ index(FIRMNUMBER,TRANSACTION,SUBTRANS) */ * from transactions);
60sec


-----------------------------------------------------------------------

create table transactions22 as select /*+ index(FIRMNUMBER,TRANSACTION,SUBTRANS) */ * from transactions where 1=2;

CREATE UNIQUE INDEX "LDBO"."PK_SAUDA1" ON "LDBO"."TRANSACTIONS22" ("FIRMNUMBER", "TRANSACTION", "SUBTRANS") PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 1610612736 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "INDX" ;

analyze table transactions22 compute STATISTICS;

analyze index PK_SAUDA1 compute STATISTICS;


---------------------------------------------no append

insert into dest select * from source1;
189SEC


---------------------------------------------append
insert /*+ append */ into dest select * from source1;

----------------------------------------CTAS, no parallel--------------

insert /*+ append */ into dest select * from source1;


create table dest as select * from source1;

----------------------------------------CTAS, parallel--------------

alter session force parallel ddl parallel 3;
alter session force parallel query parallel 3;

create table transactions22 as select * from transactions;

40SEC

----------------------------------------CTAS, parallel WITH INDEX--------------

alter session force parallel ddl parallel 3;
alter session force parallel query parallel 3;

create table transactions22 as select /*+ index(FIRMNUMBER,TRANSACTION,SUBTRANS) */ * from transactions;


----------------------------------GOOD
CTAS INDEX > CTAS PARALLEL DDL > APPEND

---------------------------------------------------------------------------------------------------

create table transactions22 as select /*+ index(FIRMNUMBER,TRANSACTION,SUBTRANS) */ * from ldbo.transactions@cmldlink where 1=2;

insert into transactions22 (select * from ldbo.transactions@cmldlink);

20 min
---------------------------------------------------

create table transactions22 as select /*+ index(FIRMNUMBER,TRANSACTION,SUBTRANS) */ * from ldbo.transactions@cmldlink where 1=2;

insert into transactions22 (select /*+ index(FIRMNUMBER,TRANSACTION,SUBTRANS) */ * from ldbo.transactions@cmldlink);

2 min 10 sec
-----------------------------------

create table transactions22 as select /*+ index(FIRMNUMBER,TRANSACTION,SUBTRANS) */ * from ldbo.transactions@cmldlink where 1=2;

insert /*+ parallel(transactions22) */ into transactions22 (select /*+ index(FIRMNUMBER,TRANSACTION,SUBTRANS) */ * from ldbo.transactions@cmldlink);

2 min 10 sec

-------------------------------------

create table transactions23 as select /*+ index(FIRMNUMBER,TRANSACTION,SUBTRANS) */ * from ldbo.transactions@cmldlink;

60 sec


-------------------------------------------------------

create table transactions23 as select /*+ index(TRANSACTIONS PK_SAUDAPRIMARY) */ * from ldbo.transactions@cmldlink;
60 SEC

----------------------

create table transactions23 as select /*+ index(TRANSACTIONS PK_SAUDAPRIMARY,IDXCLIENTSAUDA,IDXCLIENTBRSAUDA) */ * from ldbo.transactions@cmldlink;

10 MIN

--------------------------------------------------------

create table transactions24 as select /*+ index(FIRMNUMBER,TRANSACTION,SUBTRANS) */ * from ldbo.transactions@cmldlink where 1=2;

insert /*+ append */ into transactions24 select * from transactions23;

40 sec

--------------------------------------------------

alter session force parallel ddl parallel 4;
alter session force parallel query parallel 4;

create table transactions22 as select /*+ index(FIRMNUMBER,TRANSACTION,SUBTRANS) */ * from ldbo.transactions@cmldlink where 1=2;

insert into transactions22 (select /*+ index(FIRMNUMBER,TRANSACTION,SUBTRANS) */ * from ldbo.transactions@cmldlink);

2min


---------------------------------

alter session force parallel ddl parallel 2;
alter session force parallel query parallel 2;

create table transactions22 as select /*+ index(FIRMNUMBER,TRANSACTION,SUBTRANS) */ * from ldbo.transactions@cmldlink where 1=2;

insert into transactions22 (select /*+ index(FIRMNUMBER,TRANSACTION,SUBTRANS) */ * from ldbo.transactions@cmldlink);

2.5 min

Wednesday, February 8, 2012

Analyze Scheduling using Oracle




---------------------------------------------------------------------frequency 1 day-----------------------------------------

BEGIN
DBMS_SCHEDULER.create_job (
job_name => 'analyze',
job_type => 'PLSQL_BLOCK',
job_action => 'BEGIN DBMS_STATS.gather_schema_stats(''LDBO'',CASCADE=>TRUE); END;',
start_date => '01-APR-12 11.00.00 PM ASIA/CALCUTTA',
repeat_interval => 'freq=DAILY',
end_date => '02-APR-13 11.00.00 PM ASIA/CALCUTTA',
enabled => TRUE,
comments => 'JOB to gather LDBO statistics');
END;
/


----------------- frequency 2 hours---------------------------------------

BEGIN
DBMS_SCHEDULER.create_job (
job_name => 'analyze1',
job_type => 'PLSQL_BLOCK',
job_action => 'BEGIN DBMS_STATS.gather_schema_stats(''LDBO'',CASCADE=>TRUE); END;',
start_date => '16-FEB-12 06.00.00 PM ASIA/CALCUTTA',
repeat_interval=> 'FREQ=HOURLY;INTERVAL=2',
end_date => '02-APR-13 11.00.00 PM ASIA/CALCUTTA',
enabled => TRUE,
comments => 'JOB to gather LDBO statistics every 2 hours');
END;
/

------------------------------------------frequency syntax

FREQ=[YEARLY | MONTHLY | WEEKLY | DAILY | HOURLY | MINUTELY | SECONDLY] ;


-------------------To run a job every Tuesday at 11:25

FREQ=DAILY; BYDAY=TUE; BYHOUR=11; BYMINUTE=25;

FREQ=WEEKLY; BYDAY=TUE; BYHOUR=11; BYMINUTE=25;

FREQ=YEARLY; BYDAY=TUE; BYHOUR=11; BYMINUTE=25;



------------------ To run a job Tuesday and Thursday at 11, 14 and 22 o'clock

FREQ=WEEKLY; BYDAY=TUE,THU; BYHOUR=11,14,22;
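
A repeat_interval string can be tested before it is attached to a job, using DBMS_SCHEDULER.EVALUATE_CALENDAR_STRING (a minimal sketch):

SET SERVEROUTPUT ON
DECLARE
  next_run TIMESTAMP WITH TIME ZONE;
BEGIN
  DBMS_SCHEDULER.EVALUATE_CALENDAR_STRING (
    calendar_string   => 'FREQ=WEEKLY; BYDAY=TUE; BYHOUR=11; BYMINUTE=25',
    start_date        => SYSTIMESTAMP,
    return_date_after => SYSTIMESTAMP,
    next_run_date     => next_run);
  DBMS_OUTPUT.PUT_LINE('Next run: ' || TO_CHAR(next_run));
END;
/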

Saturday, February 4, 2012

Performance Tuning Basic Guidelines

** Redo Log files – ensure that redo logs are allocated on fast disks, with minimum activity.
** Temporary tablespaces – ensure that temporary tablespaces are allocated on fast disks, with minimum activity.
** Fragmentation of tablespaces – defragment tablespaces; use equal sizes for INITIAL and NEXT extents.
** Shared Pool Sizing – 1/3 or more of total physical memory, and check for thrashing/paging/swapping activity.
** DB_BLOCK_BUFFER – to enable buffering of data from datafiles during query and updates/inserts operation.
** Use BIND variables – to minimize parsing of SQL and enable SQL area reuse, and standardize bind-variable naming conventions (see the sketch after this list).
** Identical SQL statements – literally identical – to enable SQL area reuse.
** Initial/Next Extents sizing – ensure INITIAL and NEXT are the same. Should be as small as possible to avoid wasted space, but at the same time large enough to minimize time spent in frequent allocation.
** PCTINCREASE – zero to ensure minimum fragmentation.
** Small PCTUSED and large PCTFREE – to ensure sufficient space for INSERT-intensive operations.
** Freelist groups – large values to ensure parallelization of INSERT-intensive operations.
** INITRANS and MAXTRANS – large values to enable large number of concurrent transactions to access tables.
** Readonly tablespaces – to minimize latches/enqueues resources, as well as PINGING in OPS.
** Create indexes for frequently accessed columns – especially for range scanning and equality conditions in “where” clause.
** Use hash indexes if equality conditions are used and no range scanning is involved.
** If joining of tables is used frequently, consider Composite Indexes.
** Use Clustered tables – columns allocated together.
** Create Index-Organized Tables when data is mostly readonly – to localize both the data and indexes together.
** Use PARALLEL hints to make sure Oracle parallel query is used.
** IO slaves – to enable multiple DB writers to write to disks.
** Minextents and Maxextents sizing – ensure as large as possible to enable preallocation.
** Avoid RAID5 – IO intensive (redo log, archivelog, temporary tablespace, RBS etc)
** MTS mode – to optimize OLTP transaction, but not BATCH environment.
** Partition Elimination – to enable unused tablespaces partition to be archived.
** Performance suffers seriously when bitmap indexes are used on tables with heavy DML. You might have to drop and recreate the bitmap indexes.
** Increase LOG_SIMULTANEOUS_COPIES – minimize contention for redo copy latches.
** In SQLLoader - using direct path over conventional path loading.
** Using parallel INSERT... SELECT when inserting data that already exists in another table in the database – faster than parallel direct loader using SQLLoader.
** Create table/index using UNRECOVERABLE option to minimize REDO LOG updating. SQLloading can use unrecoverable features, or ARCHIVELOG disabled.
** Alter index REBUILD parallel 2 – to enable 2 parallel processes to index concurrently.
** Use large redo log files to minimize log switching frequency.
** Loading is faster when using SQLLOADING if data source is presorted in a file.
** Drop the indexes, and disable all the constraints, when using SQLloader. Recreate the indexes after SQLloader has completed.
** Use Star Query for Data Warehousing-like application: /*+ ORDERED USE_NL(facts) INDEX(facts fact_concat) */ or /*+ STAR */.
** Using Parallel DDL statements in:
** CREATE INDEX
** CREATE TABLE ... AS SELECT
** ALTER INDEX ... REBUILD
** The parallel DDL statements for partitioned tables and indexes are:
** CREATE TABLE ... AS SELECT
** CREATE INDEX
** ALTER TABLE ... MOVE PARTITION
** ALTER TABLE ... SPLIT PARTITION
** ALTER INDEX ... REBUILD PARTITION
** ALTER INDEX ... SPLIT PARTITION
** Parallel analyze on partitioned table - ANALYZE {TABLE,INDEX} PARTITION.
** Using Asynchronous Replication instead of Synchronous replication.
** Create snapshot log to enable fast-refreshing.
** In Replication, use parallel propagation to setup multiple data streams.
** Using ALTER SESSION … SET HASH_JOIN_ENABLED = TRUE.
** Using ALTER SESSION …. ENABLE PARALLEL DML.
** Use ANALYZE TABLE….ESTIMATE STATISTICS for large tables, and COMPUTE STATISTICS for small tables, to create statistics that enable the Cost-Based Optimizer to make a more accurate decision on the optimization technique for the query.
** To reduce contention on the rollback segments, at most 2 parallel process transactions should reside in the same rollback segment.
** To speed up transaction recovery, the initialization parameter CLEANUP_ROLLBACK_ENTRIES should be set to a high value, approximately equal to the number of rollback entries generated for the forward-going operation.
** Using raw devices/partition instead of file system partition.
** Consider increasing the various sort related parameters:
** sort_area_size
** sort_area_retained_size
** sort_direct_writes
** sort_write_buffers
** sort_write_buffer_size
** sort_spacemap_size
** sort_read_fac
** Tune the database buffer cache parameter BUFFER_POOL_KEEP and BUFFER_POOL_RECYCLE to keep the buffer cache after use, or age out the data blocks to recycle the buffer cache for other data.
** Larger values of LOG_BUFFER reduce log file I/O, particularly if transactions are long or numerous. The default setting is four times the maximum data block size for the host operating system.
** DB_BLOCK_SIZE should be multiple of OS block size.
** SHARED_POOL_SIZE –The size in bytes of the area devoted to shared SQL and PL/SQL statements.
** The LOCK_SGA and LOCK_SGA_AREAS parameters lock the entire SGA or particular SGA areas into physical memory.
** You can force Oracle to load the entire SGA into main memory by setting PRE_PAGE_SGA=TRUE in the init.ora file. This slows your startup process slightly, but eliminates cache misses on the library and data dictionary during normal runs.
** Enable DB_BLOCK_CHECKSUM if automatic checksums on data blocks are needed; performance will be degraded slightly.
** Use EXPLAIN PLAN to understand how Oracle processes the query – utlxplan.sql.
** Choose between the FIRST_ROWS or ALL_ROWS hint in an individual SQL statement to determine the best response time required for returning data.
** Use bitmap indexes for low cardinality data.
** Use full-table scan when the data selected ranged over a large percentage of the tables.
** Use DB_FILE_MULTIBLOCK_READ_COUNT – to enable full table scans by a single multiblock read. Increase this value if full table scan is desired.
** Check if row migration or row chaining has occurred - running utlchain.sql.
** Choose between offline backup or online backup plan.
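
As a sketch of the bind-variable guideline above (EMP/DEPTNO are hypothetical; application code would bind through JDBC/OCI the same way):

DECLARE
  v_cnt NUMBER;
BEGIN
  -- one shared cursor is reused for every department value passed in
  EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM emp WHERE deptno = :d'
    INTO v_cnt USING 10;
  DBMS_OUTPUT.PUT_LINE('Rows: ' || v_cnt);
END;
/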

Thursday, December 22, 2011

Table Defragmentation / Table Reorganization / Table Rebuilding

Tables in an Oracle database become fragmented after mass deletions, or after many delete and/or insert operations. If you are running a 24×7 DB, you don't have the option to reorganize (or defragment) the table by the traditional export/truncate/import method, i.e., exporting data from the affected table, truncating the table, then importing the data back into the table.
There is an "alter table table_name move" command that you can use to defragment tables.
Note: This method does not apply to tables with 'LONG' columns.



--------detecting chained row-----

SELECT owner, table_name, chain_cnt FROM dba_tables WHERE chain_cnt > 0;


List Chained Rows

Creating a CHAINED_ROWS Table

@D:\oracle\product\10.2.0\db_1\RDBMS\ADMIN\utlchain.sql will create following table


create table CHAINED_ROWS (
owner_name varchar2(30),
table_name varchar2(30),
cluster_name varchar2(30),
partition_name varchar2(30),
subpartition_name varchar2(30),
head_rowid rowid,
analyze_timestamp date
);


SELECT owner_name,table_name, head_rowid FROM chained_rows;


-------------------


SELECT dbms_rowid.rowid_block_number(rowid) "Block-Nr", count(*) "Rows"
FROM row_mig_chain_demo
GROUP BY dbms_rowid.rowid_block_number(rowid) order by 1;


SELECT chain_cnt,
round(chain_cnt/num_rows*100,2) pct_chained,
avg_row_len, pct_free , pct_used
FROM user_tables
WHERE TABLE_NAME IN (
SELECT distinct table_name FROM CHAINED_ROWS);



If the table includes LOB column(s), this statement can be used to move the table along with the LOB data and LOB index segments (associated with this table) which the user explicitly specifies. If not specified, the default is to not move the LOB data and LOB index segments.




---------------------------Detect all Tables with Chained and Migrated Rows------------------------


1) Analyze all or only your Tables


SELECT 'ANALYZE TABLE '||table_name||' LIST CHAINED ROWS INTO CHAINED_ROWS;' FROM user_tables;


Analyze only chained rows tables

SELECT owner, table_name, chain_cnt FROM dba_tables WHERE owner='LDBO' and chain_cnt > 0;


set heading off;
set feedback off;
set pagesize 1000;
spool C:\temp\chained_statistics.sql;

SELECT 'ANALYZE TABLE ' ||table_name||' LIST CHAINED ROWS INTO CHAINED_ROWS;'
FROM dba_tables WHERE owner='LDBO' and chain_cnt > 0;

spool off



2) Alter Table ......Move

set heading off;
set feedback off;
set pagesize 1000;
spool C:\temp\defrag.sql;

SELECT DISTINCT 'ALTER TABLE ' ||table_name||' MOVE;' FROM CHAINED_ROWS;
spool off


or

select sum(bytes/1024/1024) "FOR INITIAL VALUE OR MORE"
from dba_segments
where owner = 'LDBO'
and segment_name = 'TBLOPTIONACCESSHISTORY';


SELECT DISTINCT 'ALTER TABLE ' ||table_name||' MOVE PCTFREE 20 PCTUSED 40 STORAGE (INITIAL 20K NEXT 40K MINEXTENTS 2 MAXEXTENTS 20 PCTINCREASE 0);' FROM CHAINED_ROWS;


3) Rebuild indexes, because these tables' indexes are in an unstable state.

connect deltek/xxx@fin;
set heading off;
set feedback off;
set pagesize 1000;
spool C:\temp\rebuild_index.sql;

SELECT 'ALTER INDEX ' ||INDEX_NAME||' REBUILD;' FROM DBA_INDEXES WHERE TABLE_NAME IN ( SELECT distinct table_name FROM CHAINED_ROWS);
spool off

4) Analyze Tables for compute statistics after defragmentation

set heading off;
set feedback off;
set pagesize 1000;
spool C:\temp\compute_stat.sql;

SELECT 'ANALYZE TABLE '||table_name||' COMPUTE STATISTICS;' FROM user_tables WHERE TABLE_NAME IN ( SELECT distinct table_name FROM CHAINED_ROWS);

spool off


5) Show the RowIDs for all chained rows

This will allow you to quickly see how much of a problem chaining is in each table. If chaining is prevalent in a table, then that table should be rebuilt with a higher value for PCTFREE.

SELECT owner_name,
table_name,
count(head_rowid) row_count
FROM chained_rows
GROUP BY owner_name,table_name
/



6) SELECT owner, table_name, chain_cnt FROM dba_tables WHERE chain_cnt > 0;

SELECT dbms_rowid.rowid_block_number(rowid) "Block-Nr", count(*) "Rows"
FROM row_mig_chain_demo
GROUP BY dbms_rowid.rowid_block_number(rowid) order by 1;


delete FROM chained_rows;

Tuesday, December 20, 2011

Detect Row Chaining, Migrated Row and Avoid it

--------detecting chained row-----

This query will show how many chained (and migrated) rows each table has:

SELECT owner, table_name, chain_cnt FROM dba_tables WHERE chain_cnt > 0;


-------------------

SELECT a.name, b.value
FROM v$statname a, v$mystat b
WHERE a.statistic# = b.statistic#
AND lower(a.name) = 'table fetch continued row';

----------------------------------------------------------------------------------------

SELECT 'Chained or Migrated Rows = '||value FROM v$sysstat WHERE name = 'table fetch continued row';

Result:
Chained or Migrated Rows = 31637

Explain:
You could have 1 table with 1 chained row that was fetched 31'637 times. You could have 31'637 tables, each with a chained row, each of which was fetched once. You could have any combination of the above -- any combo.

---------------------------------------------------------------------------------

How many Rows in a Table are chained?

ANALYZE TABLE row_mig_chain_demo COMPUTE STATISTICS;

SELECT chain_cnt,
round(chain_cnt/num_rows*100,2) pct_chained,
avg_row_len, pct_free , pct_used
FROM user_tables
WHERE table_name = 'ROW_MIG_CHAIN_DEMO';

CHAIN_CNT PCT_CHAINED AVG_ROW_LEN PCT_FREE PCT_USED
---------- ----------- ----------- ---------- ----------
3 100 3691 10 40

PCT_CHAINED shows 100% which means all rows are chained or migrated.

------------------------------------------------------------------------------------------------


List Chained Rows

You can look at the chained and migrated rows of a table using the ANALYZE statement with the LIST CHAINED ROWS clause. The results of this statement are stored in a specified table created explicitly to accept the information returned by the LIST CHAINED ROWS clause. These results are useful in determining whether you have enough room for updates to rows.

Creating a CHAINED_ROWS Table

To create the table to accept data returned by an ANALYZE ... LIST CHAINED ROWS statement, execute the UTLCHAIN.SQL or UTLCHN1.SQL script in $ORACLE_HOME/rdbms/admin. These scripts are provided by the database. They create a table named CHAINED_ROWS in the schema of the user submitting the script.


D:\oracle\product\10.2.0\db_1\RDBMS\ADMIN\utlchain.sql will create following table


create table CHAINED_ROWS (
owner_name varchar2(30),
table_name varchar2(30),
cluster_name varchar2(30),
partition_name varchar2(30),
subpartition_name varchar2(30),
head_rowid rowid,
analyze_timestamp date
);

After a CHAINED_ROWS table is created, you specify it in the INTO clause of the ANALYZE statement.

ANALYZE TABLE row_mig_chain_demo LIST CHAINED ROWS;

SELECT owner_name,table_name, head_rowid FROM chained_rows;


-----------------------------How to avoid Chained and Migrated Rows?--------------------------


Increasing PCTFREE can help to avoid migrated rows. If you leave more free space available in the block, then the row has room to grow. You can also reorganize or re-create tables and indexes that have high deletion rates. If tables frequently have rows deleted, then data blocks can have partially free space in them. If rows are inserted and later expanded, then the inserted rows might land in blocks with deleted rows but still not have enough room to expand. Reorganizing the table ensures that the main free space is totally empty blocks.

The ALTER TABLE ... MOVE statement enables you to relocate data of a nonpartitioned table or of a partition of a partitioned table into a new segment, and optionally into a different tablespace for which you have quota. This statement also lets you modify any of the storage attributes of the table or partition, including those which cannot be modified using ALTER TABLE. You can also use the ALTER TABLE ... MOVE statement with the COMPRESS keyword to store the new segment using table compression.

ALTER TABLE MOVE

First count the number of Rows per Block before the ALTER TABLE MOVE

SELECT dbms_rowid.rowid_block_number(rowid) "Block-Nr", count(*) "Rows"
FROM row_mig_chain_demo
GROUP BY dbms_rowid.rowid_block_number(rowid) order by 1;

Block-Nr Rows
---------- ----------
2066 3

Now, de-chain the table, the ALTER TABLE MOVE rebuilds the row_mig_chain_demo table in a new segment, specifying new storage parameters:

SELECT distinct table_name FROM CHAINED_ROWS;


ALTER TABLE tbloptionaccesshistory MOVE
PCTFREE 20
PCTUSED 40
STORAGE (INITIAL 20K
NEXT 40K
MINEXTENTS 2
MAXEXTENTS 20
PCTINCREASE 0);

Table altered.

Again count the number of Rows per Block after the ALTER TABLE MOVE

SELECT dbms_rowid.rowid_block_number(rowid) "Block-Nr", count(*) "Rows"
FROM tbloptionaccesshistory
GROUP BY dbms_rowid.rowid_block_number(rowid) order by 1;


Rebuild the Indexes for the Table

Moving a table changes the rowids of the rows in the table. This causes indexes on the table to be marked UNUSABLE, and DML accessing the table using these indexes will receive an ORA-01502 error. The indexes on the table must be dropped or rebuilt. Likewise, any statistics for the table become invalid and new statistics should be collected after moving the table.

ANALYZE TABLE row_mig_chain_demo COMPUTE STATISTICS;

ERROR at line 1:
ORA-01502: index 'SCOTT.SYS_C003228' or partition of such index is in unusable
state


This is the primary key of the table which must be rebuilt.

ALTER INDEX SYS_C003228 REBUILD;
Index altered.

------------

SELECT 'ALTER INDEX ' ||INDEX_NAME||' REBUILD;' FROM DBA_INDEXES WHERE TABLE_NAME IN ( SELECT distinct table_name FROM CHAINED_ROWS);


-------------

ANALYZE TABLE row_mig_chain_demo COMPUTE STATISTICS;
Table analyzed.


---------------------


SELECT 'ANALYZE TABLE '||table_name||' COMPUTE STATISTICS;' FROM user_tables WHERE TABLE_NAME IN ( SELECT distinct table_name FROM CHAINED_ROWS);



------------------------



SELECT chain_cnt,
round(chain_cnt/num_rows*100,2) pct_chained,
avg_row_len, pct_free , pct_used
FROM user_tables
WHERE TABLE_NAME IN (
SELECT distinct table_name FROM CHAINED_ROWS);



CHAIN_CNT PCT_CHAINED AVG_ROW_LEN PCT_FREE PCT_USED
---------- ----------- ----------- ---------- ----------

If the table includes LOB column(s), this statement can be used to move the table along with LOB data and LOB index segments (associated with this table) which the user explicitly specifies. If not specified, the default is to not move the LOB data and LOB index segments.




---------------

SELECT owner, table_name, chain_cnt FROM dba_tables WHERE chain_cnt > 0;

-----------------


---------------------------Detect all Tables with Chained and Migrated Rows------------------------


1) Analyze all or only your Tables

SELECT 'ANALYZE TABLE '||table_name||' LIST CHAINED ROWS INTO CHAINED_ROWS;'
FROM user_tables
/



SELECT owner, table_name, chain_cnt FROM dba_tables WHERE owner='LDBO' and chain_cnt > 0;

SELECT 'ANALYZE TABLE ' ||table_name||' LIST CHAINED ROWS INTO CHAINED_ROWS;'
FROM dba_tables WHERE owner='LDBO' and chain_cnt > 0
/


SELECT distinct table_name FROM CHAINED_ROWS;

2) Alter Table ......Move

SELECT DISTINCT 'ALTER TABLE ' ||table_name||' MOVE PCTFREE 20 PCTUSED 40 STORAGE (INITIAL 20K NEXT 40K MINEXTENTS 2 MAXEXTENTS 20 PCTINCREASE 0);' FROM CHAINED_ROWS;


3) Rebuild Indexes
SELECT 'ALTER INDEX ' ||INDEX_NAME||' REBUILD;' FROM DBA_INDEXES WHERE TABLE_NAME IN ( SELECT distinct table_name FROM CHAINED_ROWS);


4) Analyze Tables
SELECT 'ANALYZE TABLE '||table_name||' COMPUTE STATISTICS;' FROM user_tables WHERE TABLE_NAME IN ( SELECT distinct table_name FROM CHAINED_ROWS);

5) Show the RowIDs for all chained rows

This will allow you to quickly see how much of a problem chaining is in each table. If chaining is prevalent in a table, then that table should be rebuilt with a higher value for PCTFREE.

SELECT owner_name,
table_name,
count(head_rowid) row_count
FROM chained_rows
GROUP BY owner_name,table_name
/



6) SELECT owner, table_name, chain_cnt FROM dba_tables WHERE chain_cnt > 0;




Conclusion

Migrated rows affect OLTP systems which use indexed reads to read singleton rows. In the worst case, you can add an extra I/O to all reads which would be really bad. Truly chained rows affect index reads and full table scans.

Row migration is typically caused by UPDATE operations.

Row chaining is typically caused by INSERT operations.

SQL statements which create or query these chained/migrated rows will degrade in performance due to the extra I/O work.

To diagnose chained/migrated rows, use the ANALYZE command and query the V$SYSSTAT view.

To remove chained/migrated rows, use a higher PCTFREE via ALTER TABLE MOVE.

Tuesday, December 13, 2011

Reason: Archivelog Generation Huge

There are many possible reasons, and each needs to be checked:
> Heavy DML statements.
> DML with LOB and XML datatypes.
> Investigate the SQL via an AWR report.
> Check the size of the redo log files; they might be too small.
> How many log switches occur in an hour (see the query below).
> Whether archive logs are generated at a specific time.
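
A quick way to check log-switch frequency per hour from v$log_history:

SELECT to_char(first_time, 'YYYY-MM-DD HH24') switch_hour, count(*) switches
FROM v$log_history
GROUP BY to_char(first_time, 'YYYY-MM-DD HH24')
ORDER BY 1;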

Resizing / Recreating Redo Logs – increasing redo log size when archivelog generation is huge


--------------------------------------------final steps--------------------------
select group#, status from v$log;

ALTER SYSTEM CHECKPOINT GLOBAL;

select group#, status from v$log;

alter database drop logfile group 1;

alter database add logfile group 1 ('F:\NBSD1112\REDO01.LOG') size 200M reuse ;

alter system switch logfile;
alter system switch logfile;

select group#, status from v$log;


1)
SELECT a.group#, a.member, b.bytes FROM v$logfile a, v$log b WHERE a.group# = b.group#;


Make the last redo log CURRENT

select group#, status from v$log;

alter system switch logfile;

select group#, status from v$log;


ALTER SYSTEM CHECKPOINT GLOBAL;

ALTER DATABASE DROP LOGFILE GROUP 1;



2) Re-create dropped online redo log group


alter database add logfile group 1 ('F:\NBSD1112\REDO01.LOG' ) size 200m reuse;



3)
select group#, status from v$log;


GROUP# STATUS
---------- ----------------
1 UNUSED
2 INACTIVE
3 CURRENT


Force another log switch

alter system switch logfile;



select group#, status from v$log;

GROUP# STATUS
---------- ----------------
1 CURRENT
2 INACTIVE
3 ACTIVE





4)
Loop back to Step 2 until all logs are rebuilt

alter database add logfile group 2 ('F:\NBSD1112\REDO02.LOG' ) size 200m reuse;




-----------------------------------SECOND METHOD-------------------

SELECT a.group#, a.member, b.bytes FROM v$logfile a, v$log b WHERE a.group# = b.group#;

GROUP# MEMBER BYTES
1 F:\NBSD1112\REDO01.LOG 52428800
2 F:\NBSD1112\REDO02.LOG 52428800
3 F:\NBSD1112\REDO03.LOG 52428800


Here is how I changed this to five 200M redo logs:

SQL> alter database add logfile group 4 ('F:\NBSD1112\REDO04.LOG') size 200M;
SQL> alter database add logfile group 5 ('F:\NBSD1112\REDO05.LOG') size 200M;

while running following sql commands, if you hit an error like this:

ORA-01623: log 3 is current log for instance RPTDB (thread 1) - cannot drop
ORA-00312: online log 3 thread 1: 'F:\NBSD1112\REDO03.LOG'

you should run " alter system switch logfile;" until current log is 4 or 5.

Then execute "alter system checkpoint;"

SQL> alter database drop logfile group 1;
SQL> alter database drop logfile group 2;
SQL> alter database drop logfile group 3;

then move (or maybe drop) old redo logs

RENAME F:\NBSD1112\REDO01.LOG F:\NBSD1112\REDO01_OLD.LOG
RENAME F:\NBSD1112\REDO02.LOG F:\NBSD1112\REDO02_OLD.LOG
RENAME F:\NBSD1112\REDO03.LOG F:\NBSD1112\REDO03_OLD.LOG

finally

SQL> alter database add logfile group 1 ('F:\NBSD1112\REDO01.LOG') size 200M;
SQL> alter database add logfile group 2 ('F:\NBSD1112\REDO02.LOG') size 200M;
SQL> alter database add logfile group 3 ('F:\NBSD1112\REDO03.LOG') size 200M;

Thursday, December 8, 2011

EXPLAIN PLAN


SQL> explain plan for
2 SELECT * FROM TBLBANKDETAIL;

Explained.

SQL> set linesize 160
SQL> set pages 300
SQL> select * from table(dbms_xplan.display());

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------------
Plan hash value: 3594628191

-----------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 100K| 10M| 253 (1)| 00:00:04 |
| 1 | TABLE ACCESS FULL| TBLBANKDETAIL | 100K| 10M| 253 (1)| 00:00:04 |
-----------------------------------------------------------------------------------

8 rows selected.

SQL>

Index DeFragmentation / Rebuild / Index Performance Tuning



Index Defragmentation Steps

Analyzing statistics
Validating the index
Checking PCT_USED
Dropping and rebuilding (or coalescing) the index

-----The following INDEX_STATS columns are especially useful:

height – the maximum number of levels encountered within the index.

lf_rows – the total number of leaf rows in the index.

del_lf_rows – the number of leaf rows that have been marked deleted as a result of table DELETEs.

SQL>analyze index INDEX_NAME validate structure;


Then query INDEX_STATS view
1. If del_lf_rows/lf_rows is > 0.2, the index should be rebuilt.
2. If height is 4 or more, the index should be rebuilt.
3. If lf_rows is lower than lf_blks, the index should be rebuilt.




----------validate index------

spool c:\index_validate.sql

select 'Analyze Index '||index_name||' validate structure;' from user_indexes;
spool off

@c:\index_validate.sql


-------------following query should be run after all analyze statement

Select Name,(DEL_LF_ROWS_LEN/LF_ROWS_LEN)*100 as index_usage From index_stats Where LF_ROWS_LEN!=0 order by Name ;


select DEL_LF_ROWS*100/decode(LF_ROWS, 0, 1, LF_ROWS) PCT_DELETED,(LF_ROWS-DISTINCT_KEYS)*100/ decode(LF_ROWS,0,1,LF_ROWS) DISTINCTIVENESS
from index_stats;

If the PCT_DELETED is 20% or higher, the index is a candidate for rebuilding.



-------following scripts validate the indexes and find to rebuild and rebuild them-------

set serveroutput on size 100000

DECLARE
vOwner dba_indexes.owner%TYPE; /* Index Owner */
vIdxName dba_indexes.index_name%TYPE; /* Index Name */
vAnalyze VARCHAR2(100); /* String of Analyze Stmt */
vCursor NUMBER; /* DBMS_SQL cursor */
vNumRows INTEGER; /* DBMS_SQL return rows */
vHeight index_stats.height%TYPE; /* Height of index tree */
vLfRows index_stats.lf_rows%TYPE; /* Index Leaf Rows */
vDLfRows index_stats.del_lf_rows%TYPE; /* Deleted Leaf Rows */
vDLfPerc NUMBER; /* Del lf Percentage */
vMaxHeight NUMBER; /* Max tree height */
vMaxDel NUMBER; /* Max del lf percentage */
CURSOR cGetIdx IS SELECT owner,index_name
FROM dba_indexes WHERE OWNER NOT LIKE 'SYS%';
BEGIN
/* Define maximums. This section can be customized. */
vMaxHeight := 3;
vMaxDel := 20;

/* For every index, validate structure */
OPEN cGetIdx;
LOOP
FETCH cGetIdx INTO vOwner,vIdxName;
EXIT WHEN cGetIdx%NOTFOUND;
/* Open DBMS_SQL cursor */
vCursor := DBMS_SQL.OPEN_CURSOR;
/* Set up dynamic string to validate structure */
vAnalyze := 'ANALYZE INDEX ' || vOwner || '.' || vIdxName || ' VALIDATE STRUCTURE';
DBMS_SQL.PARSE(vCursor,vAnalyze,DBMS_SQL.NATIVE);
vNumRows := DBMS_SQL.EXECUTE(vCursor);
/* Close DBMS_SQL cursor */
DBMS_SQL.CLOSE_CURSOR(vCursor);
/* Does index need rebuilding? */
/* If so, then generate command */
SELECT height,lf_rows,del_lf_rows INTO vHeight,vLfRows,vDLfRows
FROM INDEX_STATS;
IF vDLfRows = 0 THEN /* handle case where div by zero */
vDLfPerc := 0;
ELSE
vDLfPerc := (vDLfRows / vLfRows) * 100;
END IF;
IF (vHeight > vMaxHeight) OR (vDLfPerc > vMaxDel) THEN
DBMS_OUTPUT.PUT_LINE('ALTER INDEX ' || vOwner || '.' || vIdxName || ' REBUILD;');
END IF;

END LOOP;
CLOSE cGetIdx;
END;
/

Friday, April 8, 2011

PGA Tuning

Select Name,Value/1024/1024 From V$parameter where name like '%pga%';

Make a first estimate for PGA_AGGREGATE_TARGET, based on a rule of thumb. By default, Oracle uses 20% of the SGA size. However, this initial setting may be too low for a large DSS system.

You must then divide the resulting memory between the SGA and the PGA.

  • For OLTP systems, the PGA memory typically accounts for a small fraction of the total memory available (for example, 20%), leaving 80% for the SGA.

  • For DSS systems running large, memory-intensive queries, PGA memory can typically use up to 70% of that total (up to 2.2 GB in this example).

Good initial values for the parameter PGA_AGGREGATE_TARGET might be:

  • For OLTP: PGA_AGGREGATE_TARGET = (total_mem * 80%) * 20%

  • For DSS: PGA_AGGREGATE_TARGET = (total_mem * 80%) * 50%

    where total_mem is the total amount of physical memory available on the system.
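
A worked example, assuming 4 GB of total physical memory (consistent with the 2.2 GB figure above):

  • For OLTP: PGA_AGGREGATE_TARGET = (4096 MB * 80%) * 20% = ~655 MB

  • For DSS: PGA_AGGREGATE_TARGET = (4096 MB * 80%) * 50% = ~1638 MB initially, and up to 70% = ~2294 MB (about 2.2 GB)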





The PGA_AGGREGATE_TARGET should be set to attempt to keep the ESTD_PGA_CACHE_HIT_PERCENTAGE greater than 95 percent. By setting this appropriately, more data will be sorted in memory that would otherwise have been sorted on disk. The next query returns the minimum value for the PGA_AGGREGATE_TARGET that is projected to yield a 95 percent or greater cache hit ratio:



Select Min(Pga_Target_For_Estimate/1024/1024) "recommended_pga"
from v$pga_target_advice
Where Estd_Pga_Cache_Hit_Percentage > 95;




alter system set pga_aggregate_target = <recommended_pga>M;

Kill Blocking Session

SELECT 'alter system kill session '''||ss.sid||','||ss.serial#||''';'
FROM v$process pr, v$session ss, v$sqlarea sqa
WHERE pr.addr=ss.paddr
AND ss.username is not null
AND ss.sql_address=sqa.address(+)
AND ss.sql_hash_value=sqa.hash_value(+)
AND ss.status='ACTIVE'
AND ss.blocking_session_status='VALID';

Wednesday, April 6, 2011

Check Unused Space

SQL> set serveroutput on
SQL> set pages 1000
SQL> set lines 160

DECLARE
  alc_bks  NUMBER;
  alc_bts  NUMBER;
  unsd_bks NUMBER;
  unsd_bts NUMBER;
  luefi    NUMBER;
  luebi    NUMBER;
  lub      NUMBER;
BEGIN
  DBMS_SPACE.UNUSED_SPACE (
    segment_owner             => 'RNCRY'
  , segment_name              => 'COMS'
  , segment_type              => 'TABLE'
  , total_blocks              => alc_bks
  , total_bytes               => alc_bts
  , unused_blocks             => unsd_bks
  , unused_bytes              => unsd_bts
  , last_used_extent_file_id  => luefi
  , last_used_extent_block_id => luebi
  , last_used_block           => lub
  );
  DBMS_OUTPUT.PUT_LINE('Allocated space = ' || alc_bts);
  DBMS_OUTPUT.PUT_LINE('Unused space    = ' || unsd_bts);
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE(SUBSTR(SQLERRM, 1, 250));
END;
/

Allocated space = 8534360064
Unused space = 46874624

PL/SQL procedure successfully completed.

Wednesday, February 23, 2011

Wait Events

http://www.scribd.com/doc/3321687/09-enqueues

SQL*Net message from client

The server process (foreground process) waits for a message from the client process to arrive.




db file scattered

The db file scattered Oracle metric event signifies that the user process is reading buffers into the SGA buffer cache and is waiting for a physical I/O call to return.

A db file scattered read issues a scatter-read to read the data into multiple discontinuous memory locations. A scattered read is usually a multiblock read. It can occur for a fast full scan (of an index) in addition to a full table scan.

* db file sequential read—A single-block read (i.e., index fetch by ROWID)

* db file scattered read—A multiblock read (a full-table scan, OPQ, sorting)
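
To see which non-idle wait events dominate an instance, totals can be pulled from v$system_event (TIME_WAITED is in centiseconds):

SELECT event, total_waits, time_waited
FROM v$system_event
WHERE wait_class <> 'Idle'
ORDER BY time_waited DESC;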


read by other session
read by other session occurs when two users need access to the same block of data. The first user reads the data from disk and places it in the buffer cache. The second user has to wait for the first user's operation to complete, so it is placed into a wait. This is when the read by other session wait occurs. Unfortunately, this is one of those events we need to "catch in the act" to properly resolve.
http://www.rampant-books.com/art_read_by_other_session.htm



log file sync
When a user session commits, the session's redo information needs to be flushed to the redo logfile. The user session will post the LGWR to write the log buffer to the redo log file. When the LGWR has finished writing, it will post the user session.

log file parallel write
Writing redo records to the redo log files from the log buffer.


db file parallel write
The db file parallel write Oracle metric occurs when the process, typically DBWR, has issued multiple I/O requests in parallel to write dirty blocks from the buffer cache to disk, and is waiting for all requests to complete.




PX Deq Credit: send blkd



direct path read
direct path read waits only when you are doing a parallel full-scan.



enq: RO - fast object reuse



Buffer Busy Waits
A buffer busy wait occurs if multiple processes want to access a buffer in the buffer cache concurrently.
The main way to reduce buffer busy waits is to reduce the total I/O on the system. This can be done by tuning the SQL to access rows with fewer block reads (i.e., by adding indexes). Even if we have a huge db_cache_size, we may still see buffer busy waits, and increasing the buffer size won't help.


The most common remedies for high buffer busy waits include database writer (DBWR) contention tuning, adding freelists to a table and index, implementing Automatic Segment Storage Management (ASSM, a.k.a. bitmap freelists), and, of course, adding a missing index to reduce buffer touches.






rdbms ipc message

The background processes (LGWR, DBWR, LMS0) use this event to indicate that they are idle and are waiting for the foreground processes to send them an IPC message to do some work.




Streams AQ: waiting for messages in the queue

The session is waiting on an empty OLTP queue (Advanced Queuing) for a message to arrive so that the session can dequeue that message.





library cache lock
Oracle's library cache is nothing more than an area in memory, specifically one of three parts inside the shared pool. The library cache is composed of shared SQL areas, PL/SQL packages and procedures, various locks & handles, and in the case of a shared server configuration, stores private SQL areas. Whenever an application wants to execute SQL or PL/SQL (collectively called code), that code must first reside inside Oracle's library cache. When applications run and reference code, Oracle will first search the library cache to see if that code already exists in memory.


Two common situations:
1. A library cache lock / pin wait happens when an object is pinned in memory (being executed or compiled) and another session wants to use or modify it (compilation, grant, and so on).
2. A first session runs a long DML operation and a second session then attempts DDL (e.g., ALTER TABLE) on the same object.



Time Model Statistics

The goal of a DBA would be to reduce the DB time number to be as low as possible for any given time period. Obviously DBAs constantly try to reduce this number by eliminating wait events, but now we have a bit more incentive to reduce DB time by tuning SQL, applications, architecture, database design, instance layout, etc. – realizing that if we can produce a result set faster, then DB time will also be reduced.
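
The time model numbers themselves can be read from v$sys_time_model (values in microseconds; DB time and DB CPU are the headline rows):

SELECT stat_name, value
FROM v$sys_time_model
ORDER BY value DESC;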

Wednesday, October 6, 2010

When to Rebuild an Index?

When to Rebuild an Index?

It is important to periodically examine your indexes to determine if they have become skewed and might need to be rebuilt.

When an index is skewed, parts of an index are accessed more frequently than others. As a result, disk contention may occur, creating a bottleneck in performance. Here is a sample procedure on how to identify such indexes:

1. Gather statistics on your indexes. For large indexes (over one hundred thousand records in the underlying table), use ESTIMATE STATISTICS instead of COMPUTE STATISTICS.


For example:

SQL> analyze index emp_empno_pk compute statistics;
Index analyzed.

2. Run the query below to find out how skewed each index is. This query checks on all indexes that are on emp table.


select index_name, blevel,decode(blevel,0,'OK BLEVEL',1,'OK BLEVEL',2,'OK BLEVEL',3,'OK BLEVEL',4,'OK BLEVEL','BLEVEL HIGH') OK
from user_indexes;

select * from dba_indexes where blevel > 4;



3. The BLEVEL (or branch level) is part of the B-tree index format and relates to the number of times Oracle has to narrow its search on the index while searching for a particular record. In some cases, a separate disk hit is requested for each BLEVEL. If the BLEVEL were to be more than 4, it is recommended to rebuild the index.

Note: If you do not analyze the index, the index_check.sql script will show "BLEVEL HIGH" for such an index.

4. Gather more index statistics using the VALIDATE STRUCTURE option of the ANALYZE command to populate the INDEX_STATS virtual table.



SQL> analyze index emp_empno_pk validate structure;
Index analyzed.

5. Run the following query to find out PCT_DELETED ratio.
SQL> select DEL_LF_ROWS*100/decode(LF_ROWS, 0, 1, LF_ROWS) PCT_DELETED,
2 (LF_ROWS-DISTINCT_KEYS)*100/ decode(LF_ROWS,0,1,LF_ROWS) DISTINCTIVENESS
3 from index_stats
4 where NAME='EMP_EMPNO_PK';



The PCT_DELETED column shows the percent of leaf entries (i.e. index entries) that have been deleted and remain unfilled. The more deleted entries exist on an index, the more unbalanced the index becomes. If the PCT_DELETED is 20% or higher, the index is candidate for rebuilding. If you can afford to rebuild indexes more frequently, then do so if the value is higher than 10%. Leaving indexes with high PCT_DELETED without rebuild might cause excessive redo allocation on some systems.

The DISTINCTIVENESS column shows how often a value for the column(s) of the index is repeated on average. For example, if a table has 10000 records and 9000 distinct SSN values, the formula would result in (10000-9000) x 100 / 10000 = 10. This shows a good distribution of values. If, however, the table has 10000 records and only 2 distinct SSN values, the formula would result in (10000-2) x 100 /10000 = 99.98. This shows that there are very few distinct values as a percentage of total records in the column. Such columns are not candidates for a rebuild but good candidates for bitmapped indexes.




In general, indexes and tables should be rebuilt when they become too fragmented.
In practice, you will probably not need to do it with Oracle 10g. Fragmentation occurs on tables and indexes with lots of changes to structure (adding/removing columns) and lots of data changes (insert, update, delete).

From v10, Oracle has a number of automated processes that take care of database performance. One of them is the "Segment Advisor", which runs automatically.


-----------------------

Wednesday, September 22, 2010

IDLE Session > Sniped Session > Kill > Release Resource

Making Idle Sessions SNIPED:

An idle session can be set up to become SNIPED after x minutes by setting the initialization parameter resource_limit = true in the init.ora and idle_time in the user profile. You can make a user session become SNIPED after 30 minutes of idle time by running the commands below:


select * from v$parameter where name like '%resource_limit%';

select * from dba_profiles where resource_name like 'IDLE_TIME';


alter system set resource_limit=true scope=spfile;

alter profile LD_BACKOFFICE limit IDLE_TIME 30;


After 30 idle minutes, sessions with status=INACTIVE will become SNIPED.


--------------------Finding Sniped Idle Session--------------------------

SELECT
p.spid "Thread ID",
s.sid "Session ID",
s.serial# "Serial Num",
b.name "Background Process",
s.sql_id "SQL ID",
s.username "Session User",
s.osuser "OS User",
s.status "Status",
s.program "Program"
FROM
v$process p,
v$bgprocess b,
v$session s
WHERE
s.status = 'SNIPED'
AND
s.paddr = p.addr
AND b.paddr(+) = p.addr
AND b.name IS NULL
ORDER BY s.username ;


-----------------release resource from idle inactive / sniped / killed session-------

orakill <ORACLE_SID> <SPID>



-------------script---Sniped.sql--------------------------

store set c:\prevsqlsettings.sql replace

set pagesize 0

set feedback off

set trimspool on

set termout off

set verify off

spool c:\killSniped.bat

select 'orakill ' || i.instance_name || ' ' || spid from v$instance i,v$process p, v$session s,v$bgprocess b where p.addr=s.paddr AND b.paddr(+) = p.addr AND b.name IS NULL and s.status='SNIPED' and s.username != 'SYSTEM' and s.username != 'SYS' and s.username != 'DBSNMP' and s.username != 'REPADMIN' and s.username != 'WMSYS' and s.username != 'TSMSYS' and s.username != 'OUTLN' and s.username != 'ORACLE_OCM' and s.username != 'LDBO' and s.username != 'SNSEXPORT' and s.username != 'RENEXPORT' and s.username != 'CLLVL';

spool off

host c:\killSniped.bat

@c:\prevsqlsettings.sql

host del c:\killSniped.bat

host del c:\prevsqlsettings.sql

exit


------------Job Schedule-----sniped.bat--------------

@echo
set oracle_sid=sns6
sqlplus ldbo/ldbo@sns1011srv @c:\sniped.sql
exit

--------------------------------------------------------Just for Knowledge-------------------------------------------



Then IDLE_TIME is set in the users' profiles or the default profile. This will kill the sessions in the database (the status in v$session becomes SNIPED) and they will eventually disconnect. It does not always clean up the Unix session (LOCAL=NO sessions). At this point, all Oracle resources are released, but the shadow process remains and OS resources are not released. This shadow process is still counted towards the init.ora parameters.

This process is killed and its entry in v$session is released only when the user again tries to do something. Another way of forcing a disconnect (if your users come in via SQL*Net) is to put the file sqlnet.ora on every client machine and include the parameter "SQLNET.EXPIRE_TIME" in it to force the close of the SQL*Net session.


SQLNET.EXPIRE_TIME =10


sqlnet.expire_time

The sqlnet.expire_time parameter makes the database detect dead TCP connections, not idle client applications.
If you kill an idle session, you don't have a dead connection. A dead connection occurs if you switch off or disconnect your client such that it cannot send a proper FIN to close the TCP/IP connection.

sqlnet.expire_time basically instructs the server to send a probe packet every set number of minutes to the client and, if it finds a terminated connection or a connection that is no longer in use, causes the associated server process to terminate on the server.
A valid database connection that is idle will respond to the probe packet, causing no action on the part of the server, whereas resource_limit will snipe the session when idle_time is exceeded. The 'sniped' session will get disconnected when the user (or the user process) tries to communicate with the server again.
But again, expire_time works globally while the idle_time profile works per user. You can use both of them to make sure that the client not only gets sniped but also gets disconnected if the user process abnormally terminates.



------------manual kill session----------------

alter system kill session '526,67';


-------------------------Last activity / work time ----------

select username,to_char(logon_time, 'DD-Mon-YYYY HH24:MI:SS') Logon_time,last_call_et,
to_char(sysdate-(last_call_et/(60*60*24)),'hh24:mi:ss') last_work_time
from v$session
where username is not null;


------------find sessions which are running for more than 2 hours----------------

SELECT
S.STATUS "Status",
S.SERIAL# "Serial#",
S.TYPE "Type",
S.USERNAME "DB User",
S.OSUSER "Client User",
S.SERVER "Server",
S.MACHINE "Machine",
S.MODULE "Module",
S.CLIENT_INFO "Client Info",
S.TERMINAL "Terminal",
S.PROGRAM "Program",
P.PROGRAM "O.S. Program",
s.logon_time "Connect Time",
lockwait "Lock Wait",
si.physical_reads "Physical Reads",
si.block_gets "Block Gets",
si.consistent_gets "Consistent Gets",
si.block_changes "Block Changes",
si.consistent_changes "Consistent Changes",
s.process "Process",
p.spid, p.pid, si.sid, s.audsid,
s.sql_address "Address", s.sql_hash_value "Sql Hash", s.Action,
sysdate - (s.LAST_CALL_ET / 86400) "Last Call"
FROM
V$SESSION S,
V$PROCESS P,
sys.V_$SESS_IO si
WHERE
S.paddr = P.addr(+)
and si.sid(+)=s.sid and S.USERNAME like 'APL%'
AND s.LAST_CALL_ET / 3600 >= 2
ORDER BY 5
DESC ;


----------------Kill all user sessions except me----------------


declare
  sql_stmt VARCHAR2(200);
  cursor c1 is
    select sid, serial# from v$session where username is not null;
begin
  for sessions in c1 loop
    sql_stmt := 'alter system kill session ''' || to_char(sessions.sid) || ',' || to_char(sessions.serial#) || '''';
    dbms_output.put_line(sql_stmt);
    begin -- sub-block so the loop continues past the exception raised when it tries to kill my own session
      execute immediate sql_stmt;
    exception
      when others then null;
    end;
  end loop;
end;
/



----------------------------alter kill session --job schedule-------------------

Would creating a procedure to kill sessions and scheduling it nightly be the best solution?

Create or replace procedure KillSession(idleTime in Number)
as
  cursor c1 is
    select sid, serial#, trunc(last_call_et/60) tmin -- idle time in whole minutes
    from v$session
    where status = 'INACTIVE' and username not in ('SYS');
  Vstring varchar2(100);
begin
  for c1rec in c1 loop
    if c1rec.tmin > idleTime then
      Vstring := 'alter system kill session ' || chr(39) || c1rec.sid || ',' || c1rec.serial# || chr(39);
      execute immediate(Vstring);
    end if;
  end loop;
end;
/
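
A minimal scheduling sketch in the same style as the scheduler jobs above (the job name and the nightly 1 AM window are assumptions; 30 is the idle-minutes threshold):

BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'kill_idle_sessions',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN KillSession(30); END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY; BYHOUR=1',
    enabled         => TRUE,
    comments        => 'Kill sessions idle for more than 30 minutes');
END;
/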

------------------------------

Tuesday, April 20, 2010

Reuse statements, cursor_sharing=force

Reuse statements

select * from emp where ename = :EMPNAME


select * from v$parameter where name like '%cursor_sharing%';

cursor_sharing=force

The initialization parameter cursor_sharing defaults to EXACT, but can be set to FORCE or SIMILAR in order to have the database convert literals to bind variables before parsing the statement.

Setting cursor_sharing =force greatly reduced the contention on the library cache and reduced CPU consumption. The end users reported a 75 percent improvement in overall performance.

Our queries use a lot of bind variables, by design. In recent benchmarks, CURSOR_SHARING=FORCE was helpful for benchmark runs reducing query execution time by several times.
However, for a specific query involving bind variables, using CURSOR_SHARING=FORCE was very much slower than if executed while CURSOR_SHARING=EXACT.


This is probably a bug. Cursor_sharing = force is not really reliable. We hash the value of the statement and then go looking for a matching hash, regardless of how many users are on the system or how many times the statement has been issued or even what the statement looks like. Force means force: regardless of what is going on, we are going to force the sharing of cursors. This leads to problems like what you are seeing and the use of suboptimal plans. You can attempt to tune around the sub-optimal plan, but you may find that there are a number of queries with that problem. The other possibility is to set cursor_sharing = exact. That will prevent this problem from occurring.
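
One way to spot statements that differ only in their literals (and would therefore benefit from binds or FORCE) is to group v$sqlarea by a prefix of the SQL text; the 60-character prefix and the threshold of 10 copies are arbitrary choices:

SELECT substr(sql_text, 1, 60) sql_prefix, count(*) copies
FROM v$sqlarea
GROUP BY substr(sql_text, 1, 60)
HAVING count(*) > 10
ORDER BY copies DESC;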
