Saturday, July 27, 2013

ORA-00600 internal error code, arguments [13013] [5001]

http://oraclemamukutti.blogspot.in/2011/06/ora-00600-13013-5001.html




ORA-600 [13013] [a] [b] [c] [d] [e] [f]


This format relates to Oracle Server 8.0.3 to 10.1

Arg [a] Passcount
Arg [b] Data Object number
Arg [c] Tablespace Relative DBA of block containing the row to be updated
Arg [d] Row Slot number
Arg [e] Relative DBA of block being updated (should be same as [c])
Arg [f] Code




ORA-00600: internal error code, arguments: [13013], [5001]

ORA-00600: internal error code, arguments: [13013], [5001], [268], [8447338], [5], [8447338], [17], [], [], [], [], []


Select object_name,object_type,owner from dba_objects where data_object_id=268;


select * from dba_objects where object_name='SMON_SCN_TIME';

select * from dba_indexes where table_name='SMON_SCN_TIME';


Select dbms_metadata.get_ddl('INDEX','SMON_SCN_TIME_TIM_IDX','SYS') from dual;

Select dbms_metadata.get_ddl('INDEX','SMON_SCN_TIME_SCN_IDX','SYS') from dual;


 drop index "SYS"."SMON_SCN_TIME_TIM_IDX";


  CREATE UNIQUE INDEX "SYS"."SMON_SCN_TIME_TIM_IDX" ON "SYS"."SMON_SCN_TIME" ("TIME_MP")
  PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
  STORAGE(INITIAL 81920 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "SYSAUX" ;

 drop index "SYS"."SMON_SCN_TIME_SCN_IDX";

  CREATE UNIQUE INDEX "SYS"."SMON_SCN_TIME_SCN_IDX" ON "SYS"."SMON_SCN_TIME" ("SCN")
  PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
  STORAGE(INITIAL 81920 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "SYSAUX" ;


analyze table SYS.SMON_SCN_TIME validate structure cascade online;






======================================================================================================


SQL>Select object_name,object_type,owner from dba_objects where data_object_id=<value reported in argument b>;

Once the object is identified, run the following:

The command below checks whether the table has corruption.

SQL> Analyze table <owner>.<table name> validate structure online ;

If this completes without error, the table does not have corruption. Move on to the next command.

If the above command fails with ORA-1498 go to Section 3


The command below checks whether the table or its indexes have corruption.

SQL>Analyze table <owner>.<table name> validate structure cascade online ;


If the above command errors out with ORA-1499, it indicates corruption in an index.

Go to section 2 for resolution

Run dbverify on the datafile reported in the error.

Arg [c] in the ORA-600 [13013] would give the relative DBA.

For example

ORA-00600: internal error code, arguments: [13013], [5001], [57353], [155254965], [261],[155254965], [17], []


Arg [c] --> rdba-->155254965

Use this value and find the file and block number for this dba

select dbms_utility.data_block_address_file(155254965) Rfile# ,dbms_utility.data_block_address_block(155254965) "Block#" from dual;

RFILE# Block#
---------- ----------
37 65717

You can run dbverify on the datafile with rfile#=37

SQL>Select name from v$datafile where rfile#=37

dbv file=<location of datafile> blocksize=<db_block_size>
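For example, assuming an 8K block size and a hypothetical datafile path returned by the query above:

dbv file=/u01/oradata/mydb/users01.dbf blocksize=8192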








Section 2> How to resolve if an index is corrupted

You would need to drop and recreate the index.

Before dropping the index, save its DDL:

SQL>Spool /tmp/createindex.sql

SQL>Set long 100000000

SQL>Select dbms_metadata.get_ddl('INDEX','<index name>','<user name>') from dual;

SQL>Spool off

Refer to the following note to identify the index

Note 563070.1

Title: ORA-1499. Table/Index row count mismatch

Please note that if there is just one index on the table, you can use dbms_metadata.get_ddl to get the script of the index and then drop and recreate it.
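A minimal sketch of the whole cycle, assuming a hypothetical index IDX_FOO owned by SCOTT (edit the spooled file to strip the SQL*Plus noise before running it):

SQL>Spool /tmp/createindex.sql
SQL>Set long 100000000
SQL>Select dbms_metadata.get_ddl('INDEX','IDX_FOO','SCOTT') from dual;
SQL>Spool off
SQL>Drop index SCOTT.IDX_FOO;
SQL>@/tmp/createindex.sql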


Section 3> How to resolve if table is corrupted

Option a> Backup is available

ORA-1498 would be reported on the table.

The trace file from ORA-1498 would contain the following information:

Example

Block Checking:DBA = 1066265208,Block Type = KTB-managed data block --->
file 254,block 911992
data header at 0xc00000010118e07c
kdbchk: avsp(816) > tosp(812)
Block header dump: 0x3f8dea78
Object id on Block? Y
seg/obj: 0x155b452 csc: 0x05.7b2b4ee6 itc: 3 flg: E typ: 1 - DATA
brn: 0 bdba: 0x3f8dde0c ver: 0x01
inc: 0 exflg: 0

Note the DBA value reported in the trace file DBA = 1066265208
Convert this to find the file number and block number having issue

Sql>Select dbms_utility.data_block_address_file('1066265208') from dual ;

DBMS_UTILITY.DATA_BLOCK_ADDRESS_FILE('1066265208')
--------------------------------------------------
254

Sql>Select dbms_utility.data_block_address_block('1066265208') from dual ;

DBMS_UTILITY.DATA_BLOCK_ADDRESS_BLOCK('1066265208')
---------------------------------------------------
911992


Run dbverify on the datafile containing the table

dbv file=<location of datafile> blocksize=<db_block_size>

Corruption would be reported on the block.

If you have an RMAN backup, do an RMAN block recovery (take the file number and block number affected from the trace file):

RMAN> blockrecover datafile <no> block <block number>;
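For example, with the file and block computed earlier (rfile# 37, block 65717):

RMAN> blockrecover datafile 37 block 65717;

(On 11g and later the equivalent syntax is RECOVER DATAFILE 37 BLOCK 65717;)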

Or

If you have a user-managed backup, you can do a restore and recovery from a good copy of the problematic datafile.




Option b> Backup is not available

If no backups are available then use event 10231 at session level to create a salvage table

SQL>alter session set events '10231 trace name context forever, level 10';
SQL> Create table <owner>.salvage_table as select * from <Corrupted table> ;

Rename the original table to <table>_OLD, then rename the salvage table to the original table name, as sketched below.
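A minimal sketch of the full salvage cycle, assuming a hypothetical corrupted table SCOTT.T1:

SQL> alter session set events '10231 trace name context forever, level 10';
SQL> create table SCOTT.SALVAGE_T1 as select * from SCOTT.T1;
SQL> alter session set events '10231 trace name context off';
SQL> alter table SCOTT.T1 rename to T1_OLD;
SQL> alter table SCOTT.SALVAGE_T1 rename to T1;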



or

You can use the DBMS_REPAIR script to mark the block soft corrupt.

Note 556733.1

DBMS_REPAIR script


How to resolve when SMON terminates the instance due to ORA-00600 [13013]


If SMON is terminating the instance:

Set event 10513 and start up the database:

event="10513 trace name context forever, level 2"


SQL>Startup mount ;

SQL>Show parameter event

SQL>Alter database open ;

Identify the object involved using information from Section 1.



How to resolve the issue if the object involved belongs to the SYSTEM tablespace


System objects are very important.
Please open a Service request with Oracle support if system tables are involved.






Identifying the problem row when the table has corruption


1>Once the error occurs, using the ORA-600 arguments, gather the following information :

ORA-00600: internal error code, arguments: [13013], [5001], [57353], [155254965], [261],
[155254965], [17], []


arg b : [57353] - the OBJECT_ID
arg c : [155254965] - the block address in decimal
arg d : [261] - the slot number

2. Translate the block address in decimal to a file# and block#

select dbms_utility.data_block_address_file(155254965) Rfile# ,dbms_utility.data_block_address_block(155254965) "Block#" from dual;

RFILE# Block#
---------- ----------
37 65717

The Relative file is 37
The block number is 65717

Find the data_object_id for this object

Argument b is Object_id--> 57353

SQL>Select data_object_id ,object_name,owner from dba_objects where object_id=57353 ;

3. Create the rowid using dbms_rowid.rowid_create(1,DATA_OBJECT_ID,FILE#,BLOCK#,SLOT#)

In this case :

select dbms_rowid.rowid_create(1,57353,37,65717,261) from dual;
DBMS_ROWID.ROWID_C
------------------
AAAOAJAAlAAAQC1AEF

4. You can select from the table and identify the record causing the issue

SQL> Select * from <owner>.<table name> where rowid='AAAOAJAAlAAAQC1AEF';




Are there any known bugs?

There is an internal bug, number 5085288,

fixed in 11.1

Details

ORA-600 [13013] [5001] error can occur on a MERGE command if the DELETE pass encounters a consistent read (CR) error due to the update pass having updated the same
row and column previously.

Check for availability of a one-off patch using Patch 5085288

Bug 4549673

Abstract: ORA-30926 / OERI:13030 during update
Fixed-Releases: 9208 A204 B106

Details:
ORA-30926 (in Oracle 9i) or ORA-600 [13030] (in Oracle10g) can occur
during an update DML. This can occur if an internal ORA-1551 error
occurs and is trapped (1551 errors are not visible to client
code and are trapped and handled internally)

Fixed In Ver: 11.0
Check for availability of a one-off patch using Patch 5085288





ORA-00600 internal error code, arguments [kdsgrp1], [], [], [], [], [], [],[]


ORA-00600: internal error code, arguments: [kdsgrp1]. This error may occur on a RAC database when (re)building an index online.

The fixes for this bug are in Metalink Note : 285586.1

As a workaround, please try to rebuild the index either offline, or online with as little activity as possible on the affected table.

If the error continues, check the table for chained rows (a sketch follows below).
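A sketch of the chained-rows check (the CHAINED_ROWS table must first be created with the utlchain.sql script shipped in $ORACLE_HOME/rdbms/admin):

SQL> @?/rdbms/admin/utlchain.sql
SQL> analyze table <owner>.<table name> list chained rows into chained_rows;
SQL> select count(*) from chained_rows;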


---------------------------------


If this error comes up, you have to do the following:

use adrci to package the incident. How to do that is described here.
adrci
show incidents
ips pack incident … to …

Afterwards, check the trace files in the package:
<the package zip>\diag\rdbms\$ORACLE_SID\$ORACLE_SID\incident\incdir_<some kind of id>
Look into the trc files, searching right at the beginning for an SQL statement.

If you find a select with bind variables, search the trace further for "bind" or "variable" to get the variable values which were used in that specific statement.

Find out which tables are involved in that specific SQL and analyze each by using the following syntax:

ANALYZE TABLE <SCHEMA.TABLE_NAME> VALIDATE STRUCTURE CASCADE;

You should get an error like this:

ERROR at line 1:
ORA-01499: table/index cross reference failure - see trace file

Go to the trace file that just popped up in the background dump destination.

In that file, look for the string "mismatch".
You should now arrive at a line, which explains what exactly is wrong.

In my case, I found a mismatch between an index of the table and the table itself.
The tracefile with the mismatch pointed out the object_id of the specific index, which was corrupt.

You should be able to identify it by

select owner, object_name, object_type from dba_objects where object_id=<OBJECT_ID>;

If you rebuild/recreate indexes, the analyze table should run through without any error message.


To rebuild an index, use
alter index SCHEMA.INDEX_NAME rebuild online;

or get the metadata and recreate it using

set long 999999
set lines 160 pages 9999
col text for a150
select dbms_metadata.get_ddl('INDEX','<INDEX_NAME>','<INDEX_OWNER>') as text from dual;
replace the "create index" with "create or replace index"
and run it.

To check the status of all other tables, an idea is to extract the validate.sql script out of
Note:100419.1 SCRIPT: VALIDATE.SQL to ANALYZE .. VALIDATE STRUCTURE objects in a Tablespace

Then, log in as the SYS user and run this script:

sql> @validate
This will create the "ValidateStructure" package.

After this, open a spool file and enable SERVEROUTPUT

sql> spool myoutput.log
sql> execute dbms_output.enable(1000000);
sql> set serveroutput on
and run:

sql> execute ValidateStructure.TS('TABLESPACE_NAME', TRUE);
This will run until all requested items are scanned.

Errors from the ANALYZE commands are output to DBMS_OUTPUT and so any failing objects are listed when all TABLES / CLUSTERS have been analyzed.
More detailed output from failing ANALYZE commands will be written to the user trace file in USER_DUMP_DEST

Although initially written for 10.2, this should still work fine against 11g databases.




Note 1413928 - Index corruption wrong results after rebuild index ONLINE

Other terms
Wrong Results, Corruption (Index), ORA-08102 / ORA-01499,
ORA-00600 [kdsgrp1], ORA-00600 [qertbFetchByRowID], ora-01452

Reason and Prerequisites
Wrong results or the following errors can be produced by SQL statements:

ORA-8102 by a delete/update
ORA-1499 by "analyze table validate structure cascade"
ORA-600 [kdsgrp1]
ORA-600 [qertbFetchByRowID]

This is Oracle Bug 7329252 which is fixed in Oracle release 10.2.0.5.

Solution
Install the relevant patch for your platform.
You can find this on SAP Service Marketplace in the software center

ora-00942 ora-06512 sys.dbms_snapshot


Solution

1) compile mv

2) recreate mv

---------
ORA-942 on Materialized View Refresh after Master Mview Recreated on Different Table [ID 867042.1]

exec dbms_mview.refresh('<mview name>','C');

ORA-00942: table or view does not exist
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2563
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2776
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2745
ORA-06512: at "LDBO.SP_MVRKFOUTILIZATION", line 4
ORA-06512: at line 1

In this case, you do not know which of the tables within the materialized view is throwing the ORA-00942 error. This user was advised to enable SQL Trace (TKPROF) at the session level to determine the exact table that is seeing the ORA-00942 error.
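A minimal sketch of such a trace, assuming you have looked up the refreshing session's SID and SERIAL# in v$session:

SQL> exec dbms_monitor.session_trace_enable(session_id => <sid>, serial_num => <serial#>, waits => TRUE, binds => TRUE);
-- reproduce the refresh, then
SQL> exec dbms_monitor.session_trace_disable(session_id => <sid>, serial_num => <serial#>);

Then format the raw trace from USER_DUMP_DEST:

tkprof <tracefile>.trc <output>.txt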


I could solve an occurrence of this problem by calling
ALTER MATERIALIZED VIEW my_mview COMPILE;
before refreshing the mview. But I do not know why this was necessary in this case.


Possibly either of:
a. the source table itself gets dropped and recreated -- an MV refresh within the DROP-CREATE cycle would error with ORA-942
b. grants on the source table get dropped and recreated -- same as above
but we don't know which table.
The MV definition (the QUERY column in USER_MVIEWS/DBA_MVIEWS) shows the query that the MV uses.
(QUERY is a LONG column, so you must SET LONG 1000000 in sqlplus before querying the USER_MVIEWS/DBA_MVIEWS views.)
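For example, to pull the defining query of a hypothetical mview MY_MVIEW owned by the current user:

SQL> set long 1000000
SQL> select query from user_mviews where mview_name = 'MY_MVIEW';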



SELECT * FROM ALL_SYNONYMS WHERE SYNONYM_NAME IN ('DBMS_MVIEW','DBMS_SNAPSHOT');

SELECT * FROM ALL_TAB_PRIVS_RECD WHERE GRANTEE = 'PUBLIC' AND TABLE_NAME = 'DBMS_SNAPSHOT';

ORA-29273 HTTP request failed ORA-24247

Drop and recreate the ACLs:


begin
  dbms_network_acl_admin.drop_acl(acl=>'utl_http_access.xml');
end;
/

commit;
begin
  dbms_network_acl_admin.drop_acl(acl=>'utl_inaddr_access.xml');
end;
/

commit;
begin
  dbms_network_acl_admin.drop_acl(acl=>'utl_mail.xml');
end;
/
commit;

begin
  dbms_network_acl_admin.drop_acl(acl=>'utl_http.xml');
end;
/
commit;




Exec dbms_network_acl_admin.create_acl ('utl_http_access.xml','Normal Access','LDBO',TRUE,'connect',NULL,NULL);
Exec dbms_network_acl_admin.add_privilege (acl => 'utl_http_access.xml', principal =>  'LDBO',is_grant => TRUE, privilege => 'resolve');
Exec dbms_network_acl_admin.assign_acl ('utl_http_access.xml', '*',NULL,NULL);
Commit ;
               
Exec dbms_network_acl_admin.create_acl ('utl_inaddr_access.xml','Normal Access','LDBO',TRUE,'resolve',NULL, NULL);
Exec dbms_network_acl_admin.add_privilege (acl => 'utl_inaddr_access.xml', principal =>  'LDBO',is_grant => TRUE, privilege => 'resolve');
Exec dbms_network_acl_admin.assign_acl ('utl_inaddr_access.xml', '*',NULL,NULL);
commit;

Exec dbms_network_acl_admin.create_acl ('utl_mail.xml','Allow mail to be send','LDBO',TRUE,'connect' );
Exec dbms_network_acl_admin.add_privilege ('utl_mail.xml','LDBO',TRUE,'resolve');
Exec dbms_network_acl_admin.assign_acl('utl_mail.xml','*',NULL,NULL);
commit ;


Exec dbms_network_acl_admin.create_acl ('utl_http.xml','HTTP Access','LDBO',TRUE,'connect',null,null);
Exec dbms_network_acl_admin.add_privilege ('utl_http.xml','LDBO',TRUE,'resolve',null,null);
Exec dbms_network_acl_admin.assign_acl ('utl_http.xml','*',NULL,NULL);
commit;

Grant Execute on utl_inaddr to LDBO ;
Grant Execute on utl_http to LDBO ;

select acl, principal, privilege, is_grant from dba_network_acl_privileges;

SELECT * FROM DBA_NETWORK_ACLS;





LD file import - the temp file is growing to GBs

There are very many records in the tables.

All the records are being imported.

Please delete the rows and re-import.

ORA-01591 lock held by in-doubt distributed transaction string

ORA-01591: lock held by in-doubt distributed transaction string

Cause: Trying to access resource that is locked by a dead two-phase commit transaction that is in prepared state.

Action: DBA should query the pending_trans$ and related tables, and attempt to repair network connection(s) to coordinator and commit point. If timely repair is not possible, DBA should contact DBA at commit point if known or end user for correct outcome, or use heuristic default if given to issue a heuristic commit or abort command to finalize the local portion of the distributed transaction.




ORA-01591: lock held by in-doubt distributed transaction string
The first thing to do is to check the result of these queries:
Check the following table:

select * from dba_2pc_pending;
select * from dba_2pc_neighbors;
select * from pending_trans$;
select * from pending_sessions$;
select * from pending_sub_sessions$;

From the above queries, note the LOCAL_TRAN_ID value in each table and try to force a commit or rollback.

ALTER SYSTEM DISABLE DISTRIBUTED RECOVERY;
EXEC DBMS_TRANSACTION.rollback_force('LOCAL_TRAN_ID')
--this changes the state from PREPARED to FORCED_ROLLBACK; after that
COMMIT ;
ALTER SYSTEM ENABLE DISTRIBUTED RECOVERY;
EXEC DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('LOCAL_TRAN_ID');
COMMIT;

Repeat these steps for each LOCAL_TRAN_ID, then turn distributed transaction recovery back on and
crosscheck these tables (they must be empty):

select * from dba_2pc_pending;
select * from dba_2pc_neighbors;
select * from pending_trans$;
select * from pending_sessions$;
select * from pending_sub_sessions$;

ORA-14402 updating partition key column would cause a partition change

Cause:

You want to update a record and the new value would belong to a different partition.


Solution:

The row must be physically relocated to the new partition. You cannot do this if row movement is not enabled:


ldbo@ari1112> update ledger set branchcode='2816' Where firmnumber='ACML-00001' and Oowncode='269640000';
 update ledger set branchcode='2816' Where firmnumber='ACML-00001' and Oowncode='269640000'
        *
ERROR at line 1:
ORA-14402: updating partition key column would cause a partition change


ALTER TABLE <Table_Name> enable row movement;
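With row movement enabled, the same update from the example above should go through (a sketch; disabling row movement afterwards is optional):

SQL> alter table ledger enable row movement;
SQL> update ledger set branchcode='2816' where firmnumber='ACML-00001' and oowncode='269640000';
SQL> commit;
SQL> alter table ledger disable row movement;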

ORA-01031 insufficient privileges with DBA IIS application

Restart IIS after making changes to sqlnet.ora.


Do you have:
SQLNET.AUTHENTICATION_SERVICES=(NTS)

In your sqlnet.ora?
Is your Windows user in the ORA_DBA group?

show parameter remote_login_passwordfile
select * from v$pwfile_users;

Set
SQLNET.AUTHENTICATION_SERVICES = (ALL)
in sqlnet.ora file in $ORACLE_HOME/network/admin/sqlnet.ora

Also set
SQL> alter system set remote_login_passwordfile=exclusive scope=spfile;

Bounce the database and then try to connect through sqlplus / as sysdba
Refer to following MOS:
Troubleshooting ORA-1031: Insufficient Privileges While Connecting As SYSDBA [ID 730067.1]


ORA-1031: Insufficient Privileges

This is one of the most common and frequently occurring errors. According to the docs note, the cause of ORA-01031 is:

Cause : An attempt was made to change the current username or password without the appropriate privilege. This error also occurs if attempting to install a database without the necessary operating system privileges. When Trusted Oracle is configured in DBMS MAC, this error may occur if the user was granted the necessary privilege at a higher label than the current login.


Action :  Ask the database administrator to perform the operation or grant the required privileges. For Trusted Oracle users getting this error although granted the appropriate privilege at a higher label, ask the database administrator to re-grant the privilege at the appropriate label .

As we see, the above action explains nothing in detail and only mentions privilege issues. But ORA-1031 can occur not only with insufficient privileges but for many other reasons too. Here I have tried to cover all the possible causes of this error.

There are generally two methods through which a user can connect to an Oracle database AS SYSDBA.
1.) OS authentication :
2.) Password file authentication :


1.) OS Authentication : OS authentication allows Oracle to pass control of user authentication to the operating system .The OS authentication is the process of verifying the identity of the user connecting to the database with the information managed by the OS. An OS user is able to use this authentication method if the following conditions are met:

i.) The user is a member of a special group : The OS user should belong to the OSDBA group in order to log in as sysdba. On Unix/Linux the default name of this group is "dba" and on Windows the name of the group is "ORA_DBA".


ii.) The OS authentication is allowed by the server settings(sqlnet.authentication_services is set correctly) : On Unix Parameter sqlnet.authentication_services must be set to (ALL) or to (BEQ, <other values>) for this to work. On Windows this parameter must be set to (NTS) or (NONE) or ALL.


If we have configured the Operating System authentication and an OS user is a member of the special groups OSDBA or OSOPER then the OS user does not have to provide any credentials while connecting to the database as SYSDBA or AS SYSOPER from the oracle Server host machine when using the bequeath protocol. As long as the session is not established through the listener the OS Authentication will be used and the credentials provided will be ignored. This means that the users who are able to use the OS authentication can use any username and any password to connect to the database locally on the Oracle Server using the below syntax:


$sqlplus / as sysdba
or
SQL> connect any_username/any_password AS SYSDBA


2.) Password File Authentication :  The password file provides a method to authenticate privileged users from a remote (over sqlnet / listener) location . By default the user SYS gets an entry in the passwordfile when we create it so we can connect to a remote database . The credentials provided when connecting remotely as sysdba are compared to the contents of the passwordfile. For example
$sqlplus sys/xxxx@db_name as sysdba


Password file authentication is enabled by setting the database parameter remote_login_passwordfile to "shared" or "exclusive". For more about the password file, Click Here


If we grant the SYSDBA or SYSOPER privilege to any additional user, then that user will also get an entry in the passwordfile: the hashed password of that user is copied to the passwordfile. When this user connects as sysdba, the effective user will be SYS.

Note: When both OS authentication and password file authentication are enabled then the OS Authentication will be used. This means that we can connect with any username/password combination.


Here we will considering all the possible issue related to ORA-1031


Case 1  :
One of the reasons for getting ORA-1031 may be that the OS user is not a member of the dba group. On Linux, the OS user must be a member of the "dba" group. To check the group, use the command below.
[oracle@Ramtech ~]$ id
uid=501(oracle) gid=502(oinstall) groups=501(dba),502(oinstall)


On Windows, the OS user must be a member of the ora_dba group. Check whether the OS user is a member of ORA_DBA by using the commands below:
C:\>echo %username%                    ( for current osuser )
Neerajs


C:\>NET LOCALGROUP ORA_DBA
Alias name     ORA_DBA
Comment        Oracle DBA Group
Members
------------------------------------
Neerajs
NT AUTHORITY\SYSTEM
The command completed successfully.


If the OS user is not a member of the relevant group, then add the OS user to dba (Linux) or ORA_DBA (Windows).

Case 2 :
Check the value of  the SQLNET.AUTHENTICATION_SERVICES parameter in file $ORACLE_HOME/network/admin/SQLNET.ORA .


On Unix/Linux  : This parameter should not be set if no strong authentication method is used. If such a method is being used then set the parameter to one of the following  values:
SQLNET.AUTHENTICATION_SERVICES = (ALL)
 or
SQLNET.AUTHENTICATION_SERVICES = (BEQ,<the strong auth method>)
Where <the strong auth method> can be any combination of the following values: TCPS, KERBEROS5, RADIUS. I will cover more about the sqlnet parameters in a later post.


On Windows : This parameter should be set to NTS or, if needed, we can add other strong authentication methods besides NTS, or set it to NONE (see the note below).
SQLNET.AUTHENTICATION_SERVICES = (NTS)
SQLNET.AUTHENTICATION_SERVICES = (NTS,TCPS)


Note: If the parameter is set to NONE then OS authentication will be disabled and the user will have to provide a valid username/password combination to be able to connect to the database. On Windows, a user who is not able to connect as sysdba using OS authentication might be a domain user. Check the following if you are in this scenario:

A.) It is important that this user is a direct member of the local ORA_DBA group .
B.) Oracle Service must be started as a user who is able to check the group membership for any domain user who might be connecting as sysdba locally.
C.) Check whether the clocks of the RDBMS Server and of the Active Directory Server are perfectly synchronized. Even small clock drifts can cause issues to the underlying kerberos authentication mechanism used by default on Windows. In these cases the ORA-1031 would be most of the times intermittent.
D.) Check whether the Oracle Service is started by an user whose name contains non ASCII characters .


CASE  3 :
Check the value of parameter remote_login_passwordfile. This has to be set to either EXCLUSIVE or SHARED .


SQL> show  parameter  remote_login_passwordfile
NAME                                           TYPE           VALUE
-------------------------------            --------      ---------------
remote_login_passwordfile         string        EXCLUSIVE


If the parameter is not set correctly then modify it and then restart the database:
 SQL> alter system set remote_login_passwordfile=exclusive scope=spfile ;


Check whether the password file with the correct name exists in the right directory/folder and has the right ownership and permissions .
On Unix/Linux  : The password file with the name  orapw<ORACLE_SID> must exist in directory $ORACLE_HOME/dbs. If it does not exist then recreate it using the orapwd command .


[oracle@Ramtech dbs]$orapwd file=$ORACLE_HOME/dbs/orapw<sid> password=<password> force=y ignorecase=n
[oracle@Ramtech dbs]$ ls -l orapw*
-rw-r----- 1 oracle oinstall 1536 Jan 10 14:44 orapwcomcast


On Windows : The default location of the password file on Windows is the folder %ORACLE_HOME%\database and the name of the password file must be pwd<%ORACLE_SID%>.ora. When passwordfile authentication is being used, Oracle searches for the password file in the following locations (in this exact order):

The folder pointed to by the registry key  HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\HomeXX\ORA_<%ORACLE_SID%>_PWFILE
The folder pointed to by the registry key HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\HomeXX\ORA_PWFILE
The default location ( %ORACLE_HOME%\database)


If the password file does not exist in the right folder then create it using the orapwd command:


C:\> cd %ORACLE_HOME%\database
C:\> orapwd file=pwd<sid>.ora password=<password> force=y nosysdba=n


Note: Make sure that the password file exists in the folder specified by the registry keys if these are set.


CASE 4 :
Check whether the user was granted the SYSDBA privilege. Sometimes , we may get this error when we don't have sysdba privileges and try to connect as sysdba when sqlnet.authentication_services is NONE . For example :
C:\>sqlplus scott/tiger@noida as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Tue Jan 10 17:56:27 2012
Copyright (c) 1982, 2010, Oracle.  All rights reserved.
ERROR:
ORA-01031: insufficient privileges


Check the v$pwfile_users view to find the users having SYSDBA or SYSOPER privileges.
SQL> select * from v$pwfile_users;
USERNAME         SYSDB   SYSOP     SYSAS
------------          -------     --------     --------
SYS                    TRUE     TRUE       FALSE


If the user is not granted the privilege, then log in as the SYS user and run:
SQL> grant SYSDBA to scott ;
where scott is the user we want to use to connect as sysdba.


CASE 5 : If the problem is not solved after reviewing the above notes, open a service request with Oracle Support.


ORA-00600 internal error code, arguments [2662]


ERROR at line 1:
ORA-00603: ORACLE server session terminated by fatal error
ORA-00600: internal error code, arguments: [2662], [0], [6202286], [0], [6211768], [12582976], [], [], [], [], [], []
ORA-00600: internal error code, arguments: [2662], [0], [6202285], [0], [6211768], [12582976], [], [], [], [], [], []
ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-00600: internal error code, arguments: [2662], [0], [6202283], [0], [6211768], [12582976], [], [], [], [], [], []
Process ID: 4168
Session ID: 391 Serial number: 3




ORA-00600: internal error code, arguments: [2662], [0], [103516293]

1. create a pfile from spfile. Set parameter

_ALLOW_RESETLOGS_CORRUPTION = true
UNDO_MANAGEMENT = MANUAL

2. startup mount pfile='<location of pfile>';
3.

recover database until cancel;
or
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;

recover database using backup controlfile;


4. Enter CANCEL to cancel recovery - there are NO archive logs applied.

5. Enter ALTER DATABASE OPEN RESETLOGS


==============solutions

restart oracle services many times


==============

http://myoracledbablogon.blogspot.in/2010/09/all-that-ive-found-to-recover-database.html


ERROR:            


  Format: ORA-600 [2662] [a] [b] [c] [d] [e]

VERSIONS:
  versions 6.0 to 10.1

DESCRIPTION:

  A data block SCN is ahead of the current SCN.

  The ORA-600 [2662] occurs when an SCN is compared to the dependent SCN
  stored in a UGA variable.

  If the SCN is less than the dependent SCN then we signal the ORA-600 [2662]
  internal error.

ARGUMENTS:
  Arg [a]  Current SCN WRAP
  Arg [b]  Current SCN BASE
  Arg [c]  dependent SCN WRAP
  Arg [d]  dependent SCN BASE
  Arg [e]  Where present this is the DBA where the dependent SCN came from.

FUNCTIONALITY:    
  File and IO buffer management for redo logs

IMPACT:
  INSTANCE FAILURE
  POSSIBLE PHYSICAL CORRUPTION

SUGGESTIONS:      
   
  There are different situations where ORA-600 [2662] can be raised.

It can be raised on startup or during database operation.

  If not using Parallel Server, check that 2 instances have not mounted
  the same database.

  Check for SMON traces and have the alert.log and trace files ready
  to send to support.

  Check the SCN difference [argument d]-[argument b].

  If the SCNs in the error are very close, then try to shutdown and startup
  the instance several times.

  In some situations, the SCN increment during startup may permit the
  database to open. Keep track of the number of times you attempted a
  startup.

  If the Known Issues section below does not help in terms of identifying
  a solution, please submit the trace files and alert.log to Oracle
  Support Services for further analysis.



REDO UNDO CORRUPT DUE TO POWER FAILURE ORA-00333 ORA-600 [4194]




PFILE

_ALLOW_RESETLOGS_CORRUPTION = true
 UNDO_MANAGEMENT = MANUAL


RENAME THE SPFILE (so the instance starts with the pfile)


recover database until cancel;
or
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;

cancel

alter database open resetlogs;

This will recreate the redo logs.


restart oracle services


--------------------IF ERROR: ORA-00600: internal error code, arguments: [2662]

restart oracle services many times

Format: ORA-600 [2662] [a] [b] [c] [d] [e] — see the full argument description, functionality, and impact in the ORA-600 [2662] section above.


--------------------UNDO RECREATION

Now the undo is corrupted:
ORA-00600: internal error code, arguments: [4194]
Drop the corrupted undo tablespace and create a new undo tablespace.

But the database is not able to open due to the undo corruption.

-----------------------------add the following to the pfile


undo_management = manual
event = '10513 trace name context forever, level 2'

--------------
Set the Oracle event 10513 before opening the database. This event disables Oracle from rolling back an uncommitted transaction and should be used very carefully.


select file_name,tablespace_name from dba_data_files;

startup restrict pfile=<location of pfile>


CREATE UNDO TABLESPACE UNDOTBS2 DATAFILE 'D:\APX1213D\UNDOTBS02.ORA' SIZE 500M REUSE AUTOEXTEND ON;

shut immediate

set undo_tablespace=UNDOTBS2 in the parameter file

startup

DROP TABLESPACE UNDOTBS1 INCLUDING CONTENTS AND DATAFILES;


REMOVE THE ADDED PARAMETERS FROM THE PFILE

ORA-00600: internal error code, arguments: [kcratr_nab_less_than_odr], [1],[13836], [16279], [16685], [], [], [], [], [], [], []



The recovery procedure is the same as for ORA-00333 / ORA-600 [4194] above: start with a pfile containing _ALLOW_RESETLOGS_CORRUPTION = true and UNDO_MANAGEMENT = MANUAL (renaming the spfile), recover database until cancel, open resetlogs (which recreates the redo logs), and restart the Oracle services. If ORA-600 [2662] appears, restart the services several times; if ORA-600 [4194] follows, recreate the undo tablespace exactly as described in the UNDO RECREATION steps above.

ORA-01157 ORA-01110 Undo Corrupt


===========================
Set undo_management = manual in the pfile.

shutdown immediate;
startup

alter database datafile 'd:\rkdatabase\undotbs01.ora' offline drop;

alter database open;

drop tablespace undotbs1;

create undo tablespace undotbs1 datafile 'd:\rkdatabase\undotbs02.ora' size 500m autoextend on next 10m maxsize unlimited;

undo_management = auto


shutdown immediate
startup


===========================

Recovery of UNDO tablespace in a Non Archive Log mode Database

SQL> startup;
ORACLE instance started.

Total System Global Area  167772160 bytes
Fixed Size                  1218316 bytes
Variable Size              67111156 bytes
Database Buffers           96468992 bytes
Redo Buffers                2973696 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 2 - see DBWR trace file
ORA-01110: data file 2: '/u01/oradata/burp/undotbs01.dbf'


Since the database is in NOARCHIVELOG mode, we cannot open the database by taking the datafile offline using 'alter database datafile 2 offline'; we need to use '... OFFLINE FOR DROP'. This will mark the datafile for subsequent dropping. A datafile once marked 'OFFLINE FOR DROP' can never be brought online.

SQL>alter database datafile 2 offline for drop;
SQL>alter database open;

Now we need to create another undo tablespace for transactions to use. Once the new undo tablespace is created, shut down the database, edit the init.ora file, and change the parameter undo_tablespace=<new_undo_tablespace>.

SQL>create undo tablespace UNDOTBS2 datafile '/path/undotbs02.dbf' size 20M;

SQL>shutdown immediate;
edit the init.ora file and set parameter undo_tablespace=UNDOTBS2

SQL>startup;
Now we will drop UNDOTBS1 as it is no longer used.

SQL>drop tablespace UNDOTBS1 including contents and datafiles;
While the database is OPEN
+++++++++++++++++++

The same sequence applies: create the UNDOTBS2 undo tablespace, shut down, set undo_tablespace=UNDOTBS2 in the init.ora, start up, and drop UNDOTBS1 including contents and datafiles.





======================================or

 alter system set undo_management = manual scope=spfile;


shut immediate

startup

-----This is to confirm no backups available

rman target /
RMAN> restore tablespace undotbs1;
RMAN> exit

SQL> alter database datafile 'C:\APP\ORACLE11GR2\ORADATA\ORCL\DATAFILE\O1_MF_UNDOTBS1_7OS7JZJV_.DBF' offline drop;

alter database open;

drop tablespace undotbs1;


create UNDO tablespace undotbs1 datafile 'C:\APP\ORACLE11GR2\ORADATA\ORCL\DATAFILE\O1_MF_UNDOTBS1_new.dbf' size 25m autoextend on next 1m maxsize 50m;



alter system set undo_management = auto scope=spfile;





Thursday, March 7, 2013

YEP1314 Process




Please check the required space on storage for DB FY1314 = 1.5 * the database size of last year's database FY1213


Then follow these steps to create the FY1314 DB and its database object structure

1) Take Export Full Dump of apx1213 DB
Note: Please remove backup tables like (sauda050612, ldfibs070712) before taking the export; a sketch follows below
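For example (a sketch using the backup table names mentioned above; verify they really are disposable backup copies before dropping):

SQL> drop table ldbo.sauda050612 purge;
SQL> drop table ldbo.ldfibs070712 purge;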

2) Create the apx1314 database the same as last year's database apx1213 (tablespace names and datafile configuration should be the same)

Email: Create Database Template for Database(APX1314) using existing Database(APX1213)

select file_name,tablespace_name,bytes/1024/1024 "Size MB",status,autoextensible from dba_data_files;
select file_name,tablespace_name,bytes/1024/1024 "Size MB",status,autoextensible from dba_temp_files;

Tablespace    Datafile         Size              Autoextend
SYSTEM        SYSTEM01.ORA     500M              100M
SYSAUX        SYSAUX01.ORA     500M              100M
UNDOTBS1      UNDOTBS01.ORA    1000M             500M
INDX          INDEX01.ORA      same as before    same as before
USR           USERS01.ORA      same as before    same as before
TEMPORARY     TEMP01.ORA       1000M             500M


Keep the redo log size the same as in last year's DB

SELECT LOG_MODE FROM V$DATABASE;
Archive log should be enabled


Tick "Use Bigfile tablespace" for datafiles USERS01.ORA and INDEX01.ORA, or you can add datafiles whenever required

Parameters db_block_size, sga_target, sga_max_size, utl_file_dir, processes, audit_trail, job_queue_processes, cursor_sharing, open_cursors and others should be the same as last year.


3) Create schema user LDBO using only the attached script createuserLDBO10g_run_from_DB_server.SQL (Note: don't use the older createuserldbo.sql)

Note: the attached createuserLDBO10g_run_from_DB_server.SQL has all the housekeeping jobs like analyze, purge queue, and compile


At the same time, please schedule the database backup dump job using the following (or the attached) script, then run it, check and verify, and confirm to Kshitij

/*
Note: Please change the details as per your client: database name, net service name (apx1213srv), user (ldbo), password (ldbo), Oracle installation location (D:\oracle\product\10.2.0\db_1\BIN), dump location (E:\exp1314)
There should be an entry in the Net Manager tnsnames.ora (d:\oracle\product\10.2.0\db_1\network\ADMIN\tnsnames.ora) on the Oracle database server
Also create the folder where you want the backup, e.g. E:\exp1314
Change the backup job start time as per server availability
*/

conn sys@apx1314srv as sysdba

begin
dbms_scheduler.create_job
(job_name => 'exp1314',
job_type => 'EXECUTABLE',
number_of_arguments => 5,
job_action => 'D:\oracle\product\10.2.0\db_1\BIN\exp.exe',
start_date => '01-APR-13 11.00.00.00 PM ASIA/CALCUTTA',
repeat_interval => 'freq=DAILY',
comments=>'export dump');
DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE('exp1314',1,'system/oracle@apx1314srv');
DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE('exp1314',2,'full=Y');
DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE('exp1314',3,'file=E:\exp1314\exp1314.DMP');
DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE('exp1314',4,'log=E:\exp1314\exp1314.log');
DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE('exp1314',5,'STATISTICS=NONE');
end;
/

exec dbms_scheduler.run_job('exp1314');

4) Log in to the LDBO user in the FY1314 db and drop and recreate the database links

Match with last year database

Conn ldbo/ldbo@apx1213srv
select owner,db_link,username,host from dba_db_links;

Conn ldbo/ldbo@apx1314srv
select owner,db_link,username,host from dba_db_links;

Conn ldbo/ldbo@apx1314srv

Spool c:\yep1314\drop_links.sql
SELECT 'DROP ' || decode (owner,'PUBLIC','PUBLIC DATABASE LINK ','DATABASE LINK ') || DB_LINK || ';' FROM DBA_DB_LINKS  WHERE USERNAME !=' ';
Spool off
@ c:\yep1314\drop_links.sql


Conn ldbo/ldbo@apx1213srv
Spool c:\yep1314\create_links.sql
SELECT 'CREATE ' || decode (owner,'PUBLIC','PUBLIC DATABASE LINK ','DATABASE LINK ') || DB_LINK || ' CONNECT TO ' || USERNAME || ' IDENTIFIED BY ' || '<PLEASE PUT PASSWORD>' || ' USING ' || '<PLEASE PUT DATABASE STRING>' || ';' FROM DBA_DB_LINKS  WHERE USERNAME !=' ';
Spool off

/* put the password in the above file and also change the connect string */

Conn ldbo/ldbo@apx1314srv
@ c:\yep1314\create_links.sql

Database link creation is below
Note: Please change the password and connect string as per client software

CREATE PUBLIC DATABASE LINK "LNK_RAKSHAK"
  CONNECT TO "LDBO" IDENTIFIED BY LDBO USING 'APX1314SRV';

 CREATE PUBLIC DATABASE LINK "LNK_CCM"
  CONNECT TO "LDBO" IDENTIFIED BY LDBO USING 'APX1314SRV';

CREATE DATABASE LINK "LNK_DIGITAL"
  CONNECT TO "LDBO" IDENTIFIED BY ldbo USING 'APX1314SRV';

CREATE DATABASE LINK "LNK_PREVIOUSYEARBALANCE"
  CONNECT TO "LDBO" IDENTIFIED BY LDBO USING 'APX1213SRV';



5) Create Database structure (previously called optable) using exp dump (rows=N) or expdp (content=metadata_only)

imp ldbo/ldbo@apx1314srv FILE=D:\apx1213.dmp ROWS=N FULL=Y CONSTRAINTS=Y GRANTS=Y INDEXES=Y STATISTICS=NONE

or

impdp ldbo/ldbo@apx1314SRV directory=DPUMP_DIR1 dumpfile=apx1213.DMP LOGFILE=apx1314metadata.LOG content=metadata_only EXCLUDE=TABLESPACE,DB_LINK,SEQUENCE,STATISTICS,USER,GRANT



6) Log in to the LDBO user in the FY1314 db and drop and recreate the sequences.
Sequences should start from 1; therefore we drop and recreate them.


Conn ldbo/ldbo@apx1314srv

SET LINE 9999
SPOOL C:\YEP1314\SEQ1314.SQL
select 'DROP SEQUENCE ' || Sequence_Name || ';' from dba_Sequences where Sequence_Owner='LDBO';
SPOOL OFF
@ C:\YEP1314\SEQ1314.SQL

Create Sequence LDBO.Dematuniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Saudauniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Billuniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Jobbrokuniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Delbrokuniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Turnbrokuniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Fwdbrokuniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Extbrokuniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Dpuniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Bankuniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Payrequniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Riskuniquenumber minvalue 1 maxvalue 99999999999 ;
Create Sequence LDBO.Portuniquenumber minvalue 1 maxvalue 99999999999 ;
Create Sequence LDBO.Sq_Axiscms minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Sq_Pmsuniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Sq_Debarreduniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Sq_Bkrecouniquenumber minvalue 50000000 maxvalue 9999999999 start with 50000000 increment by 1 nocache;
Create Sequence LDBO.Sq_Smsuniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Sq_Offlinenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Sq_Phytodemnumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Sq_Ipouniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Sq_Trademodification minvalue 1 maxvalue 9999999 ;
Create Sequence LDBO.Sq_SpicePoa minvalue 1 maxvalue 9999999 ;
Create Sequence LDBO.Sq_Idbicms minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Sq_Scripuniquenumber minvalue 1 maxvalue 9999999 ;
Create Sequence LDBO.Sq_Clientuniquenumber minvalue 1 maxvalue 9999999 ;
Create Sequence LDBO.Sq_Brokeruniquenumber minvalue 1 maxvalue 9999999 ;
Create Sequence LDBO.Sq_Nsdlcdsldpuniquenumber minvalue 1 maxvalue 9999999 ;
Create Sequence LDBO.Sq_Orarupee minvalue 1 maxvalue 9999999 ;
Create Sequence LDBO.Sq_Rtgssequence minvalue 1 maxvalue 9999999 ;
Create Sequence LDBO.Sq_Bobbentopool minvalue 1 maxvalue 99999999999 ;
Create Sequence LDBO.Sq_Clientcode minvalue 1 maxvalue 99999999999 ;
Create Sequence LDBO.Sq_Familycode minvalue 1 maxvalue 99999999999 ;
Create Sequence LDBO.Sq_NigPortclose Minvalue 1 maxvalue 9999999999 Start With 1 Increment by 1 noCache;
Create Sequence LDBO.Seqclientsequence minvalue 1 maxvalue 99 start with 1 increment by 1;
Create Sequence LDBO.Sq_collateral minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Sq_Nsdlkra minvalue 1 maxvalue 99999999 ;
Create Sequence LDBO.LdOraclebatchid minvalue 1 maxvalue 9999999999 nocache;
Create Sequence LDBO.Sq_Limittokenno minvalue 1 maxvalue 99999999 ;
Create Sequence LDBO.SQ_CDSLXMLUNIQUENUMBER minvalue 10000 maxvalue 99999;


/*Missing Sequence */
select object_name from user_objects@LNK_PREVIOUSYEARBALANCE where object_type='SEQUENCE' and object_NAME NOT LIKE 'SQFIN%'
minus
select object_name from user_objects where object_type='SEQUENCE' and object_NAME NOT LIKE 'SQFIN%';


7) Create splogin, tglogin manually from the SYS user only (already present in createuserLDBO10g_run_from_DB_server.SQL)

SELECT owner,object_name FROM Dba_Objects WHERE OBJECT_NAME IN ('SPLOGININFORMATION','TGLOGININFORMATION');
Owner                  Object_name
SYS                         SPLOGININFORMATION
SYS                         TGLOGININFORMATION

8)  Compile All Invalid Objects

conn sys@apx1314srv as sysdba

EXEC sys.UTL_RECOMP.recomp_serial;

select object_name  from dba_objects where owner='LDBO' and status='INVALID';

SELECT owner,object_name,status FROM Dba_Objects WHERE OBJECT_NAME IN ('SPLOGININFORMATION','TGLOGININFORMATION');


9) Log in to the LDBO user in the FY1314 db and drop and recreate the database directories.

Match with last year's database and change the paths as per your requirement.

conn sys@apx1213srv as sysdba
select 'create or replace directory '||DIRECTORY_NAME||' as '||''''||DIRECTORY_PATH||''''||';' from dba_directories;


10) If a synonym is missing, generate it from last year's database apx1213 and run it in the current year apx1314

conn sys@apx1314srv as sysdba

Select synonym_Name From Dba_Synonyms@LNK_PREVIOUSYEARBALANCE Where table_Owner='LDBO'
Minus
Select synonym_Name From Dba_Synonyms Where table_Owner='LDBO';


conn sys@apx1213srv as sysdba

set pagesize 0
set linesize 30000
set long 500000
set longchunksize 500000
set trimspool on
set feed off
exec dbms_metadata.set_transform_param(dbms_metadata.session_transform,'PRETTY',true);
exec dbms_metadata.set_transform_param( DBMS_METADATA.SESSION_TRANSFORM, 'SQLTERMINATOR', TRUE );

SPOOL C:\YEP1314\SYNONYMS1213.SQL
select dbms_metadata.get_ddl('SYNONYM',synonym_name, 'PUBLIC') metadata
from dba_synonyms
where owner = 'PUBLIC'
  and table_owner = 'LDBO'
order by synonym_name;
SPOOL OFF

conn sys@apx1314srv as sysdba
@C:\YEP1314\SYNONYMS1213.SQL

11) Create users from last year


/* Missing Users */

conn sys@apx1314srv as sysdba

select username from dba_users@LNK_PREVIOUSYEARBALANCE
minus
select username from dba_users;

conn sys@apx1213srv as sysdba

set pagesize 0
set linesize 30000
set long 500000
set longchunksize 500000
set trimspool on
set feed off
exec dbms_metadata.set_transform_param(dbms_metadata.session_transform,'PRETTY',true);
exec dbms_metadata.set_transform_param( DBMS_METADATA.SESSION_TRANSFORM, 'SQLTERMINATOR', TRUE );


spool c:\yep1314\user_script.sql
SELECT DBMS_METADATA.GET_DDL('PROFILE',profile) FROM DBA_PROFILES WHERE profile != 'DEFAULT' GROUP BY PROFILE;
SELECT DBMS_METADATA.GET_DDL('USER', USERNAME) FROM DBA_USERS;
SELECT DBMS_METADATA.GET_GRANTED_DDL('TABLESPACE_QUOTA', USERNAME) DDL FROM DBA_USERS where username in (select username from dba_ts_quotas);
SELECT DBMS_METADATA.GET_DDL('ROLE', role) FROM dba_roles;
spool off

conn sys@apx1314srv as sysdba

@c:\yep1314\user_script.sql


12) Create Object roles and user roles from last year using db link lnk_previousyearbalance

Conn ldbo/ldbo@apx1314srv


BEGIN
 FOR cur_rec IN (SELECT role
                  FROM   dba_roles@lnk_previousyearbalance) LOOP
    BEGIN
      EXECUTE IMMEDIATE 'CREATE ROLE ' || cur_rec.role ;
    EXCEPTION
      WHEN OTHERS THEN
        NULL;
    END;
  END LOOP;
END;
/


BEGIN
 FOR cur_rec IN (SELECT grantee,privilege
                  FROM   DBA_SYS_PRIVS@lnk_previousyearbalance where grantee not in ('SYS','SYSTEM','SYSMAN','TSMSYS','WMSYS','RECOVERY_CATALOG_OWNER','RESOURCE','OUTLN','ORACLE_OCM','OEM_MONITOR','OEM_ADVISOR','MGMT_USER','IMP_FULL_DATABASE','EXP_FULL_DATABASE','DBA','CONNECT','AQ_ADMINISTRATOR_ROLE','DBSNMP','SCHEDULER_ADMIN')) LOOP
    BEGIN
      EXECUTE IMMEDIATE ('Grant ' || cur_rec.privilege || ' to ' || cur_rec.grantee );
    EXCEPTION
      WHEN OTHERS THEN
        NULL;
    END;
  END LOOP;
END;
/

BEGIN
 FOR cur_rec IN (SELECT grantee,privilege,table_name
                  FROM   dba_tab_privs@lnk_previousyearbalance  Where Grantor='LDBO') LOOP
    BEGIN
      EXECUTE IMMEDIATE 'Grant ' || cur_rec.privilege || ' on ' || cur_rec.table_name || ' to ' || cur_rec.grantee ;
    EXCEPTION
      WHEN OTHERS THEN
        NULL;
    END;
  END LOOP;
END;
/

BEGIN
 FOR cur_rec IN (SELECT grantee,privilege,table_name,column_name
                  FROM   dba_col_privs@lnk_previousyearbalance  Where Grantor='LDBO') LOOP
    BEGIN
      EXECUTE IMMEDIATE 'Grant '|| cur_rec.PRIVILEGE || '('|| cur_rec.COLUMN_NAME ||') on '|| cur_rec.TABLE_NAME || ' to ' || cur_rec.GRANTEE ;
    EXCEPTION
      WHEN OTHERS THEN
        NULL;
    END;
  END LOOP;
END;
/


BEGIN
 FOR cur_rec IN (SELECT grantee,granted_role
                  FROM   dba_role_privs@lnk_previousyearbalance Where Grantee!='SYSTEM' and Grantee!='SYS' and Grantee!='DBSNML' and Grantee!='REPADMIN') LOOP
    BEGIN
      EXECUTE IMMEDIATE 'Grant '|| cur_rec.granted_role || ' to ' || cur_rec.GRANTEE ;
    EXCEPTION
      WHEN OTHERS THEN
        NULL;
    END;
  END LOOP;
END;
/

13) If you had partitioned tables last year, create the DEMATMAIN table as partitioned (partitions should be the same as the TRANSACTIONS table); the script is attached.

Drop the DEMATMAIN table and recreate it partitioned using the attached script.

Conn ldbo/ldbo@apx1314srv

Select TABLE_NAME From Dba_Part_Tables@LNK_PREVIOUSYEARBALANCE where owner='LDBO'
Minus
Select TABLE_NAME From Dba_Part_Tables where owner='LDBO';


14) Match last and current year database objects (run the following commands in the last and current year databases and match the results)

Select Object_Type,Count(*) From User_Objects Group By Object_Type Order By 1;
select COUNT(*)  from user_objects where status='INVALID';
Select count(*) From Dba_Synonyms Where Table_Owner='LDBO';
select count(*) from dba_Constraints where owner='LDBO';
SELECT * FROM DBA_DB_LINKS;
SELECT * FROM DBA_DIRECTORIES;
Select TABLE_NAME From Dba_Part_Tables where owner='LDBO';


Conn ldbo/ldbo@apx1314srv

/*Missing Object */
Select object_Name From Dba_objects@LNK_PREVIOUSYEARBALANCE Where Owner='LDBO'
Minus
Select object_Name From Dba_objects Where Owner='LDBO';

/*invalid objects */
Select object_Name From Dba_objects@LNK_PREVIOUSYEARBALANCE Where Owner='LDBO' and status='INVALID'
Minus
Select object_Name From Dba_objects Where Owner='LDBO' and status='INVALID';

/*Missing Tables */
Select table_Name From Dba_tables@LNK_PREVIOUSYEARBALANCE Where Owner='LDBO'
Minus
Select table_Name From Dba_tables Where Owner='LDBO';

/*Missing Synonyms */
Select synonym_Name From Dba_Synonyms@LNK_PREVIOUSYEARBALANCE Where table_Owner='LDBO'
Minus
Select synonym_Name From Dba_Synonyms Where table_Owner='LDBO';

/*Missing Sequences */
select * from user_objects@LNK_PREVIOUSYEARBALANCE where object_type='SEQUENCE' and object_NAME NOT LIKE 'SQFIN%'
minus
select * from user_objects where object_type='SEQUENCE' and object_NAME NOT LIKE 'SQFIN%';

/*Missing Constraints */
Select Constraint_Name From Dba_Constraints@LNK_PREVIOUSYEARBALANCE Where Owner='LDBO' and (Constraint_Name not like 'SYS%' and Constraint_Name not like 'BIN%')
Minus
Select Constraint_Name From Dba_Constraints Where Owner='LDBO' and (Constraint_Name not like 'SYS%' and Constraint_Name not like 'BIN%');

/* Missing Users */
select username from dba_users@LNK_PREVIOUSYEARBALANCE
minus
select username from dba_users;

/* Missing Roles */
select role from dba_roles@LNK_PREVIOUSYEARBALANCE
minus
select role from dba_roles;

select GRANTEE from dba_tab_privs@LNK_PREVIOUSYEARBALANCE where owner='LDBO'
minus
select GRANTEE from dba_tab_privs where owner='LDBO';



15) Import Lookup tables data using DB link LNK_PREVIOUSYEARBALANCE

Conn ldbo/ldbo@apx1314srv

SET LINE 9999
SET FEEDBACK OFF

Spool c:\yep1314\lookuptables.sql

SELECT 'ALTER TABLE "' || a.table_name || '" DISABLE CONSTRAINT "' || a.constraint_name || '";'
FROM   dba_constraints a
Where  A.Owner='LDBO' And A.Constraint_Type In ('R')
And    (A. Table_Name Like '%SMSPROVIDER%' OR A.Table_Name Like '%LOOKUP%');

select 'INSERT INTO ' || TABLE_NAME  || ' SELECT * FROM ' || TABLE_NAME || '@LNK_PREVIOUSYEARBALANCE;' from dba_tables where owner='LDBO'
and (Table_Name Like '%SMSPROVIDER%' OR Table_Name Like '%LOOKUP%');
SELECT 'COMMIT;' FROM DUAL;
SELECT 'ALTER TABLE "' || a.table_name || '" ENABLE CONSTRAINT "' || a.constraint_name || '";'
FROM   dba_constraints a
Where  A.Owner='LDBO' And A.Constraint_Type In ('R')
And    (A. Table_Name Like '%SMSPROVIDER%' OR A.Table_Name Like '%LOOKUP%');
spool off

@ c:\yep1314\lookuptables.sql
INSERT INTO TBLSOFTWAREACCESSLEVEL select * from TBLSOFTWAREACCESSLEVEL@LNK_PREVIOUSYEARBALANCE;
Commit;

/* Check and verify constraints; none should be disabled */

select Constraint_Name from Dba_Constraints Where Owner='LDBO' AND status !='ENABLED';

Also check the row counts in the lookup tables

16) Analyze the whole database after all the above steps

conn sys@apx1314srv as sysdba

exec DBMS_SCHEDULER.run_job ('ANALYZE_FULL');

Note: After master transmission and other bulk transmission activities, it is required to run the ANALYZE_FULL job for performance


Thursday, February 14, 2013

AWR Analysis


http://www.toadworld.com/Experts/GuyHarrisonsImprovingOraclePerformance/ResolvingOracleContention/tabid/257/Default.aspx
http://www.dba-oracle.com/art_sql_tune.htm
http://www.pafumi.net/Tips_for_Effective_Queries.html


Rows processed—Queries that process a large number of rows will have high I/O and may also have impact on the TEMP tablespace.

Buffer gets—High buffer gets may indicate a resource-intensive query.

Disk reads—High disk reads indicate a query that is causing excessive I/O.

Memory KB—The memory allocation of a SQL statement is useful for identifying statements that are doing in-memory table joins.

CPU secs—This identifies the SQL statements that use the most processor resources.

Sorts—Sorts can be a huge slowdown, especially if they’re being done on a disk in the TEMP tablespace.

Executions—The more frequently executed SQL statements should be tuned first, since they will have the greatest impact on overall performance.

http://docs.oracle.com/cd/B10500_01/server.920/a96536/apc2.htm
http://savvinov.com/2012/04/06/awr-reports-interpreting-cpu-usage/
http://dbaregistry.blogspot.in/2010/01/oracle-wait-events.html
http://www.dba-oracle.com/t_online_redo_logs_raid.htm
http://samadhandba.wordpress.com/


STATISTICS_LEVEL should be TYPICAL or ALL to get an AWR report; a quick check and the standard report script are sketched below.
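For example, to check the parameter and then generate a report with the standard script shipped with the database:

SQL> show parameter statistics_level
SQL> alter system set statistics_level = TYPICAL;
SQL> @?/rdbms/admin/awrrpt.sql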

---------------How to reduce buffer gets
Buffer gets—High buffer gets may indicate a resource-intensive query.

You should be able to reduce buffer gets by reducing read operations. This can sometimes be done by indexed lookups, sometimes not; it depends on the situation.

A buffer get means the data block is in memory and Oracle returns data from the buffer cache without having to go to disk. That is good for an individual read, but a very high buffer-get count still means heavy logical I/O.

As the buffer gets for a SQL statement increase, the Oracle CPU consumed also increases.
-------------------

RAM is inversely proportional to disk I/O:
the more memory available to the buffer cache, the lower the disk I/O.

RAID 1 (mirroring) is generally the better choice for an Oracle database; RAID 5 in particular should be avoided for redo logs because of its write penalty.
----------------
Check DB Time
If DB Time is significantly higher than the elapsed time, sessions are spending time waiting.
For example, if the Elapsed Time is around 60 minutes while the DB Time is around 700 minutes, sessions collectively spent 700 minutes working or waiting during a 60-minute window; anything beyond the host's total CPU capacity for that window is wait time.
---------------------

Look at the Top 5 Timed Foreground Events and investigate what is triggering them (ignore idle events).

SQL ordered by Elapsed Time => always helps to find the time-consuming SQL (observe the elapsed time as well as the number of executions).
SQL ordered by Reads should also be checked for I/O-intensive queries.


------------------------
PGA Memory Advisory
When using automatic PGA memory management, choose at minimum a pga_aggregate_target value at which Estd PGA Overalloc Count is 0.
Once Estd PGA Cache Hit % levels off (around 99%), the PGA is sufficiently sized for the database.
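The same advisory is available outside the report via V$PGA_TARGET_ADVICE:

select round(pga_target_for_estimate/1024/1024) target_mb,
       estd_pga_cache_hit_percentage,
       estd_overalloc_count
from   v$pga_target_advice
order by pga_target_for_estimate;

Pick the smallest target_mb where estd_overalloc_count is 0 and the hit percentage has levelled off.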
------------------------
Tablespace IO Stats

5-10 ms per read is normal;
it should not exceed 20 ms.

------------------------
File IO Stats depend on the disk storage;
8-10 ms is normal.

------------------------
Enqueue Activity
Enqueue Type (Request Reason)
TM lock
This can be caused by missing indexes on foreign key columns:
when the foreign key is unindexed, DML on the child table locks the parent table.
(Observed as a problem on Standard Edition, fine on Enterprise Edition.)
-----------------

“Starting with Oracle 10.2,  you may notice a significant degradation in relative performance when you combine a GROUP BY with an ORDER BY on the same columns.

When a GROUP BY is associated with an ORDER BY on the same columns,  the Oracle optimizer may choose a SORT GROUP BY rather than the usually more efficient HASH GROUP BY.

Using the SORT GROUP BY avoids adding a SORT ORDER BY to the plan, but the overall result is usually disappointing.

To get a better result, you can perform the GROUP BY in an in-line view and perform the ORDER BY in the outer query.  Use the NO_MERGE hint to prevent the two operations from being combined.”
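A sketch of that rewrite on a hypothetical SALES table (table and column names are illustrative): the in-line view does the GROUP BY, the outer query does the ORDER BY, and the NO_MERGE hint keeps the optimizer from collapsing the two.

select /*+ NO_MERGE(v) */ v.product_id, v.total_amt
from  (select product_id, sum(amount) total_amt
       from   sales
       group by product_id) v
order by v.product_id;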
-------------

A performance issue can be caused by soft parsing, which can be reduced by caching cursors (SESSION_CACHED_CURSORS). It may also stem from an oversized shared pool, which delays the search for child cursors. A second possibility is a large number of reloads caused by an undersized shared pool: if no invalidation activity is going on (CBO statistics collection, DDL, etc.), frequent reloads mean cursors are being flushed out for lack of space.

-------------
Recursive calls are also generated by the unavailability of dictionary info in the dictionary cache, firing of database triggers, execution of DDL, execution of SQL within PL/SQL blocks, functions or stored procedures, and enforcement of referential integrity constraints.

If the recursive CPU usage percentage is a large proportion of total CPU usage, you may want to make sure the shared pool memory allocation is adequate. Note, however, that a PL/SQL-based application will always show a significant amount of recursive CPU usage.

To reduce recursive CPU usage, make sure the shared pool is sized correctly. You can also check parameters such as:

CURSOR_SHARING – values can be EXACT, SIMILAR, FORCE
SESSION_CACHED_CURSORS – a numeric value, for example 500
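To gauge the proportion, compare the two statistics in V$SYSSTAT:

select name, value
from   v$sysstat
where  name in ('recursive cpu usage', 'CPU used by this session');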
-------------------
Remove unnecessary large-table full table scans. Unnecessary full table scans cause a huge amount of needless I/O and can drag down an entire database; a query sketch for spotting them follows.
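One way to list cached statements that full-scan application tables, assuming the LDBO schema used earlier (adjust the owner as needed):

select p.sql_id, p.object_owner, p.object_name, count(*) plan_lines
from   v$sql_plan p
where  p.operation = 'TABLE ACCESS'
and    p.options = 'FULL'
and    p.object_owner = 'LDBO'
group by p.sql_id, p.object_owner, p.object_name
order by p.object_name;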
---------------------
Load Profile

Parse should be near 0

Parse activity statistics should be checked carefully because they can immediately indicate a problem within the application. For example, if a database has been running several days with a fixed set of applications, it should over time have parsed most SQL issued by the applications, and these statistics should be near zero.

High values of Soft Parses, or especially Hard Parses, indicate that the applications make little use of bind variables and produce large numbers of unique SQLs. However, if the database serves development purposes, high values of these statistics are not bad.

% Blocks changed per Read:
In the sample report, the % Blocks changed per Read statistic indicates that only 4.85 percent of all blocks retrieved were retrieved for update, while the Recursive Call % statistic is extremely high at about 90 percent. The latter does not mean that nearly all SQL statements executed by the database are caused by parsing activity, data dictionary management, space management, and so on.

It is also useful to check the Rollback per transaction % statistic, which reports the percentage of transactions rolled back. In a production system this value should be low. If the output indicates a high percentage of transactions rolled back, the database expends considerable work rolling back changes, and it should be investigated why the applications roll back so often.

---------------------

CPU time represents time spent on CPU and does not include time waiting for CPU.

CPU usage is described by “CPU time” (or “DB CPU”) statistics. Somewhat counterintuitively, AWR report showing CPU time close to 100% in the top timed events section does not necessarily indicate a problem. It simply means that database is busy using CPU to do work for its users. However, if CPU time (expressed in CPU seconds) becomes commensurate to the total CPU power available on the host (or shows consistent growth patterns), then it becomes a problem, and a serious one: this means that at best, Oracle processes will wait lots of time to get on CPU runqueue. In the worst case scenario, the host OS won’t have adequate resources to run and may eventually hang.

DB CPU usage (% of CPU power available) = CPU time / NUM_CPUS / elapsed time

If DB CPU usage is at 80-90% of capacity (or at 70-80% and growing), try to reduce CPU usage or, failing that, buy more CPU power before the system freezes.

----------------------Instance Efficiency Percentage
These should generally all be above 90%.

Parse CPU to Parse Elapsd %: shows how much of parse elapsed time was spent on CPU. The closer to 100% the better, since a low value means parsing is losing time waiting for resources.

The Instance Efficiency Percentage report section contains ratios or calculations that may provide information regarding different structures and operations in the Oracle instance. Database tuning must never be driven by hit ratios. Hit ratios only provide additional information to help the DBA understand how the instance is operating.

http://www.dba-oracle.com/oracle10g_tuning/t_instance_efficiency_percentage.htm


The following list includes the meanings of particular hit ratios:
Buffer Hit Ratio: Measures how many times a required block was found in memory rather than having to execute an expensive read operation on disk to get the block.
Buffer Nowait %: Shows the percentage of times when data buffers were accessed directly without any wait time.
Library Hit %: Shows the percentage of times when SQL statements and PL/SQL packages were found in the shared pool.
Execute to Parse %: Shows how often parsed SQL statements are reused without reparsing.
Parse CPU to Parse Elapsd %: Gives the ratio of CPU time spent to parse SQL statements.
Redo NoWait %: Shows whether the redo log buffer has sufficient size.
In-memory Sort %: Shows the percentage of times when sorts are performed in memory instead of using temporary tablespaces.
Soft Parse %: Shows how often sessions issued a SQL statement that was already in the shared pool and could reuse an existing version of that statement.
Latch Hit %: Shows how often latches were acquired without having to wait.
% Non-Parse CPU: Shows the percentage of how much CPU resources were spent on the actual SQL execution.

In the above list of statistics, special attention should be paid to parse-related statistics. The Instance Efficiency Percentage report provided previously shows that about 95 percent of the parses are soft as indicated by the Soft Parse %. This is good enough, indicating that the SQL statements are actively reused by Oracle.


The next interesting item to review is the Parse CPU to Parse Elapsd % statistic. In this case, it is about three percent, which is very low. This fact reveals that Oracle waits for some resources during parsing of SQL statements. This should be investigated further to find the cause.

In this case, % Non-Parse CPU statistic is about 97 percent, which is quite high.  This indicates Oracle utilizes the CPU mostly for statement execution but not for parsing.

As a rule of thumb, always minimize the number of hard parses in a production database; this minimizes the CPU overhead spent performing costly parse work.


------------------Shared Pool Statistics ------------------------

Ideally this should be on the lower side. If it is very high (beyond 90%), it points to contention in the shared pool.

This following sample report section shows shared pool related statistics:

Shared Pool Statistics        Begin    End
                              ------  ------
             Memory Usage %:   92.70   92.49
    % SQL with executions>1:   86.73   84.20
  % Memory for SQL w/exec>1:   84.12   71.86

In this example, the Memory Usage % statistic shows that almost all, approximately 92 percent, of the shared pool memory is consumed. This could indicate that the system experiences some overhead while aging out old shared memory structures like cursors, PL/SQL programs, and so on, which places additional reparsing overhead on the CPU. The size of the shared pool should be increased appropriately to eliminate such overhead. In general, this statistic should be near 70 percent after the database has been running a long time.
If it is quite low, memory is being wasted.

Memory Usage %: The sweet spot for the shared pool Memory Usage % is about 70%-80% because of aging out. At ~90% it is advisable to increase the shared pool; if you are using 10g SGA_TARGET and SGA_MAX_SIZE, that might require increasing those parameters as well.

If Memory Usage % stays above 90%, increase the SGA.

The % SQL with executions>1 statistic indicates how many SQL statements are executed more than one time. This measures how well production applications are tuned and how well they make use of bind variables.

% Memory for SQL w/exec>1: the percentage of SQL memory consumed by statements executed more than once.


-------------------------------------------Buffer Pool Advisory-----------
If you check the column Size for Est (M) at Size Factor 1.0, it is 596M, which means 596 MB of memory has been allocated by Oracle, either automatically if sga_target > 0 or manually in the parameter file.

Simultaneously we need to check the column “Estimated Physical Reads” which has a value of 37,612,741.

You will notice that increasing buffer cache size will reduce the number of “Estimated Physical Reads”.

Size Factor 1.0 also has Estimated Phys Read Factor 1.0, with Estimated Physical Reads of 37,612,741.

Note how ‘Size Factor’ increases and ‘Estimated Phys Read Factor’ decreases.


In cases when sga_target > 0 and db_cache_size is determined automatically, similar data would mean that the buffer cache size Oracle has allocated is fairly balanced, i.e. not too small (which would cause an excessive number of physical reads) and not too big (which would add unnecessary overhead for serving a bigger buffer cache).
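The underlying advisory can also be queried directly; for the default pool:

select size_for_estimate size_mb,
       size_factor,
       estd_physical_read_factor,
       estd_physical_reads
from   v$db_cache_advice
where  name = 'DEFAULT'
and    advice_status = 'ON'
order by size_for_estimate;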


--------------------------------Top 5 Timed Events
Sometimes a major external problem, such as bad disk RAID, can cause the entire database to run slowly, so it is always a good idea to start with the AWR top five timed events listing.

This tells you where the most resources are being consumed.

db file scattered read would indicate that full table scans are being performed; db file sequential read points to single-block (typically index) reads.

Interpretation of CPU Time(s) = 1,033

With a single CPU there are 60*60 = 3,600 CPU seconds available in a one-hour snapshot interval.


In the example there are 8 CPUs (NUM_CPUS under the Operating System Statistics of AWR), which gives 60*60*8 = 28,800 CPU seconds available in a 1-hour interval (a single database instance running on the machine).

(1033/28800)*100 = 3.58% of total CPU

Network Bottleneck
A slow or un-tuned network can be a nightmare.  Network overload can also be caused by Oracle TNS issues or by an inefficient application that makes too many trips to Oracle to fetch screen data.

A CPU-bound database may be due to a real overload of the CPU banks, in which case the problem can be remedied with more or faster processors.  High CPU consumption can also be due to un-tuned SQL causing excessive logical I/O in the form of consistent gets or by contention within the shared pool, such as the total lack of SQL bind variables.



----------------------------Tablespace IO Stats
This shows the I/O statistics for each tablespace in the database.

As a rule of thumb, the Av Rd(ms) [average read time in milliseconds] should not go beyond 30; anything higher is considered an I/O bottleneck.

Tablespace  Reads    Av Reads/s  Av Rd(ms)  Av Blks/Rd  Writes   Av Writes/s  Buffer Waits  Av Buf Wt(ms)
TEMP        3,316    0           4.91       1           28,840   8            0             0
DAT1        520,120  146         16.06      1.21        185,846  52           902           13
DAT3        93,411   26          42.82      2.98        13,442   4            16            23.13
DAT2        98,171   28          91.97      7.97        5,333    2            325           34.89

In the above example, the Av Rd(ms) is high for the DAT tablespaces, indicating I/O contention.

------------------------------------------------Wait Events
---------------db file sequential reads
Reason:
Use of an unselective index
Fragmented Indexes
High I/O on a particular disk or mount point
Bad application design
Index read performance can be affected by a slow I/O subsystem and/or poor database file layout, which results in higher average wait times.

Action:
Check indexes on the table to ensure that the right index is being used
Check the column order of the index with the WHERE clause of the Top SQL statements
Rebuild indexes with a high clustering factor
Use partitioning to reduce the amount of blocks being visited
Make sure optimizer statistics are up to date
Relocate ‘hot’ datafiles
Consider the usage of multiple buffer pools and cache frequently used indexes/tables in the KEEP pool

Suggestion
The Oracle process wants a block that is currently not in the SGA, and it is waiting for the database block to be read into the SGA from disk.
Significant db file sequential read wait time is most likely an application issue.
If the DBA_INDEXES.CLUSTERING_FACTOR of the index approaches the number of blocks in the table, then most of the rows in the table are ordered. This is desirable.
However, if the clustering factor approaches the number of rows in the table, it means the rows in the table are randomly ordered and thus it requires more I/Os to complete the operation. You can improve the index’s clustering factor by rebuilding the table so that rows are ordered according to the index key and rebuilding the index thereafter.
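To compare an index's clustering factor against the table's block and row counts (statistics must be current), for example for the lookup table used in step 15:

select i.index_name, i.clustering_factor, t.blocks, t.num_rows
from   dba_indexes i
join   dba_tables  t
       on t.owner = i.table_owner and t.table_name = i.table_name
where  i.table_owner = 'LDBO'
and    i.table_name  = 'TBLSOFTWAREACCESSLEVEL';

A clustering factor near t.blocks is good; one near t.num_rows means the rows are scattered relative to the index order.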

----------------------db file scattered reads
Reason:
The Oracle session has requested and is waiting for multiple contiguous database blocks (up to DB_FILE_MULTIBLOCK_READ_COUNT) to be  read into the SGA from disk.
Full Table scans
Fast Full Index Scans

Action:
Optimize multi-block I/O by setting the parameter DB_FILE_MULTIBLOCK_READ_COUNT
Partition pruning to reduce number of blocks visited
Optimize the SQL statement that initiated most of the waits. The goal is to minimize the number of physical and logical reads.
The objective is to reduce the demands for both the logical and physical I/Os, and this is best achieved through SQL and application tuning.
Make sure all statistics are representative of the actual data. Check the LAST_ANALYZED date

----------------log file parallel write
Reason:
LGWR waits while writing contents of the redo log buffer cache to the online log files on disk
I/O wait on sub system holding the online redo log files

Action:
Reduce the amount of redo being generated
Do not leave tablespaces in hot backup mode for longer than necessary
Do not use RAID 5 for redo log files
Use faster disks for redo log files
Ensure that the disks holding the archived redo log files and the online redo log files are separate so as to avoid contention
Consider using NOLOGGING or UNRECOVERABLE options in SQL statements

----------------------------------------------log file sync
Reason:
Oracle foreground processes are waiting for a COMMIT or ROLLBACK to complete
This Oracle metric indicates the process is waiting for LGWR to finish flushing the log buffer to disk.

Action:
Tune LGWR to get good throughput to disk, e.g. do not put redo logs on RAID 5.
Reduce the overall number of commits by batching transactions so that there are fewer distinct COMMIT operations.
Move the redo logs to faster disks or a faster I/O subsystem (for example, switch from RAID 5 to RAID 1).

See if any activity can safely be done with NOLOGGING / UNRECOVERABLE options in order to reduce the amount of redo being written.
Check the size of the log buffer as it may be so large that LGWR is writing too many blocks at one time.


Log file sync waits occur when sessions wait for redo data to be written to disk.
Typically this is caused by slow writes,
or by committing too frequently in the application,
CPU starvation (very high demand => LGWR stuck on the run queue),
improper operating system configuration (check MOS note 169706.1),
or bugs in Oracle (especially with the RAC option) and 3rd-party software (like ODM/DISM).


Recommendations:
Tune the LGWR process to get good throughput, especially when ‘log file parallel write’ is high too:
do not put redo logs on RAID 5 without a good write cache
do not put redo logs on Solid State Disk (SSD)
If CPUs are overburdened (check the run queue with vmstat):
check for non-Oracle system activity, like GZIP or BZIP2 running in business hours
lower the instance's CPU usage (for example, tune SQL to reduce LIOs)
increase LGWR priority (renice or _high_priority_processes)
Decrease the COMMIT count for applications with many short transactions:
use COMMIT WRITE BATCH NOWAIT (10g+) when possible; a sketch follows this list
do some processing with NOLOGGING (or even with _disable_logging=TRUE if only benchmarking, but think about database recoverability)
Check whether 3rd-party software or utilities such as RMAN are active on the same disks as the redo logs, including trace/systemstate dump files, etc.
Trace LGWR as the last option for troubleshooting OS/3rd-party issues.
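A hedged sketch of the asynchronous commit (10gR2+), on a hypothetical staging table; durability is relaxed, so use it only where losing the last few commits in a crash is acceptable:

begin
  for i in 1 .. 10000 loop
    insert into tbl_stage_load (id) values (i);  -- tbl_stage_load is hypothetical
    commit write batch nowait;                   -- buffered async commit: no log file sync wait
  end loop;
end;
/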


---------------buffer busy waits
Reason:
Buffer busy waits are common in an I/O-bound Oracle system.
The two main cases where this can occur are:
Another session is reading the block into the buffer
Another session holds the buffer in an incompatible mode to our request
These waits indicate read/read, read/write, or write/write contention.
The Oracle session is waiting to pin a buffer.
A buffer must be pinned before it can be read or modified. Only one process can pin a buffer at any one time.
This wait can be intensified by a large block size as more rows can be contained within the block
This wait happens when a session wants to access a database block in the buffer cache but cannot, because the buffer is "busy".
It is also often due to several processes repeatedly reading the same blocks (eg: if lots of people scan the same index or data block)

Action:
The main way to reduce buffer busy waits is to reduce the total I/O on the system
Depending on the block type, the actions will differ
Eliminate HOT blocks from the application.
Check for repeatedly scanned / unselective indexes.
Try rebuilding the object with a higher PCTFREE so that you reduce the number of rows per block.
Check for 'right-hand indexes' (indexes that get inserted into at the same point by many processes).
Increase INITRANS and MAXTRANS and reduce PCTUSED; this will make the table less dense.
Reduce the number of rows per block.
Increase the number of FREELISTs and FREELIST GROUPs. A sketch of the storage changes follows.
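A minimal sketch of those storage changes on a hypothetical hot table (names are illustrative; MOVE rewrites the table and leaves its indexes UNUSABLE, so rebuild them afterwards):

alter table ldbo.tbl_hot_block pctfree 40 initrans 10;
-- existing blocks keep their old settings; rebuild so the new ones take effect:
alter table ldbo.tbl_hot_block move;
alter index ldbo.idx_hot_block rebuild;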

----------------------------------------------free buffer waits
Reason:
This means we are waiting for a free buffer, but none are available because the cache holds too many dirty buffers.
Either the buffer cache is too small or the DBWR is slow in writing modified buffers to disk
DBWR is unable to keep up to the write requests
Checkpoints happening too fast – maybe due to high database activity and under-sized online redo log files
Large sorts and full table scans are filling the cache with modified blocks faster than the DBWR is able to write to disk
If the  number of dirty buffers that need to be written to disk is larger than the number that DBWR can write per batch, then these waits can be observed

Action:
Reduce checkpoint frequency  - increase the size of the online redo log files
Examine the size of the buffer cache – consider increasing the size of the buffer cache in the SGA
Set DISK_ASYNCH_IO=TRUE. If asynchronous I/O is not available, increase the number of database writer processes (DB_WRITER_PROCESSES) or configure DBWR I/O slaves (DBWR_IO_SLAVES).
Pre-sorting or reorganizing data can help
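For example (both parameters are static, so a restart is required; use one approach or the other, not both):

alter system set db_writer_processes=4 scope=spfile;
-- or, where asynchronous I/O is not available:
alter system set dbwr_io_slaves=4 scope=spfile;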

------------------------enqueue waits
Reason:
This wait event indicates a wait for a lock that is held by another session (or sessions) in an incompatible mode to the requested mode.
TX Transaction Lock
Generally due to table or application set up issues
This indicates contention for row-level lock. This wait occurs when a transaction tries to update or delete rows that are currently locked by another transaction.
This usually is an application issue.

TM DML enqueue lock
Generally due to application issues, particularly if foreign key constraints have not been indexed.

ST lock
Database actions that modify the UET$ (used extent) and FET$ (free extent) tables require the ST lock; this includes actions such as drop, truncate, and coalesce.
Contention for the ST lock indicates there are multiple sessions actively performing dynamic disk space allocation or deallocation in dictionary managed tablespaces

Action:
Reduce waits and wait times
Whenever you see an enqueue wait event for the TX enqueue, the first step is to find out who the blocker is and whether there are multiple waiters for the same resource.
Waits for the TM enqueue in mode 3 are primarily due to unindexed foreign key columns:
create indexes on the foreign keys (especially on releases before 10g); a query sketch for finding them follows this list.
Following are some of the things you can do to minimize ST lock contention in your database:
Use locally managed tablespaces
Recreate all temporary tablespaces using the CREATE TEMPORARY TABLESPACE TEMPFILE… command.
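A hedged sketch for the TM case above: list single-column foreign keys in the LDBO schema that have no index with a matching leading column (composite keys need a more careful comparison):

select c.table_name, cc.column_name, c.constraint_name
from   dba_constraints  c
join   dba_cons_columns cc
       on cc.owner = c.owner and cc.constraint_name = c.constraint_name
where  c.constraint_type = 'R'
and    c.owner = 'LDBO'
and    not exists (
         select 1
         from   dba_ind_columns ic
         where  ic.table_owner     = cc.owner
         and    ic.table_name      = cc.table_name
         and    ic.column_name     = cc.column_name
         and    ic.column_position = 1);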

--------------------------------Cache buffer chain latch
Reason:
This latch is acquired when searching for data blocks. The buffer cache is organized as chains of blocks, and each chain is protected by a child latch that must be taken when the chain is scanned.
Hot blocks are another common cause of cache buffers chains latch contention. This happens when multiple sessions repeatedly access one or more blocks that are protected by the same child cache buffers chains latch.
SQL statements with high BUFFER_GETS (logical reads) per EXECUTIONS are the main culprits
Multiple concurrent sessions are executing the same inefficient SQL that is going after the same data set

Action:
Reducing contention for the cache buffer chains latch will usually require reducing logical I/O rates by tuning and minimizing the I/O requirements of  the SQL involved. High I/O rates could be a sign of a hot block (meaning a block highly accessed).
Export the table, increase PCTFREE significantly, and re-import the data. This minimizes the number of rows per block, spreading them over many blocks. Of course, this comes at the expense of storage, and full table scan operations will be slower.
Minimizing the number of records per block in the table
For indexes, you can rebuild them with higher PCTFREE values, bearing in mind that this may increase the height of the index.


--------------------Direct Path Reads
Reason:
These waits are associated with direct read operations, which read data directly into the session's PGA, bypassing the SGA.
The "direct path read" and "direct path write" wait events relate to operations performed in the PGA, such as sorting, GROUP BY operations, and hash joins.
In DSS type systems, or during heavy batch periods, waits on "direct path read" are quite normal
However, for an OLTP system these waits are significant
These wait events can occur during sorting operations, which is not surprising as direct path reads and writes usually occur in connection with temporary segments.
SQL statements with functions that require sorts, such as ORDER BY, GROUP BY, UNION, DISTINCT, and ROLLUP, write sort runs to the temporary tablespace when the input size is larger than the work area in the PGA

Action:
Ensure the OS asynchronous IO is configured correctly.
Check for IO heavy sessions / SQL and see if the amount of IO can be reduced.
Ensure no disks are IO bound.
Set your PGA_AGGREGATE_TARGET to appropriate value (if the parameter WORKAREA_SIZE_POLICY = AUTO)
Whenever possible use UNION ALL instead of UNION, and where applicable use HASH JOIN instead of SORT MERGE and NESTED LOOPS instead of HASH JOIN.
Make sure the optimizer selects the right driving table. Check to see if the composite index’s columns can be rearranged to match the ORDER BY clause to avoid sort entirely.


------------------------Direct Path  Writes
These are waits that are associated with direct write operations that write data from users’ PGAs to data files or temporary tablespaces
Direct load operations (eg: Create Table as Select (CTAS) may use this)
Parallel DML operations
Sort I/O (when a sort does not fit in memory)

Action:
If the file indicates a temporary tablespace check for unexpected disk sort operations.
Ensure DISK_ASYNCH_IO is TRUE. This is unlikely to reduce the wait event timings themselves, but it may reduce session elapsed times (since synchronous direct I/O is not accounted for in wait event timings).
Ensure the OS asynchronous IO is configured correctly.
Ensure no disks are IO bound


-------------------------Latch Free Waits
This wait indicates that the process is waiting for a latch that is currently busy (held by another process).
A latch is a fast, inexpensive and non-sophisticated lock. The latch is used when you need to serialize access to operations, functions and data structures in Oracle.
Check shared pool tuning to determine whether increasing the size of the shared pool is a valid suggestion. Cursor sharing can be implemented to prevent thrashing within the shared pool, and SESSION_CACHED_CURSORS can be used to reduce soft parses. The following Oracle initialization parameter changes resolved a performance degradation that resulted from the latch free wait event:


alter system set session_cached_cursors=150 scope=spfile;
alter system set shared_pool_size=900M scope=spfile;
alter system set cursor_sharing='FORCE' scope=spfile;

--------------------Library cache latch
The library cache latches protect the cached SQL statements and object definitions held in the library cache within the shared pool. The library cache latch must be acquired in order to add a new statement to the library cache.
If the application makes heavy use of literal SQL, using bind variables will reduce this latch contention considerably.
Action:
You can reduce the library cache latch hold time by properly setting the SESSION_CACHED_CURSORS parameter
Note:
Larger shared pools tend to have long free lists, and processes that need to allocate space in them must spend extra time scanning those free lists while holding the shared pool latch.
An oversized shared pool can therefore be a problem as well.

--------------------Shared pool latch
The shared pool latch is used to protect critical operations when allocating and freeing memory in the shared pool

Action:
Ways to reduce shared pool latch contention: avoid hard parses when possible; parse once, execute many.
The workaround is to set the initialization parameter CURSOR_SHARING to FORCE. This allows statements that differ in literal values but are otherwise identical to share a cursor and therefore reduce latch contention, memory usage, and hard parse.

-------------Log Buffer Space
Log Buffer Space wait event occurs when server processes write data into the log buffer faster than the LGWR process can write it out. The LGWR process begins writing entries to the online redo log file if any of the following conditions are true:

The log buffer reaches the _log_io_size threshold. By default, this parameter is set to one third of the log buffer size.
A server process performing a COMMIT or ROLLBACK posts to the LGWR process.
The DBWR process posts to the LGWR process before it begins writing.


Solution
Put log files on faster disks.
Look at tuning checkpoint or archive processes (log files switches).
Review application design, use NOLOGGING operations where appropriate, and avoid changing more data than required.
Finally, check refresh methods when using Materialized Views to reduce logging activity.
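Two V$SYSSTAT counters help confirm log buffer pressure; both should stay near zero relative to overall redo activity:

select name, value
from   v$sysstat
where  name in ('redo buffer allocation retries', 'redo log space requests');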


----------------------------------------------------------------------Wait Event Meaning----------------------------------------------------------------


db file sequential read  => tune indexing, tune SQL (to do less I/O), tune disks, increase the buffer cache. This event is indicative of disk contention on index reads. Make sure all objects are analyzed. Redistribute I/O across disks. This is the wait that comes from the physical side of the database; it relates to memory starvation and non-selective index use. A sequential read is an index read followed by a table read, because the index lookup tells Oracle exactly which block to go to.
db file scattered read => disk contention on full table scans. Add indexes, tune SQL, tune disks, refresh statistics, and create materialized views. Caused by full table scans, perhaps because of insufficient indexes or stale statistics.
db file parallel read  => tune SQL, tune indexing, tune disk I/O, increase buffer cache. If you are doing a lot of partition activity then expect to see this wait event; it could be a table or index partition.
db file parallel write  => if you are doing a lot of partition activity then expect to see this wait event; it could be a table or index partition.
db file single write  => if you see this event then you probably have a lot of data files in your database.

control file sequential read
control file parallel write

log file sync    => committing too often; archive log generation is high. Tune applications to commit less, tune the disks where the redo logs live, try using NOLOGGING/UNRECOVERABLE options; the log buffer could be too large.
log file switch completion => May need more log files per group.
log file parallel write  => Deals with flushing out the redo log buffer to disk. Disks may be too slow or have an I/O bottleneck. Look for log file contention.
log buffer space   => Increase LOG_BUFFER parameter or move log files to faster disks. Tune application, use NOLOGGING, and look for poor behavior that updates an entire row when only a few columns change.
log file switch (checkpoint incomplete) => May indicate excessive db files or slow IO subsystem.
log file switch (archiving needed)    => Indicates archive files are written too slowly.
redo buffer allocation retries  => shows the number of times a user process waited for space in the redo log buffer.
redo log space wait time  => shows cumulative time (in 10s of milliseconds) waited by all processes waiting for space in the log buffer.

buffer busy waits/ read by other session  => Increase DB_CACHE_SIZE. Tune SQL, tune indexing, we often see this event along with full table scans, if the SQL is inserting data, consider increasing FREELISTS and/or INITRANS, if the waits are on segment header blocks, consider increasing extent sizes.
free buffer waits  => insufficient buffers, a process holding buffers too long, or an overloaded I/O subsystem. Also check whether DB writes are getting clogged up.
cache buffers lru chain  => Freelist issues, hot blocks.
no free buffers   => Insufficient buffers, dbwr contention.

latch free
latch: session allocation
latch: in memory undo latch  => if excessive this could be a bug; check for your version; you may have to turn off in-memory undo.
latch: cache buffer chains  => check hot objects.
latch: cache buffer handles  => Freelist issues, hot blocks.

direct path write => you won't see these unless you are doing appends or data loads.
direct path reads => can happen if you are doing a lot of parallel query activity.
direct path read temp or direct path write temp => this wait event shows temp file activity (sorts, hashes, temp tables, bitmaps); check the PGA parameter or the sort area or hash area parameters. You might want to increase them.

library cache load lock
library cache pin => if many sessions are waiting, tune shared pool, if few sessions are waiting, lock is session specific.
library cache lock  => need to find the session holding the lock, look for DML manipulating an object being accessed, if the session is trying to recompile PL/SQL, look for other sessions executing the code.

undo segment extension  => If excessive, tune undo.
wait for a undo record   => Usually only during recovery of large transactions, look at turning off parallel undo recovery.

enqueue wait events   => Look at V$ENQUEUE_STAT




