Saturday, July 27, 2013

Grant role access to all objects in a schema to another user

BEGIN
FOR i IN (SELECT object_name FROM user_objects WHERE object_type in ('PROCEDURE','PACKAGE','FUNCTION') )
LOOP
EXECUTE IMMEDIATE 'GRANT EXECUTE ON '||i.object_name||' TO kshitij';
END LOOP;
END;
/

BEGIN
FOR i IN (SELECT object_name FROM user_objects WHERE object_type in ('TABLE') )
LOOP
EXECUTE IMMEDIATE 'GRANT SELECT,INSERT,UPDATE,DELETE ON '||i.object_name||' TO kshitij';
END LOOP;
END;
/

BEGIN
FOR i IN (SELECT object_name FROM user_objects WHERE object_type in ('SEQUENCE') )
LOOP
EXECUTE IMMEDIATE 'GRANT SELECT ON '||i.object_name||' TO kshitij';
END LOOP;
END;
/
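
The same loops can grant through a role instead of directly to a single user, which is closer to what the title describes. A minimal sketch, assuming a role named KSHITIJ_ROLE that you create yourself (the role name is made up):

CREATE ROLE kshitij_role;

BEGIN
  FOR i IN (SELECT object_name, object_type
              FROM user_objects
             WHERE object_type IN ('TABLE','PROCEDURE','PACKAGE','FUNCTION','SEQUENCE'))
  LOOP
    IF i.object_type = 'TABLE' THEN
      EXECUTE IMMEDIATE 'GRANT SELECT,INSERT,UPDATE,DELETE ON '||i.object_name||' TO kshitij_role';
    ELSIF i.object_type = 'SEQUENCE' THEN
      EXECUTE IMMEDIATE 'GRANT SELECT ON '||i.object_name||' TO kshitij_role';
    ELSE
      EXECUTE IMMEDIATE 'GRANT EXECUTE ON '||i.object_name||' TO kshitij_role';
    END IF;
  END LOOP;
END;
/

GRANT kshitij_role TO kshitij;

Keep in mind that privileges received through a role are not visible inside definer's-rights PL/SQL, so direct grants like the ones above are still needed if kshitij compiles stored code against these objects.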

Sessions Hang On Wait Event "WAIT FOR EMON PROCESS NTFNS" [ID 1287435.1]



Symptoms

RDBMS 10.2.0.4.0
OCI-based DB Change Notification
EMON spins with high CPU usage
EMN0 locks affected multiple applications for a few hours.
Performance suffers until DBCN (Database Change Notification) is disabled.


Cause


Bug 7282408: EMN STUCK - OTHER SESSIONS HANG ON "WAIT FOR EMON TO PROCESS NTFNS"



Filename = awr_report_29315_29317.html
See the following error:
In Top 5 Timed Events, the "wait for EMON to process ntfns" wait event is listed.

This matches Bug 7282408, filed against 10.2.0.4.0; the fix is included in 11.1.0.6.0 RDBMS.
WORKAROUND:
SQLNET.SEND_TIMEOUT=3

The value is in seconds. However, the send timeout is a tunable parameter and there is no standard prescribed value. The ideal value for this parameter is one that does not affect normally working clients.





Solution


1) Set SQLNET.SEND_TIMEOUT=10

It was found that after the SEND_TIMEOUT parameter was adjusted, the EMN process stopped spinning and de-registered the subscribers; however, when the customer tried to re-register, the subscribers were removed as soon as new changes took place and could not re-register successfully until the EMN process was killed.
Continued problem resolution found that unpublished Bug 6750296 'Incorrect handling of Connection Failures' was also needed to address the incorrect handling of connection failures.

Bug 8605337 - Deadlock between gather stats job and concurrent DDL [ID 8605337.8]



Product (Component): Oracle Server (RDBMS)
Range of versions believed to be affected: versions below 12.1
Versions confirmed as being affected:
11.2.0.1
11.1.0.7
11.1.0.6
Platforms affected: Generic (all / most platforms affected)
Fixed:

This issue is fixed in
12.1 (Future Release)
11.2.0.2 (Server Patch Set)
Symptoms:

Related To:

Deadlock
ORA-60
Optimizer
PL/SQL (DBMS Packages)
DBMS_STATS
Description

ORA-60 can occur when gathering stats and some other concurrent DDL
is happening on the same table(s).

The deadlocking sessions wait on the following wait events:
- 'enq: TX - row lock contention' on the tab$ table
- 'library cache lock' on a SYS-owned object
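
A quick way to spot sessions stuck on these two events, as a sketch (any session with access to v$session can run it):

SELECT sid, serial#, event, blocking_session, sql_id
  FROM v$session
 WHERE event IN ('enq: TX - row lock contention', 'library cache lock');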

ORA-00439 feature not enabled Deferred Segment Creation 11g enterprise to standard edition downgrade


1293326.1


http://lifeandoracle.blogspot.in/2012/07/ora-00439-feature-not-enabled-deferred.html
http://ocpdba.wordpress.com/2009/10/09/deferred-segment-creation-segmentless-tables-11gr2-new-feature-and-dumb-questions/
http://asanga-pradeep.blogspot.in/2011/01/segment-creation-behavior-change.html
http://www.donotcommit.net/?p=51


------------------
1. Init.ora Parameter "DEFERRED_SEGMENT_CREATION" Reference Note [ID 1216282.1]
2. IMP-00003, ORA-00959 While Importing Data Into Existing Table Of 11gR2 Using Traditional Import [ID 1180873.1]
3. Bug 8795792 - DBMS_METADATA.get_ddl generates wrong keywords for compressed indexes in 11.2 [ID 8795792.8]
4. Exporting Schema In 11.2 Database With 10g Client Gives ORA-1455 [ID 1083330.1]
1293326.1
-----------------

If you see this error, you have just hit one of the new features of 11gR2. Unfortunately, this feature makes the life of the casual DBA a little more complicated than necessary.

I found an interesting debate in the OTN forums about whether this should be considered a feature or a bug, but I very much appreciated the last comment, where Michiel provided a working solution without delving into philosophical matters. I remember doing the same thing when dealing with another EXPDP/IMPDP problem between versions 11.1.0.6 and 11.1.0.7.

---------------------------Solution 1--------------

-- on the source instance
EXPDP ldbo/ldbo@apx1314srv full=Y dumpfile=expdp1314.dmp directory=DATA_DUMP_DIR logfile=expdp1314.log version=10.2

-- on the target instance
IMPDP ldbo/ldbo@apx1314srv full=Y dumpfile=expdp1314.dmp directory=DATA_DUMP_DIR logfile=expdp1314.log version=10.2


---------------------------Solution 2--------------

For 11.2.0.1 one way to avoid the issue is to allocate the segments manually before the export. This can be done with

alter table <table_name> allocate extent;

select 'alter table '||table_name||' allocate extent;' from user_tables where segment_created = 'NO';



set autocommit on;
set line 124;

select 'ALTER TABLE ' || TABLE_NAME || ' ALLOCATE EXTENT;' from user_tables where SEGMENT_CREATED = 'NO';
select 'ALTER TABLE '|| TABLE_NAME || ' ALLOCATE EXTENT;' from user_lobs where SEGMENT_CREATED = 'NO';
select 'ALTER INDEX '|| INDEX_NAME || ' ALLOCATE EXTENT;' from user_indexes where SEGMENT_CREATED = 'NO';



---------------Solution 3--------

SQL> alter system set deferred_segment_creation=false;



===================
From 11g Release 2, tables created without rows do not create any segments by default. If you want to change this behaviour, set the spfile parameter "deferred_segment_creation" to FALSE. In any case, the people installing PeopleSoft, SAP, Siebel and other thousand-table data models really do appreciate this new feature.

How does it work?

If the INITIAL extent of a table is 15 megabytes in size but the table is empty, the table won't ask for that 15 MB of space. As soon as the first row of data is inserted (even a few bytes), Oracle applies the table's storage settings before inserting that row, making the table the actual owner of that space. You can create a 100GB table in a 100MB tablespace as long as the table stays empty, because the space is only claimed by the table when it receives its first row.
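
A minimal demo of this behaviour (a sketch; the table name T_DEMO is made up, and deferred segment creation requires Enterprise Edition):

CREATE TABLE t_demo (id NUMBER) SEGMENT CREATION DEFERRED;

-- no segment yet
SELECT segment_created FROM user_tables WHERE table_name = 'T_DEMO';   -- NO
SELECT COUNT(*) FROM user_segments WHERE segment_name = 'T_DEMO';      -- 0

INSERT INTO t_demo VALUES (1);
COMMIT;

-- the first row triggers the segment allocation
SELECT segment_created FROM user_tables WHERE table_name = 'T_DEMO';   -- YES
SELECT COUNT(*) FROM user_segments WHERE segment_name = 'T_DEMO';      -- 1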

========================

In 11.2 the initialization parameter DEFERRED_SEGMENT_CREATION controls whether segment space is allocated at the time of segment creation. It is set to TRUE by default on both Enterprise Edition and Standard Edition, but on Standard Edition the parameter has no effect because the feature is not enabled. The initialization parameter behaviour can be overridden at the table level with
SQL> CREATE TABLE x(a number) SEGMENT CREATION IMMEDIATE;
or
SQL> CREATE TABLE x(a number) SEGMENT CREATION DEFERRED;
But on standard edition this would still give an error.
SQL>  CREATE TABLE x(a number) SEGMENT CREATION DEFERRED;
CREATE TABLE x(a number) SEGMENT CREATION DEFERRED
*
ERROR at line 1:
ORA-00439: feature not enabled: Deferred Segment Creation
The problem happens when exporting empty tables from an Enterprise Edition database to a Standard Edition database. With the default behaviour on Enterprise Edition, tables that have no rows have no segments, whereas on Standard Edition tables are always created with segments.

In 11.2.0.1, importing tables with no segments from Enterprise to Standard Edition throws an error, while in 11.2.0.2 the tables get created with segments without an error.


========================


Oracle 11g2 does NOT export empty tables when using exp.exe

http://tsells.wordpress.com/2011/02/10/oracle-11g2-does-not-export-empty-tables-when-using-exp-exe/

Out of habit I have been using the older import / export utilities in lieu of the Data Pump commands that Oracle recommends.  As of 11gR2 I can no longer do this.  exp.exe no longer exports tables that do not have an entry in DBA_SEGMENTS.  This is due to a new feature named "deferred segment creation".

The syntax for the new Data Pump commands is different and requires some additional setup / maneuvering on the Oracle server itself.  Files being imported must now be placed in the Data Pump directory.  This can be found by running the following query against the database server.

select owner, directory_name, directory_path from dba_directories
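
If you need a different location, a directory object can be created and granted like this (a sketch; the DP_DIR name, path and grantee are examples):

CREATE OR REPLACE DIRECTORY dp_dir AS 'E:\datapump';
GRANT READ, WRITE ON DIRECTORY dp_dir TO ldbo;

-- then reference it on the expdp/impdp command line with directory=DP_DIR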

================


The cause is a new feature of Oracle 11.2: deferred segment creation.
Newly created tables that have never had any rows have no corresponding row in DBA_SEGMENTS, so they do not get exported by the (deprecated) exp utility. Use expdp instead.


================

Don't forget: deferred segment creation also applies to indexes and LOBs, and indexes inherit the table's characteristics.
You can also query the SEGMENT_CREATED column of the USER_TABLES, USER_INDEXES, or USER_LOBS views. For non-partitioned tables, indexes and LOBs, this column shows YES if the segment has been created.
However, you do not need to take care of extent allocation for LOBs and LOB indexes, since they are allocated automatically by the database once you allocate the corresponding table extent.

================Export was always failing due to the empty tables. Here is my dynamic sql======

DECLARE
  CURSOR cur IS
    SELECT 'ALTER TABLE ' || table_name || ' ALLOCATE EXTENT ' vSQL
      FROM user_tables
     WHERE segment_created = 'NO';
BEGIN
  FOR c IN cur LOOP
    BEGIN
      EXECUTE IMMEDIATE c.vSQL;
    EXCEPTION
      WHEN OTHERS THEN
        DBMS_OUTPUT.put_line (SUBSTR (SQLERRM, 1, 250));
    END;
  END LOOP;
END;
/

ORA-24330 OCI INTERNAL ERROR

Solution: alter system set client_result_cache_size=0 scope=spfile;




Effect of client_result_cache_size On Client Applications Or OCI Based Applications [ID 1300727.1]


Applies to:

Oracle Server - Enterprise Edition - Version: 11.2.0.2 and later   [Release: 11.2 and later ]
Information in this document applies to any platform.
Symptoms

- Receiving error 'ORA-24330: OCI INTERNAL ERROR' while running back end Procedures from SQL*Plus and also from the Application connected to the Database - Oracle 11g 11.2.0.2.

- Unable to log in through Application.
Changes

- Database was upgraded from Oracle 11g Release 2 - 11.2.0.1 to 11.2.0.2. The client_result_cache_size was set to 1GB.

- Parameter client_result_cache_size is not on by default and the recommended value is 0.


Cause

client_result_cache_size is supposed to reset once the client process terminates, but in cases such as transactions with millions of rows the cache fills up very quickly and does not reset itself while the client process is still running; during that time it fails with errors like 'ORA-24330: OCI INTERNAL ERROR'.

If client_result_cache_size is set to a non-zero value, it means client result caching is turned on with that value as a hard limit for as long as the client session is active. For example, if the value is set to 1GB and the client process is still actively processing millions of rows, it will get 'ORA-24330: OCI INTERNAL ERROR' when the cache reaches the 1GB hard limit.

So for a higher volume of transactions the recommended value for client_result_cache_size is zero, to avoid all such issues.

BUG 8480915 - ORA-24330: OCI INTERNAL ERROR IN ODP.NET STRESS WHEN CLIENT RESULT CACHE ENABLED

Base BUG 8418934 - KPOQGINV: CLIENT CACHING GOT DISABLED AFTER BADREGID ERROR
Solution

'client_result_cache_size' is an integer initialization parameter; to change its setting:

1. Setting client_result_cache_size = 0 in the initialization (init.ora) parameter file followed by an instance restart is the most reliable way.

client_result_cache_size=0 needs to be set in the init.ora, and bouncing the DB will effectively set that value.

If client_result_cache_size is set to 0, client result caching is off and nothing will be cached.

(OR)

Client caching can be removed entirely by commenting out the client_result_cache_size setting in the init.ora or by removing that entry altogether.

Once the DB comes back on, you can check the value from SQL*Plus by doing:

SQL> show parameter CLIENT_RESULT_CACHE_SIZE;

2. Alternatively;

If spfile is used;

i. connect / as sysdba to database
ii. alter system set client_result_cache_size=64000 scope=SPFILE; -- value set to 64KB, please note that this is just a sample value.
iii. shutdown the database
iv. startup the database

Once the DB comes back on, you can check the value from SQL*Plus by doing:

SQL> show parameter CLIENT_RESULT_CACHE_SIZE;

'client_result_cache_size' is stored on the client side.  Once that limit has been reached, all subsequent connections will fail.  That is why it is recommended to set it to 0 for a high volume of transactions, in which case there will be no caching.  Even if there is a limit, it gets reset only when the database is bounced.
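
To see how much of the client cache is actually in use, the client statistics view can be queried while OCI clients with caching enabled are connected (a sketch; run it as a suitably privileged user):

SELECT cache_id, name, value
  FROM client_result_cache_stats$
 ORDER BY cache_id, stat_id;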

Windows 7 64 bit (unable to make connect to sql data source error# -1)

Error in odbc.dbf : Connectivity error: [Microsoft][ODBC driver for Oracle][Oracle]ORA-12541: TNS:no listener

Solution:
I changed scb1213srv to scb1213 (or some other name) in directy.dbf, odbcad32, and tnsnames.ora. Now it is working.

Cause:
The client IT team creates the ODBC entry using a registry batch file, so scb1213srv may still be present somewhere.
The ODBC entry does not point to the listener address properly.

Steps
Check your PATH and ORACLE_HOME environment variables.
They should point to the location where you installed the client and under which your tnsnames.ora file is located.

LD error money obligation cexist ORA-01422 exact fetch returns more than one requested number of rows

That error occurs in a SELECT ... INTO statement when the query returns more than one row. Check your WHERE clause and make the necessary changes to ensure just one row is returned. There is a predefined exception for that error: TOO_MANY_ROWS. If you know your query could return more than one row in some cases, you should add an exception handler, something like this:

BEGIN
  SELECT xx INTO l_xx
    FROM table_x;
EXCEPTION
  WHEN TOO_MANY_ROWS THEN
    NULL;  -- do something
END;





select distinct oowncode,stkcode
  from compexist
 where firmnumber = 'FGSB-00001'
   and stkcode in (select stkcode
                     from (select distinct stkcode, oowncode
                             from compexist
                            where firmnumber = 'FGSB-00001'
                              and exchange = 'BSE ')
                    group by stkcode
                   having count(*) > 1)





[Microsoft][ODBC driver for Oracle][Oracle]ORA-20014: Nse Bill Data Generation cannot take place. ~-1422~ORA-01422: exact fetch returns more than requested number of rows~ORA-06512: at "LDBO.PK_NSEBI

Connectivity error: [Microsoft][ODBC driver for Oracle][Oracle]ORA-20014: Nse Bill Data Generation cannot take place. ~-1422~ORA-01422: exact fetch returns more than requested number of rows~ORA-0651


Duplication of oowncode in accountaddressdetails

                    SELECT COUNT(*) INTO LNTOTALRECORDS FROM ACCOUNTADDRESSDETAIL ACCOUNTADDRESSDETAIL
                     WHERE ACCOUNTADDRESSDETAIL.FIRMNUMBER=ICFIRMNUMBER AND ACCOUNTADDRESSDETAIL.OOWNCODE=CLEXTRA.OOWNCODE ;


Nse Bill Data Generation cannot take place. ~-1422~ORA-01422 exact fetch returns more than requested number of rows~ORA-06512 at LDBO.PK_NSEBI



SPLOGIN IS PRESENT IN LDBO AND SYS USER








The primary key is made up of

FIRMNUMBER
CODE

but OOWNCODE can be duplicated.
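
A quick check for duplicated OOWNCODE values, as a sketch against the ACCOUNTADDRESSDETAIL table referenced above:

SELECT firmnumber, oowncode, COUNT(*)
  FROM accountaddressdetail
 GROUP BY firmnumber, oowncode
HAVING COUNT(*) > 1;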

table alias uncommitted changes

http://www.mofeel.net/110-microsoft-public-fox-programmer-exchange/5596.aspx
http://support.microsoft.com/KB/249836


Table buffer for alias "name" contains uncommitted changes. (1545)

The problem manifests itself when I load data, modify it and load the next record. When that happens it throws Error #1545 Table buffer for alias "My_Cursor" contains uncommitted changes. I have been able to solve the problem by closing the cursor before I reload the new data. I missed this required step before.


---solution

create index on that vfp table

---------CHECK FOXPRO VIEW

MAIN.DBC
MODI CONNECTION
MODI VIEW

alter system flush buffer_cache;
alter system flush shared_pool;


----------

It seems like you have the table set to Buffering = 5 (table buffering), so when you skip, it will not save the record. Either change the buffering to 3 (record buffering), or issue a Tableupdate(). The record is not being saved, that's all.


  SET MULTILOCKS ON
   =CURSORSETPROP("Buffering",3)


-------------

Appending a record to a view with buffered changes is allowed and would just add that record to the buffer, depending on the buffer mode.

Have you SET MULTILOCKS ON?
Which buffer mode do you use?
If you are using remote views, only buffer modes 3 and 5 are allowed, so only optimistic buffering is possible.

---------------------

When I fill a row/column in the grid (DETAILBILL table), it displays an error:

CursorSetProp("Buffering", THIS.oldBuffering, aTablesUsed[m.i,1])   && optimistic table buffering
Error: 1545
Table buffer for alias "DETAILBILL" contains uncommitted changes
Line 67

My code in form destroy event :
SELECT DETAILBILL
TABLEUPDATE(.T.)

You cannot change the buffering mode while there are uncommitted changes in the buffer. These changes are saved to disk when you issue TABLEUPDATE(), but your TABLEUPDATE() is probably issued after the CursorSetProp(). Saving changes made to your data in the Destroy event is too late.


In versions prior to Visual FoxPro 9, using a SQL SELECT statement meant that the results were always pulled from disk. This meant that if you wanted to query uncommitted changes from a buffered table, you were forced to use procedural commands. Now it's possible to specify for each table in a SELECT statement whether to read from the disk or from the local buffer using SET SQLBUFFERING and SELECT ... WITH (Buffering = <lexpr>).

Some examples of how to use WITH (BUFFERING ...) include:

SELECT * FROM Customer WITH (BUFFERING = .t.)

SELECT * FROM Orders WITH (BUFFERING = lUseBuffer)

SELECT DISTINCT c.city, o.shipcity ;
  FROM customers C WITH (BUFFERING=.T.) ;
  JOIN orders O WITH (BUFFERING=.T.) ;
    ON c.customerID = o.customerID

ORA-22288 File Or LOB Operation FILEOPEN Failed On Windows2008 R2 OS SAN Attached Drive [ID 1504521.1]

Applies to:
Oracle Database - Enterprise Edition - Version 11.2.0.2 and later
Microsoft Windows x64 (64-bit)
Symptoms
Error
ORA-22288: file or LOB operation FILEOPEN failed on Windows2008 R2 OS SAN attached drive
Directory object refers to a directory which is a LUN on the netapp.
Re-pointing the directory object to a folder on the server's C:\ drive, with the same permissions as the directory on the LUN, resolves the problem.
Cause
Same symptoms described in
Bug 12667656 - DBMS_LOB.FILEOPEN() FAILS WITH ORA-22288 ON WINDOWS 2008 R2 LUN
which is closed as not a BUG
A LUN may be implemented via junctions/symlinks on Windows.
Using symlinks/junctions with BFILE is disabled for security reasons.
This restriction was introduced in unpublished Bug 2262685.
Solution
1. Use the new "disable_directory_link_check" access parameter, available in 11.2.0.3 and later versions. This
access parameter disables the additional checks for junctions.
For this, upgrade to the latest 11.2.0.2 bundle patch or to 11.2.0.3 to get the new feature before creating an
external table with the "disable_directory_link_check" access parameter.
-- OR --
2. Disable the check and allow links by turning on the hidden internal parameter "_kolfuseslf" in a DBA (SYS)
session, i.e. run
SQL> alter session set "_kolfuseslf" = true;
before executing the previously failing code/scripts





alter session set "_kolfuseslf" = true;


alter system set "_kolfuseslf" = true;
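
For option 1, a minimal sketch of an external table using that access parameter (the table, directory object and file names are made up; check the exact clause placement against the 11.2.0.3 documentation):

CREATE TABLE ext_demo (col1 VARCHAR2(100))
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY san_dir               -- directory object pointing at the LUN
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    DISABLE_DIRECTORY_LINK_CHECK          -- skip the junction/symlink check
    FIELDS TERMINATED BY ','
  )
  LOCATION ('demo.csv')
);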


ora-600 [16513] [1403]

http://www.sql.ru/forum/377566/vopros-pro-shutdown-abort

ora-600 [16513] [1403] [20]


SQL> select count(*) from obj$;

  COUNT(*)
----------
      6296

SQL> delete from obj$;

6296 rows deleted.

SQL> shutdown abort
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area   135760720 bytes
Fixed Size                     731984 bytes
Variable Size               104857600 bytes
Database Buffers             29360128 bytes
Redo Buffers                   811008 bytes
Database mounted.
ORA-01092: ORACLE instance terminated. Disconnection forced



After all objects are created from bootstrap$, at the first call:
=====================
PARSING IN CURSOR #5 len=52 dep=1 uid=0 oct=3 lid=0 tim=2701521679320 hv=782031521 ad='84756cb8'
select ctime, mtime, stime from obj$ where obj# = :1
END OF STMT
PARSE #5: c=0, e=966, p=0, cr=0, cu=0, mis=1, r=0, dep=1, og=0, tim=2701521679306
BINDS #5:
 bind 0: dty=2 mxl=22(22) mal=00 scl=00 pre=00 oacflg=08 oacfl2=1 size=24 offset=0
   bfp=ffffffff7cc6a138 bln=22 avl=02 flg=05
   value=4
EXEC #5: c=10000, e=1340, p=0, cr=0, cu=0, mis=0, r=0, dep=1, og=4, tim=2701521681024
WAIT #5: nam='db file sequential read' ela=109 p1=1 p2=94 p3=1
WAIT #5: nam='db file sequential read' ela=78 p1=1 p2=1291 p3=1
Begin cleaning out block ...
Block header dump: 0x0040050b
 Object id on Block? Y
 seg/obj: 0x24  csc: 0x00.2452  itc: 2  flg: O  typ: 2 - INDEX
     fsl: 2  fnx: 0x0 ver: 0x01

 Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x000a.00a.00000004  0x008000ba.0002.01  CB--    0  scn 0x0000.000021c0
0x02   0x0002.013.00000378  0x00800025.0075.0f  ----  298  fsc 0x0fd9.00000000

- [skipped] -

----- end of leaf block dump -----
Found all committed transactions

- [skipped] -

Block cleanout record, scn: 0x0000.001b3c72 ver: 0x01 opt: 0x01, entries follow...
  itli: 2  flg: 2  scn: 0x0000.00000001
Block header dump: 0x0040050b
 Object id on Block? Y
 seg/obj: 0x24  csc: 0x00.1b3c72  itc: 2  flg: O  typ: 2 - INDEX
     fsl: 0  fnx: 0x0 ver: 0x01

 Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x000a.00a.00000004  0x008000ba.0002.01  CB--    0  scn 0x0000.000021c0
0x02   0x0002.013.00000378  0x00800025.0075.0f  C---    0  scn 0x0000.00000001

- i.e. it just commits, with no checks, and as a result of the

... clean out dump

we get an internal or fatal error:
ORA-00600: internal error code, arguments: [16513], [1403], [4], [], [], [], []

- ORA-1403 - no data found


ORA-00350 log 3 of instance beb1314 (thread 1) needs to be archived ORA-00312 online log 3 thread 1 'E:\BEBD1314\REDO03.LOG'

ORA-00350 ORA-00312


SQL> alter database drop logfile group 3;
alter database drop logfile group 3
*
ERROR at line 1:
ORA-00350: log 3 of instance beb1314 (thread 1) needs to be archived
ORA-00312: online log 3 thread 1: 'E:\BEBD1314\REDO03.LOG'




16:36:22 SYS@eqdd011g SQL> ALTER DATABASE CLEAR LOGFILE GROUP 3;
ALTER DATABASE CLEAR LOGFILE GROUP 3
*
ERROR at line 1:
ORA-00350: log 3 of instance eqdd011g (thread 1) needs to be archived
ORA-00312: online log 3 thread 1: '/u01/app/oracle/admin/eqdd011g/oradata/eqdd011g/redo03.log'


Elapsed: 00:00:00.17
16:38:00 SYS@eqdd011g SQL> ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;

Database altered.

Elapsed: 00:00:02.69
16:38:13 SYS@eqdd011g SQL> alter database drop logfile group 3;

Database altered.


alter database open;

ORA-16014 ORA-00312

----------ORA-16014 ORA-00312

Archive process error: ORA-16014: log 3 sequence# 29808 not archived, no available destinations
ORA-00312: online log 3 thread 1: 'E:\BEBD1112\REDO03.LOG'



That would happen if the destination filesystem is full (i.e. at 100%) and Oracle cannot write the archivelog to that filesystem.


The general solution for this error is:
- Check the disk space (see the sketch below)
- Check whether the archive location exists
- Try clearing the unarchived log group:
   alter database clear unarchived logfile group <<No>>;
- Bounce the DB
- Bounce the server
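
A quick way to run the first two checks from SQL*Plus (a sketch; the recovery-area views only apply when the FRA is in use):

-- where are we archiving to, and is the FRA full?
show parameter log_archive_dest
show parameter db_recovery_file_dest

SELECT name, space_limit/1024/1024 AS limit_mb, space_used/1024/1024 AS used_mb
  FROM v$recovery_file_dest;

SELECT * FROM v$flash_recovery_area_usage;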



---------------------------Archive Log Creation Fails With Errors ORA-16014, ORA-00312 [ID 1373314.1]

Symptoms

When attempting to run "alter system switch logfile;", the session hangs and the following errors are reported in the alert log and trace file:

Alert log :
Sun Oct 30 03:01:55 2011
ARC1: Failed to archive log 3 thread 3 sequence 122674
Sun Oct 30 03:01:55 2011
Errors in file /apps/oracle/9.2.0/admin/cwld/bdump/cwld_arc1_889040.trc:
ORA-16014: log 3 sequence# 122674 not archived, no available destinations
ORA-00312: online log 3 thread 3: '/data/oradata/cwld/redo3.log'


Changes

In our example, the database was moved (to new SAN).

Cause

In this case, the issue is caused by an inaccessible/incorrect archiving directory.

SQL> show parameter log_archive_dest_1

NAME                    TYPE    VALUE
-------------------- -------   ------------------------------
log_archive_dest_1  string     location=/data3/oradata/cwld/archive/


However, that location does not exist and the archive log cannot be created.

Solution

Correct the archive path to match the new location, for example;

SQL> alter system set log_archive_dest_1 = 'location=/data1/oradata/archive/';

Or

Remove the old directory path on the operating system and create the new directory path to match the Oracle archiving location.



------------------------------------------------------------------------------------------

Archive process error: ORA-16014: log 3 sequence# 53538 not archived, no available destinations
ORA-00312: online log 3 thread 1: 'E:\BEBD1314\REDO03.LOG'


alter system set db_recovery_file_dest_size=200m;


------------------------------

select Group#,Member from V$logfile;

select group#, status,bytes/1024/1024 from v$log;

 alter database add logfile group 4 ('E:\BEBD1314\REDO04.LOG') size 100m;
 alter database add logfile group 5 ('E:\BEBD1314\REDO05.LOG') size 100m;
 alter database add logfile group 6 ('E:\BEBD1314\REDO06.LOG') size 100m;


ALTER SYSTEM CHECKPOINT GLOBAL;

select group#, status from v$log;

alter database drop logfile group 1;

alter database drop logfile group 2;

alter database drop logfile group 3;

alter system switch logfile;

alter system switch logfile;

select group#, status,bytes/1024/1024 from v$log;

Note: You cannot drop the current logfile group; it must be in the INACTIVE state.





ORA-00257 Archiver Error, Connect Internal Only Until Freed and ORA-16014 and ORA-00312 [ID 376923.1]


I have stopped archive logging to stop the error.

Symptoms

Users cannot connect to the database :

ORA-00257:archiver error, connect internal only until freed
ORA-16014:log 2 sequence# 231 not archived, no available destinations
ORA-00312:online log 2 thread 1:'/u02/oradata/wlbranch/redo02.log'





Cause

The flash_recovery_area is full.

SQL>  archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive Destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 231
Next log sequence to archive 231
Current log sequence 233



 SQL> select group#,status archived from v$log;
    GROUP# ARCHIVED
    ------------  ------------------------
    1                INVALIDATED
    2                INACTIVE
    3                INACTIVE


SQL> archive log all;
ORA-16020: less destinations available than specified by
LOG_ARCHIVE_MIN_SUCCEEDED_DEST


The only destination allowed (log_archive_min_succeed_dest=1) is not able to perform the archiving.


Solution

1. Either allow more space in the DB_RECOVERY_FILE_DEST with the DB_RECOVERY_FILE_DEST_SIZE parameter  :

SQL> alter system set db_recovery_file_dest_size=3G ;
2. Or, to avoid the situation once the 3GB is full, set the following parameters so that when dest1 is full, archiving is automatically performed to the alternate dest2:

log_archive_dest_1='LOCATION=use_db_recovery_file_dest NOREOPEN ALTERNATE=LOG_ARCHIVE_DEST_2'
log_archive_dest_2='LOCATION=/other_dest_for_archiving'
log_archive_dest_state_1='enable'
log_archive_dest_state_2='alternate'
db_recovery_file_dest='/u01/app/oracle/product/10.1.0/db_1/flash_recovery_area'
db_recovery_file_dest_size=2G




Problem while archiving the redo Log

https://forums.oracle.com/thread/977515

Hi all,
I am having a few issues on my server.
I get the following errors in the Oracle alert log.
There are several errors:
1) No space left on device
2) ARC0: I/O error 19502 archiving log 1 to
'/oracle/admin/SNM/arch/arch_1_393_668727286.arc'
ARCH: Archival stopped, error occurred. Will continue retrying
3) ORA-16014: log 1 sequence# 393 not archived, no available destinations


Also please find the output of the v$log query:
SQL> select * from v$log;

GROUP#  THREAD#  SEQUENCE#       BYTES  MEMBERS  ARC  STATUS    FIRST_CHANGE#  FIRST_TIM
------  -------  ---------  ----------  -------  ---  --------  -------------  ---------
     1        1        405  1073741824        1  NO   CURRENT        60275455  20-OCT-09
     2        1        403  1073741824        1  NO   INACTIVE       59987366  19-OCT-09
     3        1        404  1073741824        1  NO   INACTIVE       60125083  20-OCT-09




Check the mount point ... is it full?

If yes, then

change the destination:

alter system set log_archive_dest_1='location=/opt/oracle/backup/arch' scope=both;

or
delete the archive logfiles

or (but this is not a very good idea):


SQL> ALTER SYSTEM ARCHIVE LOG STOP;

System altered.



SQL> ALTER SYSTEM ARCHIVE LOG start;


ALTER SYSTEM SWITCH LOGFILE Statement hangs

ALTER SYSTEM SWITCH LOGFILE Statement hangs
ALTER SYSTEM SWITCH LOGFILE hangs:
Step 1: Login as sysdba.
Step 2: Enable log_archive_dest_state_1 Parameter.
SQL>alter system set log_archive_dest_state_1=enable scope=spfile;
System altered.
SQL> ALTER SYSTEM SWITCH LOGFILE;
System altered.

CreateFile error 32 when trying set file time oracle 11g windows 7

Today I was going to install the Oracle 10g Client and ODAC11gR2 (Oracle Data Provider for .NET) on Microsoft Windows XP Professional Service Pack 2. I got the following error message in the console:

CreateFile error 32 when trying set file time

After searching I found that this error message was caused by my antivirus, in my case Kaspersky. After disabling the antivirus I was able to install the Oracle Client without any warning or error message.


ora-02067 transaction or savepoint rollback required


Disable the trigger(s) on the affected table, retry the operation, and then re-enable them.
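
A sketch of that workaround (the table name is a placeholder):

ALTER TABLE my_table DISABLE ALL TRIGGERS;

-- retry the failing DML here

ALTER TABLE my_table ENABLE ALL TRIGGERS;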

ORA-00333 redo log read error block





PFILE

_ALLOW_RESETLOGS_CORRUPTION = true
 UNDO_MANAGEMENT = MANUAL


RENAME SPFILE


recover database until cancel;
or
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;




alter database open resetlogs;

--------------------UNDO RECREATION

Now the undo is corrupted:
ORA-00600: internal error code, arguments: [4194]
Drop the corrupted undo tablespace and create a new undo tablespace.

But the database is not able to open due to the undo corruption.

Add the following to the pfile:
undo_management = manual
event = '10513 trace name context forever, level 2'


startup restrict pfile=<initsid.ora>


CREATE UNDO TABLESPACE UNDOTBS2 DATAFILE 'd:\rkdatabase\UNDOTBS02.ORA' SIZE 500M REUSE AUTOEXTEND ON;

shut immediate

change undo_tablespace=UNDOTBS2 in the parameter file

startup

DROP TABLESPACE UNDOTBS1 INCLUDING CONTENTS AND DATAFILES;


REMOVE THE ADDED PARAMETERS FROM THE PFILE

Dynamic change of CPU_COUNT causes ORA-600 [KsrpubwaitDeadlock] ORA-2097 Instance crash

Dynamic change of CPU_COUNT causes ORA-600 [Ksrpubwait:Deadlock] ORA-2097 Instance crash
Problem Description
A dynamic change of the CPU_COUNT parameter triggers Oracle bug ORA-600 [ksrpubwait:deadlock]. For example, after issuing the command "alter system set cpu_count=5;" it fails with the following errors.

ORA-600 [ksrpubwait:deadlock], [parameters to cluster db instances - broadcast channel]
ORA-2097: parameter cannot be modified because specified value is invalid

And then instance crashes.

If you look at stack trace you will see,
ksrpubwait ksrpublish kspbcast kspset0 kxfpCPUCountUpdate ksbcpcb

Cause of the Problem
These errors are caused by Oracle Bug 7535429.

Solution of the Problem
Solution 01: Bug 7535429 is fixed in version 11.2, so upgrading your Oracle database to 11gR2 will solve the problem.
There is also a one-off patch which can solve the problem. Check the availability of one-off Patch 7535429 for your platform on MetaLink.

Solution 02: Set CPU_COUNT explicitly to the new value.

Solution 03: Starting the instance from a pfile instead of an spfile will also avoid the problem.

Solution 04: Set PARALLEL_MAX_SERVERS explicitly to the current value instead of relying on the default value.

ORA-600 [6006] ORA-600 [6856]

While startup, instance terminated after ORA-600 [6006] ORA-600 [6856]
Problem Description
While starting up the instance, it fails with the following errors:

ORA-00600: internal error code, arguments: [6006], [1], [], [], [], [], [], []
ORA-00600: internal error code, arguments: [6856], [0], [60], [], [], []

and then SMON abnormally crashes the database instance.

Cause of the Problem
These are Oracle bugs; the ORA-600 [6006] and ORA-600 [6856] errors occur when undo segments try to roll back a failed transaction and cannot.

For error "ORA-00600: internal error code, arguments: [6006], [1], [], [], [], [], [], []" ,
Oracle is undoing an index leaf key operation. If the key is not found, ORA-00600 [6006] is logged.

For error "ORA-00600: internal error code, arguments: [6856], [0], [60], [], [], []" , SMON is trying to recover a dead transaction. But the undo application runs into an internal error (trying to delete a row that is already deleted).

Solution of the Problem

1) Review the trace files and look for the object(s) involved. If the trace file does not have a SQL statement, search on the following: "block dump header"

2) In the block header there will be a seg/obj = hex value. Convert the hex to dec and this will give you the data_object_id.

3) The alert.log may also show the affected object, for example:

ORACLE Instance ORCL (pid = 8) - Error 600 encountered while recovering transaction (9999, 36) on object 45879.

SQL>select owner, object_name from dba_objects where data_object_id = object#;

This will be the object you need to work with.
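
For step 2, the hex-to-decimal conversion can be done in SQL. A sketch tied to the alert.log example above (0xB337 is simply the hex form of object 45879):

-- convert the hex seg/obj value from the block dump to decimal
SELECT TO_NUMBER('B337', 'XXXXXXXX') AS data_object_id FROM dual;

SELECT owner, object_name
  FROM dba_objects
 WHERE data_object_id = 45879;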


To implement the solution:

1. shutdown the oracle instance.
SQL> shut immediate

2. set event - event="10513 trace name context forever, level 2" (this event and setting disables transaction recovery which was initiated by SMON). Open the pfile and add the following line inside pfile.

event="10513 trace name context forever, level 2"

3. startup instance
SQL> startup

4.
- If the object is an index - drop and recreate.
- If the object is a table - drop / export / Create Table as Select (CTAS) to change the object ID

5. shutdown the instance
SQL> shut immediate

6. remove the event
Open the pfile and remove the event line that was added.

7. startup the instance
SQL> startup

ORA-600 [17147] ORA-600 [Kghlkrem1] ORA-7445 kghlkremf() database crash

ORA-600 [17147] ORA-600 [Kghlkrem1] ORA-7445 kghlkremf() database crash
During normal database activity, in alert log the following ORA-00600 error occurred and it causes database crashed.
ORA-00600: internal error code, arguments: [17147], [0xC0000003E4CE10A5], [], [], [], [], [], []
ORA-00600: internal error code, arguments: [KGHLKREM1], [0xC0000003E4CE10B0], [], [], [], []
ORA-7445: exception encountered: core dump [kghlkremf()+33] [SIGSEGV]
If you look at the trace file you no longer will see any sql that causes this bug.

The call stack is as follows:

kghfrmrg kghfre qsmqktcc ktcdso ktcrcm ktdcmt k2lcom k2send xctctl xctcom_with_options opicom

or

k2send xctctl xctcom_with_options opicom opiodr

or

kghalo ktcccadd kwqidracbk kwqidcpmc kwqidafm0 kwqididqx kpoaqdq

or

kghadd_reserved_extent kghget_reserved_extent kghgex kghfnd

Cause of the Problem
The problem occurs due to an Oracle bug involving abnormal termination of queries that use Bloom filters. It manifests itself as memory corruption: the memory was freed, reallocated to some other client, and corrupted while slaves were still writing into it.

Solution of the Problem
Solution 01:
This bug is fixed in oracle database version 10.2.0.4. So apply 10.2.0.4 patchset to solve this problem.

Solution 02:
As a workaround, you can set the Oracle hidden parameter _bloom_filter_enabled to false.
This parameter can be set dynamically:

If you are using an spfile, set it as:
SQL> connect / as sysdba
SQL> alter system set "_bloom_filter_enabled"=false scope=both;

If you use a pfile to start up your database, remove the scope clause, that is:
SQL> alter system set "_bloom_filter_enabled"=false;

ORA-00600 [730] [SPACE LEAK] OR ORA-600 [LIBRARYCACHENOTEMPTYONCLOSE] during shutdown

ORA-00600 [730] [SPACE LEAK] OR ORA-600 [LIBRARYCACHENOTEMPTYONCLOSE] during shutdown
Problem Description
While shutting down oracle database the following error occurred.

In oracle 10g and 11g,
ORA-00600: internal error code, arguments: [730], [4100736], [space leak], [], [], [], [], [], [], [], [], []

If you observe stack trace it will look like,
ksesic2 <- ksmshu <- opistp_real <- opistp <- opiodr <- ttcpip <- opitsk <- opiino <- opiodr <- opidrv <- sou2o <- main <- start

 In oracle 10g the error looks like,
ORA-00600: internal error code, arguments: [LibraryCacheNotEmptyOnClose], [], [] ,[], [], [], [], []

 If you observe stack trace it will look like, kglshu <- kqlnfy <- kscnfy <- ksmshu <- opistp_real <- opistp <- opiodr <- ttcpip <- opitsk <- opiino <- opiodr <- opidrv <- sou2o <- opimai_real


Cause of the Problem
The error occurs due to an Oracle bug, which Oracle tracks as BUG 7572335. It happens during database shutdown when child cursors had previously been marked as kept in the shared pool. Note that this is a non-corruptive error.

Solution of the Problem
Solution 01: This bug is fixed in Oracle Database 11gR2, so upgrading to Oracle Database 11.2 or higher is one solution.

Solution 02: If you are using Oracle Database 10gR2, the bug is present up to patchset version 10.2.0.4 and is fixed in 10.2.0.5. You can upgrade to 10.2.0.5 by applying the patchset, Patch 8202632.

Solution 03: If it is available for your platform and version, download and apply the one-off fix, Patch 7572335.

ORA-00600 internal error code, arguments [4819]

ORA-00600: internal error code, arguments: [4819]
Problem Description
While reviewing the database alert log, an ORA-600 [4819] error was reported, followed by ORA-00308 and ORA-27037 errors as shown below.

ORA-00600: internal error code, arguments: [4819], [], [], [], [], [], [], []
ORA-00308: cannot open archived log '/home/oracle/archive/arc_ARJU_001.arc'
ORA-27037: unable to obtain file status

This problem started to happen because the database was recently switched from ARCHIVELOG mode to NOARCHIVELOG mode.

Cause of the Problem
The problem arises because the LOG_ARCHIVE_DEST_n parameter still points to an archive log destination and the default value of the LOG_ARCHIVE_DEST_STATE_n parameter is 'ENABLE'. Based on these two settings, the database still attempts to open the required archive log file during transaction recovery.

Solution of the Problem
To resolve this problem, mark the destination as deferred and null out the archive destination, That is
1) Login as sysdba.
SQL> connect / as sysdba

2) Set the archive_dest_state_1 to defer.
SQL> alter system set log_archive_dest_state_1 = defer scope=both;

3) Set log_archive_dest_1 to null.
SQL> alter system set log_archive_dest_1 = '' scope=both;

ORA-00600 internal error code, arguments [13013] [5001]

http://oraclemamukutti.blogspot.in/2011/06/ora-00600-13013-5001.html




ORA-600 [13013] [a] [b] [c] [d] [e] [f]


This format relates to Oracle Server 8.0.3 to 10.1

Arg [a] Passcount
Arg [b] Data Object number
Arg [c] Tablespace Relative DBA of block containing the row to be updated
Arg [d] Row Slot number
Arg [e] Relative DBA of block being updated (should be same as [c])
Arg [f] Code




ORA-00600: internal error code, arguments: [13013], [5001]

ORA-00600: internal error code, arguments: [13013], [5001], [268], [8447338], [5], [8447338], [17], [], [], [], [], []


Select object_name,object_type,owner from dba_objects where data_object_id=268;


select * from dba_objects where object_name='SMON_SCN_TIME';

select * frOm dba_indexes where table_name='SMON_SCN_TIME';


Select dbms_metadata.get_ddl('INDEX','SMON_SCN_TIME_TIM_IDX','SYS') from dual;

Select dbms_metadata.get_ddl('INDEX','SMON_SCN_TIME_SCN_IDX','SYS') from dual;


 drop index "SYS"."SMON_SCN_TIME_TIM_IDX";


  CREATE UNIQUE INDEX "SYS"."SMON_SCN_TIME_TIM_IDX" ON "SYS"."SMON_SCN_TIME" ("TIME_MP")
  PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
  STORAGE(INITIAL 81920 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "SYSAUX" ;

 drop index "SYS"."SMON_SCN_TIME_SCN_IDX";

  CREATE UNIQUE INDEX "SYS"."SMON_SCN_TIME_SCN_IDX" ON "SYS"."SMON_SCN_TIME" ("SCN")
  PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
  STORAGE(INITIAL 81920 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "SYSAUX" ;


analyze table SYS.SMON_SCN_TIME validate structure cascade online;






======================================================================================================


SQL>Select object_name,object_type,owner from dba_objects where data_object_id=<value reported in argment b>

Once the Object is identified run the following :

The below command check if the table has corruption or not .

SQL> Analyze table <owner>.<table name> validate structure online ;

If this goes fine, the table does not have corruption. Proceed to the next command.

If the above command fails with ORA-1498 go to Section 3


The below command check if table/index has corruption or not

SQL>Analyze table <owner>.<table name> validate structure cascade online ;


If the above command errors out with ORA-1499, it indicates corruption in an index.

Go to Section 2 for resolution.

Run dbverify on the datafile reported in the error.

Arg [c] in the ORA-600 [13013] gives the relative DBA.

For example

ORA-00600: internal error code, arguments: [13013], [5001], [57353], [155254965], [261],[155254965], [17], []


Arg [c] --> rdba-->155254965

Use this value and find the file and block number for this dba

select dbms_utility.data_block_address_file(155254965) Rfile# ,dbms_utility.data_block_address_block(155254965) "Block#" from dual;

RFILE# Block#
---------- ----------
37 65717

You can run dbverify on the datafile with rfile#=37

SQL>Select name from v$datafile where rfile#=37

dbv file=<location of datafile> blocksize=<db_block_size>








Section 2 >How to resolve if a Index is corrupted

You would need to drop and recreate the index

Ensure before dropping the Index

SQL>Spool /tmp/createindex.sql

SQL>Set long 100000000

SQL>Select dbms_metadata.get_ddl('INDEX','<Index name>',<'user name>') from dual

SQL>Spool off

Refer the Following note to Identify the index

Note 563070.1

Title: ORA-1499. Table/Index row count mismatch

Please note that if there is just one index on the table, you can use dbms_metadata.get_ddl to get the script of the index and drop and recreate it.


Section 3> How to resolve if table is corrupted

Option a> Backup is available

Ora-1498 would be reported on the table.

The trace file from Ora-1498 would contain following information

Example

Block Checking:DBA = 1066265208,Block Type = KTB-managed data block --->
file 254,block 911992
data header at 0xc00000010118e07c
kdbchk: avsp(816) > tosp(812)
Block header dump: 0x3f8dea78
Object id on Block? Y
seg/obj: 0x155b452 csc: 0x05.7b2b4ee6 itc: 3 flg: E typ: 1 - DATA
brn: 0 bdba: 0x3f8dde0c ver: 0x01
inc: 0 exflg: 0

Note the DBA value reported in the trace file DBA = 1066265208
Convert this to find the file number and block number having issue

Sql>Select dbms_utility.data_block_address_file('1066265208') from dual ;

DBMS_UTILITY.DATA_BLOCK_ADDRESS_FILE('1066265208')
--------------------------------------------------
254

Sql>Select dbms_utility.data_block_address_block('1066265208') from dual ;

DBMS_UTILITY.DATA_BLOCK_ADDRESS_BLOCK('1066265208')
---------------------------------------------------
911992


Run dbverify on the datafile containing the table

dbv file=<location of datafile> blocksize=<db_block_size>

Corruption would be reported on the block.

If you have an Rman backup do a Rman block recovery (Take the file number and block number affected from the trace file)

Rman> Blockrecover datafile <no> block <block number>

Or

If you have a user managed backup you can do an restore and recovery from good copy of the problematic datafile




Option b> Backup is not available

If no backups are available then use event 10231 at session level to create a salvage table

SQL>alter session set events '10231 trace name context forever, level 10'
SQL> Create table <owner>.salvage_table as select * from <Corrupted table> ;

Rename the Original table to old.

Rename salvage table to Original table name



or

You can use the DBMS_REPAIR script to mark the block soft corrupt.

Note 556733.1

DBMS_REPAIR SCRIPT.


How to resolve when Smon terminates the instance due to Ora-00600[13013]


If Smon is terminating the instance then.

Set event 10513 and startup the database

event="10513 trace name context forever, level 2"


SQL>Startup mount ;

SQL>Show parameter event

SQL>Alter database open ;

Identify the object involved using information from Section 1.



How to resolve the issue if the object involved belongs to system tablespace


System objects are very important.
Please open a Service request with Oracle support if system tables are involved.






Identifying the row having issue when the table is having corruption.


1>Once the error occurs, using the ORA-600 arguments, gather the following information :

ORA-00600: internal error code, arguments: [13013], [5001], [57353], [155254965], [261],
[155254965], [17], []


arg b : [57353] - it is the OBJECT_ID
arg c : [155254965] - it is the block address in Decimal
arg d : [261] - it is the slot number

2. Translate the block address in decimal to a file# and block#

select dbms_utility.data_block_address_file(155254965) Rfile# ,dbms_utility.data_block_address_block(155254965) "Block#" from dual;

RFILE# Block#
---------- ----------
37 65717

The Relative file is 37
The block number is 65717

Find the data_object_id for this object

Argument b is Object_id--> 57353

SQL>Select data_object_id ,object_name,owner from dba_objects where object_id=57353 ;

3. Create the rowid using dbms_rowid.rowid_create(1,DATA_OBJECT_ID,FILE#,BLOCK#,SLOT#)

In this case :

select dbms_rowid.rowid_create(1,57353,37,65717,261) from dual;
DBMS_ROWID.ROWID_C
------------------
AAAOAJAAlAAAQC1AEF

4. You can select from table and identify the record causing the issue

SQL> Select * from <owner>.<table name> where rowid='AAAOAJAAlAAAQC1AEF';




Are there any Known Bug

There is an internal bug, number 5085288, fixed in 11.1.

Details

An ORA-600 [13013] [5001] error can occur on a MERGE command if the DELETE pass encounters a consistent read (CR) error due to the UPDATE pass having previously updated the same row and column.

Check for availability of a one-off patch using Patch 5085288.

Bug 4549673

Abstract: ORA-30926 / OERI:13030 during update
Fixed-Releases: 9208 A204 B106

Details:
ORA-30926 (in Oracle 9i) or ORA-600 [13030] (in Oracle 10g) can occur during update DML. This can occur if an internal ORA-1551 error occurs and is trapped (1551 errors are not visible to client code and are trapped and handled internally).

Fixed In Ver: 11.0
Check for availability of a one-off patch using Patch 5085288.





ORA-00600 internal error code, arguments [kdsgrp1], [], [], [], [], [], [],[]


ORA-00600: internal error code, arguments: [kdsgrp1]. This error may occur on a RAC database when (re)building an index online.

The fixes for this bug are in MetaLink Note 285586.1.

As a workaround, please try to rebuild the index either offline or online, with as little activity as possible on the affected table.

If the error continues, check the table for chained rows.


---------------------------------


If this error comes up, you have to do the following:

use adrci to package the incident. How to do that is described here.
adrci
show incidents
ips pack incident … to …

afterwards check into the trace file in the package.
<the package zip>\diag\rdbms\$ORACLE_SID\$ORACLE_SID\incident\incdir_<some kind of id> - look into the trc files, searching right at the beginning for an SQL statement.

If you find a select with bind variables, check the trace furthermore for "bind" or "variable" to get the variable which has been used in that specific statement.

Find out which tables are involved in the specific SQL and analyze each of them using the following syntax:

ANALYZE TABLE <SCHEMA.TABLE_NAME> VALIDATE STRUCTURE CASCADE;

You should get an error like, this:

ERROR at line 1:
ORA-01499: table/index cross reference failure - see trace file

go to the tracefile, that just popped up in the background dump destination.

In that file, look for the string "mismatch".
You should now arrive at a line, which explains what exactly is wrong.

In my case, I found a mismatch between an index of the table and the table itself.
The tracefile with the mismatch pointed out the object_id of the specific index, which was corrupt.

You should be able to identify it by

select owner, object_name, object_type from dba_objects where object_id=<OBJECT_ID>;

If you rebuild/recreate indexes, the analyze table should run through without any error message.


To rebuild an index, use
alter index SCHEMA.INDEX_NAME rebuild online;

or get the metadata and recreate it using

set long 999999
set lines 160 pages 9999
col text for a150
select dbms_metadata.get_ddl('INDEX','<INDEX_NAME>','<INDEX_OWNER>') as text from dual;
Oracle has no "create or replace index", so drop the existing index first and then run the extracted CREATE INDEX statement.

To check the status of all other tables, an idea is to extract the validate.sql script out of
Note:100419.1 SCRIPT: VALIDATE.SQL to ANALYZE .. VALIDATE STRUCTURE objects in a Tablespace

Then, log as sys user and run this script:

sql> @validate
This will create the "ValidateStructure" package.

After this, open a spool file and enable SERVEROUT

sql> spool myoutput.log
sql> execute dbms_output.enable(1000000);
sql> set serveroutput on
and run:

sql> execute ValidateStructure.TS('TABLESPACE_NAME', TRUE);
This will run until all requested items are scanned.

Errors from the ANALYZE commands are output to DBMS_OUTPUT and so any failing objects are listed when all TABLES / CLUSTERS have been analyzed.
More detailed output from failing ANALYZE commands will be written to the user trace file in USER_DUMP_DEST

Although, initially written for 10.2, this should still work fine against 11g databases.




Note 1413928 - Index corruption wrong results after rebuild index ONLINE

Other terms
Wrong Results, Corruption (Index), ORA-08102 / ORA-01499,
ORA-00600 [kdsgrp1], ORA-00600 [qertbFetchByRowID], ora-01452

Reason and Prerequisites
Wrong results or the following errors can be produced by SQL statements:

ORA-8102 by a delete/update
ORA-1499 by "analyze table validate structure cascade"
ORA-600 [kdsgrp1]
ORA-600 [qertbFetchByRowID]

This is Oracle Bug 7329252 which is fixed in Oracle release 10.2.0.5.

Solution
Install the relevant patch for your platform.
You can find this on SAP Service Marketplace in the software center

ora-00942 ora-06512 sys.dbms_snapshot


Solution

1) compile mv

2) recreate mv

---------
ORA-942 on Materialized View Refresh after Master Mview Recreated on Different Table [ID 867042.1]

exec dbms_mview.refresh(' ','C');

ORA-00942: table or view does not exist
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2563
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2776
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2745
ORA-06512: at "LDBO.SP_MVRKFOUTILIZATION", line 4
ORA-06512: at line 1

In this case, you do not know which of the tables within the materialized view is throwing the ORA-00942 error.  This user was advised to enable SQL Trace (TKPROF) at the session level to determine the exact table that is seeing the ORA-00942 error.


One could solve an occurrence of this problem by calling
ALTER MATERIALIZED VIEW my_mview COMPILE;
before refreshing the mview, but I do not know why this was necessary in this case.


Possibly either of:
a. the source table itself gets DROPped and reCREATEd -- and an MV refresh within the DROP-CREATE cycle would error with ORA-942
b. grants on the source table get DROPped and reCREATEd -- same as above
but you don't know which table.
The MV definition (the QUERY column in USER_MVIEWS/DBA_MVIEWS) shows the query that the MV uses.
(QUERY is a LONG column, so you must SET LONG 1000000 in SQL*Plus before querying the USER_MVIEWS/DBA_MVIEWS view, as in the sketch below.)
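
A sketch of pulling the MV definition to see which tables it references (MY_MVIEW is a placeholder name):

SET LONG 1000000
SELECT query FROM user_mviews WHERE mview_name = 'MY_MVIEW';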



SELECT * FROM ALL_SYNONYMS WHERE SYNONYM_NAME IN ('DBMS_MVIEW',> 'DBMS_SNAPSHOT');

SELECT * FROM ALL_TAB_PRIVS_RECD WHERE GRANTEE = 'PUBLIC' AND TABLE_NAME = 'DBMS_SNAPSHOT';

ORA-29273 HTTP request failed ORA-24247

drop and recreate acl


begin
  dbms_network_acl_admin.drop_acl(acl=>'utl_http_access.xml');
end;
/

commit;
begin
  dbms_network_acl_admin.drop_acl(acl=>'utl_inaddr_access.xml');
end;
/

commit;
begin
  dbms_network_acl_admin.drop_acl(acl=>'utl_mail.xml');
end;
/
commit;

begin
  dbms_network_acl_admin.drop_acl(acl=>'utl_http.xml');
end;
/
commit;




Exec dbms_network_acl_admin.create_acl ('utl_http_access.xml','Normal Access','LDBO',TRUE,'connect',NULL,NULL);
Exec dbms_network_acl_admin.add_privilege (acl => 'utl_http_access.xml', principal =>  'LDBO',is_grant => TRUE, privilege => 'resolve');
Exec dbms_network_acl_admin.assign_acl ('utl_http_access.xml', '*',NULL,NULL);
Commit ;
               
Exec dbms_network_acl_admin.create_acl ('utl_inaddr_access.xml','Normal Access','LDBO',TRUE,'resolve',NULL, NULL);
Exec dbms_network_acl_admin.add_privilege (acl => 'utl_inaddr_access.xml', principal =>  'LDBO',is_grant => TRUE, privilege => 'resolve');
Exec dbms_network_acl_admin.assign_acl ('utl_inaddr_access.xml', '*',NULL,NULL);
commit;

Exec dbms_network_acl_admin.create_acl ('utl_mail.xml','Allow mail to be send','LDBO',TRUE,'connect' );
Exec dbms_network_acl_admin.add_privilege ('utl_mail.xml','LDBO',TRUE,'resolve');
Exec dbms_network_acl_admin.assign_acl('utl_mail.xml','*',NULL,NULL);
commit ;


Exec dbms_network_acl_admin.create_acl ('utl_http.xml','HTTP Access','LDBO',TRUE,'connect',null,null);
Exec dbms_network_acl_admin.add_privilege ('utl_http.xml','LDBO',TRUE,'resolve',null,null);
Exec dbms_network_acl_admin.assign_acl ('utl_http.xml','*',NULL,NULL);
commit;

Grant Execute on utl_inaddr to LDBO ;
Grant Execute on utl_http to LDBO ;

select acl, principal, privilege, is_grant from dba_network_acl_privileges;

SELECT * FROM DBA_NETWORK_ACLS;





LD file import - temp file is generated in GBs

The tables contain a very large number of records, and all of them are being imported.

Delete the rows and re-import.

ORA-01591 lock held by in-doubt distributed transaction string

ORA-01591: lock held by in-doubt distributed transaction string

Cause: Trying to access resource that is locked by a dead two-phase commit transaction that is in prepared state.

Action: DBA should query the pending_trans$ and related tables, and attempt to repair network connection(s) to coordinator and commit point. If timely repair is not possible, DBA should contact DBA at commit point if known or end user for correct outcome, or use heuristic default if given to issue a heuristic commit or abort command to finalize the local portion of the distributed transaction.




ORA-01591: lock held by in-doubt distributed transaction string
The first thing to do is to check the results of the following queries:

select * from dba_2pc_pending;
select * from dba_2pc_neighbors;
select * from pending_trans$;
select * from pending_sessions$;
select * from pending_sub_sessions$;

From the above queries, note the LOCAL_TRAN_ID value reported in each table, then force a commit or rollback.
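A quick way to pick out just the transaction id and its state (DBMS_TRANSACTION.COMMIT_FORCE can be used instead of ROLLBACK_FORCE if the correct outcome is a commit):

select local_tran_id, state from dba_2pc_pending;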

ALTER SYSTEM DISABLE DISTRIBUTED RECOVERY;
EXEC DBMS_TRANSACTION.rollback_force('LOCAL_TRAN_ID')
--this changes the state from PREPARED to FORCED_ROLLBACK
COMMIT ;
ALTER SYSTEM ENABLE DISTRIBUTED RECOVERY;
EXEC DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('LOCAL_TRAN_ID');
COMMIT;

Repeat these steps for each LOCAL_TRAN_ID, turn distributed transaction recovery back on, and
cross-check these tables (they must be empty):

select * from dba_2pc_pending;
select * from dba_2pc_neighbors;
select * from pending_trans$;
select * from pending_sessions$;
select * from pending_sub_sessions$;

ORA-14402 updating partition key column would cause a partition change

Cause:

You are updating a record and the new value belongs to a different partition.


Solution:

The row must be physically relocated to the new partition. This cannot be done if row movement is not enabled:


ldbo@ari1112>  update ledger set branchcode='2816' Where firmnumber='ACML-00001' and Oowncode='26964
0000';
 update ledger set branchcode='2816' Where firmnumber='ACML-00001' and Oowncode='269640000'
        *
ERROR at line 1:
ORA-14402: updating partition key column would cause a partition change


ALTER TABLE <Table_Name> enable row movement;
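For the example above, a minimal sequence would be (row movement may optionally be disabled again afterwards):

alter table ledger enable row movement;

update ledger set branchcode='2816' where firmnumber='ACML-00001' and oowncode='269640000';
commit;

alter table ledger disable row movement;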

ORA-01031 insufficient privileges with dba IIS application

Reset IIS after making changes in sqlnet.ora.


Do you have:
SQLNET.AUTHENTICATION_SERVICES=(NTS)

In your sqlnet.ora?
Is your Windows user in the ORA_DBA group?

show parameter remote_login_passwordfile
select * from v$pwfile_users;

Set
SQLNET.AUTHENTICATION_SERVICES = (ALL)
in sqlnet.ora file in $ORACLE_HOME/network/admin/sqlnet.ora

Also set
SQL> alter system set remote_login_passwordfile=exclusive scope=spfile;

Bounce the database and then try to connect through sqlplus / as sysdba
Refer to following MOS:
Troubleshooting ORA-1031: Insufficient Privileges While Connecting As SYSDBA [ID 730067.1]


ORA-1031: Insufficient Privileges

This is one of the most common and frequently occurring errors. According to the documentation, the cause of ORA-01031 is:

Cause : An attempt was made to change the current username or password without the appropriate privilege. This error also occurs if attempting to install a database without the necessary operating system privileges. When Trusted Oracle is configured in DBMS MAC mode, this error may occur if the user was granted the necessary privilege at a higher label than the current login.


Action :  Ask the database administrator to perform the operation or grant the required privileges. For Trusted Oracle users getting this error although granted the appropriate privilege at a higher label, ask the database administrator to re-grant the privilege at the appropriate label .

As the action above shows, nothing is explained in detail beyond the privilege issue. However, ORA-1031 can occur not only because of insufficient privileges but for other reasons too. Here I have tried to cover all the possible causes of this error.

There are generally two methods through which a user can connect to an Oracle database AS SYSDBA:
1.) OS authentication :
2.) Password file authentication :


1.) OS Authentication : OS authentication allows Oracle to pass control of user authentication to the operating system. OS authentication is the process of verifying the identity of the user connecting to the database against information managed by the OS. An OS user is able to use this authentication method if the following conditions are met:

i.) The user is a member of a special group : The OS user should belong to the OSDBA group in order to log in as sysdba. On Unix/Linux the default name of this group is "dba" and on Windows the name of the group is "ORA_DBA".


ii.) OS authentication is allowed by the server settings (sqlnet.authentication_services is set correctly) : On Unix the parameter sqlnet.authentication_services must be set to (ALL) or to (BEQ, <other values>) for this to work. On Windows this parameter must be set to (NTS), (NONE) or ALL.


If we have configured the Operating System authentication and an OS user is a member of the special groups OSDBA or OSOPER then the OS user does not have to provide any credentials while connecting to the database as SYSDBA or AS SYSOPER from the oracle Server host machine when using the bequeath protocol. As long as the session is not established through the listener the OS Authentication will be used and the credentials provided will be ignored. This means that the users who are able to use the OS authentication can use any username and any password to connect to the database locally on the Oracle Server using the below syntax:


$sqlplus / as sysdba
or
SQL> connect any_username/any_password AS SYSDBA


2.) Password File Authentication :  The password file provides a method to authenticate privileged users from a remote (over sqlnet / listener) location . By default the user SYS gets an entry in the passwordfile when we create it so we can connect to a remote database . The credentials provided when connecting remotely as sysdba are compared to the contents of the passwordfile. For example
$sqlplus sys/xxxx@db_name as sysdba


Password file authentication is enabled by setting the database parameter remote_login_passwordfile to "shared" or "exclusive".


If we grant the SYSDBA or SYSOPER privilege to any additional user, that user also gets an entry in the password file: the hashed password of that user is copied into the password file, and when this user connects AS SYSDBA the effective user will be SYS.

Note: When both OS authentication and password file authentication are enabled then the OS Authentication will be used. This means that we can connect with any username/password combination.


Here we will consider all the possible issues related to ORA-1031.


Case 1  :
One reason for getting ORA-1031 may be that the OS user is not a member of the dba group. On Linux, the OS user must be a member of the "dba" group. To check the group membership, use the command below:
[oracle@Ramtech ~]$ id
uid=501(oracle) gid=502(oinstall) groups=501(dba),502(oinstall)


On Windows, the OS user must be a member of the ORA_DBA group. Check whether the OS user is a member of ORA_DBA using the commands below:
C:\>echo %username%                    ( for current osuser )
Neerajs


C:\>NET LOCALGROUP ORA_DBA
Alias name     ORA_DBA
Comment        Oracle DBA Group
Members
------------------------------------
Neerajs
NT AUTHORITY\SYSTEM
The command completed successfully.


If the OS user is not a member of the relevant group, add it to the dba group (Linux) or the ORA_DBA group (Windows), as shown in the sketch below.
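A sketch of adding the user to the group (user and domain names are placeholders; run with administrative rights):

On Linux (as root):
usermod -a -G dba oracle

On Windows (elevated command prompt):
NET LOCALGROUP ORA_DBA <domain>\<username> /ADD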

Case 2 :
Check the value of  the SQLNET.AUTHENTICATION_SERVICES parameter in file $ORACLE_HOME/network/admin/SQLNET.ORA .


On Unix/Linux  : This parameter should not be set if no strong authentication method is used. If such a method is being used then set the parameter to one of the following  values:
SQLNET.AUTHENTICATION_SERVICES = (ALL)
 or
SQLNET.AUTHENTICATION_SERVICES = (BEQ,<the strong auth method>)
Here <the strong auth method> can be any combination of the following values: TCPS, KERBEROS5, RADIUS. I will cover more about the sqlnet parameters in a later post.


On Windows : This parameter should be set to NTS; if needed, other authentication methods can be added to the list besides NTS.
SQLNET.AUTHENTICATION_SERVICES = (NTS)
SQLNET.AUTHENTICATION_SERVICES = (NTS,TCPS)


Note: If the parameter is set to NONE then the OS authentication will be disabled and the user will have to provide a valid username/password combination to be able to connect to the database. On Windows the user who is not able to connect as sysdba using OS authentication might be a domain user. Check the following if we  are in this scenario:

A.) It is important that this user is a direct member of the local ORA_DBA group .
B.) Oracle Service must be started as a user who is able to check the group membership for any domain user who might be connecting as sysdba locally.
C.) Check whether the clocks of the RDBMS Server and of the Active Directory Server are perfectly synchronized. Even small clock drifts can cause issues to the underlying kerberos authentication mechanism used by default on Windows. In these cases the ORA-1031 would be most of the times intermittent.
D.) Check whether the Oracle Service is started by a user whose name contains non-ASCII characters.


CASE  3 :
Check the value of parameter remote_login_passwordfile. This has to be set to either EXCLUSIVE or SHARED .


SQL> show parameter remote_login_passwordfile
NAME                                 TYPE        VALUE
------------------------------------ ----------- ---------------
remote_login_passwordfile            string      EXCLUSIVE


If the parameter is not set correctly then modify it and then restart the database:
 SQL> alter system set remote_login_passwordfile=exclusive scope=spfile ;


Check whether the password file with the correct name exists in the right directory/folder and has the right ownership and permissions .
On Unix/Linux  : The password file with the name  orapw<ORACLE_SID> must exist in directory $ORACLE_HOME/dbs. If it does not exist then recreate it using the orapwd command .


[oracle@Ramtech dbs]$orapwd file=$ORACLE_HOME/dbs/orapw<sid> password=<password> force=y ignorecase=n
[oracle@Ramtech dbs]$ ls -l orapw*
-rw-r----- 1 oracle oinstall 1536 Jan 10 14:44 orapwcomcast


On Windows : The default location of the password file on Windows is the folder %ORACLE_HOME%\database and the name of the password file must be pwd<%ORACLE_SID%>.ora. When password file authentication is being used, Oracle searches for the password file in the following locations (in this exact order):

The folder pointed to by the registry key  HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\HomeXX\ORA_<%ORACLE_SID%>_PWFILE
The folder pointed to by the registry key HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\HomeXX\ORA_PWFILE
The default location ( %ORACLE_HOME%\database)


If the password file does not exist in the right folder then create it using the orapwd command:


C:\> cd  %ORACLE_HOME%\database
C:\> orapwd file=pwd<sid>.ora password=<password> force=y nosysdba=n


Note: Make sure that the password file exists in the folder specified by the registry keys if these are set.


CASE 4 :
Check whether the user was granted the SYSDBA privilege. Sometimes we may get this error when we do not have SYSDBA privileges and try to connect as SYSDBA while sqlnet.authentication_services is set to NONE. For example:
C:\>sqlplus scott/tiger@noida as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Tue Jan 10 17:56:27 2012
Copyright (c) 1982, 2010, Oracle.  All rights reserved.
ERROR:
ORA-01031: insufficient privileges


Check the v$pwfile_users view to find the users that have SYSDBA or SYSOPER privileges.
SQL> select * from v$pwfile_users;
USERNAME                       SYSDB   SYSOP   SYSAS
------------------------------ ------- ------- -------
SYS                            TRUE    TRUE    FALSE


If the user has not been granted the privilege, log in as the SYS user and run:
SQL> grant SYSDBA to scott;
where scott is the user we want to use to connect as SYSDBA.


CASE 5 : If the problem is not solved after reviewing the above notes, open a service request with Oracle Support.


ORA-00600 internal error code, arguments [2662]


ERROR at line 1:
ORA-00603: ORACLE server session terminated by fatal error
ORA-00600: internal error code, arguments: [2662], [0], [6202286], [0], [6211768], [12582976], [], [], [], [], [], []
ORA-00600: internal error code, arguments: [2662], [0], [6202285], [0], [6211768], [12582976], [], [], [], [], [], []
ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-00600: internal error code, arguments: [2662], [0], [6202283], [0], [6211768], [12582976], [], [], [], [], [], []
Process ID: 4168
Session ID: 391 Serial number: 3




ORA-00600: internal error code, arguments: [2662], [0], [103516293]

1. create a pfile from spfile. Set parameter

_ALLOW_RESETLOGS_CORRUPTION = true
UNDO_MANAGEMENT = MANUAL

2. startup mount pfile=<pfile_path>;
3.

recover database until cancel;
or
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;

recover database using backup controlfile;


4. Enter CANCEL to cancel recovery - there are NO archive logs applied.

5. Enter ALTER DATABASE OPEN RESETLOGS


============== solution

Restarting the Oracle services several times may allow the database to open (each startup advances the SCN).


==============

http://myoracledbablogon.blogspot.in/2010/09/all-that-ive-found-to-recover-database.html


ERROR:            


  Format: ORA-600 [2662] [a] [b] [c] [d] [e]

VERSIONS:
  versions 6.0 to 10.1

DESCRIPTION:

  A data block SCN is ahead of the current SCN.

  The ORA-600 [2662] occurs when an SCN is compared to the dependent SCN
  stored in a UGA variable.

  If the SCN is less than the dependent SCN then we signal the ORA-600 [2662]
  internal error.

ARGUMENTS:
  Arg [a]  Current SCN WRAP
  Arg [b]  Current SCN BASE
  Arg [c]  dependent SCN WRAP
  Arg [d]  dependent SCN BASE
  Arg [e]  Where present this is the DBA where the dependent SCN came from.

FUNCTIONALITY:    
  File and IO buffer management for redo logs

IMPACT:
  INSTANCE FAILURE
  POSSIBLE PHYSICAL CORRUPTION

SUGGESTIONS:      
   
  There are different situations where ORA-600 [2662] can be raised.

  It can be raised on startup or during database operation.

  If not using Parallel Server, check that 2 instances have not mounted
  the same database.

  Check for SMON traces and have the alert.log and trace files ready
  to send to support.

  Check the SCN difference [argument d]-[argument b].

  If the SCNs in the error are very close, then try to shutdown and startup
  the instance several times.

  In some situations, the SCN increment during startup may permit the
  database to open. Keep track of the number of times you attempted a
  startup.
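  For the error shown at the top of this entry, arg [b] (current SCN base) is
  6202286 and arg [d] (dependent SCN base) is 6211768, so the gap is
  6211768 - 6202286 = 9482. A gap that small is consistent with the suggestion
  above: each startup advances the current SCN, so repeated shutdown/startup
  cycles may eventually let the database open.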

  If the Known Issues section below does not help in terms of identifying
  a solution, please submit the trace files and alert.log to Oracle
  Support Services for further analysis.



REDO UNDO CORRUPT DUE TO POWER FAILURE ORA-00333 ORA-600 [4194]




Create a PFILE containing:

_ALLOW_RESETLOGS_CORRUPTION = true
UNDO_MANAGEMENT = MANUAL

Rename the SPFILE so the instance starts with this PFILE.


recover database until cancel;
or
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;

cancel

alter database open resetlogs;

This will recreate the redo logs.

Restart the Oracle services.


--------------------IF ERROR: ORA-00600: internal error code, arguments: [2662]

Restart the Oracle services several times.

Format: ORA-600 [2662] [a] [b] [c] [d] [e]

 The ORA-600 [2662] occurs when an SCN is compared to the dependent SCN
  stored in a UGA variable.

  If the SCN is less than the dependent SCN then we signal the ORA-600 [2662]
  internal error.

ARGUMENTS:
  Arg [a]  Current SCN WRAP
  Arg [b]  Current SCN BASE
  Arg [c]  dependent SCN WRAP
  Arg [d]  dependent SCN BASE
  Arg [e]  Where present this is the DBA where the dependent SCN came from.

FUNCTIONALITY:    
  File and IO buffer management for redo logs

IMPACT:
  INSTANCE FAILURE
  POSSIBLE PHYSICAL CORRUPTION


--------------------UNDO RECREATION

Now the undo tablespace is corrupted:
ORA-00600: internal error code, arguments: [4194]
Drop the corrupted undo tablespace and create a new one.

The database cannot be opened because of the undo corruption, so proceed as follows.

-----------------------------add following in pfile


undo_management = manual
event = '10513 trace name context forever, level 2'

--------------
Set the Oracle event 10513 before opening the database. This event prevents Oracle from rolling back uncommitted transactions and should be used very carefully.


select file_name,tablespace_name from dba_data_files;

startup restrict pfile=


CREATE UNDO TABLESPACE UNDOTBS2 DATAFILE 'D:\APX1213D\UNDOTBS02.ORA' SIZE 500M REUSE AUTOEXTEND ON;

shut immediate

Change undo_tablespace=UNDOTBS2 in the parameter file.

startup

DROP TABLESPACE UNDOTBS1 INCLUDING CONTENTS AND DATAFILES;


REMOVE THE ADDED PARAMETERS FROM THE PFILE

ORA-00600: internal error code, arguments: [kcratr_nab_less_than_odr], [1],[13836], [16279], [16685], [], [], [], [], [], [], []



The recovery procedure is the same as for the ORA-00333 / ORA-600 [4194] case above: create a PFILE with _ALLOW_RESETLOGS_CORRUPTION = true and UNDO_MANAGEMENT = MANUAL, rename the SPFILE, run RECOVER DATABASE UNTIL CANCEL (or RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL), cancel, then ALTER DATABASE OPEN RESETLOGS (this recreates the redo logs) and restart the Oracle services. If ORA-600 [2662] is raised, restart the services several times; if the undo tablespace is corrupted (ORA-600 [4194]), recreate it exactly as described in the UNDO RECREATION steps above.

ORA-01157 ORA-01110 Undo Corrupt


===========================
Set undo_management = manual in the pfile.

shutdown immediate;
startup

alter database datafile 'd:\rkdatabase\undotbs01.ora' offline drop;

alter database open;

drop tablespace undotbs1;

create undo tablespace undotbs1 datafile 'd:\rkdatabase\undotbs02.ora' size 500m autoextend on next 10m maxsize unlimited;

undo_management = auto


shutdown immediate
startup


===========================

Recovery of UNDO tablespace in a Non Archive Log mode Database

SQL> startup;
ORACLE instance started.

Total System Global Area  167772160 bytes
Fixed Size                  1218316 bytes
Variable Size              67111156 bytes
Database Buffers           96468992 bytes
Redo Buffers                2973696 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 2 - see DBWR trace file
ORA-01110: data file 2: '/u01/oradata/burp/undotbs01.dbf'


Since the database is in NOARCHIVELOG mode, we cannot open the database by taking the datafile offline using 'alter database datafile 2 offline'; we need to use '... OFFLINE FOR DROP' instead. This marks the datafile for subsequent dropping. A datafile once marked 'OFFLINE FOR DROP' can never be brought online again.

SQL>alter database datafile 2 offline for drop;
SQL>alter database open;

Now we need to create another undo tablespace for the transactions to use. Once the new undo tablespace is created, shut down the database, edit the init.ora file, and change the parameter undo_tablespace=<new_undo_tablespace>.

SQL>create undo tablespace UNDOTBS2 datafile '/path/undotbs02.dbf' size 20M;

SQL>shutdown immediate;
edit the init.ora file and set parameter undo_tablespace=UNDOTBS2

SQL>startup;
Now we will drop UNDOTBS1 as it is no longer used.

SQL>drop tablespace UNDOTBS1 including contents and datafiles;
While the database is OPEN, the same steps apply: create the new undo tablespace, shut down, set undo_tablespace=UNDOTBS2 in the init.ora, start up, and drop UNDOTBS1 including contents and datafiles.





======================================or

 alter system set undo_management = manual scope=spfile;


shut immediate

startup pfile

----- This is to confirm that no backups are available:

rman target /
RMAN> restore tablespace undotbs1;
RMAN> exit

SQL> alter database datafile 'C:\APP\ORACLE11GR2\ORADATA\ORCL\DATAFILE\O1_MF_UNDOTBS1_7OS7JZJV_.DBF' offline drop;

alter database open;

drop tablespace undotbs1;


create UNDO tablespace undotbs1 datafile 'C:\APP\ORACLE11GR2\ORADATA\ORCL\DATAFILE\O1_MF_UNDOTBS1_new.dbf' size 25m autoextend on next 1m maxsize 50m;



alter system set undo_management = auto scope=spfile;





Thursday, March 7, 2013

YEP1314 Process




Please check the required storage space for the FY1314 DB: approximately 1.5 * the database size of last year's FY1213 database.


Then follow the steps below to create the FY1314 DB and its database object structure.

1) Take a full export dump of the apx1213 DB
Note: Please remove backup tables (e.g. sauda050612, ldfibs070712) before taking the export.
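A minimal export command along these lines (a sketch; credentials and log path are placeholders, and the dump file name matches the one used for the import in step 5):

exp system/<password>@apx1213srv FULL=Y FILE=D:\apx1213.dmp LOG=D:\exp1213.log STATISTICS=NONE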

2) Create the apx1314 database the same as last year's apx1213 database (tablespace names and datafile configuration should be the same)

Email: Create Database Template for Database(APX1314) using existing Database(APX1213)

select file_name,tablespace_name,bytes/1024/1024 "Size MB",status,autoextensible from dba_data_files;
select file_name,tablespace_name,bytes/1024/1024 "Size MB",status,autoextensible from dba_temp_files;

Tablespace    Datafile         Size              Autoextend
SYSTEM        SYSTEM01.ORA     500M              100M
SYSAUX        SYSAUX01.ORA     500M              100M
UNDOTBS1      UNDOTBS01.ORA    1000M             500M
INDX          INDEX01.ORA      same as before    same as before
USR           USERS01.ORA      same as before    same as before
TEMPORARY     TEMP01.ORA       1000M             500M


Keep the redo log size the same as in last year's DB.

SELECT LOG_MODE FROM V$DATABASE;
Archive logging should be enabled.


Tick 'Use Bigfile tablespace' for the USERS01.ORA and INDEX01.ORA datafiles, or add datafiles later whenever required.

Parameters db_block_size, sga_target, sga_max_size, utl_file_dir, processes, audit_trail, job_queue_processes, cursor_sharing, open_cursors and the others should be the same as last year.
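A quick way to compare is to run the same query on both databases and diff the output (a sketch; extend the list as needed):

select name, value from v$parameter
where name in ('db_block_size','sga_target','sga_max_size','utl_file_dir','processes',
               'audit_trail','job_queue_processes','cursor_sharing','open_cursors')
order by name;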


3) Create the schema user LDBO using only the attached script createuserLDBO10g_run_from_DB_server.SQL (Note: do not use the older createuserldbo.sql)

Note: the attached createuserLDBO10g_run_from_DB_server.SQL contains all the housekeeping jobs (analyze, purge queue, compile)


At the same time, please schedule the database backup dump job using the following (or the attached) script, run it, check and verify, and confirm to Kshitij.

/*
Note: Please change the details as per your client: database net service name (apx1213srv), user (ldbo), password (ldbo), Oracle installation location (D:\oracle\product\10.2.0\db_1\BIN), dump location (E:\exp1314).
There should be an entry in the Net Manager configuration (d:\oracle\product\10.2.0\db_1\network\ADMIN\tnsnames.ora) on the Oracle database server.
Also create the folder where you want the backup, e.g. E:\exp1314.
Change the backup job start time as per server availability.
*/

conn sys@apx1314srv as sysdba

begin
dbms_scheduler.create_job
(job_name => 'exp1314',
job_type => 'EXECUTABLE',
number_of_arguments => 5,
job_action => 'D:\oracle\product\10.2.0\db_1\BIN\exp.exe',
start_date => '01-APR-13 11.00.00 PM ASIA/CALCUTTA',
repeat_interval => 'freq=DAILY',
comments=>'export dump');
DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE('exp1314',1,'system/oracle@apx1314srv');
DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE('exp1314',2,'full=Y');
DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE('exp1314',3,'file=E:\exp1314\exp1314.DMP');
DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE('exp1314',4,'log=E:\exp1314\exp1314.log');
DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE('exp1314',5,'STATISTICS=NONE');
end;
/

exec dbms_scheduler.run_job('exp1314');
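The job status can then be checked with:

select job_name, state, last_start_date from dba_scheduler_jobs where job_name = 'EXP1314';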

4) Log in as the LDBO user on the FY1314 DB and drop and recreate the database links.

Match them with the last year database:

Con ldbo/ldbo@apx1213srv
select owner,db_link,username,host from dba_db_links;

Con ldbo/ldbo@apx1314srv
select owner,db_link,username,host from dba_db_links;

Con ldbo/ldbo@apx1314srv

Spool c:\yep1314\drop_links.sql
SELECT 'DROP ' || decode (owner,'PUBLIC','PUBLIC DATABASE LINK ','DATABASE LINK ') || DB_LINK || ';' FROM DBA_DB_LINKS  WHERE USERNAME !=' ';
Spool off
@ c:\yep1314\drop_links.sql


Con ldbo/ldbo@apx1213srv
Spool c:\yep1314\create_links.sql
SELECT 'CREATE ' || decode (owner,'PUBLIC','PUBLIC DATABASE LINK ','DATABASE LINK ') || DB_LINK || ' CONNECT TO ' || USERNAME || ' IDENTIFIED BY ' || '<PLEASE PUT PASSWORD>' || ' USING ' || '<PLEASE PUT DATABASE STRING>' || ';' FROM DBA_DB_LINKS  WHERE USERNAME !=' ';
Spool off

/* put the password in above file and also change the connect string*/

Con ldbo/ldbo@apx1314srv
@ c:\yep1314\create_links.sql

Database link creation is below
Note: Please change the password and connect string as per client software

CREATE PUBLIC DATABASE LINK "LNK_RAKSHAK"
  CONNECT TO "LDBO" IDENTIFIED BY LDBO USING 'APX1314SRV';

 CREATE PUBLIC DATABASE LINK "LNK_CCM"
  CONNECT TO "LDBO" IDENTIFIED BY LDBO USING 'APX1314SRV';

CREATE DATABASE LINK "LNK_DIGITAL"
  CONNECT TO "LDBO" IDENTIFIED BY ldbo USING 'APX1314SRV';

CREATE DATABASE LINK "LNK_PREVIOUSYEARBALANCE"
  CONNECT TO "LDBO" IDENTIFIED BY LDBO USING 'APX1213SRV';



5) Create Database structure (previously called optable) using exp dump (rows=N) or expdp (content=metadata_only)

imp ldbo/ldbo@apx1314srv FILE=D:\apx1213.dmp ROWS=N FULL=Y CONSTRAINTS=Y GRANTS=Y INDEXES=Y STATISTICS=NONE

or

impdp ldbo/ldbo@apx1314SRV directory=DPUMP_DIR1 dumpfile=apx1213.DMP LOGFILE=apx1314metadata.LOG content=metadata_only EXCLUDE=TABLESPACE,DB_LINK,SEQUENCE,STATISTICS,USER,GRANT



6) Log in as the LDBO user on the FY1314 DB and drop and recreate the sequences.
Sequences should start from 1, which is why we drop and recreate them.


Con ldbo/ldbo@apx1314srv

SET LINE 9999
SET PAGESIZE 0
SET FEEDBACK OFF
SPOOL C:\YEP1314\SEQ1314.SQL
select 'DROP SEQUENCE ' || Sequence_Name || ';' from dba_Sequences where Sequence_Owner='LDBO';
SPOOL OFF
@ C:\YEP1314\SEQ1314.SQL

Create Sequence LDBO.Dematuniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Saudauniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Billuniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Jobbrokuniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Delbrokuniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Turnbrokuniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Fwdbrokuniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Extbrokuniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Dpuniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Bankuniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Payrequniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Riskuniquenumber minvalue 1 maxvalue 99999999999 ;
Create Sequence LDBO.Portuniquenumber minvalue 1 maxvalue 99999999999 ;
Create Sequence LDBO.Sq_Axiscms minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Sq_Pmsuniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Sq_Debarreduniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Sq_Bkrecouniquenumber minvalue 50000000 maxvalue 9999999999 start with 50000000 increment by 1 nocache;
Create Sequence LDBO.Sq_Smsuniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Sq_Offlinenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Sq_Phytodemnumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Sq_Ipouniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Sq_Trademodification minvalue 1 maxvalue 9999999 ;
Create Sequence LDBO.Sq_SpicePoa minvalue 1 maxvalue 9999999 ;
Create Sequence LDBO.Sq_Idbicms minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Sq_Scripuniquenumber minvalue 1 maxvalue 9999999 ;
Create Sequence LDBO.Sq_Clientuniquenumber minvalue 1 maxvalue 9999999 ;
Create Sequence LDBO.Sq_Brokeruniquenumber minvalue 1 maxvalue 9999999 ;
Create Sequence LDBO.Sq_Nsdlcdsldpuniquenumber minvalue 1 maxvalue 9999999 ;
Create Sequence LDBO.Sq_Orarupee minvalue 1 maxvalue 9999999 ;
Create Sequence LDBO.Sq_Rtgssequence minvalue 1 maxvalue 9999999 ;
Create Sequence LDBO.Sq_Bobbentopool minvalue 1 maxvalue 99999999999 ;
Create Sequence LDBO.Sq_Clientcode minvalue 1 maxvalue 99999999999 ;
Create Sequence LDBO.Sq_Familycode minvalue 1 maxvalue 99999999999 ;
Create Sequence LDBO.Sq_NigPortclose Minvalue 1 maxvalue 9999999999 Start With 1 Increment by 1 noCache;
Create Sequence LDBO.Seqclientsequence minvalue 1 maxvalue 99 start with 1 increment by 1;
Create Sequence LDBO.Sq_collateral minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Sq_Nsdlkra minvalue 1 maxvalue 99999999 ;
Create Sequence LDBO.LdOraclebatchid minvalue 1 maxvalue 9999999999 nocache;
Create Sequence LDBO.Sq_Limittokenno minvalue 1 maxvalue 99999999 ;
Create Sequence LDBO.SQ_CDSLXMLUNIQUENUMBER minvalue 10000 maxvalue 99999;


/*Missing Sequence */
select object_name from user_objects@LNK_PREVIOUSYEARBALANCE where object_type='SEQUENCE' and object_NAME NOT LIKE 'SQFIN%'
minus
select object_name from user_objects where object_type='SEQUENCE' and object_NAME NOT LIKE 'SQFIN%';


7) Create splogin and tglogin manually from the SYS user only (already present in createuserLDBO10g_run_from_DB_server.SQL)

SELECT owner,object_name FROM Dba_Objects WHERE OBJECT_NAME IN ('SPLOGININFORMATION','TGLOGININFORMATION');
Owner    Object_Name
SYS      SPLOGININFORMATION
SYS      TGLOGININFORMATION

8)  Compile All Invalid Objects

conn sys@apx1314srv as sysdba

EXEC sys.UTL_RECOMP.recomp_serial;

select object_name  from dba_objects where owner='LDBO' and status='INVALID';

SELECT owner,object_name,status FROM Dba_Objects WHERE OBJECT_NAME IN ('SPLOGININFORMATION','TGLOGININFORMATION');


9) Log in as the LDBO user on the FY1314 DB and drop and recreate the database directories.

Match them with the last year database and change the paths as per your requirements.

conn sys@apx1213srv as sysdba
select 'create or replace directory '||DIRECTORY_NAME||' as '||''''||DIRECTORY_PATH||''''||';' from dba_directories;
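A sketch of applying these on the new database, following the same spool-and-run pattern used elsewhere in this process (the file name is illustrative):

conn sys@apx1213srv as sysdba
Spool c:\yep1314\directories1213.sql
select 'create or replace directory '||DIRECTORY_NAME||' as '||''''||DIRECTORY_PATH||''''||';' from dba_directories;
Spool off

conn sys@apx1314srv as sysdba
@ c:\yep1314\directories1213.sql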


10) If synonyms are missing, create them from last year's database apx1213 and run the script in the current year apx1314

conn sys@apx1314srv as sysdba

Select synonym_Name From Dba_Synonyms@LNK_PREVIOUSYEARBALANCE Where table_Owner='LDBO'
Minus
Select synonym_Name From Dba_Synonyms Where table_Owner='LDBO';


conn sys@apx1213srv as sysdba

set pagesize 0
set linesize 30000
set long 500000
set longchunksize 500000
set trimspool on
set feed off
exec dbms_metadata.set_transform_param(dbms_metadata.session_transform,'PRETTY',true);
exec dbms_metadata.set_transform_param( DBMS_METADATA.SESSION_TRANSFORM, 'SQLTERMINATOR', TRUE );

SPOOL C:\YEP1314\SYNONYMS1213.SQL
select dbms_metadata.get_ddl('SYNONYM',synonym_name, 'PUBLIC') metadata
from dba_synonyms
where owner = 'PUBLIC'
  and table_owner = 'LDBO'
order by synonym_name;
SPOOL OFF

conn sys@apx1314srv as sysdba
@C:\YEP1314\SYNONYMS1213.SQL

11) Create users from last year's database


/* Missing Users */

Conn ldbo/ldbo@apx1314srv

select username from dba_users@LNK_PREVIOUSYEARBALANCE
minus
select username from dba_users;

conn sys@apx1213srv as sysdba

set pagesize 0
set linesize 30000
set long 500000
set longchunksize 500000
set trimspool on
set feed off
exec dbms_metadata.set_transform_param(dbms_metadata.session_transform,'PRETTY',true);
exec dbms_metadata.set_transform_param( DBMS_METADATA.SESSION_TRANSFORM, 'SQLTERMINATOR', TRUE );


spool c:\yep1314\user_script.sql
SELECT DBMS_METADATA.GET_DDL('PROFILE',profile) FROM DBA_PROFILES WHERE profile != 'DEFAULT' GROUP BY PROFILE;
SELECT DBMS_METADATA.GET_DDL('USER', USERNAME) FROM DBA_USERS;
SELECT DBMS_METADATA.GET_GRANTED_DDL('TABLESPACE_QUOTA', USERNAME) DDL FROM DBA_USERS where username in (select username from dba_ts_quotas);
SELECT DBMS_METADATA.GET_DDL('ROLE', role) FROM dba_roles;
spool off

conn sys@apx1314srv as sysdba

@c:\yep1314\user_script.sql


12) Create object roles and user roles from last year using the DB link lnk_previousyearbalance. The WHEN OTHERS handlers in the blocks below silently skip any role or grant that already exists.

Conn ldbo/ldbo@apx1314srv


BEGIN
 FOR cur_rec IN (SELECT role
                  FROM   dba_roles@lnk_previousyearbalance) LOOP
    BEGIN
      EXECUTE IMMEDIATE 'CREATE ROLE ' || cur_rec.role ;
    EXCEPTION
      WHEN OTHERS THEN
        NULL;
    END;
  END LOOP;
END;
/


BEGIN
 FOR cur_rec IN (SELECT grantee,privilege
                  FROM   DBA_SYS_PRIVS@lnk_previousyearbalance where grantee not in ('SYS','SYSTEM','SYSMAN','TSMSYS','WMSYS','RECOVERY_CATALOG_OWNER','RESOURCE','OUTLN','ORACLE_OCM','OEM_MONITOR','OEM_ADVISOR','MGMT_USER','IMP_FULL_DATABASE','EXP_FULL_DATABASE','DBA','CONNECT','AQ_ADMINISTRATOR_ROLE','DBSNMP','SCHEDULER_ADMIN')) LOOP
    BEGIN
      EXECUTE IMMEDIATE ('Grant ' || cur_rec.privilege || ' to ' || cur_rec.grantee );
    EXCEPTION
      WHEN OTHERS THEN
        NULL;
    END;
  END LOOP;
END;
/

BEGIN
 FOR cur_rec IN (SELECT grantee,privilege,table_name
                  FROM   dba_tab_privs@lnk_previousyearbalance  Where Grantor='LDBO') LOOP
    BEGIN
      EXECUTE IMMEDIATE 'Grant ' || cur_rec.privilege || ' on ' || cur_rec.table_name || ' to ' || cur_rec.grantee ;
    EXCEPTION
      WHEN OTHERS THEN
        NULL;
    END;
  END LOOP;
END;
/

BEGIN
 FOR cur_rec IN (SELECT grantee,privilege,table_name,column_name
                  FROM   dba_col_privs@lnk_previousyearbalance  Where Grantor='LDBO') LOOP
    BEGIN
      EXECUTE IMMEDIATE 'Grant '|| cur_rec.PRIVILEGE || '('|| cur_rec.COLUMN_NAME ||') on '|| cur_rec.TABLE_NAME || ' to ' || cur_rec.GRANTEE ;
    EXCEPTION
      WHEN OTHERS THEN
        NULL;
    END;
  END LOOP;
END;
/


BEGIN
 FOR cur_rec IN (SELECT grantee,granted_role
                  FROM   dba_role_privs@lnk_previousyearbalance Where Grantee!='SYSTEM' and Grantee!='SYS' and Grantee!='DBSNML' and Grantee!='REPADMIN') LOOP
    BEGIN
      EXECUTE IMMEDIATE 'Grant '|| cur_rec.granted_role || ' to ' || cur_rec.GRANTEE ;
    EXCEPTION
      WHEN OTHERS THEN
        NULL;
    END;
  END LOOP;
END;
/

13) If you had partitioned tables last year, create the DEMATMAIN table as partitioned (partitions should be the same as in the TRANSACTIONS table) using the attached script.

Drop the DEMATMAIN table and recreate the partitioned DEMATMAIN table using the attached script.

Conn ldbo/ldbo@apx1314srv

Select TABLE_NAME From Dba_Part_Tables@LNK_PREVIOUSYEARBALANCE where owner='LDBO'
Minus
Select TABLE_NAME From Dba_Part_Tables where owner='LDBO';


14) Match last and current year database objects (run the following commands in both the last and current year databases and compare the results)

Select Object_Type,Count(*) From User_Objects Group By Object_Type Order By 1;
select COUNT(*)  from user_objects where status='INVALID';
Select count(*) From Dba_Synonyms Where Table_Owner='LDBO';
select count(*) from dba_Constraints where owner='LDBO';
SELECT * FROM DBA_DB_LINKS;
SELECT * FROM DBA_DIRECTORIES;
Select TABLE_NAME From Dba_Part_Tables where owner='LDBO';


Conn ldbo/ldbo@apx1314srv

/*Missing Object */
Select object_Name From Dba_objects@LNK_PREVIOUSYEARBALANCE Where Owner='LDBO'
Minus
Select object_Name From Dba_objects Where Owner='LDBO';

/*invalid objects */
Select object_Name From Dba_objects@LNK_PREVIOUSYEARBALANCE Where Owner='LDBO' and status='INVALID'
Minus
Select object_Name From Dba_objects Where Owner='LDBO' and status='INVALID';

/*Missing Tables */
Select table_Name From Dba_tables@LNK_PREVIOUSYEARBALANCE Where Owner='LDBO'
Minus
Select table_Name From Dba_tables Where Owner='LDBO';

/*Missing Synonyms */
Select synonym_Name From Dba_Synonyms@LNK_PREVIOUSYEARBALANCE Where table_Owner='LDBO'
Minus
Select synonym_Name From Dba_Synonyms Where table_Owner='LDBO';

/*Missing Sequences */
select * from user_objects@LNK_PREVIOUSYEARBALANCE where object_type='SEQUENCE' and object_NAME NOT LIKE 'SQFIN%'
minus
select * from user_objects where object_type='SEQUENCE' and object_NAME NOT LIKE 'SQFIN%';

/*Missing Constraints */
Select Constraint_Name From Dba_Constraints@LNK_PREVIOUSYEARBALANCE Where Owner='LDBO' and (Constraint_Name not like 'SYS%' and Constraint_Name not like 'BIN%')
Minus
Select Constraint_Name From Dba_Constraints Where Owner='LDBO' and (Constraint_Name not like 'SYS%' and Constraint_Name not like 'BIN%');

/* Missing Users */
select username from dba_users@LNK_PREVIOUSYEARBALANCE
minus
select username from dba_users;

/* Missing Roles */
select role from dba_roles@LNK_PREVIOUSYEARBALANCE
minus
select role from dba_roles;

select GRANTEE from dba_tab_privs@LNK_PREVIOUSYEARBALANCE where owner='LDBO'
minus
select GRANTEE from dba_tab_privs where owner='LDBO';



15) Import Lookup tables data using DB link LNK_PREVIOUSYEARBALANCE

Conn ldbo/ldbo@apx1314srv

SET LINE 9999
SET FEEDBACK OFF

Spool c:\yep1314\lookuptables.sql

SELECT 'ALTER TABLE "' || a.table_name || '" DISABLE CONSTRAINT "' || a.constraint_name || '";'
FROM   dba_constraints a
Where  A.Owner='LDBO' And A.Constraint_Type In ('R')
And    (A. Table_Name Like '%SMSPROVIDER%' OR A.Table_Name Like '%LOOKUP%');

select 'INSERT INTO ' || TABLE_NAME  || ' SELECT * FROM ' || TABLE_NAME || '@LNK_PREVIOUSYEARBALANCE;' from dba_tables where owner='LDBO'
and (Table_Name Like '%SMSPROVIDER%' OR Table_Name Like '%LOOKUP%');
Commit;
SELECT 'ALTER TABLE "' || a.table_name || '" ENABLE CONSTRAINT "' || a.constraint_name || '";'
FROM   dba_constraints a
Where  A.Owner='LDBO' And A.Constraint_Type In ('R')
And    (A. Table_Name Like '%SMSPROVIDER%' OR A.Table_Name Like '%LOOKUP%');
spool off

@ c:\yep1314\lookuptables.sql
INSERT INTO TBLSOFTWAREACCESSLEVEL select * from TBLSOFTWAREACCESSLEVEL@LNK_PREVIOUSYEARBALANCE;
Commit;

/* Check and verify the constraints; none should be left disabled */

select Constraint_Name from Dba_Constraints Where Owner='LDBO' AND status !='ENABLED';

Also check the row counts in the lookup tables.
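For example, a row count can be compared against last year over the DB link (shown for one table; repeat for each lookup table):

select count(*) from TBLSOFTWAREACCESSLEVEL;
select count(*) from TBLSOFTWAREACCESSLEVEL@LNK_PREVIOUSYEARBALANCE;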

16) Analyze the whole database after all the above steps

conn sys@apx1314srv as sysdba

exec DBMS_SCHEDULER.run_job ('ANALYZE_FULL');

Note: After master transmission and other bulk transmission activities, it is required to run the ANALYZE_FULL job to maintain performance.
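The job run can be verified with (assuming the job name ANALYZE_FULL from the attached script):

select job_name, state, last_start_date, last_run_duration from dba_scheduler_jobs where job_name = 'ANALYZE_FULL';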

