
Friday, January 22, 2016

ORA-14450 / ORA-14452: attempt to access a transactional temp table already in use

select distinct
       ss.program "SOFTWARE",
       ss.terminal,
       ss.username "USER",
       'alter system disconnect session ''' || ss.sid || ',' || ss.serial# || ',@' || ss.inst_id || ''' immediate;'
  from gv$process pr,
       gv$session ss,
       gv$sqlarea sqa,
       (select sid
          from gv$lock
         where id1 = (select object_id
                        from dba_objects
                       where owner = 'LDBO'
                         and object_name = 'TBLTEMPTRADESTATUS')) tbllock
 where pr.addr = ss.paddr
   and ss.username is not null
   and ss.sql_address = sqa.address(+)
   and ss.sql_hash_value = sqa.hash_value(+)
   and ss.inst_id = pr.inst_id
   and ss.sid = tbllock.sid;
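ORA-14450/ORA-14452 are raised when a session tries to alter, drop, or truncate a global temporary table while another session still has a transaction bound to it. A minimal sketch of how the generated statements are used (the sid/serial#/inst_id values below are placeholders, and TRUNCATE is just one example of the DDL that was being blocked):

-- example of a statement generated by the query above, with placeholder session identifiers
alter system disconnect session '1234,56789,@1' immediate;

-- once no session is bound to the GTT, the blocked DDL can be retried
truncate table LDBO.TBLTEMPTRADESTATUS;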

Saturday, July 27, 2013

Sessions Hang On Wait Event WAIT FOR EMON PROCESS NTFNS ID 1287435.1

Sessions Hang On Wait Event "WAIT FOR EMON PROCESS NTFNS" [ID 1287435.1]



Symptoms

RDBMS 10.2.0.4.0
OCI based DB change notification
EMON spins with high CPU usage
EMN0 locks affected multiple applications for a few hours.
Performance suffers until Database Change Notification (DBCN) is disabled.


Cause


Bug 7282408: EMN STUCK - OTHER SESSIONS HANG ON "WAIT FOR EMON TO PROCESS NTFNS"



In the AWR report (filename = awr_report_29315_29317.html), the wait event "wait for EMON to process ntfns" is listed in the Top 5 Timed Events.

This matches Bug 7282408, filed against 10.2.0.4.0; the fix is included in the 11.1.0.6.0 RDBMS.
WORKAROUND:
SQLNET.SEND_TIMEOUT=3

The value is in seconds. The send timeout is a tunable parameter and there is no prescribed standard value; the ideal value is one that does not affect normally working clients.





Solution


1) Set SQLNET.SEND_TIMEOUT=10
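The parameter is set in the sqlnet.ora on the database server side; a minimal sketch (the 10-second value is the one suggested in this note, tune it for your own clients):

# sqlnet.ora on the database server -- value is in seconds
SQLNET.SEND_TIMEOUT = 10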

After the SEND_TIMEOUT parameter was adjusted, the EMON process stopped spinning and the subscribers were de-registered; however, when the customer tried to re-register, the subscribers were removed again as soon as new changes took place and could not re-register successfully until the EMON process was killed.
Continued problem resolution found that unpublished Bug 6750296 'Incorrect handling of Connection Failures' was also needed to address the incorrect handling of connection failures.

Bug 8605337 - Deadlock between gather stats job and concurrent DDL [ID 8605337.8]

Bug 8605337  Deadlock between gather stats job and concurrent DDL



Product (Component): Oracle Server (RDBMS)
Range of versions believed to be affected: versions below 12.1
Versions confirmed as being affected:
11.2.0.1
11.1.0.7
11.1.0.6
Platforms affected: Generic (all / most platforms affected)

Fixed:
This issue is fixed in
12.1 (Future Release)
11.2.0.2 (Server Patch Set)
Symptoms:

Related To:

Deadlock
ORA-60
Optimizer
PL/SQL (DBMS Packages)
DBMS_STATS
Description

ORA-60 can occur when gathering stats and some other concurrent DDL
is happening on the same table(s).

The deadlocking sessions will wait on the following wait events:
- 'enq: TX - row lock contention' on tab$ table
- 'library cache lock' on a sys owned object
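To check whether a system is hitting this, the blocked sessions can be looked up by those wait events; a small diagnostic sketch (not part of the note itself):

select inst_id, sid, serial#, username, event, blocking_session, sql_id
  from gv$session
 where event in ('enq: TX - row lock contention', 'library cache lock');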

ORA-00439 feature not enabled: Deferred Segment Creation (11g Enterprise to Standard Edition downgrade)


1293326.1


http://lifeandoracle.blogspot.in/2012/07/ora-00439-feature-not-enabled-deferred.html
http://ocpdba.wordpress.com/2009/10/09/deferred-segment-creation-segmentless-tables-11gr2-new-feature-and-dumb-questions/
http://asanga-pradeep.blogspot.in/2011/01/segment-creation-behavior-change.html
http://www.donotcommit.net/?p=51


------------------
1. Init.ora Parameter "DEFERRED_SEGMENT_CREATION" Reference Note [ID 1216282.1]
2. IMP-00003, ORA-00959 While Importing Data Into Existing Table Of 11gR2 Using Traditional Import [ID 1180873.1]
3. Bug 8795792 - DBMS_METADATA.get_ddl generates wrong keywords for compressed indexes in 11.2 [ID 8795792.8]
4. Exporting Schema In 11.2 Database With 10g Client Gives ORA-1455 [ID 1083330.1]
1293326.1
-----------------

If you see this error, you have just hit one of the new features of 11gR2. Unfortunately this feature makes the life of the casual DBA a little more complicated than necessary.

I found an interesting debate in the OTN forums about whether this should be considered a feature or a bug, but I particularly appreciated the last comment, where Michiel provided a working solution without delving into philosophical matters. I remember doing the same thing when dealing with another EXPDP/IMPDP problem between versions 11.1.0.6 and 11.1.0.7.

---------------------------Solution 1--------------

-- on the source instance
EXPDP ldbo/ldbo@apx1314srv full=Y dumpfile=expdp1314.dmp directory=DATA_DUMP_DIR logfile=expdp1314.log version=10.2

-- on the target instance
IMPDP ldbo/ldbo@apx1314srv full=Y dumpfile=expdp1314.dmp directory=DATA_DUMP_DIR logfile=expdp1314.log version=10.2


---------------------------Solution 2--------------

For 11.2.0.1 one way to avoid the issue is to allocate the segments manually before the export. This can be done with:

alter table <table_name> allocate extent;

select 'alter table '||table_name||' allocate extent;' from user_tables where segment_created = 'NO';



set autocommit on;
set line 124;

select 'ALTER TABLE ' || TABLE_NAME || ' ALLOCATE EXTENT;' from user_tables where SEGMENT_CREATED = 'NO';
select 'ALTER TABLE '|| TABLE_NAME || ' ALLOCATE EXTENT;' from user_lobs where SEGMENT_CREATED = 'NO';
select 'ALTER INDEX '|| INDEX_NAME || ' ALLOCATE EXTENT;' from user_indexes where SEGMENT_CREATED = 'NO';



---------------Solution 3--------

SQL> alter system set deferred_segment_creation=false;

Note that this only affects tables created after the change; tables that already exist without segments still need their extents allocated as in Solution 2.



===================
From 11g Release 2 onwards, tables created without rows do not get any segments by default. If you want to change this behaviour, set the parameter "deferred_segment_creation" to FALSE. People installing PeopleSoft, SAP, Siebel and other thousand-table data models really do appreciate this new feature.

How does it work?

If the INITIAL extent of a table is 15 megabytes but the table is empty, the table won't claim those 15 MB of space. As soon as the first row of data is inserted (even a few bytes), Oracle applies the table's storage settings before inserting that row, making the table the actual owner of that space. You can create a 100 GB table in a 100 MB tablespace as long as the table stays empty, because the space is only claimed by the table when it receives its first row.
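The behaviour is easy to demonstrate; a small sketch (DEMO_DEFERRED is a made-up table name) showing that the segment only appears after the first insert:

create table demo_deferred (id number) storage (initial 15m);

select segment_created from user_tables where table_name = 'DEMO_DEFERRED';
-- returns NO, and user_segments has no row for the table yet

insert into demo_deferred values (1);
commit;

select segment_created from user_tables where table_name = 'DEMO_DEFERRED';
-- returns YES, the INITIAL extent has now been allocated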

========================

In 11.2 the initialization parameter DEFERRED_SEGMENT_CREATION controls whether segment space is allocated at the time of segment creation. It is set to TRUE by default on both Enterprise Edition and Standard Edition, but on Standard Edition the parameter has no effect because the feature is not enabled there. The parameter behaviour can be overridden at the table level with:
SQL> CREATE TABLE x(a number) SEGMENT CREATION IMMEDIATE;
or
SQL> CREATE TABLE x(a number) SEGMENT CREATION DEFERRED;
But on standard edition this would still give an error.
SQL>  CREATE TABLE x(a number) SEGMENT CREATION DEFERRED;
CREATE TABLE x(a number) SEGMENT CREATION DEFERRED
*
ERROR at line 1:
ORA-00439: feature not enabled: Deferred Segment Creation
The problem happens when exporting empty tables from an Enterprise Edition database to a Standard Edition database. With the default behaviour on Enterprise Edition, tables with no rows have no segments, whereas on Standard Edition tables are always created with segments.

In 11.2.0.1, importing segment-less tables from Enterprise into Standard Edition throws an error, while in 11.2.0.2 the tables are created with segments without an error.


========================


Oracle 11g2 does NOT export empty tables when using exp.exe

http://tsells.wordpress.com/2011/02/10/oracle-11g2-does-not-export-empty-tables-when-using-exp-exe/

Out of habit I have been using the older import/export utilities instead of the Data Pump commands that Oracle recommends. As of 11gR2 I can no longer do this: exp.exe no longer exports tables that do not have an entry in DBA_SEGMENTS. This is due to a new feature named "deferred segment creation".

The syntax for the new Data Pump commands is different and requires some additional setup on the Oracle server itself. Files to be imported must now be placed in the Data Pump directory, which can be found by running the following query against the database server:

select owner, directory_name, directory_path from dba_directories;
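If no suitable directory object exists yet, one can be created and granted by a DBA; a short sketch (the path is a placeholder, DATA_DUMP_DIR and ldbo are the names used in the examples above):

create or replace directory DATA_DUMP_DIR as 'E:\oracle\dpdump';
grant read, write on directory DATA_DUMP_DIR to ldbo;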

================


The cause is a new feature of Oracle 11.2: deferred segment creation.
Newly created tables that have never contained any rows have no corresponding row in DBA_SEGMENTS, so they are not exported by the (deprecated) exp utility. Use expdp instead.


================

Don't forget: deferred segment creation also applies to indexes and LOBs, and indexes inherit the table's characteristics.
You can also query the SEGMENT_CREATED column of the USER_TABLES, USER_INDEXES, or USER_LOBS views; for non-partitioned tables, indexes and LOBs this column shows YES if the segment has been created.
However, you do not need to allocate extents for LOBs and LOB indexes explicitly, since the database allocates them automatically when you allocate the corresponding table extent.

================ Export kept failing because of the empty tables; here is my dynamic SQL ================

-- Allocate an extent for every table that does not yet have a segment.
-- Run with SERVEROUTPUT ON to see any errors that are raised.
DECLARE
  CURSOR cur IS
    SELECT 'ALTER TABLE ' || table_name || ' ALLOCATE EXTENT' vSQL
      FROM user_tables
     WHERE segment_created = 'NO';
BEGIN
  FOR c IN cur LOOP
    BEGIN
      EXECUTE IMMEDIATE c.vSQL;
    EXCEPTION
      WHEN OTHERS THEN
        -- log the error for the failing table and carry on with the next one
        DBMS_OUTPUT.put_line(SUBSTR(SQLERRM, 1, 250));
    END;
  END LOOP;
END;
/

ORA-24330 OCI INTERNAL ERROR

Solution: alter system set client_result_cache_size=0 scope=spfile;




Effect of client_result_cache_size On Client Applications Or OCI Based Applications [ID 1300727.1]


Applies to:

Oracle Server - Enterprise Edition - Version: 11.2.0.2 and later   [Release: 11.2 and later ]
Information in this document applies to any platform.
Symptoms

- Receiving error 'ORA-24330: OCI INTERNAL ERROR' while running back end Procedures from SQL*Plus and also from the Application connected to the Database - Oracle 11g 11.2.0.2.

- Unable to log in through Application.
Changes

- Database was upgraded from Oracle 11g Release 2 - 11.2.0.1 to 11.2.0.2. The client_result_cache_size was set to 1GB.

- Parameter client_result_cache_size is not on by default and the recommended value is 0.


Cause

client_result_cache_size is supposed to reset once the client process terminates, but in cases such as transactions with millions of rows the cache fills up quickly and does not reset itself while the client process is still running, and operations then fail with errors like 'ORA-24330: OCI INTERNAL ERROR'.

If client_result_cache_size is set to a non-zero value, client result caching is turned on and that value acts as a hard limit for as long as the client session is active. For example, if the value is set to 1 GB and the client process is still actively processing millions of rows, it will hit 'ORA-24330: OCI INTERNAL ERROR' as soon as the cache reaches the 1 GB limit.

So for a high volume of transactions the recommended value for client_result_cache_size is zero, to avoid all such issues.

BUG 8480915 - ORA-24330: OCI INTERNAL ERROR IN ODP.NET STRESS WHEN CLIENT RESULT CACHE ENABLED

Base BUG 8418934 - KPOQGINV: CLIENT CACHING GOT DISABLED AFTER BADREGID ERROR
Solution

'client_result_cache_size' is an integer initialization parameter; to change its setting:

1. Setting client_result_cache_size = 0 in the initialization parameter file (init.ora), followed by an instance restart, is the most reliable way.

client_result_cache_size=0 needs to be set in the init.ora, and bouncing the database will then make the value effective.

If client_result_cache_size is set to 0, client result caching is off and nothing will be cached.

(OR)

Client caching can also be removed entirely by commenting out the client_result_cache_size setting in the init.ora, or by removing that entry altogether.

Once the DB comes back on, you can check the value from SQL*Plus by doing:

SQL> show parameter CLIENT_RESULT_CACHE_SIZE;

2. Alternatively;

If spfile is used;

i. connect / as sysdba to database
ii. alter system set client_result_cache_size=64000 scope=SPFILE; -- value set to 64KB, please note that this is just a sample value.
iii. shutdown the database
iv. startup the database

Once the DB comes back on, you can check the value from SQL*Plus by doing:

SQL> show parameter CLIENT_RESULT_CACHE_SIZE;
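Putting the spfile route together, a minimal sketch of disabling the client result cache completely:

SQL> connect / as sysdba
SQL> alter system set client_result_cache_size=0 scope=spfile;
SQL> shutdown immediate
SQL> startup
SQL> show parameter client_result_cache_size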

'client_result_cache_size' is enforced on the client side. Once that limit is reached, all subsequent connections will fail; that is why a value of 0 is recommended for a high volume of transactions, in which case there is no caching. Even when a limit is set, the cache is reset only when the database is bounced.

Windows 7 64-bit (unable to connect to SQL data source, error# -1)

Error in odbc.dbf : Connectivity error: [Microsoft][ODBC driver for Oracle][Oracle]ORA-12541: TNS:no listener

Solution:
I changed scb1213srv to scb1213 (or another name) in directy.dbf, odbcad32, and tnsnames.ora; now it is working.

Cause:
The client IT team creates the ODBC data source with a registry batch file, and scb1213srv may still be present somewhere, so the ODBC entry does not point to the listener address properly.

Steps
Check your PATH and ORACLE_HOME environment variables; they should point to the location where you installed the client and under which your tnsnames.ora file is located.
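For reference, the tnsnames.ora alias that the ODBC DSN resolves should look roughly like this (host, port and service name are placeholders for the environment in question):

SCB1213 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbserver01)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = scb1213))
  )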

ORA-22288 File Or LOB Operation FILEOPEN Failed On Windows2008 R2 OS SAN Attached Drive [ID 1504521.1]

Applies to:
Oracle Database - Enterprise Edition - Version 11.2.0.2 and later
Microsoft Windows x64 (64-bit)
Symptoms
Error
ORA-22288: file or LOB operation FILEOPEN failed on Windows2008 R2 OS SAN attached drive
Directory object refers to a directory which is a LUN on the netapp.
Re-pointing the Directory object to a folder on C:\ drive of the server with same permissions as the directory on the
LUN, the problem is resolved
Cause
Same symptoms described in
Bug 12667656 - DBMS_LOB.FILEOPEN() FAILS WITH ORA-22288 ON WINDOWS 2008 R2 LUN
which is closed as not a BUG
A LUN may be implemented via junctions/symlinks on Windows, and the use of symlinks/junctions with BFILEs is disabled for security reasons.
This restriction was introduced in unpublished Bug 2262685.
Solution
1. Use the new "disable_directory_link_check" access parameter, available in 11.2.0.3 and later versions. This access parameter disables the additional checks for junctions.
For this, upgrade to the latest 11.2.0.2 bundle patch, or to 11.2.0.3, to get the new feature before creating an external table with the "disable_directory_link_check" access parameter.
-- OR --
2. Disable the check and allow links by turning on the hidden internal parameter "_kolfuseslf" in a DBA (SYS)
session, i.e. run
SQL> alter session set "_kolfuseslf" = true;
before executing the previously failing code/scripts





alter session set "_kolfuseslf" = true;


alter system set "_kolfuseslf" = true;


ora-600 [16513] [1403]

http://www.sql.ru/forum/377566/vopros-pro-shutdown-abort

ora-600 [16513] [1403] [20]


SQL> select count(*) from obj$;

  COUNT(*)
----------
      6296

SQL> delete from obj$;

6296 rows deleted.

SQL> shutdown abort
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area  135760720 bytes
Fixed Size                    731984 bytes
Variable Size              104857600 bytes
Database Buffers            29360128 bytes
Redo Buffers                  811008 bytes
Database mounted.
ORA-01092: ORACLE instance terminated. Disconnection forced



After all objects are created from bootstrap$, on the first recursive call:
=====================
PARSING IN CURSOR #5 len=52 dep=1 uid=0 oct=3 lid=0 tim=2701521679320 hv=782031521 ad='84756cb8'
select ctime, mtime, stime from obj$ where obj#=:1
END OF STMT
PARSE #5: c=0, e=966, p=0, cr=0, cu=0, mis=1, r=0, dep=1, og=0, tim=2701521679306
BINDS #5:
 bind 0: dty=2 mxl=22(22) mal=00 scl=00 pre=00 oacflg=08 oacfl2=1 size=24 offset=0
   bfp=ffffffff7cc6a138 bln=22 avl=02 flg=05
   value=4
EXEC #5: c=10000, e=1340, p=0, cr=0, cu=0, mis=0, r=0, dep=1, og=4, tim=2701521681024
WAIT #5: nam='db file sequential read' ela=109 p1=1 p2=94 p3=1
WAIT #5: nam='db file sequential read' ela=78 p1=1 p2=1291 p3=1
Begin cleaning out block ...
Block header dump: 0x0040050b
 Object id on Block? Y
 seg/obj: 0x24  csc: 0x00.2452  itc: 2  flg: O  typ: 2 - INDEX
     fsl: 2  fnx: 0x0 ver: 0x01

 Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x000a.00a.00000004  0x008000ba.0002.01  CB--    0  scn 0x0000.000021c0
0x02   0x0002.013.00000378  0x00800025.0075.0f  ----  298  fsc 0x0fd9.00000000

[snipped]

----- end of leaf block dump -----
All committed transactions found

[snipped]

Block cleanout record, scn: 0x0000.001b3c72 ver: 0x01 opt: 0x01, entries follow...
  itli: 2  flg: 2  scn: 0x0000.00000001
Block header dump: 0x0040050b
 Object id on Block? Y
 seg/obj: 0x24  csc: 0x00.1b3c72  itc: 2  flg: O  typ: 2 - INDEX
     fsl: 0  fnx: 0x0 ver: 0x01

 Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x000a.00a.00000004  0x008000ba.0002.01  CB--    0  scn 0x0000.000021c0
0x02   0x0002.013.00000378  0x00800025.0075.0f  C---    0  scn 0x0000.00000001

-- i.e. the cleanout simply commits the ITL entries without any further checks, and as a result:

... clean out dump

Internal or fatal error
ORA-00600: internal error code, arguments: [16513], [1403], [4], [], [], [], []

-- ORA-1403 is "no data found": the recursive lookup against obj$ finds no row, because obj$ was emptied



ORA-00350 log 3 of instance beb1314 (thread 1) needs to be archived / ORA-00312 online log 3 thread 1: 'E:\BEBD1314\REDO03.LOG'

ORA-00350 ORA-00312


SQL> alter database drop logfile group 3;
alter database drop logfile group 3
*
ERROR at line 1:
ORA-00350: log 3 of instance beb1314 (thread 1) needs to be archived
ORA-00312: online log 3 thread 1: 'E:\BEBD1314\REDO03.LOG'




16:36:22 SYS@eqdd011g SQL> ALTER DATABASE CLEAR LOGFILE GROUP 3;
ALTER DATABASE CLEAR LOGFILE GROUP 3
*
ERROR at line 1:
ORA-00350: log 3 of instance eqdd011g (thread 1) needs to be archived
ORA-00312: online log 3 thread 1: '/u01/app/oracle/admin/eqdd011g/oradata/eqdd011g/redo03.log'


Elapsed: 00:00:00.17
16:38:00 SYS@eqdd011g SQL> ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;

Database altered.

Elapsed: 00:00:02.69
16:38:13 SYS@eqdd011g SQL> alter database drop logfile group 3;

Database altered.


alter database open;

ORA-16014 ORA-00312

----------ORA-16014 ORA-00312

Archive process error: ORA-16014: log 3 sequence# 29808 not archived, no available destinations
ORA-00312: online log 3 thread 1: 'E:\BEBD1112\REDO03.LOG'



That would happen if the destination filesystem is full (i.e. at 100%) and Oracle cannot write the archivelog to that filesystem.


The general solution for this error is:
- check the disk space
- check whether the archive location exists
- try clearing the unarchived log group:
   alter database clear unarchived logfile group <<No>>;
- bounce the database
- bounce the server



---------------------------Archive Log Creation Fails With Errors ORA-16014, ORA-00312 [ID 1373314.1]

Symptoms

When attempting to run "alter system switch logfile;", the session hangs and following errors are reported in the alert log and trace file:

Alert log :
Sun Oct 30 03:01:55 2011
ARC1: Failed to archive log 3 thread 3 sequence 122674
Sun Oct 30 03:01:55 2011
Errors in file /apps/oracle/9.2.0/admin/cwld/bdump/cwld_arc1_889040.trc:
ORA-16014: log 3 sequence# 122674 not archived, no available destinations
ORA-00312: online log 3 thread 3: '/data/oradata/cwld/redo3.log'


Changes

In our example, the database was moved (to new SAN).

Cause

In this case, the issue is caused by an inaccessible/incorrect archiving directory.

SQL> show parameter log_archive_dest_1

NAME                    TYPE    VALUE
-------------------- -------   ------------------------------
log_archive_dest_1  string     location=/data3/oradata/cwld/archive/


However, that location does not exist and the archive log cannot be created.

Solution

Correct the archive path to match the new location, for example;

SQL> alter system set log_archive_dest_1 = 'location=/data1/oradata/archive/';

Or

Remove the old directory path on the operating system and create the new directory path to match the Oracle archiving location.



------------------------------------------------------------------------------------------

Archive process error: ORA-16014: log 3 sequence# 53538 not archived, no available destinations
ORA-00312: online log 3 thread 1: 'E:\BEBD1314\REDO03.LOG'


alter system set db_recovery_file_dest_size=200m;


------------------------------

select Group#,Member from V$logfile;

select group#, status,bytes/1024/1024 from v$log;

 alter database add logfile group 4 ('E:\BEBD1314\REDO04.LOG') size 100m;
 alter database add logfile group 5 ('E:\BEBD1314\REDO05.LOG') size 100m;
 alter database add logfile group 6 ('E:\BEBD1314\REDO06.LOG') size 100m;


ALTER SYSTEM CHECKPOINT GLOBAL;

select group#, status from v$log;

alter database drop logfile group 1;

alter database drop logfile group 2;

alter database drop logfile group 3;

alter system switch logfile;

alter system switch logfile;

select group#, status,bytes/1024/1024 from v$log;

Note: You cannot drop the current logfile; the group must be in INACTIVE state.





ORA-00257 Archiver Error, Connect Internal Only Until Freed and ORA-16014 and ORA-00312

ORA-00257 Archiver Error, Connect Internal Only Until Freed and ORA-16014 and ORA-00312 [ID 376923.1]


I turned off archivelog mode to stop the error.

Symptoms

Users cannot connect to the database :

ORA-00257: archiver error, connect internal only until freed
ORA-16014: log 2 sequence# 231 not archived, no available destinations
ORA-00312: online log 2 thread 1: '/u02/oradata/wlbranch/redo02.log'





Cause

The flash_recovery_area is full.

SQL>  archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive Destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 231
Next log sequence to archive 231
Current log sequence 233



 SQL> select group#,status archived from v$log;
    GROUP# ARCHIVED
    ------------  ------------------------
    1                INVALIDATED
    2                INACTIVE
    3                INACTIVE


SQL> archive log all;
ORA-16020: less destinations available than specified by
LOG_ARCHIVE_MIN_SUCCEEDED_DEST


The only destination allowed (log_archive_min_succeed_dest=1) is not able to perform the archiving.
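Before changing anything it is worth confirming that the recovery area really is full; a quick check against the standard views:

select name, space_limit/1024/1024 limit_mb, space_used/1024/1024 used_mb,
       space_reclaimable/1024/1024 reclaimable_mb, number_of_files
  from v$recovery_file_dest;

select file_type, percent_space_used, percent_space_reclaimable
  from v$flash_recovery_area_usage;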


Solution

1. Either allow more space in the DB_RECOVERY_FILE_DEST with the DB_RECOVERY_FILE_DEST_SIZE parameter  :

SQL> alter system set db_recovery_file_dest_size=3G ;
2. Or, to avoid the situation once the 3 GB is full, set the following parameters so that when dest1 is full, archiving is automatically performed to the alternate dest2:

log_archive_dest_1='LOCATION=use_db_recovery_file_dest NOREOPEN ALTERNATE=LOG_ARCHIVE_DEST_2'
log_archive_dest_2='LOCATION=/other_dest_for_archiving'
log_archive_dest_state_1='enable'
log_archive_dest_state_2='alternate'
db_recovery_file_dest='/u01/app/oracle/product/10.1.0/db_1/flash_recovery_area'
db_recovery_file_dest_size=2G




ALTER SYSTEM SWITCH LOGFILE Statement hangs

ALTER SYSTEM SWITCH LOGFILE Statement hangs
ALTER SYSTEM SWITCH LOGFILE hangs:
Step 1: Login as sysdba.
Step 2: Enable log_archive_dest_state_1 Parameter.
SQL>alter system set log_archive_dest_state_1=enable scope=spfile;
System altered.
SQL> ALTER SYSTEM SWITCH LOGFILE;
System altered.

CreateFile error 32 when trying set file time oracle 11g windows 7

While installing the Oracle 10g Client and ODAC11gR2 (Oracle Data Provider for .NET) on Microsoft Windows XP Professional Service Pack 2, I got the following error message in the console:

CreateFile error 32 when trying set file time

After searching I found that this error message is caused by antivirus software; in my case the antivirus is Kaspersky. After disabling the antivirus I was able to install the Oracle Client without any warning or error message.


ora-02067 transaction or savepoint rollback required


Disable the trigger, retry the failing operation, then re-enable the trigger.
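ORA-02067 typically shows up when a statement inside a distributed transaction (for example a trigger working over a database link) fails and the whole transaction must be rolled back before anything else can be done. A minimal sketch of the disable / retry / enable cycle described above (trg_example is a placeholder trigger name):

alter trigger trg_example disable;

-- retry the statement that was failing with ORA-02067, then re-enable:
alter trigger trg_example enable;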

ORA-00333 redo log read error block





Add the following to the PFILE:

_ALLOW_RESETLOGS_CORRUPTION = true
UNDO_MANAGEMENT = MANUAL


Rename the SPFILE (so the instance starts from the PFILE).


recover database until cancel;
or
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;




alter database open resetlogs;

--------------------UNDO RECREATION

Now the undo is corrupted:
ORA-00600: internal error code, arguments: [4194]
Drop the corrupted undo tablespace and create a new undo tablespace.

But the database cannot be opened due to the undo corruption, so add the following to the pfile:
undo_management = manual
event = '10513 trace name context forever, level 2'


startup restrict pfile=<initsid.ora>


CREATE UNDO TABLESPACE UNDOTBS2 DATAFILE 'd:\rkdatabase\UNDOTBS02.ORA' SIZE 500M REUSE AUTOEXTEND ON;

shut immediate

change undo_tablespace=UNDOTBS2 into parameter file

startup

DROP TABLESPACE UNDOTBS1 INCLUDING CONTENTS AND DATAFILES;


Remove the added parameters from the pfile.

Dynamic change of CPU_COUNT causes ORA-600 [KsrpubwaitDeadlock] ORA-2097 Instance crash

Dynamic change of CPU_COUNT causes ORA-600 [Ksrpubwait:Deadlock] ORA-2097 Instance crash
Problem Description
A dynamic change of the CPU_COUNT parameter triggers Oracle bug ORA-600 [ksrpubwait:deadlock]. For example, issuing "alter system set cpu_count=5;" fails with the following errors:

ORA-600 [ksrpubwait:deadlock], [parameters to cluster db instances - broadcast channel]
ORA-2097: parameter cannot be modified because specified value is invalid

And then instance crashes.

If you look at the stack trace you will see:
ksrpubwait ksrpublish kspbcast kspset0 kxfpCPUCountUpdate ksbcpcb

Cause of the Problem
These errors are caused by Oracle Bug 7535429.

Solution of the Problem
Solution 01: Bug 7535429 is fixed in version 11.2, so upgrading your Oracle database to 11gR2 will solve the problem.
There is also a one-off patch that can solve the problem; check MetaLink for the availability of one-off Patch 7535429 for your platform.

Solution 02: Explicitly set CPU_COUNT to the new value.

Solution 03: Starting the instance from a pfile instead of an spfile also avoids the problem.

Solution 04: Explicitly set PARALLEL_MAX_SERVERS to its current value instead of relying on the default.

ORA-600 [6006] ORA-600 [6856]

During startup, the instance terminated after ORA-600 [6006] and ORA-600 [6856]
Problem Description
While starting up, the instance fails with the following errors:

ORA-00600: internal error code, arguments: [6006], [1], [], [], [], [], [], []
ORA-00600: internal error code, arguments: [6856], [0], [60], [], [], []

and then SMON abnormally crashes the database instance.

Cause of the Problem
These are Oracle bugs; ORA-600 [6006] and ORA-600 [6856] occur when undo segments try to roll back a failed transaction and cannot.

For the error "ORA-00600: internal error code, arguments: [6006], [1], [], [], [], [], [], []", Oracle is undoing an index leaf key operation; if the key is not found, ORA-00600 [6006] is logged.

For the error "ORA-00600: internal error code, arguments: [6856], [0], [60], [], [], []", SMON is trying to recover a dead transaction, but the undo application runs into an internal error (trying to delete a row that is already deleted).

Solution of the Problem

1) Review the trace files and look for the object(s) involved. If the trace file does not have a SQL statement, search on the following: "block dump header"

2) In the block header there will be a seg/obj = hex value. Convert the hex to dec and this will give you the data_object_id.

3) The alert.log may also show the affected object, for example:

ORACLE Instance ORCL (pid = 8) - Error 600 encountered while recovering transaction (9999, 36) on object 45879.

SQL> select owner, object_name from dba_objects where data_object_id = <data_object_id>;

This will be the object you need to work with.
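Steps 2 and 3 can be done directly from SQL*Plus; a small sketch using the seg/obj value 0x24 from a block dump and the object id 45879 from the alert.log example above (both just illustrations):

-- convert the hex seg/obj value from the block header dump to decimal
select to_number('24', 'XXXXXXXX') data_object_id from dual;

-- then identify the affected object (45879 is the id from the sample alert.log message)
select owner, object_name, object_type
  from dba_objects
 where data_object_id = 45879;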


To implement the solution:

1. shutdown the oracle instance.
SQL> shut immediate

2. Set the event "10513 trace name context forever, level 2" (this event disables the transaction recovery initiated by SMON). Open the pfile and add the following line to it:

event="10513 trace name context forever, level 2"

3. startup instance
SQL> startup

4.
- If the object is an index - drop and recreate.
- If the object is a table - drop / export / Create Table as Select (CTAS) to change the object ID

5. shutdown the instance
SQL> shut immediate

6. remove the event
Open the pfile and remove the event line that was added.

7. startup the instance
SQL> startup

ORA-600 [17147] ORA-600 [Kghlkrem1] ORA-7445 kghlkremf() database crash

ORA-600 [17147] ORA-600 [Kghlkrem1] ORA-7445 kghlkremf() database crash
During normal database activity, the following ORA-00600 errors appeared in the alert log and caused the database to crash.
ORA-00600: internal error code, arguments: [17147], [0xC0000003E4CE10A5], [], [], [], [], [], []
ORA-00600: internal error code, arguments: [KGHLKREM1], [0xC0000003E4CE10B0], [], [], [], []
ORA-7445: exception encountered: core dump [kghlkremf()+33] [SIGSEGV]
If you look at the trace file, you will no longer see the SQL that caused this bug.

The call stack is as follows:

kghfrmrg kghfre qsmqktcc ktcdso ktcrcm ktdcmt k2lcom k2send xctctl xctcom_with_options opicom

or

k2send xctctl xctcom_with_options opicom opiodr

or

kghalo ktcccadd kwqidracbk kwqidcpmc kwqidafm0 kwqididqx kpoaqdq

or

kghadd_reserved_extent kghget_reserved_extent kghgex kghfnd

Cause of the Problem
The problem is caused by an Oracle bug involving abnormal termination of queries that use Bloom filters, which manifests itself as memory corruption: the memory was freed, reallocated to another client, and corrupted while slaves were still writing into it.

Solution of the Problem
Solution 01:
This bug is fixed in Oracle Database version 10.2.0.4, so apply the 10.2.0.4 patchset to solve this problem.

Solution 02:
As a workaround, you can set the Oracle hidden parameter _bloom_filter_enabled to false. The parameter can be set dynamically.

If you are using an spfile, set it as:
SQL> connect / as sysdba
SQL> alter system set "_bloom_filter_enabled"=false scope=both;

If you use a pfile to start your database, omit the scope clause:
SQL> alter system set "_bloom_filter_enabled"=false;

ORA-00600 [730] [SPACE LEAK] OR ORA-600 [LIBRARYCACHENOTEMPTYONCLOSE] during shutdown

ORA-00600 [730] [SPACE LEAK] OR ORA-600 [LIBRARYCACHENOTEMPTYONCLOSE] during shutdown
Problem Description
While shutting down the Oracle database, the following error occurred.

In oracle 10g and 11g,
ORA-00600: internal error code, arguments: [730], [4100736], [space leak], [], [], [], [], [], [], [], [], []

If you look at the stack trace, it will look like:
ksesic2 <- ksmshu <- opistp_real <- opistp <- opiodr <- ttcpip <- opitsk <- opiino <- opiodr <- opidrv <- sou2o <- main <- start

In Oracle 10g the error looks like:
ORA-00600: internal error code, arguments: [LibraryCacheNotEmptyOnClose], [], [] ,[], [], [], [], []

If you look at the stack trace, it will look like: kglshu <- kqlnfy <- kscnfy <- ksmshu <- opistp_real <- opistp <- opiodr <- ttcpip <- opitsk <- opiino <- opiodr <- opidrv <- sou2o <- opimai_real


Cause of the Problem
The error occurred due to an Oracle bug, BUG 7572335. It happens during database shutdown when child cursors have previously been marked as kept in the shared pool. Note that this is a non-corruptive error.

Solution of the Problem
Solution 01: This bug is fixed in Oracle Database 11gR2, so upgrading to Oracle Database 11.2 or higher is one solution.

Solution 02: If you are using Oracle Database 10gR2, the bug is present up to patchset version 10.2.0.4 and is fixed in 10.2.0.5. You can upgrade to 10.2.0.5 by applying the patchset, Patch 8202632.

Solution 03: If available for your platform and version, you can apply one-off patch 7572335. Download and apply the fix, Patch 7572335.
