
Saturday, July 27, 2013

ORA-00439 feature not enabled Deferred Segment Creation 11g enterprise to standard edition downgrade


MOS Note 1293326.1


http://lifeandoracle.blogspot.in/2012/07/ora-00439-feature-not-enabled-deferred.html
http://ocpdba.wordpress.com/2009/10/09/deferred-segment-creation-segmentless-tables-11gr2-new-feature-and-dumb-questions/
http://asanga-pradeep.blogspot.in/2011/01/segment-creation-behavior-change.html
http://www.donotcommit.net/?p=51


------------------
1. Init.ora Parameter "DEFERRED_SEGMENT_CREATION" Reference Note [ID 1216282.1]
2. IMP-00003, ORA-00959 While Importing Data Into Existing Table Of 11gR2 Using Traditional Import [ID 1180873.1]
3. Bug 8795792 - DBMS_METADATA.get_ddl generates wrong keywords for compressed indexes in 11.2 [ID 8795792.8]
4. Exporting Schema In 11.2 Database With 10g Client Gives ORA-1455 [ID 1083330.1]
-----------------

If you run into this error, you have just hit one of the new features of 11gR2. Unfortunately, this feature makes the life of the casual DBA a little more complicated than necessary.

I found an interesting debate in the OTN forums about whether this should be considered a feature or a bug, but I very much appreciated the last comment, where Michiel provided a working solution without delving into philosophical matters. I remember doing the same thing when dealing with another EXPDP/IMPDP problem between versions 11.1.0.6 and 11.1.0.7.

---------------------------Solution 1--------------

-- VERSION=10.2 makes Data Pump write a 10.2-compatible dump, which omits the
-- 11gR2-only deferred segment creation clause from the table DDL.

-- on the source instance
EXPDP ldbo/ldbo@apx1314srv full=Y dumpfile=expdp1314.dmp directory=DATA_DUMP_DIR logfile=expdp1314.log version=10.2

-- on the target instance
IMPDP ldbo/ldbo@apx1314srv full=Y dumpfile=expdp1314.dmp directory=DATA_DUMP_DIR logfile=expdp1314.log version=10.2


---------------------------Solution 2--------------

For 11.2.0.1, one way to avoid the issue is to allocate the segments manually before the export. This can be done with

alter table <table_name> allocate extent;

and the following query generates the needed statements:

select 'alter table '||table_name||' allocate extent;' from user_tables where segment_created = 'NO';



set autocommit on;
set line 124;

select 'ALTER TABLE ' || TABLE_NAME || ' ALLOCATE EXTENT;' from user_tables where SEGMENT_CREATED = 'NO';
select 'ALTER TABLE '|| TABLE_NAME || ' ALLOCATE EXTENT;' from user_lobs where SEGMENT_CREATED = 'NO';
select 'ALTER INDEX '|| INDEX_NAME || ' ALLOCATE EXTENT;' from user_indexes where SEGMENT_CREATED = 'NO';



---------------Solution 3--------

SQL> alter system set deferred_segment_creation=false;
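
Note that changing the parameter only affects tables created afterwards; tables that already exist without segments still need the ALLOCATE EXTENT treatment from Solution 2. A quick check of the current setting:

-- verify the current value (any 11.2 database)
select name, value from v$parameter where name = 'deferred_segment_creation';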



===================
From 11g Release 2, tables created without rows do not allocate any segments by default. If you want to change this behaviour, set the spfile parameter "deferred_segment_creation" to FALSE. The people installing Peoplesoft, SAP, Siebel and other thousand-table data models are really thankful for this new feature.

How does it work?

If the INITIAL extent of a table is 15 megabytes in size but the table is empty, the table won't claim that 15 MB of space. As soon as the first row of data is inserted (even a few bytes), Oracle applies the table's storage settings before inserting that row, and only then does the table actually own the space. You can create a 100GB table in a 100MB tablespace as long as the table stays empty, because the space is only claimed when the table receives its first row.
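
A minimal demonstration of this behaviour (DEMO_T is a made-up table name; run in any 11.2 Enterprise Edition schema):

-- no segment is allocated at creation time
CREATE TABLE demo_t (n NUMBER) SEGMENT CREATION DEFERRED;

SELECT segment_created FROM user_tables WHERE table_name = 'DEMO_T';
-- returns NO: no segment, no space used yet

INSERT INTO demo_t VALUES (1);
COMMIT;

SELECT segment_created FROM user_tables WHERE table_name = 'DEMO_T';
-- returns YES: the first row triggered segment creation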

========================

In 11.2 the initialization parameter DEFERRED_SEGMENT_CREATION controls whether segment space is allocated at segment creation time. It defaults to TRUE on both Enterprise Edition and Standard Edition, but on Standard Edition the parameter has no effect because the feature is not enabled there. The parameter can be overridden at the table level with
SQL> CREATE TABLE x(a number) SEGMENT CREATION IMMEDIATE;
or
SQL> CREATE TABLE x(a number) SEGMENT CREATION DEFERRED;
But on standard edition this would still give an error.
SQL>  CREATE TABLE x(a number) SEGMENT CREATION DEFERRED;
CREATE TABLE x(a number) SEGMENT CREATION DEFERRED
*
ERROR at line 1:
ORA-00439: feature not enabled: Deferred Segment Creation
The problem occurs when exporting empty tables from an Enterprise Edition database into a Standard Edition database. With the default behavior on Enterprise Edition, tables with no rows have no segments, whereas on Standard Edition tables are always created with segments.

In 11.2.0.1, importing segment-less tables from Enterprise into Standard Edition throws an error, while in 11.2.0.2 the tables are created with segments and no error.


========================


Oracle 11gR2 does NOT export empty tables when using exp.exe

http://tsells.wordpress.com/2011/02/10/oracle-11g2-does-not-export-empty-tables-when-using-exp-exe/

Out of habit I have been using the older import/export utilities in lieu of the Data Pump commands that Oracle recommends. As of 11gR2 I can no longer do this: exp.exe no longer exports tables that do not have an entry in DBA_SEGMENTS. This is due to a new feature named "deferred segment creation".
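
A quick way to list the tables exp.exe will silently skip, based on the DBA_SEGMENTS point above (run as a DBA):

-- tables with no segment yet, hence invisible to exp.exe
SELECT owner, table_name FROM dba_tables WHERE segment_created = 'NO';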

The syntax for the new Data Pump commands is different and requires some additional setup on the Oracle server itself. Files to be imported must now be placed in a Data Pump directory, which can be found by running the following query against the database server.

select owner, directory_name, directory_path from dba_directories
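
If no suitable directory exists, one can be created and granted; DP_DIR, the path and SCOTT below are placeholders:

-- run as a privileged user
CREATE DIRECTORY dp_dir AS 'D:\expdp';
GRANT READ, WRITE ON DIRECTORY dp_dir TO scott;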

================


The cause is a new feature of Oracle 11.2: deferred segment creation.
Newly created tables that have never held any rows have no corresponding row in DBA_SEGMENTS, and therefore do not get exported by the (deprecated) exp utility. Use expdp instead.


================

Don't forget: deferred segment creation also applies to indexes and LOBs, and indexes inherit the table's characteristics.
You can also query the SEGMENT_CREATED column of the USER_TABLES, USER_INDEXES, or USER_LOBS views. For non-partitioned tables, indexes and LOBs, this column shows YES if the segment has been created.
However, you do not need to allocate extents for LOBs and LOB indexes yourself, since they are allocated automatically by the database once you allocate the corresponding table extent.

================Export was always failing due to the empty tables. Here is my dynamic SQL======

DECLARE
   -- generate one ALTER TABLE ... ALLOCATE EXTENT per segment-less table
   CURSOR cur IS
      SELECT 'ALTER TABLE ' || table_name || ' ALLOCATE EXTENT' vSQL
        FROM user_tables
       WHERE segment_created = 'NO';
BEGIN
   FOR c IN cur LOOP
      BEGIN
         EXECUTE IMMEDIATE c.vSQL;
      EXCEPTION
         WHEN OTHERS THEN
            -- log the failure and continue with the remaining tables
            DBMS_OUTPUT.put_line (SUBSTR (SQLERRM, 1, 250));
      END;
   END LOOP;
END;
/
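
The same pattern extends to indexes created without segments (a sketch mirroring the block above, driven by USER_INDEXES):

DECLARE
   CURSOR cur IS
      SELECT 'ALTER INDEX ' || index_name || ' ALLOCATE EXTENT' vSQL
        FROM user_indexes
       WHERE segment_created = 'NO';
BEGIN
   FOR c IN cur LOOP
      BEGIN
         EXECUTE IMMEDIATE c.vSQL;
      EXCEPTION
         WHEN OTHERS THEN
            DBMS_OUTPUT.put_line (SUBSTR (SQLERRM, 1, 250));
      END;
   END LOOP;
END;
/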

Tuesday, November 6, 2012

11g File Watcher


http://docs.oracle.com/cd/E11882_01/server.112/e25494/scheduse005.htm
http://www.oracle-base.com/articles/11g/scheduler-enhancements-11gr2.php
http://awads.net/wp/2011/03/29/did-you-know-about-file-watchers/


A file watcher is a new Scheduler object that enables a new type of event-based job, triggered by the arrival of a file in a specified location. File watchers can monitor locations on the local server and on remote servers, provided the remote servers have a Scheduler agent installed.

We want Oracle to import all the objects belonging to the HR schema automatically whenever a .dmp file arrives in a specified folder on a system.
To implement this we will use a file watcher, the new 11g Scheduler object described above.

I am loading a file into an Oracle table every day between 8 a.m. and 9 a.m.; the file may arrive at any time in that window.
Until now, we have been checking for the file manually and then loading it into the table.
My client has asked us to automate this process, and my team wondered whether there is a way to schedule a job based on file arrival in a specified location.


SELECT file_watcher_name, destination, directory_path, file_name, credential_name FROM user_scheduler_file_watchers;

SELECT file_watcher_name, destination, directory_path, file_name, credential_name FROM dba_scheduler_file_watchers;

SELECT * FROM   DBA_scheduler_credentials;

/* Configuration */
-- by default, a destination is checked every 10 mins.
To change this:
CONN / AS SYSDBA

set serveroutput on
declare v varchar2(1000);
begin
DBMS_SCHEDULER.GET_ATTRIBUTE ( 'FILE_WATCHER_SCHEDULE','REPEAT_INTERVAL', v);
DBMS_OUTPUT.PUT_LINE(V);
end;
/

BEGIN
DBMS_SCHEDULER.set_attribute('file_watcher_schedule','repeat_interval','freq=minutely;interval=1');
END;
/

/* Creating File Watcher */
-- create OS credential:

BEGIN
DBMS_SCHEDULER.create_credential(credential_name => 'fw_credential',username => 'Administrator',password => 'adminadmin@123');
END;
/

-- Grant EXECUTE on the credential to the schema that owns the
-- event-based job that the file watcher will start:


GRANT EXECUTE ON fw_credential to LDBO;

-- create file watcher:

BEGIN
DBMS_SCHEDULER.create_file_watcher(
file_watcher_name => 'data_fw',
directory_path => 'd:\ldoutput',  -- '?' can be used to stand for ORACLE_HOME
file_name => '*.dat', -- wildcard supported
credential_name => 'fw_credential',
destination => NULL, -- NULL=local server
enabled => FALSE);
END;
/


-- Grant EXECUTE on the file watcher to any schema that owns an event-based job
-- that references the file watcher.

GRANT EXECUTE ON data_fw to LDBO;

-- create a program raised by the file watcher

BEGIN
DBMS_SCHEDULER.create_program(
program_name => 'import_data_prog',
program_type => 'stored_procedure',
program_action => 'import_data_proc',
number_of_arguments => 1,
enabled => FALSE);
END;
/

-- define the metadata argument using the event_message attribute;
-- the metadata contains info about the file, such as its name:

BEGIN
DBMS_SCHEDULER.define_metadata_argument(
program_name => 'import_data_prog',
metadata_attribute => 'event_message',
argument_position => 1);
END;
/

-- create the referenced procedure:
-- it must accept an argument of the SCHEDULER_FILEWATCHER_RESULT type

DROP TABLE received_files ;

CREATE TABLE received_files ( fileinfo VARCHAR2(4000), rdate date );

CREATE OR REPLACE PROCEDURE import_data_proc
(p_sfwr SYS.SCHEDULER_FILEWATCHER_RESULT) AS
v_message received_files.fileinfo%type;
BEGIN
v_message := p_sfwr.directory_path || '\' || p_sfwr.actual_file_name || ' (' ||p_sfwr.file_size || ')';
INSERT INTO received_files VALUES (v_message, sysdate);
COMMIT;
END;
/

-- create the job:

BEGIN
DBMS_SCHEDULER.create_job(
job_name => 'import_data_job',
program_name => 'import_data_prog',
event_condition => NULL,      -- 'tab.user_data.file_size < 1024'
queue_spec => 'data_fw', -- file watcher name
auto_drop => FALSE,
enabled => FALSE);
END;
/

-- By default, the arrival of new files will be ignored if the job is already running.
-- If you need the job to fire for each new arrival, regardless of whether the job is
-- already running or not, set the PARALLEL_INSTANCES attribute for the job to TRUE.
-- The job will then be run as a lightweight job:

BEGIN
DBMS_SCHEDULER.set_attribute('import_data_job','parallel_instances',TRUE);
END;
/

-- Enable all the objects:

EXEC DBMS_SCHEDULER.enable('data_fw,import_data_prog,import_data_job');

-- to test:
echo "This is a test" > D:\LDOUTPUT\f1.dat
echo "This is a test too" > D:\LDOUTPUT\f2.dat
echo "Yes another test" > D:\LDOUTPUT\f3.dat

select * from received_files order by rdate desc;

/* Managing File Watchers */
-- enable/disable
EXEC DBMS_SCHEDULER.enable('data_fw') ;
EXEC DBMS_SCHEDULER.disable('data_fw') ;

-- change an attribute:

begin
DBMS_SCHEDULER.SET_ATTRIBUTE (name => 'data_fw',attribute =>'directory_path',value =>'/home/oracle/receivedfiles' ) ;
end;
/

begin
DBMS_SCHEDULER.SET_ATTRIBUTE(
name => 'data_fw',
attribute =>'file_name',
value =>'*.txt' ) ;
end;
/

begin
DBMS_SCHEDULER.SET_ATTRIBUTE (name => 'data_fw',attribute =>'credential_name',value =>'fw_credential2' ) ;
end;
/

-- to drop a file watcher and its dependent objects:

EXEC DBMS_SCHEDULER.DROP_job('import_data_job');

EXEC DBMS_SCHEDULER.DROP_program('import_data_prog');

EXEC DBMS_SCHEDULER.DROP_FILE_WATCHER('data_fw');


BEGIN
DBMS_SCHEDULER.drop_credential(credential_name => 'fw_credential');
END;
/







---------------------------------------------------

SELECT * FROM Dba_Scheduler_Job_Run_Details ORDER BY 1 DESC;

ORA-29538: Java not installed
ORA-06512: at "SYS.DBMS_ISCHED", line 3696
ORA-06512: at "SYS.DBMS_ISCHED", line 3907

ORA-29538 ORA-06512 FILE WATCHER

select * from product_component_version;
select comp_name,status from dba_registry;
select object_name from dba_objects where object_type like '%JAVA%' ;


set feedback on termout off
spool C:\install_java.log
@D:\app\Administrator\product\11.2.0\dbhome_1\javavm\install\initjvm.sql
@D:\app\Administrator\product\11.2.0\dbhome_1\xdk\admin\initxml.sql
@D:\app\Administrator\product\11.2.0\dbhome_1\xdk\admin\xmlja.sql
@D:\app\Administrator\product\11.2.0\dbhome_1\rdbms\admin\catjava.sql
@D:\app\Administrator\product\11.2.0\dbhome_1\rdbms\admin\catexf.sql
spool off

-------------Remove JVM:
SQL> @?/javavm/install/jvmrm.sql


-----------------------------Installing JVM on Oracle Database 11gR2
JVM was not installed on our Oracle 11gR2 database, so we installed it manually by executing the steps below from a SQL*Plus login directly on the server (please follow all the steps):

spool /home/oracle/ODMRINSTL/JVM/jvm_install.txt
set echo on
connect / as sysdba
shutdown immediate
startup mount
alter system set "_system_trig_enabled" = false scope=memory;
alter database open;
@?/javavm/install/initjvm.sql
@?/xdk/admin/initxml.sql
@?/xdk/admin/xmlja.sql
@?/rdbms/admin/catjava.sql
@?/rdbms/admin/catexf.sql
shutdown immediate
set echo off
spool off
exit

Verify with below queries & outputs:

SQL> select owner, count(*) from all_objects where object_type like '%JAVA%' group by owner;

OWNER COUNT(*)
------ ----------
SYS 21313
EXFSYS 48

SQL> select comp_id,comp_name from dba_registry where comp_id in('CATJAVA','JAVAVM');

COMP_ID COMP_NAME
------- ------------------------------
JAVAVM JServer JAVA Virtual Machine
CATJAVA Oracle Database Java Packages

We followed the above in our case. For details and for uninstallation, please go through MOS Note 1112983.1 and prepare the steps for your own case.

-----------------------------------------------jssu.exe error on server

Problem signature:
  Problem Event Name: APPCRASH
  Application Name: jssu.exe
  Application Version: 0.0.0.0
  Application Timestamp: 4bb1e0d1
  Fault Module Name: StackHash_cc43
  Fault Module Version: 6.0.6001.18538
  Fault Module Timestamp: 4cb73957
  Exception Code: c0000374
  Exception Offset: 00000000000a7857
  OS Version: 6.0.6001.2.1.0.274.10
  Locale ID: 1033
  Additional Information 1: cc43
  Additional Information 2: dcf9b9d4873b0ed461caccd3a40f2997
  Additional Information 3: f0eb
  Additional Information 4: 0e4fbfefd1abdaa97da3f3d79632520f





ORA-29540: class schedFileWatcherJava does not exist
ORA-06512: at "SYS.DBMS_ISCHED", line 3744
ORA-06512: at "SYS.DBMS_ISCHED", line 3956

Monday, November 5, 2012

11g Migration using RMAN


The steps for 10g database:

1- Run the utlu112i.sql pre-upgrade script. You can find this script under $ORACLE_HOME/rdbms/admin/; it must be copied from the 11g database software to the 10g server.

SQL> @$ORACLE_HOME/rdbms/admin/utlu112i.sql

This script adds a column named tz_version to the registry$database table and updates it with the value of the following query.

SQL> select version from v$timezone_file;

In effect, it performs the following operations:

SQL> ALTER TABLE registry$database ADD (tz_version NUMBER);
SQL> UPDATE registry$database SET tz_version = 4;

SQL> ALTER PACKAGE "SYS"."DBMS_REGISTRY" COMPILE BODY;
SQL> ALTER VIEW "SYS"."DBA_REGISTRY_DATABASE" COMPILE;

2- Connect to 10g database and take RMAN full backup.

#rman target /
RMAN> backup as backupset database;

3- Copy 10g database backup files and archive files to 11g database server.

The steps for 11g database:

1- Create temporary pfile in $ORACLE_HOME/dbs

*.audit_file_dest='/oracle/admin/TALIPDB/adump'
*.compatible='11.2.0.0.0'
*.control_files='+DATA/talipdb/controlfile/current.257.787742981','+DATA/talipdb/controlfile/current.258.787742983'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_create_online_log_dest_1='+RECO'
*.db_domain=''
*.db_name='TALIPDB'
*.diagnostic_dest='/oracle'
*.job_queue_processes=0
*.open_cursors=300
*.pga_aggregate_target=1G
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=2G
*.undo_tablespace='UNDOTBS1'

2- Open the database in NOMOUNT state.

# rman target /

RMAN> startup nomount;

3- Restore controlfile from backup.

RMAN> restore controlfile from '/oracle/ora11g/talipdb/backup/c-784951186-20120620-02';

4- Open the database in MOUNT state.

RMAN> alter database mount;

5- Catalog RMAN backup files and archive log files.

RMAN> catalog start with '/oracle/ora11g/talipdb/backup';
RMAN> catalog start with '/oracle/ora11g/talipdb/archive';

6- Restore 10g database backup to +DATA diskgroup and perform incomplete recovery.

RMAN> run
{
allocate channel c1 device type disk;
SET NEWNAME FOR DATAFILE 1 TO '+DATA';
SET NEWNAME FOR DATAFILE 2 TO '+DATA';
SET NEWNAME FOR DATAFILE 3 TO '+DATA';
SET NEWNAME FOR DATAFILE 4 TO '+DATA';
restore database until sequence 4;
switch datafile all;
recover database until sequence 4;
}

7-  Open the database with RESETLOGS UPGRADE.

# sqlplus / as sysdba
SQL> alter database open resetlogs upgrade;

8- Run the upgrade script.

SQL> SPOOL upgrade.log
SQL> @$ORACLE_HOME/rdbms/admin/catupgrd.sql
SQL> SPOOL off

9- If the 10g and 11g OS platforms differ in word size (for example, moving from 32-bit to 64-bit), you must also run the utlmmig.sql script.

--------------Changing 32 bit to 64 bit--------------
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP UPGRADE
SQL> SPOOL migrate.log
SQL> @$ORACLE_HOME/rdbms/admin/utlmmig.sql
SQL> SPOOL off
-------------------------------------------

10- Now, you can open the database.

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP

11- Run the Post-Upgrade script to check problems.

SQL> @$ORACLE_HOME/rdbms/admin/utlu112s.sql

12- Compile invalid objects.

SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql
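
After utlrp.sql completes, a quick check that all components are VALID (a standard dictionary query):

SQL> select comp_name, version, status from dba_registry;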

13- Drop the old temp file and create a new one in the +DATA diskgroup.

SQL> alter tablespace temp drop tempfile '/data_TALIPDB/temp01.dbf';
SQL> alter tablespace temp add tempfile '+DATA' size 1024M;

Tuesday, September 25, 2012

expdp procedure in 11g


 1) create directory export_auto as 'd:\expdp1213';

 create user dba_export_user identified by test123;

grant connect, create database link, resource, create view to dba_export_user;
grant unlimited tablespace to dba_export_user;
grant exp_full_database to dba_export_user;
grant read,write on directory export_auto to dba_export_user;
grant execute on dbms_flashback to dba_export_user;
grant create table to dba_export_user;
grant FLASHBACK ANY TABLE to dba_export_user;


2)
CREATE OR REPLACE PROCEDURE dba_export_user.start_export
IS
   hdl_job       NUMBER;
   l_job_state   VARCHAR2 (20);
BEGIN
   -- drop the leftover Data Pump master table from a previous run, if any
   BEGIN
      EXECUTE IMMEDIATE 'drop table dba_export_user.AUTO_EXPORT';
   EXCEPTION
      WHEN OTHERS THEN NULL;
   END;

   hdl_job := DBMS_DATAPUMP.OPEN (operation => 'EXPORT', job_mode => 'FULL', job_name => 'AUTO_EXPORT');

   -- dump file and log file; reusefile => 1 overwrites existing files
   DBMS_DATAPUMP.add_file (handle => hdl_job, filename => 'EXPDP1213.dmp', directory => 'EXPORT_AUTO', filetype => DBMS_DATAPUMP.ku$_file_type_dump_file, reusefile => 1);

   DBMS_DATAPUMP.add_file (handle => hdl_job, filename => 'export.log', directory => 'EXPORT_AUTO', filetype => DBMS_DATAPUMP.ku$_file_type_log_file, reusefile => 1);

   DBMS_DATAPUMP.start_job (handle => hdl_job);

   DBMS_DATAPUMP.wait_for_job (handle => hdl_job, job_state => l_job_state);

   DBMS_OUTPUT.put_line ('Job exited with status: ' || l_job_state);

   DBMS_DATAPUMP.detach (handle => hdl_job);
END;
/


3) Schedule the job (change the start date and time as needed):

begin
 dbms_scheduler.create_job(
      job_name => 'EXPORT_JOB'
     ,job_type => 'STORED_PROCEDURE'
     ,job_action => 'dba_export_user.start_export'
     ,start_date => '08-FEB-12 06.02.00.00 PM ASIA/CALCUTTA'
       ,repeat_interval => 'FREQ=DAILY; BYDAY=MON,TUE,WED,THU,FRI,SAT,SUN;'
     ,enabled => TRUE
     ,comments => 'EXPORT_DATABASE_JOB');
end;
/

EXEC  dbms_scheduler.run_job('EXPORT_JOB');
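
To verify that the scheduler job ran and whether a Data Pump job is still active, queries along these lines can help (the job name matches the one created above):

-- scheduler run history for our job
select job_name, status, actual_start_date from dba_scheduler_job_run_details where job_name = 'EXPORT_JOB' order by actual_start_date desc;

-- Data Pump jobs still defined or executing
select owner_name, job_name, state from dba_datapump_jobs;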

Friday, September 14, 2012

11g Active Data Guard - enabling Real-Time Query


Active Data Guard is a great new feature in 11g (although it requires a separate license) which enables us to query the standby database while redo logs are being applied to it. In earlier releases, we had to stop the log apply, open the database in read-only mode, and then restart the log apply when the database was taken out of read-only mode.
With Oracle 11g Active Data Guard, we can make use of our standby site to offload reporting and query type applications while at the same time not compromising on the high availability aspect.
How do we enable Active Data Guard?
If we are not using the Data Guard Broker, we need to open the standby database, set it in read only mode and then start the managed recovery as shown below.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area 1069252608 bytes
Fixed Size 2154936 bytes
Variable Size 847257160 bytes
Database Buffers 213909504 bytes
Redo Buffers 5931008 bytes
Database mounted.
Database opened.
SQL> recover managed standby database using current logfile disconnect;
Media recovery complete.
If we are using the Data Guard Broker CLI, DGMGRL, the procedure is a bit different and is not very clearly explained in the documentation.
You need to stop redo apply first via the DGMGRL SET STATE command, then open the database in read-only mode from a SQL*Plus session, and then start redo apply again via the same SET STATE command.
Stop redo apply with the following command from Data Guard Broker CLI
DGMGRL> EDIT DATABASE 'PRODDB' SET STATE='APPLY-OFF';
Open standby read-only via SQL*Plus
SQL> alter database open read only;
Restart redo apply via broker CLI
DGMGRL> EDIT DATABASE 'PRODDB' SET STATE='APPLY-ON';
I tried to run the same only via DGMGRL and got this error:
DGMGRL> edit database PRODDB set state='APPLY-OFF';
Succeeded.
DGMGRL> edit database PRODDB set state='READ ONLY';
Error: ORA-16516: current state is invalid for the attempted operation
After we have enabled the Real-Time Query feature, we can confirm it via the DGMGRL SHOW DATABASE command:
DGMGRL> show database verbose PRODDB_DR
Database – PRODDB_DR
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds
Apply Lag: 0 seconds
Real Time Query: ON
Note:
Even though we have enabled the Real-Time Query feature, the Data Guard page in the Enterprise Manager Grid Control GUI will show Real-Time Query as Disabled.
This is apparently a bug which applies to OEM Grid Control 10.2.0.1 to 10.2.0.5 with a 11.2 target database.
Bug 7633734: DG ADMIN PAGE REAL TIME QUERY SHOWS DISABLED WHEN ENABLED FOR 11.2 DATABASES
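
Apart from DGMGRL, the standby's state can be cross-checked from SQL*Plus; the expected values on an Active Data Guard standby are shown as comments:

-- run on the standby instance
select open_mode, database_role from v$database;
-- OPEN_MODE: READ ONLY WITH APPLY, DATABASE_ROLE: PHYSICAL STANDBY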

Wednesday, August 29, 2012

Oracle 11g Advanced Compression


You can implement the Advanced Compression option in databases whose tables see heavy INSERT and UPDATE activity, to manage fast-growing data volumes.

The following are simple commands to enable advanced compression (tablename is a placeholder):

alter table tablename compress for all operations;

alter index index1 rebuild compress;
alter index index2 rebuild compress;


SELECT table_name, compression, compress_for FROM user_tables where table_name='tablename';

select index_name, compression, status from dba_indexes where table_name='tablename';


Please test the change on UAT first to measure the performance impact.



The following test case shows the difference between the 10gR2 table compression feature and 11gR2's Advanced Compression. Oracle introduced table-level compression in 10gR2. While it provided some storage reduction, 10g's table compression only compressed data during BULK LOAD operations; new and updated data were not compressed.

With 11g's Advanced Compression, new and updated data are also compressed, achieving the highest level of storage reduction while also improving performance, since compressed blocks result in more data being moved per I/O.

Note 1: Basic compression comes with Oracle 11g Enterprise Edition. OLTP table compression requires the Advanced Compression option, an extra-cost (US$11,500.00 per processor) perpetual option on top of Enterprise Edition.

Note 2: There is a tradeoff between disk I/O and CPU, and it depends on how your system is configured. If your performance bottleneck is disk I/O, you will almost certainly benefit from compression, because it saves a lot of disk reads. If, on the other hand, you are low on CPU, you might not.
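
Before buying the option, the 11gR2 DBMS_COMPRESSION advisor can estimate the OLTP compression ratio on your own data. A sketch, where SCOTT.EMP and the USERS scratch tablespace are placeholders:

set serveroutput on
DECLARE
   l_blkcnt_cmp    PLS_INTEGER;
   l_blkcnt_uncmp  PLS_INTEGER;
   l_row_cmp       PLS_INTEGER;
   l_row_uncmp     PLS_INTEGER;
   l_cmp_ratio     NUMBER;
   l_comptype_str  VARCHAR2(100);
BEGIN
   DBMS_COMPRESSION.get_compression_ratio(
      scratchtbsname => 'USERS',   -- scratch tablespace (placeholder)
      ownname        => 'SCOTT',   -- placeholder owner
      tabname        => 'EMP',     -- placeholder table
      partname       => NULL,
      comptype       => DBMS_COMPRESSION.comp_for_oltp,
      blkcnt_cmp     => l_blkcnt_cmp,
      blkcnt_uncmp   => l_blkcnt_uncmp,
      row_cmp        => l_row_cmp,
      row_uncmp      => l_row_uncmp,
      cmp_ratio      => l_cmp_ratio,
      comptype_str   => l_comptype_str);
   DBMS_OUTPUT.put_line('Estimated compression ratio: ' || l_cmp_ratio);
END;
/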

------------------------------Test Case-----------------------------------------------

Following test case was executed in 10g database server. 

A table called TEST was created without COMPRESSION option. 

SQL> select table_name,compression from dba_tables where table_name = 'TEST';

TABLE_NAME COMPRESS
------------------------- -------------
TEST DISABLED


SQL> select sum(bytes) from dba_segments where segment_name = 'TEST';

SUM(BYTES)
------------------
92274688 


The size of the table was around 92MB.

Now create another table called TEST_COMPRESSED with COMPRESS option. 

SQL> create table TEST_COMPRESSED COMPRESS as select * from test;

Table created.

SQL> select table_name, compression from dba_tables where table_name like 'TEST
%';

TABLE_NAME COMPRESS
------------------------------ ---------------
TEST_COMPRESSED ENABLED
TEST DISABLED


Now let’s check the size of the COMPRESSED table.


SQL> select sum(bytes) from dba_segments where segment_name = 'TEST_COMPRESSED';

SUM(BYTES)
----------------
30408704

Check out the size of the COMPRESSED table: it is only 30MB, roughly a third of the original size. So far so good.

Now let’s do a plain insert into the COMPRESSED table.

SQL> insert into TEST_COMPRESSED select * from TEST;

805040 rows created.

SQL> commit;

Commit complete.

Let’s check the size of the COMPRESSED table. 

SQL> select sum(bytes) from dba_segments where segment_name = 'TEST_COMPRESSED'

2 /

SUM(BYTES)
----------
117440512


Wow! From 30MB to 117MB? So, plain INSERT statement does not COMPRESS the data in 10g.

(You will see this is not the case with 11g)

Now let’s do the same insert with a BULK LOAD 

SQL> insert /*+ APPEND */ into TEST_COMPRESSED select * from TEST;

805040 rows created.

SQL> commit;

Commit complete.

SQL> select sum(bytes) from dba_segments where segment_name = 'TEST_COMPRESSED';


SUM(BYTES)
----------
142606336

Ok, now the size of the COMPRESSED table is 142MB, up from 117MB. For the same number of rows, the table size only increased by 25MB, so BULK LOAD does compress the data.

Let’s check other DML statements such as DELETE and UPDATE against the COMPRESSED table. 


SQL> delete from test_compressed where rownum < 100000;

99999 rows deleted.

SQL> commit;

Commit complete.

SQL> select sum(bytes) from dba_segments where segment_name = 'TEST_COMPRESSED';

SUM(BYTES)
----------
142606336

No change in total size of the table. DELETE has no impact as expected.


Let’s check UPDATE. 

SQL> update test_compressed set object_name = 'XXXXXXXXXXXXXXXXXXXXXXXXXX' where
rownum < 100000;

99999 rows updated.

SQL> commit;


SQL> select sum(bytes) from dba_segments where segment_name = 'TEST_COMPRESSED';

SUM(BYTES)
----------
150994944

The table size increased by 8MB, so there is no compression for the UPDATE statement either.


All this clearly shows that 10g's table COMPRESSION works great for initial BULK LOADs; however, subsequent UPDATEs, DELETEs and INSERTs will not result in COMPRESSED blocks.





Now, let’s see 11g’s Test Results.

The following SQL statements were executed against 11.2.0.1 database version.


TEST table of 100MB in size was created as before.



SQL> select bytes from dba_segments where segment_name = 'TEST';

BYTES
----------
100663296

So 100MB of table created.

Let’s create a table with COMPRESS FOR ALL OPERATIONS option. This is only available in 11g.


SQL> create table test_compressed compress for all operations as select * from
test;

Table created.

SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';

BYTES
----------
31457280


Check out the size of the compressed table vs. the uncompressed table: the compressed copy uses about 30% of the space. Not a big difference compared to 10g.




Let’s check other DML statements.

Let’s do a plain insert to the compressed table.

SQL> insert into TEST_COMPRESSED select * from test;

789757 rows created.

SQL> commit;

Commit complete.

SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';

BYTES
----------
75497472



11g's Advanced Compression compressed the 100MB of inserted data down to roughly 44MB in the compressed table, WITHOUT the BULK LOAD option.

Now let’s do the BULK LOAD onto 11g’s COMPRESSED table.


SQL> insert /*+ APPEND */ into test_compressed select * from TEST;

789757 rows created.

SQL> commit;

Commit complete.

SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';

BYTES
----------
109051904
It has the same impact as the plain insert.

What about deletes and updates?


SQL> delete from test_compressed where rownum < 100000;

99999 rows deleted.

SQL> commit;

Commit complete.

SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';

BYTES
----------
109051904


No change for deletes. This is expected, as blocks are compressed when newly added rows fill a block up to its PCTFREE threshold.


SQL> update test_compressed set object_name = 'XXXXXXXXXXXXXXXXXXXXXXXXXX' where 

2 rownum < 100000;

99999 rows updated.

SQL> commit;

Commit complete.

SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';

BYTES
----------
109051904


There is no change in this case, as the existing blocks were able to accommodate the updates. However, the same update generated more data in 10g.


Wednesday, August 1, 2012

Purging trace and dump files with 11g ADRCI



In versions of Oracle prior to 11g, we had to use our own housekeeping scripts to purge the udump, cdump and bdump directories.
In Oracle 11g, we now have the ADR (Automatic Diagnostic Repository) which is defined by the diagnostic_dest parameter.
So how are unwanted trace and core dump files cleaned out in 11g automatically?
This is done by the MMON background process.
There are two time attributes which are used to manage the retention of information in ADR. Both attributes correspond to a number of hours after which the MMON background process purges the expired ADR data.
LONGP_POLICY (long term) defaults to 365 days and relates to things like incidents and Health Monitor warnings.
SHORTP_POLICY (short term) defaults to 30 days and relates to things like trace and core dump files.
The ADRCI command show control will show us what the current purge settings are as shown below.
adrci> show control

ADR Home = /u01/app/oracle/diag/rdbms/ttrlwiki/ttrlwiki:
*************************************************************************
ADRID                SHORTP_POLICY        LONGP_POLICY         LAST_MOD_TIME                            LAST_AUTOPRG_TIME                        LAST_MANUPRG_TIME                        ADRDIR_VERSION       ADRSCHM_VERSION      ADRSCHMV_SUMMARY     ADRALERT_VERSION     CREATE_TIME
-------------------- -------------------- -------------------- ---------------------------------------- ---------------------------------------- ---------------------------------------- -------------------- -------------------- -------------------- -------------------- ----------------------------------------
1095473802           720                  8760                 2010-07-07 08:46:56.405618 +08:00        2010-08-22 22:14:11.443356 +08:00                                                 1                    2                    76                   1                    2010-07-07 08:46:56.405618 +08:00
In this case it is set to the defaults of 720 hours (30 days) for the Short Term and 8760 hours (One year) for the long term category.
We can change this by using the ADRCI command ‘set control’
In this example we are changing the retention to 15 days for the Short Term policy attribute (note it is defined in Hours)
adrci> set control (SHORTP_POLICY =360)

adrci> show control

ADR Home = /u01/app/oracle/diag/rdbms/ttrlwiki/ttrlwiki:
*************************************************************************
ADRID                SHORTP_POLICY        LONGP_POLICY         LAST_MOD_TIME                            LAST_AUTOPRG_TIME                        LAST_MANUPRG_TIME                        ADRDIR_VERSION       ADRSCHM_VERSION      ADRSCHMV_SUMMARY     ADRALERT_VERSION     CREATE_TIME
-------------------- -------------------- -------------------- ---------------------------------------- ---------------------------------------- ---------------------------------------- -------------------- -------------------- -------------------- -------------------- ----------------------------------------
1095473802           360                  8760                 2010-08-27 09:36:09.385370 +08:00        2010-08-22 22:14:11.443356 +08:00                                                 1                    2                    76                   1                    2010-07-07 08:46:56.405618 +08:00
We can also manually purge information from the ADR using the ‘purge’ command from ADRCI (note this is defined in minutes and not hours!).
In this example we are purging all trace files older than 6 days. We see that the LAST_MANUPRG_TIME column is now populated.
adrci> purge -age 8640 -type TRACE  

adrci> show control

ADR Home = /u01/app/oracle/diag/rdbms/ttrlwiki/ttrlwiki:
*************************************************************************
ADRID                SHORTP_POLICY        LONGP_POLICY         LAST_MOD_TIME                            LAST_AUTOPRG_TIME                        LAST_MANUPRG_TIME                        ADRDIR_VERSION       ADRSCHM_VERSION      ADRSCHMV_SUMMARY     ADRALERT_VERSION     CREATE_TIME
-------------------- -------------------- -------------------- ---------------------------------------- ---------------------------------------- ---------------------------------------- -------------------- -------------------- -------------------- -------------------- ----------------------------------------
1095473802           360                  8760                 2010-08-27 09:36:09.385370 +08:00        2010-08-22 22:14:11.443356 +08:00        2010-08-27 09:50:07.399853 +08:00        1                    2                    76                   1                    2010-07-07 08:46:56.405618 +08:00


Trace Dump File Housekeeping in 11g Adrci

Automatic Diagnostic Repository Command Interface


Starting with 11gR1, we have a new way to deal with Oracle errors and trace files: a dedicated command-line utility called adrci (Automatic Diagnostic Repository Command Interpreter). This post shows the (in my view) essential commands a DBA ought to know in order to use it. We will look at
  1. Viewing the alert.log
  2. The relation between incident & problem
  3. Creation of Packages & ZIP files to send to Oracle Support
  4. Managing, especially purging tracefiles
I will first create a problem. Don't do that on your production database! Especially: never run DML on dictionary tables!
[oracle@uhesse ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.2.0 Production on Wed Jun 1 10:25:06 2011

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE    11.2.0.2.0    Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production

SQL> show parameter diagnostic

NAME                     TYPE     VALUE
------------------------------------ ----------- ------------------------------
diagnostic_dest              string     /u01/app/oracle

SQL> grant dba to adam identified by adam;

Grant succeeded.

SQL> connect adam/adam
Connected.
SQL> create table t (n number);

Table created.

SQL> select object_id from user_objects;

 OBJECT_ID
----------
     75719

SQL> connect / as sysdba
Connected.
SQL> update tab$ set cols=2 where obj#=75719;

1 row updated.

SQL> commit;

Commit complete.

SQL> alter system flush shared_pool;

System altered.

SQL> connect adam/adam
Connected.
SQL> select * from t;
select * from t
              *
ERROR at line 1:
ORA-03113: end-of-file on communication channel
Process ID: 2236
Session ID: 29 Serial number: 9
I flushed the shared pool to empty the data dictionary cache; otherwise the select might not have crashed the session as it did. Imagine the user calls me on the phone now. Our first idea as an experienced DBA: we look at the alert.log! Right so. Please notice that we now have two different kinds of alert.log.
One is present in the conventional text format, per OFA, in $ORACLE_BASE/diag/rdbms/<name of the db>/<name of the instance>/trace. This location is determined by the new initialization parameter DIAGNOSTIC_DEST, while BACKGROUND_DUMP_DEST is deprecated in 11g.
1. Viewing the alert.log
The other one is in XML format, placed in $ORACLE_BASE/diag/rdbms/<name of the db>/<name of the instance>/alert. This version of the alert.log is accessed via adrci:
[oracle@uhesse ~]$ adrci

ADRCI: Release 11.2.0.2.0 - Production on Wed Jun 1 10:20:08 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

ADR base = "/u01/app/oracle"
adrci> show home
ADR Homes:
diag/tnslsnr/uhesse/listener
diag/rdbms/orcl/orcl
Please notice that we have different ADR Homes. In my case only two, because I am not using Grid Infrastructure on this Demo System, else there would be another one. I specify my Database Home first. Then I look at the alert.log. Good news if you are on Windows: Finally, you can tail -f your alert.log also :-)
adrci> set home diag/rdbms/orcl/orcl
adrci> show alert -tail -f
2011-06-01 10:16:35.337000 +02:00
db_recovery_file_dest_size of 4032 MB is 0.00% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Starting background process CJQ0
CJQ0 started with pid=21, OS id=2204
2011-06-01 10:18:42.668000 +02:00
Exception [type: SIGSEGV, Address not mapped to object] [ADDR:0x0] [PC:0x90D891A, qcstda()+702] [flags: 0x0, count: 1]
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_2236.trc  (incident=6153):
ORA-07445: exception encountered: core dump [qcstda()+702] [SIGSEGV] [ADDR:0x0] [PC:0x90D891A] [Address not mapped to object] []
Incident details in: /u01/app/oracle/diag/rdbms/orcl/orcl/incident/incdir_6153/orcl_ora_2236_i6153.trc
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
2011-06-01 10:18:47.518000 +02:00
Dumping diagnostic data in directory=[cdmp_20110601101847], requested by (instance=1, osid=2236), summary=[incident=6153].
2011-06-01 10:18:48.727000 +02:00
Sweep [inc][6153]: completed
Sweep [inc2][6153]: completed
2. The relation between Incident & Problem
You see the incident was recorded in the alert.log. And it tells you “Use ADRCI or Support Workbench to package the incident.” We will soon see how to do that. First I’d like to explain the relation between incident and problem: An incident is the concrete occurrence of a problem. In other words: The same problem may have multiple incidents. To show that, I will open another terminal and do again a select against the table t, while still tailing the alert log from the first session.
Second terminal:
[oracle@uhesse ~]$ sqlplus adam/adam

SQL*Plus: Release 11.2.0.2.0 Production on Wed Jun 1 10:21:52 2011

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select * from t where n=42;
select * from t where n=42
                         *
ERROR at line 1:
ORA-03113: end-of-file on communication channel
Process ID: 2299
Session ID: 36 Serial number: 11
First terminal:
2011-06-01 10:21:31.367000 +02:00
Starting background process SMCO
SMCO started with pid=19, OS id=2268
2011-06-01 10:22:08.781000 +02:00
Exception [type: SIGSEGV, Address not mapped to object] [ADDR:0x0] [PC:0x90D891A, qcstda()+702] [flags: 0x0, count: 1]
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_2299.trc  (incident=6201):
ORA-07445: exception encountered: core dump [qcstda()+702] [SIGSEGV] [ADDR:0x0] [PC:0x90D891A] [Address not mapped to object] []
Incident details in: /u01/app/oracle/diag/rdbms/orcl/orcl/incident/incdir_6201/orcl_ora_2299_i6201.trc
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
2011-06-01 10:22:11.135000 +02:00
Dumping diagnostic data in directory=[cdmp_20110601102211], requested by (instance=1, osid=2299), summary=[incident=6201].
2011-06-01 10:22:13.370000 +02:00
Sweep [inc][6201]: completed
Sweep [inc2][6201]: completed
I have seen the second incident recorded. I exit out of the tail -f with CTRL+C and continue:
adrci> show problem

ADR Home = /u01/app/oracle/diag/rdbms/orcl/orcl:
*************************************************************************
PROBLEM_ID           PROBLEM_KEY                                                 LAST_INCIDENT        LASTINC_TIME                             
-------------------- ----------------------------------------------------------- -------------------- ----------------------------------------
1                    ORA 7445 [qcstda()+702]                                     6201                 2011-06-01 10:22:08.924000 +02:00       
1 rows fetched
So I have one problem with the ID 1 and the last incident occurred at 10:22. Are there more?
adrci> show incident

ADR Home = /u01/app/oracle/diag/rdbms/orcl/orcl:
*************************************************************************
INCIDENT_ID          PROBLEM_KEY                                                 CREATE_TIME                              
-------------------- ----------------------------------------------------------- ----------------------------------------
6153                 ORA 7445 [qcstda()+702]                                     2011-06-01 10:18:42.995000 +02:00       
6201                 ORA 7445 [qcstda()+702]                                     2011-06-01 10:22:08.924000 +02:00       
2 rows fetched
I want to see some more detail about the incidents:
adrci> show incident -mode detail -p "incident_id=6201"

ADR Home = /u01/app/oracle/diag/rdbms/orcl/orcl:
*************************************************************************

**********************************************************
INCIDENT INFO RECORD 1
**********************************************************
   INCIDENT_ID                   6201
   STATUS                        ready
   CREATE_TIME                   2011-06-01 10:22:08.924000 +02:00
   PROBLEM_ID                    1
   CLOSE_TIME                   
   FLOOD_CONTROLLED              none
   ERROR_FACILITY                ORA
   ERROR_NUMBER                  7445
   ERROR_ARG1                    qcstda()+702
   ERROR_ARG2                    SIGSEGV
   ERROR_ARG3                    ADDR:0x0
   ERROR_ARG4                    PC:0x90D891A
   ERROR_ARG5                    Address not mapped to object
   ERROR_ARG6                   
   ERROR_ARG7                   
   ERROR_ARG8                   
   ERROR_ARG9                   
   ERROR_ARG10                  
   ERROR_ARG11                  
   ERROR_ARG12                  
   SIGNALLING_COMPONENT          SQL_Parser
   SIGNALLING_SUBCOMPONENT      
   SUSPECT_COMPONENT            
   SUSPECT_SUBCOMPONENT         
   ECID                         
   IMPACTS                       0
   PROBLEM_KEY                   ORA 7445 [qcstda()+702]
   FIRST_INCIDENT                6153
   FIRSTINC_TIME                 2011-06-01 10:18:42.995000 +02:00
   LAST_INCIDENT                 6201
   LASTINC_TIME                  2011-06-01 10:22:08.924000 +02:00
   IMPACT1                       0
   IMPACT2                       0
   IMPACT3                       0
   IMPACT4                       0
   KEY_NAME                      ProcId
   KEY_VALUE                     25.3
   KEY_NAME                      Client ProcId
   KEY_VALUE                     oracle@uhesse (TNS V1-V3).2299_140262306875136
   KEY_NAME                      PQ
   KEY_VALUE                     (0, 1306916528)
   KEY_NAME                      SID
   KEY_VALUE                     36.11
   OWNER_ID                      1
   INCIDENT_FILE                 /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_2299.trc
   OWNER_ID                      1
   INCIDENT_FILE                 /u01/app/oracle/diag/rdbms/orcl/orcl/incident/incdir_6201/orcl_ora_2299_i6201.trc
1 rows fetched
I want to look at the incident tracefile mentioned above:
adrci> show trace /u01/app/oracle/diag/rdbms/orcl/orcl/incident/incdir_6201/orcl_ora_2299_i6201.trc
/u01/app/oracle/diag/rdbms/orcl/orcl/incident/incdir_6201/orcl_ora_2299_i6201.trc
 ----------------------------------------------------------
 LEVEL PAYLOAD
 ----- ------------------------------------------------------------------------------------------------------------------------------------------------
 Dump file /u01/app/oracle/diag/rdbms/orcl/orcl/incident/incdir_6201/orcl_ora_2299_i6201.trc
 Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
 With the Partitioning, OLAP, Data Mining and Real Application Testing options
 ORACLE_HOME = /u01/app/oracle/product/11.2.0/db_1
 System name:      Linux
 Node name:        uhesse
 Release:  2.6.32-100.28.5.el6.x86_64
 Version:  #1 SMP Wed Feb 2 18:40:23 EST 2011
 Machine:  x86_64
 Instance name: orcl
 Redo thread mounted by this instance: 1
 Oracle process number: 25
 Unix process pid: 2299, image: oracle@uhesse (TNS V1-V3)
*** 2011-06-01 10:22:08.929
 *** SESSION ID:(36.11) 2011-06-01 10:22:08.929
 *** CLIENT ID:() 2011-06-01 10:22:08.929
 *** SERVICE NAME:(SYS$USERS) 2011-06-01 10:22:08.929
 *** MODULE NAME:(SQL*Plus) 2011-06-01 10:22:08.929
 *** ACTION NAME:() 2011-06-01 10:22:08.929
Dump continued from file: /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_2299.trc
 1>     ***** Error Stack *****
 ORA-07445: exception encountered: core dump [qcstda()+702] [SIGSEGV] [ADDR:0x0] [PC:0x90D891A] [Address not mapped to object] []
 1<     ***** Error Stack *****
 1>     ***** Dump for incident 6201 (ORA 7445 [qcstda()+702]) *****
 2>      ***** Beginning of Customized Incident Dump(s) *****
 2>      ***** Beginning of Customized Incident Dump(s) *****
 Exception [type: SIGSEGV, Address not mapped to object] [ADDR:0x0] [PC:0x90D891A, qcstda()+702] [flags: 0x0, count: 1]
 Registers:
 %rax: 0x0000000000000000 %rbx: 0x00007f915c77f0e0 %rcx: 0x0000000000000007
 %rdx: 0x0000000000000000 %rdi: 0x00007f915c77be98 %rsi: 0x0000000000000000
 %rsp: 0x00007fffc65178e0 %rbp: 0x00007fffc6517960  %r8: 0x0000000000000028
 %r9: 0x0000000000002000 %r10: 0x00000000093849c0 %r11: 0x0000000000000168
 %r12: 0x00007f915c77ade8 %r13: 0x000000008edbb178 %r14: 0x00007f915c777da0
 %r15: 0x00007f915c77ae28 %rip: 0x00000000090d891a %efl: 0x0000000000010246
 qcstda()+686 (0x90d890a) mov -0x40(%rbp),%rdi
 qcstda()+690 (0x90d890e) mov %rdx,0x18(%rbx)
 qcstda()+694 (0x90d8912) mov 0x60(%r15),%rsi
 qcstda()+698 (0x90d8916) mov %ecx,0x8(%r15)
 > qcstda()+702 (0x90d891a) mov %ecx,(%rsi)
 qcstda()+704 (0x90d891c) mov 0x78(%rdi),%rdx
 qcstda()+708 (0x90d8920) test %rdx,%rdx
 qcstda()+711 (0x90d8923) jnz 0x90d8d03
 qcstda()+717 (0x90d8929) mov -0x70(%rbp),%rdi
*** 2011-06-01 10:22:08.963
 dbkedDefDump(): Starting a non-incident diagnostic dump (flags=0x3, level=3, mask=0x0)
 3>       ***** Current SQL Statement for this session (sql_id=8r222qucmawdt) *****
          select * from t where n=42
 3<       ***** current_sql_statement ***** 3 
3. Creation of Packages & ZIP files to send to Oracle Support
I may not be able to solve the problem myself. Oracle Support will help me with that one. I gather all the required information with a method called “Incident Packaging Service” (IPS):
adrci> ips create package problem 1 correlate all
 Created package 2 based on problem id 1, correlation level all
This did not yet create a ZIP file and is therefore referred to as “Logical Package”. The ZIP file is generated from the Logical Package that was created:
adrci> ips generate package 2 in "/home/oracle"  
Generated package 2 in file /home/oracle/ORA7445qc_20110601112533_COM_1.zip, mode complete
4. Managing, especially purging of tracefiles
Now to the management of tracefiles. You may notice that 11g creates lots of tracefiles that need to be purged from time to time. In fact, this is done automatically, but you may want to change the default purge policy:
adrci> show tracefile -rt
 01-JUN-11 10:31:48  diag/rdbms/orcl/orcl/trace/orcl_mmon_2106.trc
 01-JUN-11 09:43:43  diag/rdbms/orcl/orcl/trace/orcl_ckpt_2100.trc
 01-JUN-11 09:22:13  diag/rdbms/orcl/orcl/trace/alert_orcl.log
 01-JUN-11 09:22:11  diag/rdbms/orcl/orcl/trace/orcl_diag_2088.trc
 01-JUN-11 09:22:10  diag/rdbms/orcl/orcl/trace/orcl_ora_2299.trc
 01-JUN-11 09:22:10  diag/rdbms/orcl/orcl/incident/incdir_6201/orcl_ora_2299_i6201.trc
 01-JUN-11 09:18:47  diag/rdbms/orcl/orcl/trace/orcl_ora_2236.trc
 01-JUN-11 09:18:47  diag/rdbms/orcl/orcl/incident/incdir_6153/orcl_ora_2236_i6153.trc
 01-JUN-11 09:17:19  diag/rdbms/orcl/orcl/trace/orcl_dbrm_2090.trc
 01-JUN-11 09:16:44  diag/rdbms/orcl/orcl/trace/orcl_j002_2210.trc
 01-JUN-11 09:16:30  diag/rdbms/orcl/orcl/trace/orcl_ora_2187.trc
 01-JUN-11 09:16:19  diag/rdbms/orcl/orcl/trace/orcl_mman_2094.trc
 01-JUN-11 09:16:16  diag/rdbms/orcl/orcl/trace/orcl_vktm_2082.trc
 01-JUN-11 09:16:14  diag/rdbms/orcl/orcl/trace/orcl_ora_2016.trc
 30-MAY-11 14:07:02  diag/rdbms/orcl/orcl/trace/orcl_mmon_2093.trc
 30-MAY-11 11:15:30  diag/rdbms/orcl/orcl/trace/orcl_ora_3414.trc
 30-MAY-11 11:00:01  diag/rdbms/orcl/orcl/trace/orcl_j000_2245.trc
 30-MAY-11 10:56:58  diag/rdbms/orcl/orcl/trace/orcl_dbrm_2077.trc
 30-MAY-11 10:56:20  diag/rdbms/orcl/orcl/trace/orcl_j002_2201.trc
 30-MAY-11 10:56:06  diag/rdbms/orcl/orcl/trace/orcl_ora_2178.trc
 30-MAY-11 10:55:58  diag/rdbms/orcl/orcl/trace/orcl_mman_2081.trc
 30-MAY-11 10:55:55  diag/rdbms/orcl/orcl/trace/orcl_vktm_2069.trc
 30-MAY-11 10:55:53  diag/rdbms/orcl/orcl/trace/orcl_ora_2006.trc
 27-MAY-11 10:53:25  diag/rdbms/orcl/orcl/trace/orcl_mmon_8589.trc
 27-MAY-11 10:17:05  diag/rdbms/orcl/orcl/trace/orcl_ora_11390.trc
 27-MAY-11 09:26:41  diag/rdbms/orcl/orcl/trace/orcl_ora_10739.trc
 27-MAY-11 09:23:53  diag/rdbms/orcl/orcl/trace/orcl_dbrm_8573.trc
 27-MAY-11 09:22:58  diag/rdbms/orcl/orcl/trace/orcl_ora_8687.trc
 27-MAY-11 09:22:54  diag/rdbms/orcl/orcl/trace/orcl_mman_8577.trc
 27-MAY-11 09:22:50  diag/rdbms/orcl/orcl/trace/orcl_vktm_8565.trc
 27-MAY-11 09:22:48  diag/rdbms/orcl/orcl/trace/orcl_ora_8516.trc
 27-MAY-11 09:22:44  diag/rdbms/orcl/orcl/trace/orcl_ora_8515.trc
 27-MAY-11 09:22:44  diag/rdbms/orcl/orcl/trace/orcl_vktm_8347.trc
 27-MAY-11 09:21:24  diag/rdbms/orcl/orcl/trace/orcl_dbrm_8355.trc
 27-MAY-11 09:20:29  diag/rdbms/orcl/orcl/trace/orcl_ora_8470.trc
 27-MAY-11 09:20:28  diag/rdbms/orcl/orcl/trace/orcl_mmon_8371.trc
 27-MAY-11 09:20:28  diag/rdbms/orcl/orcl/trace/orcl_ora_8381.trc
 27-MAY-11 09:20:26  diag/rdbms/orcl/orcl/trace/orcl_mman_8359.trc
 27-MAY-11 09:20:20  diag/rdbms/orcl/orcl/trace/orcl_ora_8299.trc
 27-MAY-11 09:20:15  diag/rdbms/orcl/orcl/trace/orcl_ora_8297.trc
 27-MAY-11 09:20:15  diag/rdbms/orcl/orcl/trace/orcl_vktm_8096.trc
 27-MAY-11 09:20:07  diag/rdbms/orcl/orcl/trace/orcl_ora_8296.trc
 27-MAY-11 09:19:42  diag/rdbms/orcl/orcl/trace/orcl_ora_8285.trc
 27-MAY-11 09:19:33  diag/rdbms/orcl/orcl/trace/orcl_dm00_8271.trc
 27-MAY-11 09:19:33  diag/rdbms/orcl/orcl/trace/orcl_dw00_8273.trc
 27-MAY-11 09:19:11  diag/rdbms/orcl/orcl/trace/orcl_dbrm_8104.trc
 27-MAY-11 09:18:53  diag/rdbms/orcl/orcl/trace/orcl_ora_8267.trc
 27-MAY-11 09:18:33  diag/rdbms/orcl/orcl/trace/orcl_j001_8237.trc
 27-MAY-11 09:18:26  diag/rdbms/orcl/orcl/trace/orcl_mmon_8219.trc
 27-MAY-11 09:18:23  diag/rdbms/orcl/orcl/trace/orcl_ora_8231.trc
 27-MAY-11 09:18:22  diag/rdbms/orcl/orcl/trace/orcl_cjq0_8229.trc
 27-MAY-11 09:18:16  diag/rdbms/orcl/orcl/trace/orcl_ora_8131.trc
 27-MAY-11 09:18:14  diag/rdbms/orcl/orcl/trace/orcl_m000_8223.trc
 27-MAY-11 09:18:13  diag/rdbms/orcl/orcl/trace/orcl_mman_8108.trc
 27-MAY-11 09:18:05  diag/rdbms/orcl/orcl/trace/orcl_ora_8048.trc
 27-MAY-11 09:17:59  diag/rdbms/orcl/orcl/trace/orcl_vktm_7920.trc
 27-MAY-11 09:17:59  diag/rdbms/orcl/orcl/trace/orcl_ora_8046.trc
 27-MAY-11 09:17:00  diag/rdbms/orcl/orcl/trace/orcl_mman_7932.trc
 27-MAY-11 09:16:56  diag/rdbms/orcl/orcl/trace/orcl_ora_7954.trc
 27-MAY-11 09:16:51  diag/rdbms/orcl/orcl/trace/orcl_ora_7871.trc
I have already got some tracefiles. How long are they going to stay?
adrci> show control

ADR Home = /u01/app/oracle/diag/rdbms/orcl/orcl:
*************************************************************************
ADRID                SHORTP_POLICY        LONGP_POLICY         LAST_MOD_TIME                            LAST_AUTOPRG_TIME                        LAST_MANUPRG_TIME                        ADRDIR_VERSION       ADRSCHM_VERSION      ADRSCHMV_SUMMARY     ADRALERT_VERSION     CREATE_TIME                              
-------------------- -------------------- -------------------- ---------------------------------------- ---------------------------------------- ---------------------------------------- -------------------- -------------------- -------------------- -------------------- ----------------------------------------
1335663986           720                  8760                 2011-05-27 10:16:46.997118 +02:00                                                                                          1                    2                    80                   1                    2011-05-27 10:16:46.997118 +02:00       
1 rows fetched
Ordinary trace files stay for 30 days (720 hours), while files like incident files stay one year (8760 hours) by default. We can change that policy with, for example:
adrci> set control (SHORTP_POLICY = 360)

adrci> set control (LONGP_POLICY = 2190)

adrci> show control
ADR Home = /u01/app/oracle/diag/rdbms/orcl/orcl:
*************************************************************************
ADRID                SHORTP_POLICY        LONGP_POLICY         LAST_MOD_TIME                            LAST_AUTOPRG_TIME                        LAST_MANUPRG_TIME                        ADRDIR_VERSION       ADRSCHM_VERSION      ADRSCHMV_SUMMARY     ADRALERT_VERSION     CREATE_TIME
-------------------- -------------------- -------------------- ---------------------------------------- ---------------------------------------- ---------------------------------------- -------------------- -------------------- -------------------- -------------------- ----------------------------------------
1335663986           360                  2190                 2011-06-01 11:42:17.208064 +02:00                                                                                          1                    2                    80                   1                    2011-05-27 10:16:46.997118 +02:00
1 rows fetched
Also, we may want to purge trace files manually. The following command will manually purge all trace files older than 2 days (2880 minutes):
adrci> purge -age 2880 -type trace
adrci> show tracefile -rt
   01-JUN-11 10:46:54  diag/rdbms/orcl/orcl/trace/orcl_mmon_2106.trc
   01-JUN-11 09:43:43  diag/rdbms/orcl/orcl/trace/orcl_ckpt_2100.trc
   01-JUN-11 09:22:13  diag/rdbms/orcl/orcl/trace/alert_orcl.log
   01-JUN-11 09:22:11  diag/rdbms/orcl/orcl/trace/orcl_diag_2088.trc
   01-JUN-11 09:22:10  diag/rdbms/orcl/orcl/incident/incdir_6201/orcl_ora_2299_i6201.trc
   01-JUN-11 09:22:10  diag/rdbms/orcl/orcl/trace/orcl_ora_2299.trc
   01-JUN-11 09:18:47  diag/rdbms/orcl/orcl/incident/incdir_6153/orcl_ora_2236_i6153.trc
   01-JUN-11 09:18:47  diag/rdbms/orcl/orcl/trace/orcl_ora_2236.trc
   01-JUN-11 09:17:19  diag/rdbms/orcl/orcl/trace/orcl_dbrm_2090.trc
   01-JUN-11 09:16:44  diag/rdbms/orcl/orcl/trace/orcl_j002_2210.trc
   01-JUN-11 09:16:30  diag/rdbms/orcl/orcl/trace/orcl_ora_2187.trc
   01-JUN-11 09:16:19  diag/rdbms/orcl/orcl/trace/orcl_mman_2094.trc
   01-JUN-11 09:16:16  diag/rdbms/orcl/orcl/trace/orcl_vktm_2082.trc
   01-JUN-11 09:16:14  diag/rdbms/orcl/orcl/trace/orcl_ora_2016.trc
   30-MAY-11 14:07:02  diag/rdbms/orcl/orcl/trace/orcl_mmon_2093.trc
   30-MAY-11 11:15:30  diag/rdbms/orcl/orcl/trace/orcl_ora_3414.trc
   30-MAY-11 11:00:01  diag/rdbms/orcl/orcl/trace/orcl_j000_2245.trc
   30-MAY-11 10:56:58  diag/rdbms/orcl/orcl/trace/orcl_dbrm_2077.trc
   30-MAY-11 10:56:20  diag/rdbms/orcl/orcl/trace/orcl_j002_2201.trc
   30-MAY-11 10:56:06  diag/rdbms/orcl/orcl/trace/orcl_ora_2178.trc
   30-MAY-11 10:55:58  diag/rdbms/orcl/orcl/trace/orcl_mman_2081.trc
   30-MAY-11 10:55:55  diag/rdbms/orcl/orcl/trace/orcl_vktm_2069.trc
   30-MAY-11 10:55:53  diag/rdbms/orcl/orcl/trace/orcl_ora_2006.trc
Conclusion: with adrci, we have a new and efficient utility to deal with Oracle errors, especially for collecting information to send to Oracle Support; this functionality is called Incident Packaging Service. 11g generates lots of trace files, and we can control their purging policy with adrci. Finally, we can now tail -f our alert.log on any OS.
