Tuesday, September 25, 2012
expdp procedure in 11g
1) create directory export_auto as 'd:\expdp1213';
create user dba_export_user identified by test123;
grant connect, create database link, resource, create view to dba_export_user;
grant unlimited tablespace to dba_export_user;
grant exp_full_database to dba_export_user;
grant read,write on directory export_auto to dba_export_user;
grant execute on dbms_flashback to dba_export_user;
grant create table to dba_export_user;
grant FLASHBACK ANY TABLE to dba_export_user;
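Before going further you can confirm the directory object and its grants from the dictionary (a quick sanity check; directory privileges are listed in DBA_TAB_PRIVS under the directory name):
select directory_name, directory_path from dba_directories where directory_name = 'EXPORT_AUTO';
select grantee, privilege from dba_tab_privs where table_name = 'EXPORT_AUTO';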
2)
CREATE OR REPLACE PROCEDURE dba_export_user.start_export
IS
   hdl_job       NUMBER;
   l_job_state   VARCHAR2 (20);
BEGIN
   -- Drop the master table left behind by any previous run of this job
   BEGIN
      EXECUTE IMMEDIATE 'drop table dba_export_user.AUTO_EXPORT';
   EXCEPTION
      WHEN OTHERS THEN NULL;
   END;

   hdl_job := DBMS_DATAPUMP.open (operation => 'EXPORT',
                                  job_mode  => 'FULL',
                                  job_name  => 'AUTO_EXPORT');

   DBMS_DATAPUMP.add_file (handle    => hdl_job,
                           filename  => 'EXPDP1213.dmp',
                           directory => 'EXPORT_AUTO',
                           filetype  => DBMS_DATAPUMP.ku$_file_type_dump_file,
                           reusefile => 1);

   DBMS_DATAPUMP.add_file (handle    => hdl_job,
                           filename  => 'export.log',
                           directory => 'EXPORT_AUTO',
                           filetype  => DBMS_DATAPUMP.ku$_file_type_log_file,
                           reusefile => 1);

   DBMS_DATAPUMP.start_job (handle => hdl_job);
   DBMS_DATAPUMP.wait_for_job (handle => hdl_job, job_state => l_job_state);
   DBMS_OUTPUT.put_line ('Job exited with status: ' || l_job_state);
   DBMS_DATAPUMP.detach (handle => hdl_job);
END;
/
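While the export runs, the job can be monitored from another session using the standard Data Pump view (a monitoring query, not part of the procedure itself):
select owner_name, job_name, state, degree from dba_datapump_jobs;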
3) Create the scheduler job (change the start date and time as required):
begin
dbms_scheduler.create_job(
job_name => 'EXPORT_JOB'
,job_type => 'STORED_PROCEDURE'
,job_action => 'dba_export_user.start_export'
,start_date => '08-FEB-12 06.02.00.00 PM ASIA/CALCUTTA'
,repeat_interval => 'FREQ=DAILY; BYDAY=MON,TUE,WED,THU,FRI,SAT,SUN;'
,enabled => TRUE
,comments => 'EXPORT_DATABASE_JOB');
end;
/
EXEC dbms_scheduler.run_job('EXPORT_JOB');
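Each run is recorded in the scheduler history, so the outcome of the nightly export can be checked afterwards (a verification query for the EXPORT_JOB created in step 3):
select job_name, status, actual_start_date, run_duration
from dba_scheduler_job_run_details
where job_name = 'EXPORT_JOB'
order by actual_start_date desc;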
Friday, September 21, 2012
RMAN Block Media Recovery
From Oracle 9i onwards you can use RMAN to recover individual blocks while the database is up and running.
This can save hours of recovery time, as a full database restore is not necessary.
An error reported by a user pointed to block corruption:
POPULATE_MACSDATA - ORA-01578: ORACLE data block corrupted (file # 48, block # 142713)
ORA-01110: data file 48: '/hqlinux08db06/ORACLE/macsl/MACSDAT_2006_06.dbf'
ORA-02063: preceding 2 lines from MODSL_MACSL_LINK
File name : /hqlinux08db06/ORACLE/macsl/MACSDAT_2006_06.dbf
Check first whether only one (or a few) blocks are corrupted or whether most of the blocks in the file are corrupted.
macsl:/opt/oracle/admin/macsl/bdump>
Issue the command below at the UNIX prompt:
dbv file=/hqlinux08db06/ORACLE/macsl/MACSDAT_2006_06.dbf BLOCKSIZE=8192 LOGFILE=test.log
DBV-00200: Block, dba 201469305, already marked corrupted
SQL> Select * from v$database_block_corruption;
This lists the corrupted block numbers.
Ex: block 142713.
Then log in to RMAN:
rman target / catalog rman10/rman10@rman10p
RMAN> BLOCKRECOVER DATAFILE 48 BLOCK 142713;
V$database_block_corruption is the view to check the list of corrupted blocks.
If multiple blocks are listed as corrupt, you can recover them all with a single command:
RMAN> BLOCKRECOVER corruption list;
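After the block recovery completes, it is worth validating the datafile again so that V$DATABASE_BLOCK_CORRUPTION is refreshed (BACKUP VALIDATE reads and checks the blocks without writing a backup piece):
RMAN> BACKUP VALIDATE CHECK LOGICAL DATAFILE 48;
SQL> select * from v$database_block_corruption;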
Thursday, September 20, 2012
Table level Recovery using Flashback Table
---------------------------------------------------Recover Dropped Table from Recyclebin using Flashback Table-------------------------
Oracle Flashback Table enables you to restore a table to its state as of a previous point in time. It provides a fast, online solution for recovering a table that has been accidentally modified or deleted by a user or application. In many cases, Oracle Flashback Table eliminates the need for you to perform more complicated point-in-time recovery operations.
Oracle Flashback Table:
Restores all data in a specified table to a previous point in time described by a timestamp or SCN.
Performs the restore operation online.
Automatically maintains all of the table attributes, such as indexes, triggers, and constraints that are necessary for an application to function with the flashed-back table.
Maintains any remote state in a distributed environment. For example, all of the table modifications required by replication if a replicated table is flashed back.
Maintains data integrity as specified by constraints. Tables are flashed back provided none of the table constraints are violated. This includes any referential integrity constraints specified between a table included in the FLASHBACK TABLE statement and another table that is not included in the FLASHBACK TABLE statement.
Even after a flashback operation, the data in the original table is not lost. You can later revert to the original state.
FLASHBACK TABLE <table_name> TO BEFORE DROP;
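Before flashing back a dropped table you can check what is still held in the recycle bin (a quick look, assuming access to the DBA_RECYCLEBIN view):
select original_name, object_name, droptime from dba_recyclebin where type = 'TABLE';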
Some other variations of the flashback database command include.
FLASHBACK DATABASE TO TIMESTAMP my_date;
FLASHBACK DATABASE TO BEFORE TIMESTAMP my_date;
FLASHBACK DATABASE TO SCN my_scn;
FLASHBACK DATABASE TO BEFORE SCN my_scn;
--------------------------------------------------Recover deleted Table Data from Recyclebin using Flashback Table---------------------
On Oracle Database 10gR2 and later (including 11g), we can rewind one or more tables back to their contents at a previous time without affecting other database objects.
Before we use Flashback Table to a point in time, we must enable row movement on the table, because rowids will change during the flashback.
Example: Flashback the table back to previous time using SCN
select count(*) from LDBO.test;
COUNT(*)
----------
68781
SQL> SELECT CURRENT_SCN FROM V$DATABASE;
CURRENT_SCN
-----------
1584494
SQL> delete from LDBO.test where rownum <= 50000;
50000 rows deleted.
SQL> commit;
Commit complete.
SQL> select count(*) from LDBO.test;
COUNT(*)
----------
18781
SQL> SELECT CURRENT_SCN FROM V$DATABASE;
CURRENT_SCN
-----------
1587106
Enable row movement:
SQL> alter table LDBO.test enable row movement;
Table altered.
SQL> FLASHBACK TABLE LDBO.test to scn 1584494;
Flashback complete.
SQL> select count(*) from LDBO.test;
COUNT(*)
----------
68781
SQL> alter table LDBO.test disable row movement;
Table altered.
We can rewind the table back to a previous time using a timestamp:
SQL> alter session set nls_date_format='YYYY/MM/DD HH24:MI:SS';
Session altered.
SQL> select sysdate from dual;
SYSDATE
-------------------
2009/08/30 17:01:09
SQL> delete from LDBO.test ;
68781 rows deleted.
SQL> commit;
Commit complete.
SQL> select sysdate from dual;
SYSDATE
-------------------
2009/08/30 17:03:18
SQL> select count(*) from LDBO.test;
COUNT(*)
----------
0
SQL> alter table LDBO.test enable row movement;
Table altered.
SQL> flashback table LDBO.test to timestamp TO_TIMESTAMP('2009/08/30 17:01:09','YYYY/MM/DD HH24:MI:SS');
Flashback complete.
SQL> select count(*) from LDBO.test;
COUNT(*)
----------
68781
--------------------------------------------------------------------------------------------Flashback Table recover table to different table----------------------
FLASHBACK TABLE test TO BEFORE DROP RENAME TO test2;
flashback table LDBO.test to timestamp TO_TIMESTAMP('2009/08/30 17:01:09','YYYY/MM/DD HH24:MI:SS') RENAME TO test2;
Friday, September 14, 2012
11g Active Data Guard - enabling Real-Time Query
Active Data Guard is a good new feature in 11g (although it requires a separate license) which enables us to query the standby database while redo logs are being applied to it. In earlier releases, we had to stop the log apply, open the database in read-only mode, and then start the log apply again when the database was taken out of read-only mode.
With Oracle 11g Active Data Guard, we can make use of our standby site to offload reporting and query type applications while at the same time not compromising on the high availability aspect.
How do we enable Active Data Guard?
If we are not using the Data Guard Broker, we need to open the standby database, set it in read only mode and then start the managed recovery as shown below.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area 1069252608 bytes
Fixed Size 2154936 bytes
Variable Size 847257160 bytes
Database Buffers 213909504 bytes
Redo Buffers 5931008 bytes
Database mounted.
Database opened.
SQL> recover managed standby database using current logfile disconnect;
Media recovery complete.
If we are using the Data Guard Broker CLI, DGMGRL, the procedure is a bit different and is not very clearly explained in the documentation.
You need to stop redo apply first via the DGMGRL SET STATE command, then from a SQL*Plus session open the database in read-only mode, and then start redo apply again from DGMGRL with another SET STATE command.
Stop redo apply with the following command from Data Guard Broker CLI
DGMGRL> EDIT DATABASE 'PRODDB' SET STATE='APPLY-OFF';
Open standby read-only via SQL*Plus
SQL> alter database open read only;
Restart redo apply via broker CLI
DGMGRL> EDIT DATABASE 'PRODDB' SET STATE='APPLY-ON';
I tried to run the same only via DGMGRL and got this error:
DGMGRL> edit database PRODDB set state='APPLY-OFF';
Succeeded.
DGMGRL> edit database PRODDB set state='READ ONLY';
Error: ORA-16516: current state is invalid for the attempted operation
After we have enabled the Real-Time Query feature, we can confirm the same via the DGMGRL command – SHOW DATABASE
DGMGRL> show database verbose PRODDB_DR
Database – PRODDB_DR
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds
Apply Lag: 0 seconds
Real Time Query: ON
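The same can be confirmed from SQL*Plus on the standby; once redo apply is running against an open standby, 11gR2 reports the open mode as READ ONLY WITH APPLY:
SQL> select database_role, open_mode from v$database;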
Note:
Even though we have enabled Real-Time Query feature, if we go to Data Guard page via the Enterprise Manager Grid Control GUI, it will show that Real-Time Query is in a Disabled state.
This is apparently a bug which applies to OEM Grid Control 10.2.0.1 to 10.2.0.5 with a 11.2 target database.
Bug 7633734: DG ADMIN PAGE REAL TIME QUERY SHOWS DISABLED WHEN ENABLED FOR 11.2 DATABASES
Labels: dataguard, Oracle 11g, standby database
RMAN Recovery Scenarios
RMAN Backup and Recovery Scenarios
Complete Closed Database Recovery. System tablespace is missing
If the system tablespace is missing or corrupted, the database cannot be started up, so a complete closed database recovery must be performed.
Prerequisites: A closed or open database backup and archived logs.
1. Use OS commands to restore the missing or corrupted system datafile to its original location, ie:
cp -p /usr/backup/RMAN/system01.dbf /usr/oradata/u01/IASDB/system01.dbf
2. startup mount;
3. recover datafile 1;
4. alter database open;
Complete Open Database Recovery. Non system tablespace is missing
If a non system tablespace is missing or corrupted while the database is open, recovery can be performed while the database remains open.
Prerequisites: A closed or open database backup and archived logs.
1. Use OS commands to restore the missing or corrupted datafile to its original location, ie:
cp -p /usr/backup/RMAN/user01.dbf /usr/oradata/u01/IASDB/user01.dbf
2. alter tablespace <tablespace_name> offline immediate;
3. recover tablespace <tablespace_name>;
4. alter tablespace <tablespace_name> online;
Complete Open Database Recovery (when the database is initially closed). Non system tablespace is missing
If a non system tablespace is missing or corrupted and the database crashed, recovery can be performed after the database is open.
Prerequisites: A closed or open database backup and archived logs.
1. startup; (you will get ora-1157 and ora-1110 with the name of the missing datafile; the database will remain mounted)
2. Use OS commands to restore the missing or corrupted datafile to its original location, ie:
cp -p /usr/backup/RMAN/user01.dbf /usr/oradata/u01/IASDB/user01.dbf
3. alter database datafile 6 offline; (tablespace cannot be used because the database is not open)
4. alter database open;
5. recover datafile 6;
6. alter tablespace <tablespace_name> online;
Recovery of a Missing Datafile that has no backups (database is open).
If a non system datafile that was added after the last backup is missing, recovery can be performed if all archived logs since the creation of the missing datafile exist.
Prerequisites: All relevant archived logs.
1. alter tablespace <tablespace_name> offline immediate;
2. alter database create datafile '/user/oradata/u01/IASDB/newdata01.dbf';
3. recover tablespace <tablespace_name>;
4. alter tablespace <tablespace_name> online;
If the create datafile command needs to be executed to place the datafile on a location different than the original use:
alter database create datafile '/user/oradata/u01/IASDB/newdata01.dbf' as '/user/oradata/u02/IASDB/newdata01.dbf';
Restore and Recovery of a Datafile to a different location.
If a non system datafile is missing and its original location is not available, the datafile can be restored to a different location and recovery performed.
Prerequisites: All relevant archived logs.
1. Use OS commands to restore the missing or corrupted datafile to the new location, ie:
cp -p /usr/backup/RMAN/user01.dbf /usr/oradata/u02/IASDB/user01.dbf
2. alter tablespace <tablespace_name> offline immediate;
3. alter tablespace <tablespace_name> rename datafile '/user/oradata/u01/IASDB/user01.dbf' to '/user/oradata/u02/IASDB/user01.dbf';
4. recover tablespace <tablespace_name>;
5. alter tablespace <tablespace_name> online;
Control File Recovery
Always multiplex your controlfiles. If the controlfiles are missing, the database crashes.
Prerequisites: A backup of your controlfile and all relevant archived logs.
1. startup; (you get ora-205, missing controlfile; the instance starts but the database is not mounted)
2. Use OS commands to restore the missing controlfile to its original location:
cp -p /usr/backup/RMAN/control01.dbf /usr/oradata/u01/IASDB/control01.dbf
cp -p /usr/backup/RMAN/control02.dbf /usr/oradata/u01/IASDB/control02.dbf
3. alter database mount;
4. recover automatic database using backup controlfile;
5. alter database open resetlogs;
6. make a new complete backup, as the database is open in a new incarnation and previous archived logs are not relevant.
Incomplete Recovery, Until Time/Sequence/Cancel
Incomplete recovery may be necessary when an archived log is missing, so recovery can only be made until the previous sequence, or when an important object was dropped and recovery needs to be made until just before the object was dropped.
Prerequisites: A closed or open database backup and archived logs, and the time or sequence to which the 'until' recovery needs to be performed.
1. If the database is open, shutdown abort
2. Use OS commands to restore all datafiles to their original locations:
cp -p /usr/backup/RMAN/u01/*.dbf /usr/oradata/u01/IASDB/
cp -p /usr/backup/RMAN/u02/*.dbf /usr/oradata/u01/IASDB/
cp -p /usr/backup/RMAN/u03/*.dbf /usr/oradata/u01/IASDB/
cp -p /usr/backup/RMAN/u04/*.dbf /usr/oradata/u01/IASDB/
etc…
3. startup mount;
4. recover automatic database until time '2004-03-31:14:40:45';
5. alter database open resetlogs;
6. make a new complete backup, as the database is open in a new incarnation and previous archived logs are not relevant. Alternatively, instead of until time you may use until sequence or until cancel:
recover automatic database until sequence 120 thread 1; OR
recover database until cancel;
Rman Recovery Scenarios
Rman recovery scenarios require that the database is in archivelog mode, and that backups of datafiles, controlfiles and archived redo log files are made using Rman. Incremental Rman backups may also be used.
Rman can be used with the repository kept only in the target database controlfile, or with a recovery catalog that may be installed in the same or another database.
Configuration and operation recommendations:
Set the parameter controlfile autobackup to ON so that a controlfile backup is taken along with each backup:
configure controlfile autobackup on;
Set the retention policy to the recovery window you want to have; ie redundancy 2 will keep the last two backups available after delete obsolete commands are executed:
configure retention policy to redundancy 2;
Execute your full backups with the option 'plus archivelog' to include your archivelogs with every backup:
backup database plus archivelog;
Perform daily maintenance routines so that your backup directory keeps only the number of backups you need:
crosscheck backup;
crosscheck archivelog all;
delete noprompt obsolete;
To work with Rman and a database based catalog follow these steps:
1. sqlplus /
2. create tablespace repcat;
3. create user rmuser identified by rmuser default tablespace repcat temporary tablespace temp;
4. grant connect, resource, recovery_catalog_owner to rmuser;
5. exit
6. rman catalog rmuser/rmuser # connect to rman catalog as the rmuser
7. create catalog # create the catalog
8. connect target / # connect to the target database
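Once connected to both the catalog and the target, register the target database in the catalog so that its backup metadata is recorded there (a step the list above stops just short of):
RMAN> register database;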
Complete Closed Database Recovery. System tablespace is missing
In this case complete recovery is performed; only the system tablespace is missing, so the database can be opened without resetting the redologs.
1. rman target /
2. startup mount;
3. restore database;
4. recover database;
5. alter database open;
Complete Open Database Recovery. Non system tablespace is missing, database is up
1. rman target /
2. sql 'alter tablespace <tablespace_name> offline immediate';
3. restore datafile 3;
4. recover datafile 3;
5. sql 'alter tablespace <tablespace_name> online';
Complete Open Database Recovery (when the database is initially closed). Non system tablespace is missing
A user datafile is reported missing when trying to start up the database. The datafile can be taken offline and the database opened. Restore and recovery are performed using Rman. After recovery is performed the datafile can be brought online again.
1. sqlplus /nolog
2. connect / as sysdba
3. startup mount
4. alter database datafile '<datafile_name>' offline;
5. alter database open;
6. exit;
7. rman target /
8. restore datafile '<datafile_name>';
9. recover datafile '<datafile_name>';
10. sql 'alter tablespace <tablespace_name> online';
Recovery of a Datafile that has no backups (database is up).
If a non system datafile that was added after the last backup is missing, recovery can be performed if all archived logs since the creation of the missing datafile exist. Since the database is up you can check the tablespace name and take it offline. The offline immediate option is used to avoid the update of the datafile header.
Prerequisites: All relevant archived logs.
1. sqlplus '/ as sysdba'
2. alter tablespace <tablespace_name> offline immediate;
3. alter database create datafile '/user/oradata/u01/IASDB/newdata01.dbf';
4. exit
5. rman target /
6. recover tablespace <tablespace_name>;
7. sql ‘alter tablespace <tablespace_name> online’;
If the create datafile command needs to be executed to place the datafile on a location different than the original use:
alter database create datafile '/user/oradata/u01/IASDB/newdata01.dbf' as '/user/oradata/u02/IASDB/newdata01.dbf';
Restore and Recovery of a Datafile to a different location. Database is up.
If a non system datafile is missing and its original location is not available, the datafile can be restored to a different location and recovery performed.
Prerequisites: All relevant archived logs, complete cold or hot backup.
1. Use OS commands to restore the missing or corrupted datafile to the new location, ie:
cp -p /usr/backup/RMAN/user01.dbf /usr/oradata/u02/IASDB/user01.dbf
2. alter tablespace <tablespace_name> offline immediate;
3. alter tablespace <tablespace_name> rename datafile '/user/oradata/u01/IASDB/user01.dbf' to '/user/oradata/u02/IASDB/user01.dbf';
4. rman target /
5. recover tablespace <tablespace_name>;
6. sql 'alter tablespace <tablespace_name> online';
Control File Recovery
Always multiplex your controlfiles. If you lose only one controlfile you can replace it with a copy of a surviving one and start up the database. If all controlfiles are missing, the database will crash.
Prerequisites: A backup of your controlfile and all relevant archived logs. When using Rman always set the configuration parameter controlfile autobackup to ON. You will need the DBID to restore the controlfile; get it from the name of the backed up controlfile. It is the number following the 'c-' at the start of the name.
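For example, for an autobackup piece named as below (a hypothetical name in the default %F autobackup format), the DBID is 1499754868:
c-1499754868-20120920-00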
1. rman target /
2. set dbid <dbid#>
3. startup nomount;
4. restore controlfile from autobackup;
5. alter database mount;
6. recover database;
7. alter database open resetlogs;
8. make a new complete backup, as the database is open in a new incarnation and previous archived logs are not relevant.
Incomplete Recovery, Until Time/Sequence/Cancel
Incomplete recovery may be necessary when the database crashes and needs to be recovered, and in the recovery process you find that an archived log is missing. In this case recovery can only be made until the sequence before the one that is missing.
Another scenario for incomplete recovery occurs when an important object was dropped or incorrect data was committed on it.
In this case recovery needs to be performed until just before the object was dropped.
Prerequisites: A full closed or open database backup and archived logs, and the time or sequence to which the 'until' recovery needs to be performed.
1. If the database is open, shut it down to perform a full restore.
2. rman target /
3. startup mount;
4. restore database;
5. recover database until sequence 8 thread 1; # you must specify the thread; for a single instance it is always 1
6. alter database open resetlogs;
7. make a new complete backup, as the database is open in a new incarnation and previous archived logs are not relevant. Alternatively, instead of until sequence you may use until time, ie: '2012-01-04:01:01:10'.
RMAN scenarios
http://itcareershift.com/blog1/2010/11/25/real-life-oracle-dba-scenarios-using-rman-backups-and-troubleshooting-new-oracle-dba-career/
http://docs.oracle.com/cd/B28359_01/backup.111/b28270/rcmadvre.htm
http://www.reachdba.com/showthread.php?453-RMAN-All-major-Restoration-and-Recovery-Scenarios
Note: When the database is open, its datafiles cannot be deleted at the OS level on Windows (the file is in use by another process).
--------------------------------------------------------------Datafile removed physically from OS level and no backup of that datafile------------------------------------
1) rman backup taken on 01-01-12
2) a datafile is added and some tables and data are created in it on 02-01-12
3) the datafile is removed at the OS level on 02-01-12; no backup has been taken since it was added
4) alter tablespace USR offline;
ORA-01116: error in opening database file 11
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
5) alter tablespace USR offline immediate;
select TABLESPACE_NAME,STATUS from dba_tablespaces;
6) select file_id, file_name from dba_data_files where Tablespace_name='USR';
7) rman target /
8) RMAN> list backup of datafile 11;
9) RMAN> list backup of datafile 12; ---- does not exist because there is no backup
10) RMAN> restore tablespace USR;
11) RMAN> recover tablespace USR;
select file_id, file_name from dba_data_files where Tablespace_name='USR';
--------------------------------------------------------------------------Loss of Controlfile-----------------------------
1) select name from v$controlfile;
2) remove the controlfile physically at the OS level
3) alter tablespace users online;
alter tablespace users online
*
ERROR at line 1:
ORA-00603: ORACLE server session terminated by fatal error
SQL> shutdown abort
ORACLE instance shut down.
SQL> startup nomount;
Since we are not using an RMAN catalog we need to set the DBID:
RMAN> set dbid=2415549446;
executing command: SET DBID
Restore the controlfile
RMAN> run {
restore controlfile from autobackup;
}
If this fails with RMAN-06172 (no autobackup found), it is because the controlfile autobackup format is not known: SHOW ALL cannot read the configuration and it cannot be reconfigured while the instance is in nomount state, so set the autobackup format explicitly and retry:
set controlfile autobackup format for device type disk to '/orabkp\%F';
restore controlfile from autobackup;
RMAN> alter database mount;
RMAN> recover database;
SQL> alter database open resetlogs;
-------------------------------------recover and open the database if the archive log required for recovery is missing
SQL> recover database until cancel using backup controlfile;
CANCEL
SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-01195: online backup of file 1 needs more recovery to be consistent
1) Set _ALLOW_RESETLOGS_CORRUPTION=TRUE in the init.ora file.
2) Startup mount
3) recover database until cancel using backup controlfile;
4) alter database open resetlogs;
5) Reset undo_management to "MANUAL" in the init.ora file.
6) Start up the database
7) Create a new undo tablespace
8) Change undo_management back to "AUTO" and undo_tablespace to the new undo tablespace
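For reference, the init.ora entries corresponding to steps 1 and 5 would look like the lines below (the hidden parameter _ALLOW_RESETLOGS_CORRUPTION should only be used as a last resort, ideally under Oracle Support guidance):
*._allow_resetlogs_corruption=TRUE
*.undo_management=MANUAL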
------------------------------------------Redo log removed from OS level-----------------------------------
select member from v$Logfile;
rm redo*.log
If one or all of the online redo logfiles are deleted, the database hangs and the alert log shows entries such as:
ARC1: Failed to archive thread 1 sequence 10 (0)
SQL> shutdown immediate;
SQL> startup mount;
SELECT GROUP#,SEQUENCE#,STATUS,FIRST_CHANGE# from v$log;
GROUP# SEQUENCE# STATUS FIRST_CHANGE#
--------- ---------- ---------------- -------------
1 10 CURRENT 8.6056E+10
3 9 INACTIVE 8.6056E+10
2 8 INACTIVE 8.6056E+10
RMAN TARGET /
RMAN> run {
2> set until sequence 10;
3> restore database;
4> recover database;
5> alter database open resetlogs;
6> }
EXIT
The recovery process creates the online redo logfiles at the operating system level also.
Since we have done an incomplete recover with open resetlogs, we should take a fresh complete backup of the database.
NOTE: Please make sure you remove all the old archived logfiles from the archived area.
------------------------------------------------------Drop tablespace by mistake (drop tablespace test including contents and datafiles;)------Point inTime
The DBA realized the mistake.
He checked the alert log for the exact time the tablespace was dropped.
-------Alert log--------------
Sun Feb 4 10:59:43 2012
drop tablespace test including contents and datafiles
Sun Feb 4 10:59:47 2012
Completed: drop tablespace test including contents and datafiles
SQL> shutdown abort
rman target /
RMAN> RUN
{
STARTUP NOMOUNT
SET UNTIL TIME "TO_DATE ('04-02-12 10:58:00', 'DD-MM-YY HH24:MI:SS')";
RESTORE CONTROLFILE;
ALTER DATABASE MOUNT;
RESTORE DATABASE;
RECOVER DATABASE;
ALTER DATABASE OPEN RESETLOGS;
}
select name from v$tablespace;
-----------------------------------------------------------Complete loss of all database files including SPFILE using RMAN
ls -l
rm *.dbf
ls -l *.dbf
ls: *.dbf: No such file or directory
mv spfileopsdba.ora spfileopsdba.org
ls -lt spfile*    # spfile no longer present (it was renamed above)
Database Details
------------------
Database Name=OPSDBA
Machine Name=ITLINUXDEVBLADE07
DBID=1499754868 (select dbid from v$database)
---------------------
Step 1: RECOVERY OF SPFILE
Create spfile.rcv as:
set dbid= 1499754868
run {
startup nomount force ;
};
rman target / catalog rman10/rman10@rman10p cmdfile=spfile.rcv
Now restore the spfile
set dbid=1499754868
run {
allocate channel ch1 type 'sbt_tape' parms 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo.opsdbad.opt)';
restore spfile ;
release channel ch1 ;
}
Step 2: RESTORE OF CONTROLFILES
Same Steps as spfile with the restore command changed. So the new script is
set dbid=1499754868
run {
allocate channel ch1 type 'sbt_tape' parms 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo.opsdbad.opt)';
restore controlfile ;
release channel ch1 ;
}
Step 3: RESTORE OF DATABASE
SQL> conn sys as sysdba
SQL> alter database mount;
Now get the log sequence number of the database from the catalog database:
select sequence# from rc_backup_redolog where db_name='OPSDBA';
RMAN> run {
2> allocate channel ch1 type 'sbt_tape' parms 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo.opsdbad.opt)';
3> restore database ;
4> recover database until logseq=6; -- GOT FROM THE ABOVE QUERY
5> release channel ch1 ;
6> }
7>
Step 4: alter database open resetlogs;
--------------------------- SYSTEM / SYSAUX / UNDO tablespace is missing
In this case complete recovery is performed; only the system tablespace is missing, so the database can be opened without the resetlogs option.
$ rman target /
RMAN> startup mount;
RMAN> restore tablespace SYSTEM;
RMAN> recover tablespace SYSTEM;
RMAN> alter database open;
$ rman target /
RMAN> startup mount;
RMAN> restore tablespace SYSAUX;
RMAN> recover tablespace SYSAUX;
RMAN> alter database open;
------------------------------Non system tablespace is missing, database is up
$ rman target /
RMAN> sql 'alter tablespace <tbs> offline immediate';
RMAN> restore tablespace <tbs>;
RMAN> recover tablespace <tbs>;
RMAN> sql 'alter tablespace <tbs> online';
To restore/recover only datafile(s)
$ rman target /
RMAN> sql 'alter database datafile <file#> offline';
RMAN> restore datafile <file#>;
RMAN> recover datafile <file#>;
RMAN> sql 'alter database datafile <file#> online' ;
-------------------------------Non system tablespace is missing, database is closed
sqlplus "/ as sysdba"
startup mount
alter database datafile <file#> offline;
alter database open;
exit;
$rman target /
RMAN> restore datafile <file#>;
RMAN> recover datafile <file#>;
RMAN> sql 'alter tablespace <tablespace_name> online';
-----------------------restore a tablespace to a new location
rman target / catalog rman/rman@rcat
run {
allocate channel ch1 type disk;
sql 'ALTER TABLESPACE USERS OFFLINE IMMEDIATE';
set newname for datafile '/disk1/oracle/users_1.dbf' to '/disk2/oracle/users_1.dbf';
restore tablespace users;
# make the control file recognize the restored file as current
switch datafile all;
}
RMAN> recover tablespace USERS;
RMAN> sql 'alter tablespace USERS online';
-------------------Recovery of a Datafile that has no backups (database is up)
If a non system datafile that was added after the last backup is missing, recovery can be performed if all archived logs since the creation of the missing datafile exist.
Prerequisites: All relevant archived logs.
$ rman target /
RMAN> sql 'alter database datafile <file#> offline';
RMAN> restore datafile <file#> ;
-- no need to create a blank file, restore command takes care of that.
RMAN> recover datafile <file#>;
RMAN> sql 'alter database datafile <file#> online';
-------------------------------Recovering After the Loss of All Members of an Online Redo Log Group
If a media failure damages all members of an online redo log group, then different scenarios can occur depending on the type of online redo log group affected by the failure and the archiving mode of the database.
If the damaged log group is inactive, then it is not needed for crash recovery; if it is active, then it is needed for crash recovery.
SQL> startup mount
Case-1 If the group is INACTIVE
Then it is not needed for crash recovery
Clear the archived or unarchived group. (For archive status, check in v$log)
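For example, the archive status of each group can be checked with:
SQL> select group#, sequence#, archived, status from v$log;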
1.1 Clearing Inactive, Archived Redo
alter database clear logfile group 1 ;
alter database open ;
1.2 Clearing Inactive, Not-Yet-Archived Redo
alter database clear unarchived logfile group 1 ;
OR
(If there is an offline datafile that requires the cleared log to bring it online, then the keywords UNRECOVERABLE DATAFILE are required. The datafile and its entire tablespace have to be dropped because the redo necessary to bring it online is being cleared, and there is no copy of it. )
alter database clear unarchived logfile group 1 unrecoverable datafile;
Take a complete backup of database.
And now open database:
alter database open ;
Case-2 If the group is ACTIVE
Restore backup and perform an incomplete recovery.
And open database using resetlogs
alter database open resetlogs;
Case-3 If the group is CURRENT
Restore backup and perform an incomplete recovery.
And open database using resetlogs
alter database open resetlogs;
--------------------------
1) OPEN REDO01.LOG (INACTIVE) AND EDIT AND CORRUPT IT
SQL> ALTER SYSTEM SWITCH LOGFILE; -------HANG
alter database clear logfile group 1 ;
ORA-00350: log 1 of instance rkangel (thread 1) needs to be archived
ORA-00312: online log 1 thread 1: 'G:\RKANGEL\REDO01.LOG'
SQL> SHUT IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 1;
SQL> ALTER DATABASE OPEN;
SQL> ALTER SYSTEM SWITCH LOGFILE;
SQL> ALTER SYSTEM SWITCH LOGFILE;
SQL> ALTER SYSTEM SWITCH LOGFILE;
-------------2) OPEN REDO02.LOG (ACTIVE) AND EDIT AND CORRUPT IT
SQL> ALTER SYSTEM SWITCH LOGFILE; -------HANG
ERROR at line 1:
ORA-03113: end-of-file on communication channel
SQL> STARTUP
ORACLE instance started.
Total System Global Area 6413680640 bytes
Fixed Size 2267184 bytes
Variable Size 4563404752 bytes
Database Buffers 1828716544 bytes
Redo Buffers 19292160 bytes
Database mounted.
ORA-00313: open failed for members of log group 2 of thread 1
ORA-00312: online log 2 thread 1: 'G:\RKANGEL\REDO02.LOG'
ORA-27046: file size is not a multiple of logical block size
OSD-04012: file size mismatch (OS 52429318)
ALTER DATABASE CLEAR LOGFILE GROUP 2;
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 1;
alter database clear logfile group 2 unrecoverable datafile;
*
ERROR at line 1:
ORA-01624: log 2 needed for crash recovery of instance rkangel (thread 1)
ORA-00312: online log 2 thread 1: 'G:\RKANGEL\REDO02.LOG'
-----note: For ACTIVE or CURRENT logfile
Restore backup and perform an incomplete recovery.
And open database using resetlogs
alter database open resetlogs;
1) Set _ALLOW_RESETLOGS_CORRUPTION=TRUE in the init.ora file.
2) Startup mount
3) recover database until cancel using backup controlfile;
4) alter database open resetlogs;
5) Reset undo_management to "MANUAL" in the init.ora file.
6) Start up the database
7) Create a new undo tablespace
8) Change undo_management back to "AUTO" and undo_tablespace to the new undo tablespace
--------------------------------------Recovering a NOARCHIVELOG Database
Restore of a database running in NOARCHIVELOG mode is similar to restore of a database in ARCHIVELOG mode. The main differences are:
Only consistent backups can be used in restoring a database in NOARCHIVELOG mode.
Media recovery is not possible because no archived redo logs exist.
When recovering a NOARCHIVELOG database, specify the NOREDO option on the RECOVER command to indicate that RMAN should not attempt to apply archived redo logs.
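A minimal sketch of such a restore, assuming the whole database is brought back from the last consistent backup (RESETLOGS is needed because the online redo logs from backup time are not usable):
RMAN> startup force mount;
RMAN> restore database;
RMAN> recover database noredo;
RMAN> alter database open resetlogs;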
----------------------------------------------------------------Remove Temporary datafile from OS level---------------------------------
shut immediate
Remove the temporary datafile at the OS level
startup
The temp file is automatically re-created
shut immediate
startup mount
Remove the temporary datafile at the OS level
alter database open;
The temp file is automatically re-created
----------------------------------------------------------------Remove UNDO datafile from OS level----------------------------------------
shut immediate
startup mount
Remove UNDO datafile from OS level
alter database open;
ERROR at line 1:
ORA-01157: cannot identify/lock data file 3 - see DBWR trace file
ORA-01110: data file 3: 'G:\RKANGEL\UNDOTBS01.ORA'
RMAN> restore tablespace UNDOTBS1;
RMAN> recover tablespace UNDOTBS1;
RMAN> alter database open;
----------------------------------------How to recover dropped / deleted table using RMAN backup----------------------
restore the backup onto a UAT machine
export the table from UAT
import it into LIVE
1) An incomplete recovery (to just before the table was dropped) can recover your table from the RMAN backup.
or
2) You can restore/duplicate the database to another database from the RMAN backup.
Export the concerned table from the restored/duplicated database and import it into the database you want.
3) Use Flashback (in Oracle 10g and later) for individual table recovery.
Oracle 10g makes life easier with the ability to recover a dropped table similar to recovering a file from a Windows Recycle Bin.
And if this is on 10g:
Eg: You dropped the DEPT table, which belongs to the USER_DATA tablespace
You can use the following command to recover a dropped table:
FLASHBACK TABLE <table_name> TO BEFORE DROP;
4) RMAN TSPITR is most useful for recovering the following:
An erroneous DROP TABLE or TRUNCATE TABLE statement
A table that has become logically corrupted
An incorrect batch job or other DML statement that has affected only a subset of the database
A logical schema to a point different from the rest of the physical database when multiple schemas exist in separate tablespaces of one physical database
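A minimal RMAN TSPITR sketch for such cases (the tablespace name USR_DATA, the until time, and the auxiliary destination /u01/aux are placeholders for illustration only):
RMAN> recover tablespace USR_DATA
  until time "to_date('2012-09-20 13:46:00','YYYY-MM-DD HH24:MI:SS')"
  auxiliary destination '/u01/aux';
RMAN> sql 'alter tablespace USR_DATA online';
TSPITR leaves the recovered tablespace offline, which is why it is brought online at the end; take a fresh backup of it before doing so.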
--------------------------------------------------------Point in time recovery using RMAN (until a log sequence number) ----------------delete some data from table by mistake
Someone deleted all data from a table at 1:47 PM (delete from myobjects). The action can be traced via SELECT * FROM DBA_AUDIT_TRAIL; (if auditing is enabled).
Windows dir listing of d:\archive:
20/09/2012 01:44 PM 1,024 ARC0000000010_0794497245.0001
20/09/2012 01:47 PM 41,461,248 ARC0000000011_0794497245.0001
archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /u02/ORACLE/opsdba/arch
Oldest online log sequence 21
Next log sequence to archive 23
Current log sequence 23
--Note the current log sequence number (23)
We need to determine the log sequence we need to recover until
select sequence#,first_change#, to_char(first_time,'HH24:MI:SS') from v$log order by 3
SEQUENCE# FIRST_CHANGE# TO_CHAR(
--------- ------------- --------
21 8.6056E+10 13:48:16
22 8.6056E+10 13:48:22
23 8.6056E+10 13:48:29
Log sequence 21 was first written to at 1:48 PM, so we should recover to a point before this; ie archived log sequence 10 (from the Windows dir listing above).
SQL> shutdown immediate;
SQL> startup mount;
RMAN> run {
set until sequence 10; # UNTIL SEQUENCE is exclusive: recovery stops before applying this sequence
restore database;
recover database;
}
alter database open resetlogs;
ORA-01190: control file or data file 1 is from before the last RESETLOGS
ORA-01110: data file 1: 'G:\RKANGEL\SYSTEM01.ORA'
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of alter db command at 09/20/2012 14:15:38
ORA-01147: SYSTEM tablespace file 1 is offline
ORA-01110: data file 1: 'G:\RKANGEL\SYSTEM01.ORA'
SQL> ALTER DATABASE DATAFILE 1 ONLINE;
SQL> RECOVER DATABASE;
ORA-00283: recovery session canceled due to errors
ORA-16433: The database must be opened in read/write mode.
select count(*) from myobjects;
Wednesday, August 29, 2012
Oracle 11g Advanced Compression
You can implement the Advanced Compression option in databases whose tables have heavy update and insert activity, to manage fast-growing volumes of data.
The following simple commands implement advanced compression:
alter table tablename compress for all operations;
alter index index1 rebuild compress;
alter index index2 rebuild compress;
SELECT table_name, compression, compress_for FROM user_tables where table_name='tablename';
select index_name, compression, status from dba_indexes where table_name='tablename';
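Note that ALTER TABLE ... COMPRESS FOR ALL OPERATIONS only affects blocks written from that point on; to compress the rows already in the table you can rebuild the segment (a sketch, assuming a maintenance window, since MOVE locks the table and leaves its indexes UNUSABLE until rebuilt):
alter table tablename move compress for all operations;
alter index index1 rebuild compress;
alter index index2 rebuild compress;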
Satish, please implement the same on UAT first to check the performance impact.
Following is a test case to show the difference between the 10gR2 table compression feature and 11gR2's Advanced Compression. Oracle provided a table-level compression feature in 10gR2. While this compression provided some storage reduction, 10g's table compression only compressed the data during bulk load operations. New and updated data were not compressed.
With 11g's Advanced Compression, new and updated data are also compressed, achieving the highest level of storage reduction while providing performance improvements, as compressed blocks result in more data being moved per I/O.
Note 1: Basic compression comes with Oracle 11g Enterprise Edition. Making a table OLTP compressed requires the Advanced Compression option, which is an extra-cost (US$11,500.00 per processor, perpetual) option on top of Enterprise Edition.
Note 2: There is a trade-off between disk I/O and CPU; it depends on how your system is configured. If your performance bottleneck is disk I/O, you will almost certainly benefit from compression because it saves a lot of disk reads. If, on the other hand, you are low on CPU, you might not always benefit.
------------------------------Test Case-----------------------------------------------
The following test case was executed on a 10g database server.
A table called TEST was created without the COMPRESS option.
SQL> select table_name,compression from dba_tables where table_name = 'TEST';
TABLE_NAME COMPRESS
------------------------- -------------
TEST DISABLED
SQL> select bytes from dba_segments where segment_name = 'TEST';
SUM(BYTES)
------------------
92274688
The size of the table was around 92MB.
Now create another table called TEST_COMPRESSED with COMPRESS option.
SQL> create table TEST_COMPRESSED COMPRESS as select * from test;
Table created.
SQL> select table_name, compression from dba_tables where table_name like 'TEST%';
TABLE_NAME COMPRESS
------------------------------ ---------------
TEST_COMPRESSED ENABLED
TEST DISABLED
Now let’s check the size of the COMPRESSED table.
SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';
SUM(BYTES)
----------------
30408704
Check out the size of the COMPRESSED table. It is only 30MB, about a third of the original size. So far so good.
Now let’s do a plain insert into the COMPRESSED table.
SQL> insert into TEST_COMPRESSED select * from TEST;
805040 rows created.
SQL> commit;
Commit complete.
Let’s check the size of the COMPRESSED table.
SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED'
2 /
SUM(BYTES)
----------
117440512
Wow! From 30MB to 117MB? So, plain INSERT statement does not COMPRESS the data in 10g.
(You will see this is not the case with 11g)
Now let’s do the same insert with a BULK LOAD
SQL> insert /*+ APPEND */ into TEST_COMPRESSED select * from TEST;
805040 rows created.
SQL> commit;
Commit complete.
SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';
SUM(BYTES)
----------
142606336
Ok, now the size of the COMPRESSED table is 142MB from 117MB. For the same number of rows, the table size only increased by 25MB. So BULK LOAD compresses the data.
Let’s check other DML statements such as DELETE and UPDATE against the COMPRESSED table.
SQL> delete from test_compressed where rownum < 100000;
99999 rows deleted.
SQL> commit;
Commit complete.
SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';
SUM(BYTES)
----------
142606336
No change in total size of the table. DELETE has no impact as expected.
Let’s check UPDATE.
SQL> update test_compressed set object_name = 'XXXXXXXXXXXXXXXXXXXXXXXXXX' where
rownum < 100000;
99999 rows updated.
SQL> commit;
SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';
SUM(BYTES)
----------
150994944
The table size is increased by 8MB? No compression for UPDATE statement either.
All this clearly shows that 10g’s Table COMPRESSION would work great for initial BULK LOADS, however subsequent UPDATE’s, DELETE’s and INSERT’s will not result in COMPRESSED blocks.
Now, let’s see 11g’s Test Results.
The following SQL statements were executed against 11.2.0.1 database version.
TEST table of 100MB in size was created as before.
SQL> select bytes from dba_segments where segment_name = 'TEST';
BYTES
----------
100663296
So 100MB of table created.
Let’s create a table with COMPRESS FOR ALL OPERATIONS option. This is only available in 11g.
SQL> create table test_compressed compress for all operations as select * from
test;
Table created.
SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';
BYTES
----------
31457280
Check out the size of the compressed table vs. the uncompressed table: the compressed table is only about 30% of the original size. Roughly the same ratio as in 10g.
Let’s check other DML statements.
Let’s do a plain insert to the compressed table.
SQL> insert into TEST_COMPRESSED select * from test;
789757 rows created.
SQL> commit;
Commit complete.
SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';
BYTES
----------
75497472
11g's Advanced Compression compressed the 100MB of inserted data down to roughly 40MB and inserted it into the compressed table, WITHOUT the bulk load (APPEND) option.
Now let’s do the BULK LOAD onto 11g’s COMPRESSED table.
SQL> insert into /*+ APPEND */ test_compressed select * from TEST;
789757 rows created.
SQL> commit;
Commit complete.
SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';
BYTES
----------
109051904
It has the same impact as the plain insert.
What about deletes and updates?
SQL> delete from test_compressed where rownum < 100000;
99999 rows deleted.
SQL> commit;
Commit complete.
SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';
BYTES
----------
109051904
No change for deletes. This is expected, as blocks are compressed when new rows are added to existing blocks and the block usage reaches the PCTFREE threshold.
SQL> update test_compressed set object_name = 'XXXXXXXXXXXXXXXXXXXXXXXXXX' where
2 rownum < 100000;
99999 rows updated.
SQL> commit;
Commit complete.
SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';
BYTES
----------
109051904
There is no change in this case as existing blocks were able to accommodate updates. However the same update generated more data in 10g.