Friday, September 14, 2012

RMAN Recovery Scenarios


RMAN Backup and Recovery Scenarios


Complete Closed Database Recovery. System tablespace is missing

If the system tablespace is missing or corrupted, the database cannot be opened, so a complete closed database recovery must be performed.
Prerequisites: a closed or open database backup and archived logs.
1. Use OS commands to restore the missing or corrupted system datafile to its original location, e.g.:
cp -p /usr/backup/RMAN/system01.dbf  /usr/oradata/u01/IASDB/system01.dbf
2. startup mount;
3. recover datafile 1;
4. alter database open;

Complete Open Database Recovery. Non system tablespace is missing

If a non system tablespace is missing or corrupted while the database is open, recovery can be performed while the database remains open.
Prerequisites: a closed or open database backup and archived logs.
1. Use OS commands to restore the missing or corrupted datafile to its original location, e.g.:
cp -p /usr/backup/RMAN/user01.dbf /usr/oradata/u01/IASDB/user01.dbf
2. alter tablespace <tablespace_name> offline immediate;
3. recover tablespace <tablespace_name>;
4. alter tablespace <tablespace_name> online;

Complete Open Database Recovery (when the database is initially closed). Non system tablespace is missing

If a non system tablespace is missing or corrupted and the database crashed, recovery can be performed after the database is open.
Prerequisites: a closed or open database backup and archived logs.
1.   startup; (you will get ORA-01157 and ORA-01110 with the name of the missing datafile; the database will remain mounted)
2.   Use OS commands to restore the missing or corrupted datafile to its original location, e.g.:
cp -p /usr/backup/RMAN/user01.dbf /usr/oradata/u01/IASDB/user01.dbf
3.   alter database datafile 6 offline; (tablespace cannot be used because the database is not open)
4.   alter database open;
5.   recover datafile 6;
6.   alter tablespace <tablespace_name> online;

Recovery of a Missing Datafile that has no backups (database is open).

If a non system datafile that was not backed up since the last backup is missing, recovery can be performed if all archived logs since the creation of the missing datafile exist.
Prerequisites: all relevant archived logs.
1.   alter tablespace <tablespace_name> offline immediate;
2.   alter database create datafile '/user/oradata/u01/IASDB/newdata01.dbf';
3.   recover tablespace <tablespace_name>;
4.   alter tablespace <tablespace_name> online;
If the create datafile command needs to place the datafile in a location different from the original, use:
alter database create datafile '/user/oradata/u01/IASDB/newdata01.dbf' as '/user/oradata/u02/IASDB/newdata01.dbf';

Restore and Recovery of a Datafile to a different location.

If a non system datafile is missing and its original location is not available, the restore can be made to a different location and recovery performed.
Prerequisites: all relevant archived logs.
1.    Use OS commands to restore the missing or corrupted datafile to the new location, e.g.:
cp -p /usr/backup/RMAN/user01.dbf /usr/oradata/u02/IASDB/user01.dbf
2.    alter tablespace <tablespace_name> offline immediate;
3.    alter tablespace <tablespace_name> rename datafile '/user/oradata/u01/IASDB/user01.dbf' to '/user/oradata/u02/IASDB/user01.dbf';
4.    recover tablespace <tablespace_name>;
5.    alter tablespace <tablespace_name> online;

Control File Recovery

Always multiplex your controlfiles. In this scenario the controlfiles are missing and the database has crashed.
Prerequisites: a backup of your controlfile and all relevant archived logs.
1.    startup; (you get ORA-00205, missing controlfile; the instance starts but the database is not mounted)
2.    Use OS commands to restore the missing controlfile to its original location:
cp -p /usr/backup/RMAN/control01.dbf /usr/oradata/u01/IASDB/control01.dbf
cp -p /usr/backup/RMAN/control02.dbf /usr/oradata/u01/IASDB/control02.dbf
3.    alter database mount;
4.    recover automatic database using backup controlfile;
5.    alter database open resetlogs;
6.    make a new complete backup, as the database is open in a new incarnation and previous archived logs are no longer relevant.

Incomplete Recovery, Until Time/Sequence/Cancel

Incomplete recovery may be necessary when an archived log is missing, so recovery can only be made up to the previous sequence, or when an important object was dropped and recovery needs to be made to a point before the object was dropped.
Prerequisites: a closed or open database backup and archived logs, plus the time or sequence to which the 'until' recovery needs to be performed.
1.  If the database is open, shutdown abort
2.  Use OS commands to restore all datafiles to their original locations:
cp -p /usr/backup/RMAN/u01/*.dbf /usr/oradata/u01/IASDB/
cp -p /usr/backup/RMAN/u02/*.dbf /usr/oradata/u01/IASDB/
cp -p /usr/backup/RMAN/u03/*.dbf /usr/oradata/u01/IASDB/
cp -p /usr/backup/RMAN/u04/*.dbf /usr/oradata/u01/IASDB/
etc…
3.  startup mount;
4.  recover automatic database until time '2004-03-31:14:40:45';
5.  alter database open resetlogs;
6.  make a new complete backup, as the database is open in a new incarnation and previous archived logs are no longer relevant. Alternatively, instead of until time you may use until sequence or until cancel:
recover automatic database until sequence 120 thread 1; OR
recover database until cancel;

Rman Recovery Scenarios

RMAN recovery scenarios require that the database is in archivelog mode, and that backups of datafiles, controlfiles and archived redo log files are made using RMAN. Incremental RMAN backups may also be used.
RMAN can be used with its repository stored in the controlfile, or with a recovery catalog that may be installed in the same or another database.
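
As a quick check before relying on these scenarios, you can verify (and if needed enable) archivelog mode; a minimal sketch, assuming SYSDBA access:
SQL> archive log list
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
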
Configuration and operation recommendations:
Set the controlfile autobackup parameter to ON so that every backup also includes a
controlfile backup:
configure controlfile autobackup on;
Set the retention policy parameter to the recovery window you want to have;
e.g. redundancy 2 will keep the last two backups available once delete obsolete commands are executed:
configure retention policy to redundancy 2;
Execute your full backups with the option 'plus archivelog' to include your archivelogs with every backup:
backup database plus archivelog;
Perform daily maintenance routines so that your backup directory keeps only the number of backups you need:
crosscheck backup;
crosscheck archivelog all;
delete noprompt obsolete;
To work with RMAN and a database-based catalog, follow these steps (a short registration sketch follows the list):
1. sqlplus /
2. create tablespace repcat;
3. create user rmuser identified by rmuser default tablespace repcat temporary tablespace temp;
4. grant connect, resource, recovery_catalog_owner to rmuser;
5. exit
6. rman catalog rmuser/rmuser          # connect to rman catalog as the rmuser
7. create catalog                     # create the catalog
8. connect target /                   # connect to the target database
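
Once connected to both the catalog and the target, the target usually has to be registered before catalog-based backups can be taken; a minimal sketch:
RMAN> register database;                  # register the target database in the catalog
RMAN> report schema;                      # confirm the catalog sees the target's files
RMAN> backup database plus archivelog;    # first catalog-aware full backup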

Complete Closed Database Recovery. System tablespace is missing

In this case complete recovery is performed; only the system tablespace is missing, so the database can be opened without resetting the redo logs.
1.  rman target /
2.  startup mount;
3.  restore database;
4.  recover database;
5.  alter database open;

Complete Open Database Recovery. Non system tablespace is missing, database is up

1.   rman target /
2.   sql 'alter tablespace <tablespace_name> offline immediate';
3.   restore datafile 3;
4.   recover datafile 3;
5.   sql 'alter tablespace <tablespace_name> online';

Complete Open Database Recovery (when the database is initially closed). Non system tablespace is missing

A user datafile is reported missing when trying to start up the database. The datafile can be taken offline and the database started up. Restore and recovery are performed using RMAN. After recovery is performed the datafile can be brought online again.
1.    sqlplus /nolog
2.    connect / as sysdba
3.    startup mount
4.    alter database datafile ‘<datafile_name>’ offline;
5.    alter database open;
6.    exit;
7.    rman target /
8.    restore datafile ‘<datafile_name>’;
9.    recover datafile ‘<datafile_name>’;
10.   sql ‘alter tablespace <tablespace_name> online’;

Recovery of a Datafile that has no backups (database is up).

If a non system datafile that was not backed up since the last backup is missing, recovery can be performed if all archived logs since the creation of the missing datafile exist. Since the database is up you can check the tablespace name and take it offline. The offline immediate option is used to avoid the update of the datafile header.
Prerequisites: all relevant archived logs.
1.    sqlplus '/ as sysdba'
2.    alter tablespace <tablespace_name> offline immediate;
3.    alter database create datafile '/user/oradata/u01/IASDB/newdata01.dbf';
4.    exit
5.    rman target /
6.    recover tablespace <tablespace_name>;
7.    sql 'alter tablespace <tablespace_name> online';
If the create datafile command needs to place the datafile in a location different from the original, use:
alter database create datafile '/user/oradata/u01/IASDB/newdata01.dbf' as '/user/oradata/u02/IASDB/newdata01.dbf';

Restore and Recovery of a Datafile to a different location. Database is up.

If a non system datafile is missing and its original location is not available, the restore can be made to a different location and recovery performed.
Prerequisites: all relevant archived logs and a complete cold or hot backup.
1.    Use OS commands to restore the missing or corrupted datafile to the new location, e.g.:
cp -p /usr/backup/RMAN/user01.dbf /usr/oradata/u02/IASDB/user01.dbf
2.    alter tablespace <tablespace_name> offline immediate;
3.    alter tablespace <tablespace_name> rename datafile '/user/oradata/u01/IASDB/user01.dbf' to '/user/oradata/u02/IASDB/user01.dbf';
4.    rman target /
5.    recover tablespace <tablespace_name>;
6.    sql 'alter tablespace <tablespace_name> online';

Control File Recovery

Always multiplex your controlfiles. If you lose only one controlfile you can replace it with a copy of a surviving one and start up the database. If all controlfiles are missing, the database will crash.
Prerequisites: a backup of your controlfile and all relevant archived logs. When using RMAN, always set the controlfile autobackup configuration parameter to ON. You will need the DBID to restore the controlfile; get it from the name of the backed up controlfile. It is the number following the 'c-' at the start of the name.
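For example, with the default autobackup location and format the DBID is embedded in the autobackup file name (the name below is only illustrative):
$ ls $ORACLE_HOME/dbs/c-*
c-1499754868-20120914-00
Here the DBID is 1499754868; that is the value to pass to set dbid in step 2 below.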
1.   rman target /
2.   set dbid <dbid#>
3.   startup nomount;
4.   restore controlfile from autobackup;
5.   alter database mount;
6.   recover database;
7.   alter database open resetlogs;
8.   make a new complete backup, as the database is open in a new incarnation and previous archived logs are no longer relevant.

Incomplete Recovery, Until Time/Sequence/Cancel

Incomplete recovery may be necessary when the database crashes and needs to be recovered, and in the recovery process you find that an archived log is missing. In this case recovery can only be made up to the sequence before the missing one.
Another scenario for incomplete recovery occurs when an important object was dropped or incorrect data was committed on it.
In this case recovery needs to be performed to a point before the object was dropped.
Prerequisites: a full closed or open database backup and archived logs, plus the time or sequence to which the 'until' recovery needs to be performed.
1.   If the database is open, shut it down to perform a full restore.
2.   rman target /
3.   startup mount;
4.   restore database;
5.   recover database until sequence 8 thread 1; # you must pass the thread; for a single instance it will always be 1.
6.  alter database open resetlogs;
7.  make a new complete backup, as the database is open in a new incarnation and previous archived logs are no longer relevant. Alternatively, instead of until sequence you may use until time, e.g.: '2012-01-04:01:01:10'.

RMAN scenarios


http://itcareershift.com/blog1/2010/11/25/real-life-oracle-dba-scenarios-using-rman-backups-and-troubleshooting-new-oracle-dba-career/
http://docs.oracle.com/cd/B28359_01/backup.111/b28270/rcmadvre.htm
http://www.reachdba.com/showthread.php?453-RMAN-All-major-Restoration-and-Recovery-Scenarios


Note: When the database is in the open state we cannot delete datafiles at the OS level (the file is being used by another user/process).


--------------------------------------------------------------Datafile removed physically from OS level and no backup of that datafile------------------------------------

1) rman backup taken 01-01-12
2) add datafile and add some tables and data on 02-01-12
3) remove the datafile at the OS level on 02-01-12; no backup taken till now
4) alter tablespace USR offline;
ORA-01116: error in opening database file 11
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
5) alter tablespace drtbs offline immediate;
select TABLESPACE_NAME,STATUS from dba_tablespaces;
6) select file_id, file_name from dba_data_files where Tablespace_name='USR';
7) rman target /
8) RMAN> list backup of datafile 11;
9) RMAN> list backup of datafile 12; ---- does not exist because there is no backup
10) RMAN> restore tablespace USR;
11) RMAN> recover tablespace USR;
select file_id, file_name from dba_data_files where Tablespace_name='USR';


--------------------------------------------------------------------------Loss of Controlfile-----------------------------

1) select name from v$controlfile;
2) remove the controlfile physically at the OS level
3) alter tablespace users online;
alter tablespace users online
*
ERROR at line 1:
ORA-00603: ORACLE server session terminated by fatal error

SQL> shutdown abort
ORACLE instance shut down.
SQL> startup nomount;

Since we are not using a RMAN catalog we need to set the DBID

RMAN> set dbid=2415549446;

executing command: SET DBID

Restore the controlfile

RMAN> run {
 restore controlfile from autobackup;
 }

This fails with error RMAN-06172 because, checking with show all;, the controlfile autobackup configuration is not present; it also cannot be configured again because of the nomount state, so set it for the session instead:

set controlfile autobackup format for device type disk to '/orabkp\%F';
restore controlfile from autobackup;


RMAN> alter database mount;
RMAN> recover database;
SQL> alter database open resetlogs;

-------------------------------------recover and open the database if the archive log required for recovery is missing

SQL> recover database until cancel using backup controlfile;
CANCEL
SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-01195: online backup of file 1 needs more recovery to be consistent


1) Set _ALLOW_RESETLOGS_CORRUPTION=TRUE in the init.ora file.
2) startup mount
3) recover database until cancel using backup controlfile;
4) alter database open resetlogs;
5) Reset undo_management to "MANUAL" in the init.ora file.
6) Start up the database.
7) Create a new undo tablespace.
8) Change undo_management back to "AUTO" and undo_tablespace to the new tablespace.


------------------------------------------Redo log removed from OS level-----------------------------------

select member from v$Logfile;

rm redo*.log

If one or all of the online redo logfiles are deleted, the database hangs and the alert log file shows:
ARC1: Failed to archive thread 1 sequence 10 (0)

SQL> shutdown immediate;
SQL> startup mount;

SELECT GROUP#,SEQUENCE#,STATUS,FIRST_CHANGE# from v$log;

   GROUP#  SEQUENCE# STATUS           FIRST_CHANGE#
--------- ---------- ---------------- -------------
        1         10 CURRENT             8.6056E+10
        3          9 INACTIVE            8.6056E+10
        2          8 INACTIVE            8.6056E+10

RMAN TARGET /
RMAN> run {
2> set until sequence 10;
3> restore database;
4> recover database;
5>  alter database open resetlogs;
6> }
EXIT

The recovery process creates the online redo logfiles at the operating system level also.
Since we have done an incomplete recovery with open resetlogs, we should take a fresh complete backup of the database (see the sketch after the note below).

NOTE: Please make sure you remove all the old archived logfiles from the archived area.
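
A minimal post-resetlogs cleanup and fresh backup, assuming the retention policy configured earlier:
RMAN> backup database plus archivelog;
RMAN> crosscheck archivelog all;
RMAN> delete noprompt expired archivelog all;
RMAN> delete noprompt obsolete;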




------------------------------------------------------Drop tablespace by mistake (drop tablespace test including contents and datafiles;)------Point in Time

The DBA realized the mistake.

Check the alert log for the exact time the tablespace was dropped.

-------Alert log--------------

Sun Feb  4 10:59:43 2012
drop tablespace test including contents and datafiles
Sun Feb 4 10:59:47 2012
Completed: drop tablespace test including contents and datafiles



SQL> shutdown abort
rman target /

RMAN> RUN
{
STARTUP NOMOUNT
SET UNTIL TIME "TO_DATE ('04-02-07 10:58:00', 'DD-MM-YY HH24:MI:SS')";
RESTORE CONTROLFILE;
ALTER DATABASE MOUNT;
RESTORE DATABASE;
RECOVER DATABASE;
ALTER DATABASE OPEN RESETLOGS;
}

select name from v$tablespace;

-----------------------------------------------------------Complete loss of all database files including SPFILE using RMAN

ls -l
rm *.dbf
ls -l *.dbf
ls: *.dbf: No such file or directory
mv spfileopsdba.ora spfileopsdba.org
ls -lt spfile*          # spfile removed


Database Details
------------------
Database Name=OPSDBA
Machine Name=ITLINUXDEVBLADE07
DBID=1499754868 (select dbid from v$database)
---------------------

Step 1: RECOVERY OF SPFILE

Create spfile.rcv as:
set dbid= 1499754868
run {
startup nomount force  ;
 };

rman target / catalog rman10/rman10@rman10p cmdfile=spfile.rcv

Now restore the spfile

set dbid=1499754868
run {
allocate channel ch1 type 'sbt_tape' parms 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo.opsdbad.opt)';
restore spfile ;
release channel ch1 ;
}


Step 2: RESTORE OF CONTROLFILES

Same Steps as spfile with the restore command changed. So the new script is

set dbid=1499754868
run {
allocate channel ch1 type 'sbt_tape' parms 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo.opsdbad.opt)';
restore controlfile ;
release channel ch1 ;
}

Step 3: RESTORE OF DATABASE
SQL> conn sys as sysdba
SQL> alter database mount;

Now get the log sequence number of the database from the catalog database:

select sequence# from rc_backup_redolog where db_name='OPSDBA';

RMAN> run {
2> allocate channel ch1 type 'sbt_tape' parms 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo.opsdbad.opt)';
3> restore database ;
4> recover database until logseq=6; # got from the above query
5> release channel ch1 ;
6> }
7>


Step 4: alter database open resetlogs;



--------------------------- SYSTEM / SYSAUX / UNDO tablespace is missing

In this case complete recovery is performed; only the SYSTEM (or SYSAUX/UNDO) tablespace is missing, so the database can be opened without the resetlogs option.

$ rman target /
RMAN> startup mount;
RMAN> restore tablespace SYSTEM;
RMAN> recover tablespace SYSTEM;
RMAN> alter database open;

$ rman target /
RMAN> startup mount;
RMAN> restore tablespace SYSAUX;
RMAN> recover tablespace SYSAUX;
RMAN> alter database open;



------------------------------Non system tablespace is missing, database is up

$ rman target /
RMAN> sql 'alter tablespace <tbs> offline immediate' ;
RMAN> restore tablespace <tbs> ;
RMAN> recover tablespace <tbs> ;
RMAN> sql 'alter tablespace <tbs> online' ;

To restore/recover only datafile(s)
$ rman target /

RMAN> sql 'alter database datafile <file#> offline';
RMAN> restore datafile <file#>;
RMAN> recover datafile <file#>;
RMAN> sql 'alter database datafile <file#> online' ;

-------------------------------Non system tablespace is missing, database is closed
sqlplus "/ as sysdba"
startup mount
alter database datafile <file#> offline;
alter database open;
exit;

$rman target /
RMAN> restore datafile <file#>;
RMAN> recover datafile <file#>;
RMAN> sql 'alter tablespace <tablespace_name> online';


-----------------------restore a tablespace to a new location
 rman target / catalog rman/rman@rcat
run {
allocate channel ch1 type disk;
sql 'ALTER TABLESPACE USERS OFFLINE IMMEDIATE';
set newname for datafile '/disk1/oracle/users_1.dbf' to '/disk2/oracle/users_1.dbf';
restore tablespace users;
# make the control file recognize the restored file as current
switch datafile all;
}

RMAN> recover tablespace USERS;
RMAN> sql 'alter tablespace USERS online';

-------------------Recovery of a Datafile that has no backups (database is up)
If a non system datafile that was not backed up since the last backup is missing, recovery can be performed if all archived logs since the creation of the missing datafile exist.

Prerequisites: all relevant archived logs.

$ rman target /
RMAN> sql 'alter database datafile <file#> offline';
RMAN> restore datafile <file#> ;
-- no need to create a blank file, restore command takes care of that.
RMAN> recover datafile <file#>;
RMAN> sql 'alter database datafile <file#> online';

-------------------------------Recovering After the Loss of All Members of an Online Redo Log Group

If a media failure damages all members of an online redo log group, then different scenarios can occur depending on the type of online redo log group affected by the failure and the archiving mode of the database.
If the damaged log group is inactive, then it is not needed for crash recovery; if it is active, then it is needed for crash recovery.
SQL> startup mount

Case-1 If the group is INACTIVE
Then it is not needed for crash recovery
Clear the archived or unarchived group. (For archive status, check in v$log)
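For example, to see which groups still need archiving:
SQL> select group#, sequence#, status, archived from v$log;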

1.1 Clearing Inactive, Archived Redo

alter database clear logfile group 1 ;
alter database open ;

1.2 Clearing Inactive, Not-Yet-Archived Redo

alter database clear unarchived logfile group 1 ;
OR
(If there is an offline datafile that requires the cleared log to bring it online, then the keywords UNRECOVERABLE DATAFILE are required. The datafile and its entire tablespace have to be dropped because the redo necessary to bring it online is being cleared, and there is no copy of it. )

alter database clear unarchived logfile group 1 unrecoverable datafile;

Take a complete backup of database.

And now open database:
alter database open ;


Case-2 If the group is ACTIVE

Restore backup and perform an incomplete recovery.
And open database using resetlogs
alter database open resetlogs;

Case-3 If the group is CURRENT

Restore backup and perform an incomplete recovery.
And open database using resetlogs
alter database open resetlogs;


--------------------------
1) OPEN REDO01.LOG (INACTIVE) AND EDIT AND CORRUPT IT

SQL> ALTER SYSTEM SWITCH LOGFILE; -------HANG
alter database clear logfile group 1 ;

ORA-00350: log 1 of instance rkangel (thread 1) needs to be archived
ORA-00312: online log 1 thread 1: 'G:\RKANGEL\REDO01.LOG'

SQL> SHUT IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 1;
SQL> ALTER DATABASE OPEN;
SQL> ALTER SYSTEM SWITCH LOGFILE;
SQL> ALTER SYSTEM SWITCH LOGFILE;
SQL> ALTER SYSTEM SWITCH LOGFILE;

-------------2) OPEN REDO02.LOG (ACTIVE) AND EDIT AND CORRUPT IT
SQL> ALTER SYSTEM SWITCH LOGFILE; -------HANG
ERROR at line 1:
ORA-03113: end-of-file on communication channel

SQL> STARTUP
ORACLE instance started.

Total System Global Area 6413680640 bytes
Fixed Size                  2267184 bytes
Variable Size            4563404752 bytes
Database Buffers         1828716544 bytes
Redo Buffers               19292160 bytes
Database mounted.
ORA-00313: open failed for members of log group 2 of thread 1
ORA-00312: online log 2 thread 1: 'G:\RKANGEL\REDO02.LOG'
ORA-27046: file size is not a multiple of logical block size
OSD-04012: file size mismatch (OS 52429318)

ALTER DATABASE CLEAR LOGFILE GROUP 2;
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 1;
alter database clear logfile group 2 unrecoverable datafile;
*
ERROR at line 1:
ORA-01624: log 2 needed for crash recovery of instance rkangel (thread 1)
ORA-00312: online log 2 thread 1: 'G:\RKANGEL\REDO02.LOG'


-----note: For ACTIVE or CURRENT logfile
Restore backup and perform an incomplete recovery.
And open database using resetlogs
alter database open resetlogs;


1) Set _ALLOW_RESETLOGS_CORRUPTION=TRUE in the init.ora file.
2) startup mount
3) recover database until cancel using backup controlfile;
4) alter database open resetlogs;
5) Reset undo_management to "MANUAL" in the init.ora file.
6) Start up the database.
7) Create a new undo tablespace.
8) Change undo_management back to "AUTO" and undo_tablespace to the new tablespace.



--------------------------------------Recovering a NOARCHIVELOG Database
Restore of a database running in NOARCHIVELOG mode is similar to restore of a database in ARCHIVELOG mode. The main differences are:

Only consistent backups can be used in restoring a database in NOARCHIVELOG mode.

Media recovery is not possible because no archived redo logs exist.

When recovering a NOARCHIVELOG database, specify the NOREDO option on the RECOVER command to indicate that RMAN should not attempt to apply archived redo logs.
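
A minimal restore sketch for a NOARCHIVELOG database from a consistent (cold) backup, assuming a controlfile autobackup is available:
RMAN> startup force nomount;
RMAN> restore controlfile from autobackup;
RMAN> alter database mount;
RMAN> restore database;
RMAN> recover database noredo;
RMAN> alter database open resetlogs;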



----------------------------------------------------------------Remove Temporary datafile from OS level---------------------------------

shut immediate
Remove Temporary datafile from OS level
startup
the temp file is created automatically

shut immediate
startup mount
Remove Temporary datafile from OS level
alter database open;
the temp file is created automatically
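
If the temp file is not recreated automatically, it can be added manually (the path below is only illustrative):
SQL> alter tablespace TEMP add tempfile 'G:\RKANGEL\TEMP01.ORA' size 500m autoextend on;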


----------------------------------------------------------------Remove UNDO datafile from OS level----------------------------------------

shut immediate
startup mount
Remove UNDO datafile from OS level
alter database open;

ERROR at line 1:
ORA-01157: cannot identify/lock data file 3 - see DBWR trace file
ORA-01110: data file 3: 'G:\RKANGEL\UNDOTBS01.ORA'

RMAN> restore tablespace UNDOTBS1;
RMAN> recover tablespace UNDOTBS1;
RMAN> alter database open;


----------------------------------------How to recover dropped / deleted table using RMAN backup----------------------

restore the backup onto the UAT machine
export the table from UAT
import into LIVE

1) Incomplete recovery (to just before the table was dropped) can recover your table from the RMAN backup.
or
2) You can restore/duplicate the database to another database with the RMAN backup and
export the concerned table from the restored/duplicated database into the database you want.

3) Use flashback (in Oracle 10g) for individual table recovery.
Oracle 10g makes life easier with the ability to recover a dropped table similar to recovering a file from a Windows Recycle Bin.
And in case if this on 10g,
Eg: You dropped the DEPT table, which belongs to the USER_DATA tablespace
You can use the following command to recover a dropped table:

FLASHBACK TABLE <table_name> TO BEFORE DROP;



4) RMAN TSPITR is most useful for recovering the following (a minimal sketch follows this list):
An erroneous DROP TABLE or TRUNCATE TABLE statement
A table that has become logically corrupted
An incorrect batch job or other DML statement that has affected only a subset of the database
A logical schema to a point different from the rest of the physical database when multiple schemas exist in separate tablespaces of one physical database
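
A minimal TSPITR sketch (tablespace name, time and auxiliary destination are only illustrative):
RMAN> recover tablespace USERS
      until time "to_date('2012-09-20 13:45:00','YYYY-MM-DD HH24:MI:SS')"
      auxiliary destination 'D:\aux';
RMAN> sql 'alter tablespace USERS online';
After a TSPITR the recovered tablespace should be backed up again.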

--------------------------------------------------------Point in time recovery using RMAN (until a log sequence number) ----------------deleted some data from a table by mistake

Someone deleted all data from a table at 1:47 PM (delete from myobjects); this shows up in SELECT * FROM DBA_AUDIT_TRAIL;

windows dir d:\archive
20/09/2012  01:44 PM             1,024 ARC0000000010_0794497245.0001
20/09/2012  01:47 PM        41,461,248 ARC0000000011_0794497245.0001


archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u02/ORACLE/opsdba/arch
Oldest online log sequence     21
Next log sequence to archive   23
Current log sequence           23
--Note the current log sequence number (23)

We need to determine the log sequence we need to recover until
select sequence#,first_change#, to_char(first_time,'HH24:MI:SS') from v$log order by 3

SEQUENCE# FIRST_CHANGE# TO_CHAR(
--------- ------------- --------
       21    8.6056E+10 13:48:16
       22    8.6056E+10 13:48:22
       23    8.6056E+10 13:48:29

Log sequence 21 was first written to at 1:48 PM, so we should recover to a log sequence before this – i.e. sequence# 10 (from the Windows dir listing above)


SQL> shutdown immediate;
SQL> startup mount;
RMAN> run {
set until sequence=10;  # add one to the last sequence number we want applied (recovery stops before this sequence)
restore database;
recover database;
}
alter database open resetlogs;

ORA-01190: control file or data file 1 is from before the last RESETLOGS
ORA-01110: data file 1: 'G:\RKANGEL\SYSTEM01.ORA'

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of alter db command at 09/20/2012 14:15:38
ORA-01147: SYSTEM tablespace file 1 is offline
ORA-01110: data file 1: 'G:\RKANGEL\SYSTEM01.ORA'

SQL> ALTER DATABASE DATAFILE 1 ONLINE;
SQL> RECOVER DATABASE;

ORA-00283: recovery session canceled due to errors
ORA-16433: The database must be opened in read/write mode.


select count(*) from myobjects;


---------------------------------------------------Recover Dropped Table from Recyclebin using Flashback Table-------------------------

Oracle Flashback Table enables you to restore a table to its state as of a previous point in time. It provides a fast, online solution for recovering a table that has been accidentally modified or deleted by a user or application. In many cases, Oracle Flashback Table eliminates the need for you to perform more complicated point-in-time recovery operations.

Oracle Flashback Table:

Restores all data in a specified table to a previous point in time described by a timestamp or SCN.

Performs the restore operation online.

Automatically maintains all of the table attributes, such as indexes, triggers, and constraints that are necessary for an application to function with the flashed-back table.

Maintains any remote state in a distributed environment. For example, all of the table modifications required by replication if a replicated table is flashed back.

Maintains data integrity as specified by constraints. Tables are flashed back provided none of the table constraints are violated. This includes any referential integrity constraints specified between a table included in the FLASHBACK TABLE statement and another table that is not included in the FLASHBACK TABLE statement.

Even after a flashback operation, the data in the original table is not lost. You can later revert to the original state.

FLASHBACK TABLE <table_name> TO BEFORE DROP;

Some other variations of the flashback database command include:

FLASHBACK DATABASE TO TIMESTAMP my_date;
FLASHBACK DATABASE TO BEFORE TIMESTAMP my_date;
FLASHBACK DATABASE TO SCN my_scn;
FLASHBACK DATABASE TO BEFORE SCN my_scn;

--------------------------------------------------Recover deleted Table Data from Recyclebin using Flashback Table---------------------

On Oracle Database 11g (10gR2...), we can rewind one or more tables back to their contents at a previous time without affecting other database objects.

Before we use Flashback Table, we must enable row movement on the table, because rowids will change after the flashback.

Example: Flashback the table back to previous time using SCN


select count(*) from LDBO.test;
COUNT(*)
----------
68781

SQL> SELECT CURRENT_SCN FROM V$DATABASE;

CURRENT_SCN
-----------
1584494

SQL> delete from LDBO.test where rownum <= 50000;

50000 rows deleted.

SQL> commit;

Commit complete.

SQL> select count(*) from LDBO.test;

COUNT(*)
----------
18781

SQL> SELECT CURRENT_SCN FROM V$DATABASE;

CURRENT_SCN
-----------
1587106

Enable row movement:

SQL> alter table LDBO.test enable row movement;

Table altered.

SQL> FLASHBACK TABLE LDBO.test to scn 1584494;

Flashback complete.

SQL> select count(*) from LDBO.test;


SQL> alter table LDBO.test disable row movement;

Table altered.


We can rewind the table back to previous time using timestamp:

SQL> alter session set nls_date_format='YYYY/MM/DD HH24:MI:SS';

Session altered.

SQL> select sysdate from dual;

SYSDATE
-------------------
2009/08/30 17:01:09

SQL> delete from LDBO.test ;

68781 rows deleted.

SQL> commit;

Commit complete.

SQL> select sysdate from dual;

SYSDATE
-------------------
2009/08/30 17:03:18

SQL> select count(*) from LDBO.test;

COUNT(*)
----------
0

SQL> alter table LDBO.test enable row movement;

Table altered.

SQL> flashback table LDBO.test to timestamp TO_TIMESTAMP('2009/08/30 17:01:09','YYYY/MM/DD HH24:MI:SS');

Flashback complete.

SQL> select count(*) from LDBO.test;

COUNT(*)
----------
68781

--------------------------------------------------------------------------------------------Flashback Table recover table to different table----------------------

FLASHBACK TABLE test TO BEFORE DROP RENAME TO test2;

flashback table LDBO.test to timestamp TO_TIMESTAMP('2009/08/30 17:01:09','YYYY/MM/DD HH24:MI:SS') RENAME TO test2;




Wednesday, August 29, 2012

Oracle 11g Advanced Compression


You can implement the Advanced Compression option in databases whose tables have heavy update and insert activity, to manage fast-growing amounts of data.

The following are very simple commands to implement advanced compression:

alter table tablename compress for all operations;

alter index index1 rebuild compress;
alter index index2 rebuild compress;


SELECT table_name, compression, compress_for FROM user_tables where table_name='tablename';

select index_name,COMPRESSION,STATUS from dba_indexes where table_name='tablename';


Note: implement the same on UAT first to gauge the performance impact.



The following is a test case to show the difference between the 10gR2 Table Compression feature and 11gR2's Advanced Compression. Oracle provided a table-level compression feature in 10gR2. While this compression provided some storage reduction, 10g's table compression only compressed the data during BULK LOAD operations. New and updated data were not compressed.

With 11g’s Advanced Compression new and updated data are also compressed; achieving highest level in storage reduction, while providing performance improvements as compressed blocks result in more data being moved per I/O. 

Note 1: Basic compression comes with Oracle 11g Enterprise Edition; making a table OLTP compressed is again an extra-cost (US$11,500.00/processor), perpetual option on top of Enterprise Edition.

Note 2: There is a tradeoff between disk I/O and CPU, and it depends on how your system is configured. If your performance bottleneck is disk I/O, you will almost certainly benefit from using compression, because it saves a lot of disk reads. If, on the other hand, you are low on CPU, you might not.

------------------------------Test Case-----------------------------------------------

The following test case was executed on a 10g database server.

A table called TEST was created without the COMPRESSION option.

SQL> select table_name,compression from dba_tables where table_name = 'TEST';

TABLE_NAME COMPRESS
------------------------- -------------
TEST DISABLED


SQL> select bytes from dba_segments where segment_name = 'TEST';

SUM(BYTES)
------------------
92274688 


The size of the table was around 92MB.

Now create another table called TEST_COMPRESSED with COMPRESS option. 

SQL> create table TEST_COMPRESSED COMPRESS as select * from test;

Table created.

SQL> select table_name, compression from dba_tables where table_name like 'TEST
%';

TABLE_NAME COMPRESS
------------------------------ ---------------
TEST_COMPRESSED ENABLED
TEST DISABLED


Now let’s check the size of the COMPRESSED table.


SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';

SUM(BYTES)
----------------
30408704

Check out the size of the COMPRESSED table. It is only 30MB – roughly a third of the original size (about a 67% reduction). So far so good.

Now let’s do a plain insert into the COMPRESSED table.

SQL> insert into TEST_COMPRESSED select * from TEST;

805040 rows created.

SQL> commit;

Commit complete.

Let’s check the size of the COMPRESSED table. 

SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED'

2 /

SUM(BYTES)
----------
117440512


Wow! From 30MB to 117MB? So, plain INSERT statement does not COMPRESS the data in 10g.

(You will see this is not the case with 11g)

Now let’s do the same insert with a BULK LOAD 

SQL> insert /*+ APPEND */ into TEST_COMPRESSED select * from TEST;

805040 rows created.

SQL> commit;

Commit complete.

SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';


SUM(BYTES)
----------
142606336

Ok, now the size of the COMPRESSED table is 142MB from 117MB. For the same number of rows, the table size only increased by 25MB. So BULK LOAD compresses the data. 

Let’s check other DML statements such as DELETE and UPDATE against the COMPRESSED table. 


SQL> delete from test_compressed where rownum < 100000;

99999 rows deleted.

SQL> commit;

Commit complete.

SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';

SUM(BYTES)
----------
142606336

No change in total size of the table. DELETE has no impact as expected.


Let’s check UPDATE. 

SQL> update test_compressed set object_name = 'XXXXXXXXXXXXXXXXXXXXXXXXXX' where
rownum < 100000;

99999 rows updated.

SQL> commit;


SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';

SUM(BYTES)
----------
150994944

The table size is increased by 8MB? No compression for UPDATE statement either. 


All this clearly shows that 10g's table COMPRESSION works great for initial BULK LOADS; however, subsequent UPDATEs, DELETEs and INSERTs will not result in COMPRESSED blocks.





Now, let’s see 11g’s Test Results.

The following SQL statements were executed against 11.2.0.1 database version.


TEST table of 100MB in size was created as before.



SQL> select bytes from dba_segments where segment_name = 'TEST';

BYTES
----------
100663296

So 100MB of table created.

Let’s create a table with COMPRESS FOR ALL OPERATIONS option. This is only available in 11g.


SQL> create table test_compressed compress for all operations as select * from
test;

Table created.

SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';

BYTES
----------
31457280


Check out the size of the compressed table vs. the uncompressed table: about 70% less space usage on the compressed table. Not a big difference compared to 10g at this stage.




Let’s check other DML statements.

Let’s do a plain insert to the compressed table.

SQL> insert into TEST_COMPRESSED select * from test;

789757 rows created.

SQL> commit;

Commit complete.

SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';

BYTES
----------
75497472



11g's Advanced Compression compressed the 100MB of newly inserted data down to roughly 44MB in the compressed table, WITHOUT the BULK LOAD option.

Now let’s do the BULK LOAD onto 11g’s COMPRESSED table.


SQL> insert into /*+ APPEND */ test_compressed select * from TEST;

789757 rows created.

SQL> commit;

Commit complete.

SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';

BYTES
----------
109051904
It has the same impact as the plain insert.

What about deletes and updates?


SQL> delete from test_compressed where rownum < 100000;

99999 rows deleted.

SQL> commit;

Commit complete.

SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';

BYTES
----------
109051904


No change for deletes. This is expected, as blocks are compressed only when new rows are added to existing blocks and the threshold defined by PCTFREE is reached.


SQL> update test_compressed set object_name = 'XXXXXXXXXXXXXXXXXXXXXXXXXX' where 

2 rownum < 100000;

99999 rows updated.

SQL> commit;

Commit complete.

SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';

BYTES
----------
109051904


There is no change in this case, as the existing blocks were able to accommodate the updates. However, the same update generated more data in 10g.


Saturday, August 25, 2012

CONSTRAINT ERROR: find which data is causing the problem


d:\app\Administrator\product\11.2.0\dbhome_1\RDBMS\admin\utlexcpt.sql

create table exceptions(row_id rowid,
               owner varchar2(30),
               table_name varchar2(30),
       constraint varchar2(30));

If someone disables the constraint and then tries to enable it, it shows an error; the exceptions table stores the table name and rowid of the offending rows, as in the example below.

 Alter Table <table_name> Enable Constraint <constraint_name> exceptions into exceptions;
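
After the ENABLE fails, the offending rows can be located through the exceptions table; a small sketch (table and constraint names are just examples):
SQL> alter table emp enable constraint ck_emp_sal exceptions into exceptions;
SQL> select * from exceptions;
SQL> select * from emp where rowid in (select row_id from exceptions);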


Friday, August 24, 2012

RESTRICT NON-DBA USERS FROM ACCESSING DEVELOPMENT TOOLS LIKE SQLPLUS, TOAD, ...


Create or Replace TRIGGER TG_OTHER_TOOL
After Logon On Database
Declare
  v_prog sys.v_$session.program%TYPE;
  v_module sys.v_$session.module%TYPE ;
lnUsercode number ;
lcUserdetails Varchar2(100)  ;
lnTotalrecords number ;
Begin
  Select program,module into v_prog,v_module From sys.v_$session
      Where  audsid = USERENV('SESSIONID')
    and  audsid != 0  -- Don't Check SYS Connections
    and  rownum = 1;  -- Parallel processes will have the same AUDSID's
  Select Count(*) into lnTotalrecords From Dba_Users Dba_Users
  Where Dba_Users.Username=User and Dba_Users.Default_tablespace='USR' ;
  If lnTotalrecords!=0 then
      IF UPPER(V_MODULE) NOT IN ('LD.EXE','LDSCHEDULER.EXE','LDSCHEDULER.VSHOST.EXE','DBMS_SCHEDULER','DLLHOST.EXE')
     Then
        Raise_application_error(-20000,'On Database Development tools are Restricted');
     end if;
  end if ;
  lnUsercode:=-1;
  If ((upper(v_prog)='LD.EXE' and upper(v_module)='LD.EXE') or
      (upper(v_prog)='LDSCHEDULER.EXE' and upper(v_module)='LDSCHEDULER.EXE') or
      (upper(v_prog)='LDSCHEDULER.VSHOST.EXE' and upper(v_module)='LDSCHEDULER.VSHOST.EXE') or
      (upper(v_prog)='ORACLE.EXE (J000)' and upper(v_module)='DBMS_SCHEDULER') or
      (upper(v_prog)='DBMS_SCHEDULER' and upper(v_module)='DBMS_SCHEDULER') or
      (upper(v_prog)='DLLHOST.EXE' and upper(v_module)='DLLHOST.EXE')) then
     lcUserdetails:=sys_context('userenv','ip_address')  ;
     Insert into tbltempoperationstatistics (nOpercode,cOperationname) Values (lnUsercode,lcUserdetails) ;
  else
     lcUserdetails:='USER LOGGED ON' ;
     Insert into tbltempoperationstatistics (nOpercode,cOperationname) Values (lnUsercode,lcUserdetails) ;
  end if ;  
  Commit ;
  Exception
      When NO_DATA_FOUND then
           NULL ;
End;
/





 
If a non-DBA user tries to log in via sqlplusw / PL/SQL Developer or some other tool, it shows an error because of splogininformation and tglogin.
If sqlplusw.exe is renamed to abc.exe, the non-DBA user can log in through it again,
because only v$session.program is checked in the SP for that.
You also have to check v$session.module:
select program,module from v$session;
abc.exe   SQL*Plus
MODULE shows the currently executing module;
PROGRAM shows the operating system program name.
 

The MODULE column of V$SESSION only gets populated through the DBMS_APPLICATION_INFO package.
SQL*Plus automatically uses this package and registers the application with the database.
This is why you can see it in the MODULE column of V$SESSION. Unfortunately, this won't happen until near the end of the SQL*Plus initialization.

You cannot call set_module before logon, for the obvious reason that you are not yet logged in.

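For illustration, an application can register its own module name explicitly once it is connected (names are arbitrary):
SQL> exec dbms_application_info.set_module(module_name => 'ABC.EXE', action_name => 'LOGIN');
SQL> select program, module from v$session where audsid = userenv('SESSIONID');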

http://docs.oracle.com/cd/A58617_01/server.804/a53717/ape.htm

PRODUCT_USER_PROFILE is owned by SYSTEM not SYS.

PRODUCT_USER_PROFILE only works for SQL*Plus, not for other tools.

One alternative is to make use of the PRODUCT_USER_PROFILE table to restrict non-DBA users from firing any command after logging in.


e.g. to disallow non-DBA users from selecting any data from the SQL*Plus prompt, log in as SYSTEM and run:
insert into product_user_profile (product,userid,attribute,char_value)values ('SQL*Plus','%','SELECT','DISABLED'); 
insert into product_user_profile (product,userid,attribute,char_value)values ('SQL*Plus','%','UPDATE','DISABLED'); 
insert into product_user_profile (product,userid,attribute,char_value)values ('SQL*Plus','%','DELETE','DISABLED'); 
insert into product_user_profile (product,userid,attribute,char_value)values ('SQL*Plus','%','INSERT','DISABLED'); 
commit; 








Oracle alert.log to Table



The following script creates two tables: alert_log and alert_log_disk.
alert_log_disk is an external table and contains the content of the alert log.
alert_log will be empty after this script has been executed. It is used by the update_alert_log script, shown further below.
define alert_length="2000"

drop table alert_log;

create table alert_log (
  alert_date date,
  alert_text varchar2(&&alert_length)
)
storage (initial 512k next 512K pctincrease 0);

create index alert_log_idx on alert_log(alert_date)
storage (initial 512k next 512K pctincrease 0);

column db    new_value _DB    noprint;
column bdump new_value _bdump noprint;

select instance_name db from v$instance;

select value bdump from v$parameter 
 where name ='background_dump_dest';


drop   directory BDUMP;
create directory BDUMP as '&&_bdump';

drop table alert_log_disk;

create table alert_log_disk ( text varchar2(&&alert_length) )
organization external (
  type oracle_loader
  default directory BDUMP
      access parameters (
          records delimited by newline nologfile nobadfile
          fields terminated by "&" ltrim
      )
  location('alert_&&_DB..log')
)
reject limit unlimited;

update_alert_log.sql

Now, after the two tables are created, the alert_log table can be filled with the following script. It only loads records newer than the last load, and it stores the date/time on every line for convenience. It also helps when the alert logs get rotated: you still keep the history within an Oracle table. Finally, it strips out all the noise that is not needed when you are looking for errors.
update_alert_log.sql
set serveroutput on 

declare
  
  isdate         number := 0;
  start_updating number := 0;
  rows_inserted  number := 0;
  
  alert_date     date;
  max_date       date;
  
  alert_text     alert_log_disk.text%type;

begin
  
  /* find a starting date */
  select max(alert_date) into max_date from alert_log;
  
  if (max_date is null) then
    max_date := to_date('01-jan-1980', 'dd-mon-yyyy');
  end if;
  
  for r in (
    select substr(text,1,180) text from alert_log_disk
     where text not like '%offlining%' 
       and text not like 'ARC_:%' 
       and text not like '%LOG_ARCHIVE_DEST_1%'
       and text not like '%Thread 1 advanced to log sequence%'
       and text not like '%Current log#%seq#%mem#%'
       and text not like '%Undo Segment%lined%'
       and text not like '%alter tablespace%back%'
       and text not like '%Log actively being archived by another process%'
       and text not like '%alter database backup controlfile to trace%'
       and text not like '%Created Undo Segment%'
       and text not like '%started with pid%'
       and text not like '%ORA-12012%'
       and text not like '%ORA-06512%'
       and text not like '%ORA-000060:%'
       and text not like '%coalesce%'
       and text not like '%Beginning log switch checkpoint up to RBA%'
       and text not like '%Completed checkpoint up to RBA%'
       and text not like '%specifies an obsolete parameter%'
       and text not like '%BEGIN BACKUP%'
       and text not like '%END BACKUP%'
  )
  loop
  
    isdate     := 0;
    alert_text := null;
  
    select count(*) into isdate 
      from dual 
     where substr(r.text, 21) in ('2003','2004','2005','2006','2007')
       and r.text not like '%cycle_run_year%';
  
    if (isdate = 1) then  
  
      select to_date(substr(r.text, 5),'Mon dd hh24:mi:ss rrrr') 
        into alert_date 
        from dual;
  
      if (alert_date > max_date) then
        start_updating := 1;
      end if;
  
    else
      alert_text := r.text;
    end if;
  
    if (alert_text is not null) and (start_updating = 1) then
     
      insert into alert_log values (alert_date, substr(alert_text, 1, 180));
      rows_inserted := rows_inserted + 1;
      commit;
  
    end if;
  
  end loop;
  
  sys.dbms_output.put_line('Inserting after date '||to_char(max_date, 'MM/DD/RR HH24:MI:SS'));
  sys.dbms_output.put_line('Rows Inserted: '||rows_inserted);
  
  commit;

end;
/
Let's execute the script:
SQL> @update_alert_log
Inserting after date 01/01/80 00:00:00
Rows Inserted: 17361

PL/SQL procedure successfully completed.
The alert_log table now contains the errors as recorded in the alert.log file:
select alert_date, substr(alert_text,1, 69) 
  from alert_log;

Thursday, August 23, 2012

Restrict Users to access other application exe


create or replace procedure sp_ltmlogin AS
  L_MODULE SYS.V_$SESSION.MODULE%TYPE ;
  L_USER NUMBER ;
BEGIN
 SELECT MODULE INTO L_MODULE FROM SYS.V_$SESSION WHERE  AUDSID = USERENV('SESSIONID') AND  AUDSID != 0 AND  ROWNUM = 1;
 SELECT COUNT(*) INTO L_USER FROM DBA_USERS DBA_USERS WHERE DBA_USERS.USERNAME=USER AND DBA_USERS.PROFILE='LTM';
  IF L_USER=0 THEN
     IF UPPER(L_MODULE) ='LDLIVETRADEMONITOR.EXE'
     THEN
        RAISE_APPLICATION_ERROR(-20000,'you are not authorized to access LTM software');
     END IF;
  END IF ;
  IF L_USER > 0 THEN
     IF UPPER(L_MODULE) ='LD.EXE'
     THEN
        RAISE_APPLICATION_ERROR(-20000,'you are not authorized to access LD software');
     END IF;
  END IF ;

  EXCEPTION
      WHEN NO_DATA_FOUND THEN
           NULL ;
END;
/



create or replace
trigger tg_ltmlogin
      after logon on Database
Begin
    sp_ltmlogin ;
End;
/


To restrict the LTM application to LTM users only, you have to create an LTM profile and move those users to the LTM profile.

The following are the steps to implement this:

1. Connect as a SYSDBA user.
2. Create a profile for the LTM users and move them to the LTM profile:

CREATE PROFILE LTM
  LIMIT PASSWORD_REUSE_MAX DEFAULT
        PASSWORD_REUSE_TIME DEFAULT;

ALTER USER LTM PROFILE LTM;

      



Tuesday, August 21, 2012

Drop User Trigger


create or replace trigger tg_dropuser before drop on database
when (ora_dict_obj_type ='USER') 
 declare
 l_name varchar2(30);
 begin
 l_name := ORA_DICT_OBJ_NAME;
execute immediate 'delete from ldbo.tbloperatormenurights where coperator = ''' ||ora_dict_obj_name || '''';
 end;
/


create or replace trigger trUserDrop after drop on database 
when (ora_dict_obj_type ='USER') 
begin 
dbms_output.put_line('user dropped '|| ora_dict_obj_name); 
end; 
/


If you have privileges granted such as CREATE USER and DROP USER, this is a good example:
Use a database BEFORE DROP trigger. Make sure to create this as some other user (than the
user doing the drop, else they can drop the trigger!)
 grant create session, create user, drop user to a 
identified by a;
Grant succeeded.
 grant create session to b identified by b;
Grant succeeded.

 create table app_users ( username varchar2(30) );
Table created.
 insert into app_users values ( 'B' );
1 row created.


 create or replace trigger drop_user_trigger
 before drop on database
 when ( user = 'A' )
 declare
 l_cnt number;
 l_name varchar2(30);
 begin
 if ( ora_dict_obj_type = 'USER' )
 then
 l_name := ORA_DICT_OBJ_NAME;
 select count(*) into l_cnt
 from dual
 where exists ( select null
 from app_users
 where username = l_name );
 if ( l_cnt <> 1 )
then
 raise_application_error( -20001, 'You cannot drop that user' );
end if;
 end if;
 end;
/
Trigger created.
 @connect a/a
 drop user scott;
drop user scott
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-20001: You cannot drop that user
ORA-06512: at line 15

drop user b;
User dropped.


Tuesday, August 14, 2012

Oracle Fine grained auditing / track select records

In regular object-based auditing, the records can show that a user selected from a specific table, along with other helpful information such as timestamp, client machine name, etc.
What it does not record is what data the user selected from the table.

The audit trail does not record which particular record was selected. Since reading is not a transaction, the facts are not recorded in Oracle's redo logs, rollback segments or anywhere else.

Also, we cannot create a trigger on a select statement; triggers can only be created on insert / update / delete.



Fine grained auditing extends Oracle's standard auditing capabilities by allowing the user to audit actions based on user-defined predicates. It is independent of the AUDIT_TRAIL parameter setting and all audit records are stored in the FGA_LOG$ table, rather than the AUD$ table.



Notice that FGA will show the SQL text, regardless of the setting of AUDIT_TRAIL – no “EXTENDED” necessary here



-------------------------

http://www.dba-oracle.com/security/fga_enhancements.htm
http://peerdba.wordpress.com/2011/01/09/fine-grained-auditing-fga/

-------------------------

begin
dbms_fga.drop_policy(
   object_schema => 'LDBO',
   object_name   => 'TRANSACTIONS',
   policy_name   => 'AUDIT_TRANS'
);
END;
/


Below, I create a policy called AUDIT_TRANS that fires for any query against TRANSACTIONS in which someone views a row where QUANTITY>1000000.

begin
   dbms_fga.add_policy(
      object_schema   => 'LDBO',
      object_name     => 'TRANSACTIONS',
      policy_name     => 'AUDIT_TRANS',
      audit_condition => 'Quantity>1000000',
      audit_column    => 'QUANTITY',
      handler_schema  => null,
      handler_module  => null,
      enable          => true
   );
end;
/




This was used to turn auditing on only for select statements against the table. The same can now be rewritten as:

begin
   dbms_fga.add_policy (
      object_schema=>'CLAIM_SCHEMA',
      object_name=>'CLAIMS',
      policy_name=>'LARGE_CLAIM',
      audit_condition=>
        'CLAIM_AMOUNT>500 OR PAID_AMOUNT>500',
      audit_column=>
        'SSN, PROC_CODE',
      statement_types => 'SELECT'

  );
end;
/

To audit insert, delete, and update for the same table on the same policy condition and columns, we can use:

begin
   dbms_fga.add_policy (
      object_schema=>'CLAIM_SCHEMA',
      object_name=>'CLAIMS',
      policy_name=>'LARGE_CLAIM',
      audit_condition=>
        'CLAIM_AMOUNT>500 OR PAID_AMOUNT>500',
      audit_column=>
        'SSN, PROC_CODE',
      statement_types => 'SELECT,INSERT,UPDATE,DELETE'
  );
end;
/

The above code writes an entry into the table fga_log$ when the table is subjected to insert, update, delete, and select statements; when the auditing condition is satisfied and the audit columns are referenced.



select
   timestamp     c1,
   db_user       c2,
   os_user       c3,
   object_schema c4,
   object_name   c5,
   policy_name   c6,
   sql_text      c7
from
   dba_fga_audit_trail
order by
   timestamp;



--------------------------------------

select count(*)  from sys.fga_log$ where dbuid not in ('USER1','USER2');
select dbuid, count(*)  from sys.fga_log$ where dbuid not in ('USER1','USER2') group by dbuid;
select * from dba_audit_policies;
select * from dba_audit_policies  where  enabled='NO';
select * from dba_audit_policies  where  enabled='YES';
select * from dba_fga_audit_trail where db_user not in ('USER1','USER2');
select distinct object_name, policy_name from dba_fga_audit_trail where db_user not in ('USER1','USER2');
SELECT  policy_name, object_name, statement_type, os_user, db_user FROM dba_fga_audit_trail;
select * from dba_fga_audit_trail where db_user not in ('USER1','USER2');

--------------------------------- syntax to enable and disable a policy – take the select statement below, execute it, and run its output.

select 'begin dbms_fga.disable_policy(object_schema => ''APP_USER'', object_name => ''' || object_name || ''', policy_name => ''' || policy_name || '''); end;'
from dba_fga_audit_trail where db_user not in ('DBA_USER');




Reset Oracle Parameter Value


select name,value,isdefault,isses_modifiable,issys_modifiable,
isinstance_modifiable,isdeprecated
from v$parameter;

alter system reset some_param scope=both sid='*' ;


alter system reset some_param scope=spfile sid='*' ;

alter system reset some_param scope=memory sid='*' ;


scope =BOTH/SPFILE/MEMORY



For string parameters, setting to an empty string will restore the default.

ALTER SYSTEM SET parameter = '' scope=SPfile;


For any parameter the RESET option will restore the default.
ALTER SYSTEM RESET parameter scope=SPfile sid='*' ;

note
When resetting a parameter, you must specify sid=mySid or sid='*' even for non-RAC instances.


ALTER SYSTEM RESET memory_target scope=SPfile sid='*' ;


Do not do that with a memory parameter, otherwise Oracle will not start up; in that case restart the Oracle services.

memory_target will be set to zero,

so set memory_target again afterwards.


Friday, August 10, 2012

Check Constraint on SYSDATE


create table table1 (startdate date,CloseDate date);

ALTER TABLE Table1
ADD (CONSTRAINT GT_Table1_CloseDate
CHECK (CloseDate > SYSDATE),
CONSTRAINT LT_Table1_CloseDate
CHECK (CloseDate <= SYSDATE + 365),
CONSTRAINT GT_Table1_StartDate
CHECK (StartDate > (CloseDate + (SYSDATE + 730))));



Error report:
SQL Error: ORA-02436: date or system variable wrongly specified in CHECK constraint
02436. 00000 -  "date or system variable wrongly specified in CHECK constraint"
*Cause:    An attempt was made to use a date constant or system variable,
           such as USER, in a check constraint that was not completely
           specified in a CREATE TABLE or ALTER TABLE statement.  For
           example, a date was specified without the century.
*Action:   Completely specify the date constant or system variable.
           Setting the event 10149 allows constraints like "a1 > '10-MAY-96'",
           which a bug permitted to be created before version 8.




A check constraint, unfortunately, cannot reference a function like SYSDATE. You would need to create a trigger that checks these values when DML occurs, e.g.:

CREATE OR REPLACE TRIGGER trg_check_dates
  BEFORE INSERT OR UPDATE ON table1
  FOR EACH ROW
BEGIN
  IF( :new.CloseDate <= SYSDATE )
  THEN
    RAISE_APPLICATION_ERROR( -20001,
          'Invalid CloseDate: CloseDate must be greater than the current date - value = ' ||
          to_char( :new.CloseDate, 'YYYY-MM-DD HH24:MI:SS' ) );
  END IF;
  IF( :new.CloseDate > add_months(SYSDATE,12) )
  THEN
    RAISE_APPLICATION_ERROR( -20002,
         'Invalid CloseDate: CloseDate must be within the next year - value = ' ||
         to_char( :new.CloseDate, 'YYYY-MM-DD HH24:MI:SS' ) );
  END IF;
  IF( :new.StartDate <= add_months(:new.CloseDate,24) )
  THEN
    RAISE_APPLICATION_ERROR( -20003,
          'Invalid StartDate: StartDate must be more than 24 months after the CloseDate - StartDate = ' ||
          to_char( :new.StartDate, 'YYYY-MM-DD HH24:MI:SS' ) ||
          ' CloseDate = ' || to_char( :new.CloseDate , 'YYYY-MM-DD HH24:MI:SS' ) );
  END IF;
END;
/


Every time a row is inserted or updated, SYSDATE has a different value, so the constraint would evaluate differently each time. That is why Oracle does not allow SYSDATE in a check constraint.

You may be able to solve the problem with a trigger that checks whether CloseDate has actually changed and raises an exception when the new value is out of range, as in the sketch below.
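
A minimal sketch of that idea, assuming the table1 definition above (trigger name and error text are illustrative only):

-- Fire only when CloseDate is actually being changed
CREATE OR REPLACE TRIGGER trg_closedate_changed
  BEFORE UPDATE OF CloseDate ON table1
  FOR EACH ROW
  WHEN (new.CloseDate IS NOT NULL
        AND (old.CloseDate IS NULL OR new.CloseDate <> old.CloseDate))
BEGIN
  IF :new.CloseDate <= SYSDATE
     OR :new.CloseDate > ADD_MONTHS(SYSDATE, 12)
  THEN
    RAISE_APPLICATION_ERROR( -20001,
      'CloseDate must be after today and within the next 12 months' );
  END IF;
END;
/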

Excel Security Settings using Command / ASP.NET code


cmd>reg add "HKCU\Software\Microsoft\Office\12.0\Excel\Security" /v AccessVBOM /t reg_dword /d 1 /f


Microsoft.Win32.Registry.SetValue("HKEY_CURRENT_USER\Software\Microsoft\Office\12.0\Excel\Security", "AccessVBOM", "1",RegistryValueKind.dWord)

Windows Regional settings using command or in ASP.NET code


cmd> REG ADD "HKCU\Control Panel\International" /v sShortDate /d "dd/MM/yyyy" /f


The Microsoft.Win32 namespace contains the Registry class, which is used to manipulate the system registry; a RegistryKey represents a key-level node in the Windows registry.

Microsoft.Win32.Registry.SetValue("HKEY_CURRENT_USER\Control Panel\International", "sShortDate", "dd/MM/yyyy")

====================

Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
Microsoft.Win32.Registry.SetValue("HKEY_CURRENT_USER\Control Panel\International", "sShortDate", "dd/MM/yyyy")

End Sub




Allow your application from Windows Firewall During Installation

1)

void AddToFirewallException(string ApplicationName, string FilePath)
        {
            // Add exception to windows firewall after installation
            string RegPath =
                @"SYSTEM\ControlSet001\Services\SharedAccess\Parameters\" +
                @"FirewallPolicy\StandardProfile\AuthorizedApplications\List";
            string KeyValue = FilePath + ":*:Enabled:" + ApplicationName;

            RegistryKey Key = Registry.LocalMachine.OpenSubKey(RegPath, true);
            Key.SetValue(FilePath, KeyValue);
            Key.Close();
            Key = null;
        }

The above code also works fine when called from a WinForms application, but it raises an exception when called from a custom action in a VS setup project. The goal here is to add the application to the firewall's allowed-programs list during installation.

2) Second method: the Windows Firewall COM API (NetFwTypeLib)


During the installation of my application, I needed to add it to the Windows Firewall as an allowed application and open two ports for another application. This code runs as a custom action: it opens the firewall on install and closes it on uninstall. To keep things as simple as possible, the following C# class library exposes the two methods the setup calls - openFirewall() and closeFirewall().
First, I created the FWSetupAction project as a C# class library. I then used the project properties page to switch the output type to a console application so I could step through it with the debugger. Once it works, switch back to a class library so it can be integrated with the MSI setup logic as a custom action.
After the initial project creation, rename Class1.cs to Firewall.cs in Solution Explorer. If you are writing the code from scratch, add the NetFwTypeLib reference first so IntelliSense can recognize the types you will be using; the reference is required for compilation in any case, so it does not matter whether you add it before or after coding. To add the reference, right-click References and select Browse, then browse to %windir%\system32\hnetcfg.dll and select it - the NetFwTypeLib reference will be created.
Edit the Firewall.cs class to have the following code:


using System;
using System.Collections.Generic;
using System.Text;
using System.Reflection;
using NetFwTypeLib;
using Microsoft.Win32;
namespace FWSetupAction
{
public class Firewall
{
    protected int[] discoPorts = { 0xD100, 0xD101 };
    protected INetFwProfile fwProfile;

    public void openFirewall()
    {
        ///////////// Firewall Authorize Application ////////////
        String imageFilename = getImageFilename();
        setProfile();
        INetFwAuthorizedApplications apps = fwProfile.AuthorizedApplications;
        INetFwAuthorizedApplication app = 
          ( INetFwAuthorizedApplication ) getInstance( "INetAuthApp" );
        app.Name = "Application Name";
        app.ProcessImageFileName = imageFilename;
        apps.Add( app );
        apps = null;

        //////////////// Open Needed Ports /////////////////
        INetFwOpenPorts openports = fwProfile.GloballyOpenPorts;
        foreach( int port in discoPorts )
        {
            INetFwOpenPort openport = 
              ( INetFwOpenPort ) getInstance( "INetOpenPort" );
            openport.Port = port;
            openport.Protocol = NET_FW_IP_PROTOCOL_.NET_FW_IP_PROTOCOL_UDP;
            openport.Name = "New Open Port";
            openports.Add( openport );
        }
        openports = null;
    } // openFirewall

    public void closeFirewall()
    {
        String imageFilename = getImageFilename();
        setProfile();
        INetFwAuthorizedApplications apps = fwProfile.AuthorizedApplications;
        apps.Remove( imageFilename );
        apps = null;
        INetFwOpenPorts ports = fwProfile.GloballyOpenPorts;
        ports.Remove( discoPorts[ 0 ], NET_FW_IP_PROTOCOL_.NET_FW_IP_PROTOCOL_UDP );
        ports.Remove( discoPorts[ 1 ], NET_FW_IP_PROTOCOL_.NET_FW_IP_PROTOCOL_UDP );
        ports = null;
    }

    protected string getImageFilename()
    {
        // Get install directory from the registry
        RegistryKey pRegKey = Registry.LocalMachine;
        pRegKey = pRegKey.OpenSubKey( "SOFTWARE\\Company Directory\\AppDir" );
        Object insDir = pRegKey.GetValue( "InstallDir" );
        return insDir + "RVP.exe";
    }

    protected void setProfile()
    {
        // Access INetFwMgr
        INetFwMgr fwMgr = ( INetFwMgr ) getInstance( "INetFwMgr" );
        INetFwPolicy fwPolicy = fwMgr.LocalPolicy;
        fwProfile = fwPolicy.CurrentProfile;
        fwMgr = null;
        fwPolicy = null;
    }

    protected Object getInstance( String typeName )
    {
        if( typeName == "INetFwMgr" )
        {
            Type type = Type.GetTypeFromCLSID(
            new Guid( "{304CE942-6E39-40D8-943A-B913C40C9CD4}" ) );
            return Activator.CreateInstance( type );
        }
        else if( typeName == "INetAuthApp" )
        {
            Type type = Type.GetTypeFromCLSID(
            new Guid( "{EC9846B3-2762-4A6B-A214-6ACB603462D2}" ) );
            return Activator.CreateInstance( type );
        }
        else if( typeName == "INetOpenPort" )
        {
            Type type = Type.GetTypeFromCLSID(
            new Guid( "{0CA545C6-37AD-4A6C-BF92-9F7610067EF5}" ) );
            return Activator.CreateInstance( type );
        }
        else return null;
    }

    static void Main( string[] args )
    {
        Firewall fw = new Firewall();
        fw.openFirewall();
        fw.closeFirewall();
    }
}
}

Once compiled, you're ready to test. Set a breakpoint on each of the firewall entry methods - openFirewall() and closeFirewall() - and step through the program. A command prompt and the netsh firewall command can be used to verify that the code works:
  • netsh firewall show allowedprogram - shows the programs that are allowed
  • netsh firewall show portopening - shows the ports that are open






Thursday, August 9, 2012

add_months Function Quarter start date and end date


How can I get the start date and end date of a quarter? For example, if I pass 30/07/2012 the quarter should be 01/07/2012 to 30/09/2012; if I pass 21/12/2012 it should be 01/10/2012 to 31/12/2012.


SQL> select TRUNC(sysdate+1, 'Q'),TRUNC(ADD_MONTHS(sysdate, +3), 'Q')-1 from dual;

TRUNC(SYS TRUNC(ADD
--------- ---------
01-JUL-12 30-SEP-12



SQL> select TRUNC(to_date('30-jul-12'), 'Q'),TRUNC(ADD_MONTHS('30-jul-12',+3), 'Q')-1 from dual;

TRUNC(TO_ TRUNC(ADD
--------- ---------
01-JUL-12 30-SEP-12


SQL> select TRUNC(to_date('21-dec-12'),'Q'),TRUNC(ADD_MONTHS('21-dec-12',+3), 'Q')-1 from dual;

TRUNC(TO_ TRUNC(ADD
--------- ---------
01-OCT-12 31-DEC-12
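
The same TRUNC/ADD_MONTHS logic can be wrapped in a small helper; a sketch (function name and date format are illustrative only):

CREATE OR REPLACE FUNCTION quarter_bounds (p_date IN DATE)
  RETURN VARCHAR2
IS
  -- first and last day of the quarter containing p_date
  v_start DATE := TRUNC(p_date, 'Q');
  v_end   DATE := TRUNC(ADD_MONTHS(p_date, 3), 'Q') - 1;
BEGIN
  RETURN TO_CHAR(v_start, 'DD/MM/YYYY') || ' to ' || TO_CHAR(v_end, 'DD/MM/YYYY');
END;
/

-- usage: select quarter_bounds(to_date('30/07/2012','DD/MM/YYYY')) from dual;
-- returns 01/07/2012 to 30/09/2012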



references

http://www.dba-oracle.com/job_scheduling/dates_times.htm

http://www.oracle.com/technetwork/issue-archive/2012/12-jan/o12plsql-1408561.html
http://www.techonthenet.com/oracle/functions/add_months.php

http://www.sql-server-helper.com/functions/get-first-day-of-quarter.aspx

http://www.tek-tips.com/viewthread.cfm?qid=1635253
https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:3868465700346538981



Wednesday, August 8, 2012

Two User Logon Not Allowed on same Machine


create or replace trigger TG_SINGLELOGIN
after logon on database
declare
cnt pls_integer;
begin
select count(*) into cnt from v$session
where username !=sys_context('userenv','session_user')  and terminal =sys_context('userenv','terminal');
if cnt >= 1 then
raise_application_error (-20001, 'You are already connected on another user with this machine');
end if;
exception
when no_data_found then raise;
end;
/
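
To see what the trigger will count for the current machine before enabling it, a quick interactive check (illustrative only):

select username, status, terminal
from v$session
where terminal = sys_context('userenv','terminal')
  and username is not null;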

Concurrent User Login Denied from Different Machine


create or replace trigger TG_MULTILOGIN
after logon on database
declare
cnt pls_integer;
begin
select count(*) into cnt from v$session
where username=sys_context('userenv','session_user')  and terminal !=sys_context('userenv','terminal') and status like '%ACTIVE%';
if cnt >= 1 then
raise_application_error (-20001, 'You are already connected on another machine');
end if;
exception
when no_data_found then raise;
end;
/


RAISE_APPLICATION_ERROR (-20001, 'You are not allowed to logon');
won't work for a user who has the ADMINISTER DATABASE TRIGGER privilege.

The ADMINISTER DATABASE TRIGGER privilege allows you to create database-level triggers (server error, login, and logout triggers). It also allows you to log in regardless of errors thrown by a login trigger as a failsafe. If you inadvertently coded your login trigger to throw an error no matter who was logging in, for example, you need to allow someone to log in to fix the trigger.
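
To see which accounts hold this privilege (and will therefore bypass the triggers above), a simple check:

select grantee
from dba_sys_privs
where privilege = 'ADMINISTER DATABASE TRIGGER';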

If someone exits Oracle without logging out cleanly, the session record may still be present in V$SESSION and the error will still be raised.

We can use status like '%ACTIVE%' (which matches both ACTIVE and INACTIVE) so that KILLED and SNIPED sessions are ignored, as in the sketch below.
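
An equivalent, more explicit version of the trigger's session count (a sketch of the query only):

select count(*)
from v$session
where username  = sys_context('userenv','session_user')
  and terminal != sys_context('userenv','terminal')
  and status not in ('KILLED', 'SNIPED');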

