Friday, April 27, 2012

Building a failover database using Oracle Database 11g Standard Edition


We are all aware of Data Guard and the various disaster protection mechanisms that come with Oracle Database Enterprise Edition. In this article I will try to show you how to build a remote failover database, when all you have are Standard Edition databases at both ends.
If we want to keep an exact replica of a production database we have to take care of three things. First, we have to ship the changes (archive logs) to the failover database. Second, we have to keep track of what was shipped, so we know what needs to be recovered if something goes wrong. Third, we have to apply the changes at the failover database.
Before 11gR2 came out, detecting and shipping log files between hosts had to be done outside of the database. In GNU/Linux environments most people use rsync or a similar program to do the job. I, on the other hand, always prefer to do such tasks within the database, and relying on the OS is not always an option (what if one of the databases runs on Windows?). In this tutorial I will show you how to pick up archivelogs by using File Watchers (introduced in 11gR2) and transfer them via FTP to a remote host.
Demonstration scenario
I will be using two hosts, both running Oracle Linux 5.5. The one with the production database is called el5-prd and the one that will host the failover database is called el5-backup. The production host has Database 11gR2 installed; there is a default database configured and it includes the sample schemas. The el5-backup host has only the Oracle software installed. Both installations reside in /u01/app/oracle/product/11.2.0/db_orcl and the software owner is the user oracle.
We should perform the following steps to build the failover configuration:
1. Set the production database in archivelog mode
2. Perform full database backup and create copies of control and parameter files
3. Prepare the failover server for restore
4. Restore the backup on el5-backup
5. Install ftpd on el5-backup
6. Setup ACLs for FTP transfer and install FTP packages on el5-prd
7. Setup archivelog directory and test FTP transfer
8. Setup a file watcher on el5-prd
9. Test that archivelogs are shipped
10. Set up a mechanism to apply and delete the shipped logs
The list is quite long, so let's begin.
Set the production database in archivelog mode
First we have to create an OS directory, where the database should write the archivelogs.
Log in to the production host as the oracle software owner and create an empty directory. I will create a directory named archivelog in my FRA.
[oracle@el5-prd ~]$ mkdir /u01/app/oracle/fast_recovery_area/ORCL/archivelog
[oracle@el5-prd ~]$
Next, put the database in archivelog mode.
[oracle@el5-prd]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.3.0 Production on Wed Dec 14 07:24:02 2011
Copyright (c) 1982, 2011, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Release 11.2.0.3.0 - Production

SQL> alter system set log_archive_dest_1='LOCATION=/u01/app/oracle/fast_recovery_area/ORCL/archivelog/' scope=spfile;

System altered.

SQL> alter system set log_archive_format='%t_%s_%r.arc' scope=spfile;

System altered.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area  422670336 bytes
Fixed Size                  1345380 bytes
Variable Size             264243356 bytes
Database Buffers          150994944 bytes
Redo Buffers                6086656 bytes
Database mounted.
SQL> alter database archivelog;

Database altered.

SQL> alter database open;

Database altered.

SQL> alter database force logging;

Database altered.

SQL> exit
Disconnected from Oracle Database 11g Release 11.2.0.3.0 - Production
[oracle@el5-prd]$
Perform full database backup and create copies of control and parameter files
We perform a full backup of the production database by using RMAN.
[oracle@el5-prd ~]$ rman target=/

Recovery Manager: Release 11.2.0.3.0 - Production on Sat Dec 10 14:24:16 2011

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCL (DBID=1297199097)

RMAN> backup database plus archivelog;

Starting backup at 10-DEC-11
current log archived
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=40 device type=DISK
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=6 RECID=1 STAMP=769530270
channel ORA_DISK_1: starting piece 1 at 10-DEC-11
channel ORA_DISK_1: finished piece 1 at 10-DEC-11
piece handle=/u01/app/oracle/fast_recovery_area/ORCL/backupset/2011_12_10/o1_mf_annnn_TAG20111210T142431_7g6mw03l_.bkp tag=TAG20111210T142431 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 10-DEC-11

Starting backup at 10-DEC-11
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u01/app/oracle/oradata/orcl/system01.dbf
input datafile file number=00002 name=/u01/app/oracle/oradata/orcl/sysaux01.dbf
input datafile file number=00005 name=/u01/app/oracle/oradata/orcl/example01.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/orcl/undotbs01.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/orcl/users01.dbf
channel ORA_DISK_1: starting piece 1 at 10-DEC-11
channel ORA_DISK_1: finished piece 1 at 10-DEC-11
piece handle=/u01/app/oracle/fast_recovery_area/ORCL/backupset/2011_12_10/o1_mf_nnndf_TAG20111210T142433_7g6mw1lw_.bkp tag=TAG20111210T142433 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:01:26
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 10-DEC-11
channel ORA_DISK_1: finished piece 1 at 10-DEC-11
piece handle=/u01/app/oracle/fast_recovery_area/ORCL/backupset/2011_12_10/o1_mf_ncsnf_TAG20111210T142433_7g6mytqr_.bkp tag=TAG20111210T142433 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 10-DEC-11

Starting backup at 10-DEC-11
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=7 RECID=2 STAMP=769530364
channel ORA_DISK_1: starting piece 1 at 10-DEC-11
channel ORA_DISK_1: finished piece 1 at 10-DEC-11
piece handle=/u01/app/oracle/fast_recovery_area/ORCL/backupset/2011_12_10/o1_mf_annnn_TAG20111210T142604_7g6myw91_.bkp tag=TAG20111210T142604 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 10-DEC-11

RMAN> exit

Recovery Manager complete.
[oracle@el5-prd ~]$
Next we create copies of the control and parameter file, placing them in the oracle user home directory.
[oracle@el5-prd]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Sat Dec 10 14:28:45 2011

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Release 11.2.0.3.0 - Production

SQL> alter database create standby controlfile as '/home/oracle/orcl-backup.ctl'; 
Database altered.

SQL> create pfile='/home/oracle/initORCL-backup.ora' from spfile;

File created.

SQL>
Prepare the failover server for restore
After you have completed a "software only" installation of Database 11gR2 you have to create the following directories that are needed for successfully restoring the database backup:
[oracle@el5-backup ~]$ mkdir -p /u01/app/oracle/oradata/ORCL
[oracle@el5-backup ~]$ mkdir -p /u01/app/oracle/fast_recovery_area/ORCL
[oracle@el5-backup ~]$ mkdir -p /u01/app/oracle/admin/ORCL/adump
Next we take the control file copy from the production server.
[oracle@el5-backup ~]$ scp oracle@el5-prd:/home/oracle/orcl-backup.ctl /u01/app/oracle/oradata/ORCL/control01.ctl
oracle@el5-prd's password:
orcl-backup.ctl                               100% 9520KB   9.3MB/s   00:01
[oracle@el5-backup ~]$
Another copy of the control file goes to the FRA.
[oracle@el5-backup ~]$ cp /u01/app/oracle/oradata/ORCL/control01.ctl /u01/app/oracle/fast_recovery_area/ORCL/control02.ctl
[oracle@el5-backup ~]$
We also need the parameter and the password file.
[oracle@el5-backup ~]$ scp oracle@el5-prd:/home/oracle/initORCL-backup.ora /home/oracle/
oracle@el5-prd's password:
initORCL-backup.ora                           100%  945     0.9KB/s   00:00
[oracle@el5-backup ~]$ scp -r oracle@el5-prd:/u01/app/oracle/product/11.2.0/db_orcl/dbs/orapwORCL /u01/app/oracle/product/11.2.0/db_orcl/dbs/orapwORCL
oracle@el5-prd's password:
orapwORCL                                     100% 1536     1.5KB/s   00:00
[oracle@el5-backup ~]$
Let's copy the archivelogs and the backup as well.
[oracle@el5-backup ~]$ scp -r oracle@el5-prd:/u01/app/oracle/fast_recovery_area/ORCL/archivelog /u01/app/oracle/fast_recovery_area/ORCL/
oracle@el5-prd's password:
o1_mf_1_7_7g7hwtfw_.arc                       100%   23KB  22.5KB/s   00:00
o1_mf_1_6_7g7hs8tx_.arc                       100% 4085KB   4.0MB/s   00:00
[oracle@el5-backup ~]$
[oracle@el5-backup ~]$ scp -r oracle@el5-prd:/u01/app/oracle/fast_recovery_area/ORCL/backupset /u01/app/oracle/fast_recovery_area/ORCL/
oracle@el5-prd's password:
o1_mf_annnn_TAG20111210T222058_7g7hsbnq_.bkp  100% 4086KB   4.0MB/s   00:00
o1_mf_annnn_TAG20111210T222250_7g7hwv01_.bkp  100%   24KB  24.0KB/s   00:00
o1_mf_ncsnf_TAG20111210T222059_7g7hws7f_.bkp  100% 9600KB   9.4MB/s   00:00
o1_mf_nnndf_TAG20111210T222059_7g7hsdkp_.bkp  100% 1172MB  13.6MB/s   01:26
[oracle@el5-backup ~]$
The final set of files are the redo log files.
[oracle@el5-backup ~]$ scp oracle@el5-prd:/u01/app/oracle/oradata/ORCL/redo* /u01/app/oracle/oradata/ORCL
oracle@el5-prd's password:
redo01.log                                    100%   50MB  16.7MB/s   00:03
redo02.log                                    100%   50MB  25.0MB/s   00:02
redo03.log                                    100%   50MB  10.0MB/s   00:05
[oracle@el5-backup ~]$
The last thing we have to do is to create a listener.ora file.
[oracle@el5-backup ~]$ cat >> /u01/app/oracle/product/11.2.0/db_orcl/network/admin/listener.ora << EOF
> LISTENER =
>   (DESCRIPTION_LIST =
>     (DESCRIPTION =
>       (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
>       (ADDRESS = (PROTOCOL = TCP)(HOST = el5-backup)(PORT = 1521))
>     )
>   )
>
> ADR_BASE_LISTENER = /u01/app/oracle
> EOF
[oracle@el5-backup ~]$
As you can see, the listener for our failover database will use the default port 1521. Time to restore from the backup.
Restore the backup on el5-backup
Before running the restore we have to start up and bring the failover database to a mount state. The first step is to start the listener on el5-backup.
[oracle@el5-backup ~]$ lsnrctl start

LSNRCTL for Linux: Version 11.2.0.3.0 - Production on 10-DEC-2011 22:45:13

Copyright (c) 1991, 2011, Oracle.  All rights reserved.

Starting /u01/app/oracle/product/11.2.0/db_orcl/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 11.2.0.3.0 - Production
System parameter file is /u01/app/oracle/product/11.2.0/db_orcl/network/admin/listener.ora
Log messages written to /u01/app/oracle/diag/tnslsnr/el5-backup/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=el5-backup)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 11.2.0.3.0 - Production
Start Date                10-DEC-2011 22:45:15
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/11.2.0/db_orcl/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/el5-backup/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=el5-backup)(PORT=1521)))
The listener supports no services
The command completed successfully
[oracle@el5-backup ~]$
Next we have to set the SID and create an SPFILE from the parameter file that we have in our home directory.
[oracle@el5-backup ~]$ export ORACLE_SID=ORCL
[oracle@el5-backup ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Sat Dec 10 22:45:52 2011

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> create spfile from pfile='/home/oracle/initORCL-backup.ora';

File created.

SQL>
We can now restore the database by using RMAN.
[oracle@el5-backup ~]$ rman target=/

Recovery Manager: Release 11.2.0.3.0 - Production on Sat Dec 10 22:47:11 2011

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database (not started)

RMAN> startup mount;

Oracle instance started
database mounted

Total System Global Area     422670336 bytes

Fixed Size                     1345380 bytes
Variable Size                268437660 bytes
Database Buffers             146800640 bytes
Redo Buffers                   6086656 bytes

RMAN> restore database;

Starting restore at 10-DEC-11
Starting implicit crosscheck backup at 10-DEC-11
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=18 device type=DISK
Crosschecked 4 objects
Finished implicit crosscheck backup at 10-DEC-11

Starting implicit crosscheck copy at 10-DEC-11
using channel ORA_DISK_1
Finished implicit crosscheck copy at 10-DEC-11

searching for all files in the recovery area
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /u01/app/oracle/fast_recovery_area/ORCL/archivelog/2011_12_10/o1_mf_1_7_7g7hwtfw_.arc
File Name: /u01/app/oracle/fast_recovery_area/ORCL/archivelog/2011_12_10/o1_mf_1_6_7g7hs8tx_.arc

using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /u01/app/oracle/oradata/ORCL/system01.dbf
channel ORA_DISK_1: restoring datafile 00002 to /u01/app/oracle/oradata/ORCL/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00003 to /u01/app/oracle/oradata/ORCL/undotbs01.dbf
channel ORA_DISK_1: restoring datafile 00004 to /u01/app/oracle/oradata/ORCL/users01.dbf
channel ORA_DISK_1: restoring datafile 00005 to /u01/app/oracle/oradata/ORCL/example01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/ORCL/backupset/2011_12_10/o1_mf_nnndf_TAG20111210T222059_7g7hsdkp_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/ORCL/backupset/2011_12_10/o1_mf_nnndf_TAG20111210T222059_7g7hsdkp_.bkp tag=TAG20111210T222059
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:01:36
Finished restore at 10-DEC-11

RMAN> exit

Recovery Manager complete.
[oracle@el5-backup ~]$
We successfully created an identical copy of the production database.
Install ftpd on el5-backup
Installing the FTP daemon on Oracle Linux is pretty straightforward.
[root@el5-backup ~]# yum install vsftpd
Loaded plugins: security
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package vsftpd.i386 0:2.0.5-16.el5_4.1 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package        Arch         Version                  Repository           Size
================================================================================
Installing:
 vsftpd         i386         2.0.5-16.el5_4.1         el5_u5_base         140 k

Transaction Summary
================================================================================
Install       1 Package(s)
Upgrade       0 Package(s)

Total download size: 140 k
Is this ok [y/N]: Y
Downloading Packages:
vsftpd-2.0.5-16.el5_4.1.i386.rpm                         | 140 kB     00:01
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing     : vsftpd                                                   1/1

Installed:
  vsftpd.i386 0:2.0.5-16.el5_4.1

Complete!
[root@el5-backup ~]#
You should not forget to reconfigure the firewall on the failover server to allow FTP communication. First add ip_conntrack_ftp to the IPTABLES_MODULES line in /etc/sysconfig/iptables-config. The line in iptables-config should look like this:
IPTABLES_MODULES="ip_conntrack_netbios_ns ip_conntrack_ftp"
Next, edit /etc/sysconfig/iptables and add a rule for the FTP traffic (be sure to put the line before the REJECT rule). The line you have to add in iptables looks like this:
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 21 -j ACCEPT
Bounce iptables and set the FTP service to autostart with the server.
[root@el5-backup ~]# service iptables restart
Flushing firewall rules:                                   [  OK  ]
Setting chains to policy ACCEPT: filter                    [  OK  ]
Unloading iptables modules:                                [  OK  ]
Applying iptables firewall rules:                          [  OK  ]
Loading additional iptables modules: ip_conntrack_netbios_n[  OK  ]
[root@el5-backup ~]# chkconfig vsftpd on
[root@el5-backup ~]# service vsftpd start
Starting vsftpd for vsftpd:                                [  OK  ]
[root@el5-backup ~]#
You might want to test the access from el5-prd to the failover server.
[oracle@el5-prd ~]$ ftp el5-backup
Connected to el5-backup.
220 (vsFTPd 2.0.5)
530 Please login with USER and PASS.
530 Please login with USER and PASS.
KERBEROS_V4 rejected as an authentication type
Name (el5-backup:oracle): oracle
331 Please specify the password.
Password:
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> bye
221 Goodbye.
[oracle@el5-prd ~]$
Setup ACLs for FTP transfer and install FTP packages on el5-prd
Our next task is to prepare the production server for communicating with el5-backup over FTP. We start by creating a dedicated database user that will be used for shipping and tracking the archivelog files. I will name it logship.
[oracle@el5-prd ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Wed Dec 14 07:24:02 2011

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Release 11.2.0.3.0 - Production

SQL> create user logship identified by logship;

User created.

SQL> grant connect, resource to logship;

Grant succeeded.

SQL>
Next we should configure an Access Control List (ACL) that will allow FTP connections to el5-backup for user logship. We have to use the CREATE_ACL, ADD_PRIVILEGE and ASSIGN_ACL procedures from the DBMS_NETWORK_ACL_ADMIN package. We will call the procedures with the following parameters:
DBMS_NETWORK_ACL_ADMIN.CREATE_ACL (
    acl          => 'ftp.xml',
    description  => 'Allow FTP connections',
    principal    => 'LOGSHIP',
    is_grant     => TRUE,
    privilege    => 'connect',
    start_date   => SYSTIMESTAMP,
    end_date     => NULL);

 DBMS_NETWORK_ACL_ADMIN.ADD_PRIVILEGE (
    acl         => 'ftp.xml',
    principal   => 'LOGSHIP',
    is_grant    => FALSE,
    privilege   => 'connect',
    position    => NULL,
    start_date  => NULL,
    end_date    => NULL);

  DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL (
    acl         => 'ftp.xml',
    host        => 'el5-backup',
    lower_port  => NULL,
    upper_port  => NULL);
Here is an output of their execution:
SQL> exec dbms_network_acl_admin.create_acl (acl => 'ftp.xml', description => 'Allow FTP connections', principal => 'LOGSHIP', is_grant => TRUE, privilege => 'connect', start_date => SYSTIMESTAMP,end_date => NULL);

PL/SQL procedure successfully completed.

SQL> exec dbms_network_acl_admin.add_privilege (acl => 'ftp.xml', principal => 'LOGSHIP', is_grant => FALSE, privilege => 'connect', position => NULL, start_date => NULL, end_date => NULL);

PL/SQL procedure successfully completed.

SQL> exec dbms_network_acl_admin.assign_acl (acl => 'ftp.xml', host => 'el5-backup', lower_port => NULL,upper_port => NULL);

PL/SQL procedure successfully completed.

SQL>
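If you want to double-check what was just configured, the data dictionary exposes the ACLs and their privileges. This is only a sanity check using the standard 11g views, nothing specific to this setup; you should see the el5-backup assignment and a connect grant for LOGSHIP:

SELECT host, lower_port, upper_port, acl
  FROM dba_network_acls;

SELECT acl, principal, privilege, is_grant
  FROM dba_network_acl_privileges
 WHERE principal = 'LOGSHIP';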
For connecting to el5-backup from the production database we will be using the FTP API developed by Tim Hall. You need to download the FTP package specification and body creation scripts and run them as user logship.
SQL> conn logship/logship;
Connected.
SQL> @ftp.pks;

Package created.

No errors.
SQL> @ftp.pkb;

Package body created.

No errors.
SQL>
Setup archivelog directory and test FTP transfer
We move on by creating a directory object within the production database that points to the location of the archivelog files.
[oracle@el5-prd ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Sun Dec 11 08:44:57 2011

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Release 11.2.0.3.0 - Production

SQL> create directory arc_dir as '/u01/app/oracle/fast_recovery_area/ORCL/archivelog';

Directory created.

SQL> grant read on directory arc_dir to logship;

Grant succeeded.

SQL>
It is a good idea to test the FTP communication from within the database. You can create a dummy test file in the archivelog dir:
[oracle@el5-prd ~]$ cat >> /u01/app/oracle/fast_recovery_area/ORCL/archivelog/testfile.txt << EOF
> FTP test file
> EOF
[oracle@el5-prd ~]$
You can then connect as user logship and run the following PL/SQL block:
declare l_conn utl_tcp.connection;
begin
  l_conn := ftp.login('el5-backup','21','oracle','welcome1');
  ftp.put(p_conn => l_conn, p_from_dir  => 'ARC_DIR', p_from_file => 'testfile.txt', p_to_file => '/u01/app/oracle/fast_recovery_area/ORCL/archivelog/testfile.txt');
  ftp.logout(l_conn);
end;
/
If everything goes fine the testfile.txt will appear at el5-backup.
[oracle@el5-backup ~]$ cd /u01/app/oracle/fast_recovery_area/ORCL/archivelog/
[oracle@el5-backup archivelog]$ cat testfile.txt
FTP test file
[oracle@el5-backup archivelog]$
Setup a file watcher on el5-prd
We will be using a database File Watcher for detecting new archivelog files and triggering FTP transfer.
First we log in as user logship and create a table for storing the detected archivelog files and the date of the attempted transfer. This table is needed only for our own convenience - it can be used to check for missing log files if anything goes wrong.
SQL> conn logship/logship;
Connected.
SQL> create sequence transfered_logs_seq start with 1 increment by 1 cache 20 nocycle;

Sequence created.

SQL> create table transfered_logs (id number, transfer_date date, file_name varchar2(4000), error char(1));

Table created.

SQL>
Next we set the file detection interval to 1 minute. Of course, you can tune this to match your archivelog generation interval more closely.
SQL> conn / as sysdba
Connected.
SQL> exec dbms_scheduler.set_attribute('file_watcher_schedule', 'repeat_interval', 'freq=minutely; interval=1');

PL/SQL procedure successfully completed.

SQL>
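If you want to confirm the new polling interval, the scheduler exposes it through a standard dictionary view. This is just a verification query:

SELECT schedule_name, repeat_interval
  FROM dba_scheduler_schedules
 WHERE schedule_name = 'FILE_WATCHER_SCHEDULE';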
In order to have access to the archivelog directory, the file watcher needs an OS user account. We will create a credential that the watcher can use and provide it with the oracle user's name and password. In my demo installation the oracle password is welcome1.
SQL> exec dbms_scheduler.create_credential(credential_name => 'local_credential', username => 'oracle', password => 'welcome1');

PL/SQL procedure successfully completed.

SQL>
The final preparation is to create a PL/SQL procedure that the file watcher will call upon detecting a new archivelog file. The procedure I am using looks like this:
create or replace procedure transfer_arc_log(p_sched_result SYS.SCHEDULER_FILEWATCHER_RESULT) as
  v_transfer_id  number;
  v_file_name  varchar2(4000);
  v_ftp_conn  utl_tcp.connection;
begin
  v_transfer_id := transfered_logs_seq.nextval;
  v_file_name := p_sched_result.actual_file_name;
  v_ftp_conn := ftp.login('el5-backup','21','oracle','welcome1');
  ftp.put(p_conn => v_ftp_conn, p_from_dir  => 'ARC_DIR', p_from_file => v_file_name, p_to_file => '/u01/app/oracle/fast_recovery_area/ORCL/archivelog/'||v_file_name);
  ftp.logout(v_ftp_conn);
  insert into transfered_logs values (v_transfer_id, sysdate, v_file_name, null);
  commit;
exception when others then
  insert into transfered_logs values (v_transfer_id, sysdate, v_file_name, 'Y');
  commit;
end;
/
This procedure will try to FTP the file whose name the watcher passes to it. If the operation is successful, the procedure inserts a record into the TRANSFERED_LOGS table with the file name and the date and time of the transfer. If an error occurs, the procedure sets the ERROR column of the record to "Y".
Let's create this procedure in the logship schema.
SQL> conn logship/logship
Connected.
SQL> create or replace procedure transfer_arc_log(p_sched_result SYS.SCHEDULER_FILEWATCHER_RESULT) as
  v_transfer_id  number;
  v_file_name  varchar2(4000);
  v_ftp_conn  utl_tcp.connection;
begin
  v_transfer_id := transfered_logs_seq.nextval;
  v_file_name := p_sched_result.actual_file_name;
  v_ftp_conn := ftp.login('el5-backup','21','oracle','welcome1');
  ftp.put(p_conn => v_ftp_conn, p_from_dir  => 'ARC_DIR', p_from_file => v_file_name, p_to_file => '/u01/app/oracle/fast_recovery_area/ORCL/archivelog/'||v_file_name);
  ftp.logout(v_ftp_conn);
  insert into transfered_logs values (v_transfer_id, sysdate, v_file_name, null);
  commit;
exception when others then
  insert into transfered_logs values (v_transfer_id, sysdate, v_file_name, 'Y');
  commit;
end;
/

Procedure created.

SQL> show errors
No errors.
SQL>
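Since failed transfers only leave behind a row with ERROR = 'Y', it is handy to keep a small monitoring query around. This is just a convenience query against the tracking table we created above:

-- archivelogs whose FTP transfer failed, newest first
SELECT id, transfer_date, file_name
  FROM transfered_logs
 WHERE error = 'Y'
 ORDER BY transfer_date DESC;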
Time to create the file watcher. This is done by calling the CREATE_FILE_WATCHER procedure from the DBMS_SCHEDULER package. I call the procedure with the following parameters.
BEGIN
  DBMS_SCHEDULER.create_file_watcher(
    file_watcher_name => 'arc_watcher',
    directory_path    => '/u01/app/oracle/fast_recovery_area/ORCL/archivelog',
    file_name         => '*.arc',
    credential_name   => 'local_credential',
    destination       => NULL,
    enabled           => FALSE);
END;
/
Here is the execution:
SQL> conn / as sysdba
Connected.
SQL> exec dbms_scheduler.create_file_watcher(file_watcher_name => 'arc_watcher', directory_path => '/u01/app/oracle/fast_recovery_area/ORCL/archivelog', file_name => '*.arc', credential_name => 'local_credential', destination => NULL, enabled => FALSE);

PL/SQL procedure successfully completed.

SQL>
Next we create a program that will bind the file watcher and the TRANSFER_ARC_LOG PL/SQL procedure.
SQL> exec dbms_scheduler.create_program(program_name => 'arc_watcher_prog', program_type => 'stored_procedure', program_action => 'logship.transfer_arc_log', number_of_arguments => 1, enabled => FALSE);

PL/SQL procedure successfully completed.

SQL> exec dbms_scheduler.define_metadata_argument(program_name => 'arc_watcher_prog', metadata_attribute => 'event_message', argument_position => 1);

PL/SQL procedure successfully completed.

SQL>
The final touch is creating a job for the ARC_WATCHER_PROG.
SQL> exec dbms_scheduler.create_job(job_name => 'arc_watcher_job', program_name => 'arc_watcher_prog', event_condition => NULL, queue_spec => 'arc_watcher', auto_drop => FALSE, enabled => FALSE);

PL/SQL procedure successfully completed.

SQL>
An important step is to set a value for the PARALLEL_INSTANCES attribute of our job. We will set it to TRUE to let the scheduler run multiple instances of the job. If you omit this step the system will process archivelogs one at a time, and while it is busy with one file it will simply ignore any new archivelogs that appear in the meantime. You definitely do not want that to happen.
SQL> exec dbms_scheduler.set_attribute('arc_watcher_job','parallel_instances',TRUE);

PL/SQL procedure successfully completed.

SQL>
Now that everything is in place, we can enable the watcher, its program and the job. This is done by executing the DBMS_SCHEDULER.ENABLE procedure.
SQL> exec dbms_scheduler.enable('arc_watcher');

PL/SQL procedure successfully completed.

SQL> exec dbms_scheduler.enable('arc_watcher_prog');

PL/SQL procedure successfully completed.

SQL> exec dbms_scheduler.enable('arc_watcher_job');

PL/SQL procedure successfully completed.

SQL>
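A quick way to confirm that all three objects are active is to query the scheduler dictionary views (standard 11gR2 views; each should report ENABLED = 'TRUE'):

SELECT file_watcher_name, enabled FROM dba_scheduler_file_watchers;
SELECT program_name, enabled FROM dba_scheduler_programs WHERE program_name = 'ARC_WATCHER_PROG';
SELECT job_name, enabled FROM dba_scheduler_jobs WHERE job_name = 'ARC_WATCHER_JOB';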
Test that archivelogs are shipped
To test whether archivelog transfers are happening, we take a look at the archivelog directory on the failover server.
[oracle@el5-backup ~]$ ls -la /u01/app/oracle/fast_recovery_area/ORCL/archivelog/
total 5664
drwxr-xr-x 2 oracle oinstall    4096 Dec 27 07:56 .
drwxr-xr-x 4 oracle oinstall    4096 Dec 27 07:22 ..
-rw-r----- 1 oracle oinstall 1043968 Dec 27 07:52 1_10_769951554.arc
-rw-r----- 1 oracle oinstall 1701888 Dec 27 07:52 1_7_769951554.arc
-rw-r----- 1 oracle oinstall  544256 Dec 27 07:52 1_8_769951554.arc
-rw-r----- 1 oracle oinstall 2481152 Dec 27 07:52 1_9_769951554.arc
[oracle@el5-backup ~]$
We then execute ALTER SYSTEM SWITCH LOGFILE on el5-prd.
SQL> alter system switch logfile;

System altered.

SQL>
We connect with the logship user and check the contents of TRANSFERED_LOGS.
SQL> conn logship/logship;
Connected.
SQL> select count(*) from transfered_logs;

  COUNT(*)
----------
         0

SQL>
OK, the archivelog directory is checked at a 60-second interval, so you might have to wait a little longer. After one minute at most the new file should be detected and transferred.
SQL> select count(*) from transfered_logs;

  COUNT(*)
----------
         1

SQL>
The new log was detected and a transfer attempt was made. Check the archivelog directory on el5-backup again to see if the file is there.
[oracle@el5-backup ~]$ ls -la /u01/app/oracle/fast_recovery_area/ORCL/archivelog/
total 6920
drwxr-xr-x 2 oracle oinstall    4096 Dec 27 08:00 .
drwxr-xr-x 4 oracle oinstall    4096 Dec 27 07:22 ..
-rw-r----- 1 oracle oinstall 1043968 Dec 27 07:52 1_10_769951554.arc
-rw-r--r-- 1 oracle oinstall 1282048 Dec 27 08:00 1_11_769951554.arc
-rw-r----- 1 oracle oinstall 1701888 Dec 27 07:52 1_7_769951554.arc
-rw-r----- 1 oracle oinstall  544256 Dec 27 07:52 1_8_769951554.arc
-rw-r----- 1 oracle oinstall 2481152 Dec 27 07:52 1_9_769951554.arc
[oracle@el5-backup ~]$
The logfile appears as expected. This concludes the detect and transfer part of our configuration.
A mechanism to apply and delete the shipped logs
Having the log files transferred to a failover server is not enough. If you really want to have an identical copy that is ready to take over the primary role you should take care to apply the database changes described in the logs. The easiest way is to simply start RMAN and apply the log files manually.
[oracle@el5-backup ~]$ rman target=/
Recovery Manager: Release 11.2.0.3.0 - Production on Sat Dec 27 11:24:16 2011

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCL (DBID=1297199097)

RMAN> recover database noredo;

Starting recover at 17-DEC-11
using channel ORA_DISK_1

Starting recover at 17-DEC-11
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=18 device type=DISK

starting media recovery

archived log for thread 1 with sequence 8 is already on disk as file /u01/app/oracle/fast_recovery_area/ORCL/archivelog/1_8_769951554.arc
archived log file name=/u01/app/oracle/fast_recovery_area/ORCL/archivelog/1_8_769951554.arc thread=1 sequence=8
archived log file name=/u01/app/oracle/fast_recovery_area/ORCL/archivelog/1_9_769951554.arc thread=1 sequence=9
archived log file name=/u01/app/oracle/fast_recovery_area/ORCL/archivelog/1_10_769951554.arc thread=1 sequence=10
archived log file name=/u01/app/oracle/fast_recovery_area/ORCL/archivelog/1_11_769951554.arc thread=1 sequence=11
unable to find archived log
archived log thread=1 sequence=12

Finished recover at 17-DEC-11
You can then open the failover database and use it in place of the production database by executing
alter database open resetlogs
The thing is that you will probably want to automate this process. The automation cannot happen inside the failover database, as it is not really operational (it is not in an open state). You will most likely go with some kind of OS-level automation, but that will be platform dependent.
For GNU/Linux environments, what you can do is to create a simple shell script that looks like this:
rman target / nocatalog << EOF
run {
  recover database noredo;
  delete noprompt force archivelog until time 'SYSDATE-7';
}
exit
EOF
This script will call RMAN, apply the received archivelogs and delete all log files that are older than 7 days (I keep the rest just in case). You can then set up a cron job to run the script at an appropriate interval, so you will not have to manage the archivelogs manually.
Final remarks
In this tutorial I showed you how to build a platform-independent archivelog shipping mechanism that does all the work from within the database. This approach has its limitations and is in no way a substitute for Data Guard and the other recovery features of Enterprise Edition. It is just a simple workaround for when you are forced to use Database SE and are looking for a simple way to be better protected from failures.
There are several areas for improvement in this mechanism, especially when it comes to security. Keep in mind that FTP is not really secure, so if you are dealing with sensitive data you might want to consider SFTP or something else that provides encryption. Another issue is keeping plaintext passwords in the TRANSFER_ARC_LOG procedure (you might want to wrap this one) and in the database dictionary.
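As a sketch of the wrapping idea: the procedure can be created through DBMS_DDL.CREATE_WRAPPED so that the source stored in the dictionary is obfuscated. The tiny DEMO_WRAPPED procedure below is just a stand-in so the block runs as-is; in practice you would put the full TRANSFER_ARC_LOG source into v_src. Keep in mind that wrapping only hides the source from casual viewing, it is not encryption.

DECLARE
  -- stand-in source; replace with the full TRANSFER_ARC_LOG text
  v_src VARCHAR2(32767) :=
    'create or replace procedure demo_wrapped as begin null; end;';
BEGIN
  SYS.DBMS_DDL.CREATE_WRAPPED(v_src);
END;
/

-- the stored source is now obfuscated
SELECT text FROM user_source WHERE name = 'DEMO_WRAPPED';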

Analyze Process and Lock


Analyzing a table to estimate or compute statistics acquires an exclusive lock on the library cache object, preventing any DDL changes; DML on the table, however, should be able to proceed. ANALYZE TABLE ... VALIDATE STRUCTURE, on the other hand, acquires an exclusive lock on the table, preventing any inserts, updates or deletes. In general, ANALYZE ... VALIDATE STRUCTURE requires an exclusive lock on the object being analyzed, while other permutations of ANALYZE allow concurrent DML access. An exclusive lock does not prevent other sessions from reading the data - users can still select from the table.

Issuing an ANALYZE on an index puts a shared lock on the table. This means that you cannot do DML on the locked table; the DML operation will wait for the analyze to release the lock. The lock can be viewed in V$LOCK: the lock type will be TM and the object id of the table is in V$LOCK.ID1. If there are already transactions open against the table, trying to run the analyze will fail with ORA-00054: resource busy.

select * from v$lock where type='TM';
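To turn the ID1 values into table names, the query can be extended with a join to DBA_OBJECTS (just a convenience query):

-- map TM locks to object names (ID1 of a TM lock is the object id)
select l.sid, o.owner, o.object_name, l.lmode, l.request
  from v$lock l
  join dba_objects o on o.object_id = l.id1
 where l.type = 'TM';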

CURSOR_SHARING



The default value of parameter Cursor_sharing is Exact. Who changed it?

CURSOR_SHARING is the parameter that decides whether a SQL statement sent by a user is a candidate for a fresh parse or will reuse an existing plan in the shared pool.

CURSOR_SHARING (default value: EXACT): share the plan only if the text of the SQL statement matches exactly the text of a statement already in the shared pool.

Note: in the case that prompted this note, with CURSOR_SHARING=EXACT the query used the index and ran roughly ten times faster than with CURSOR_SHARING=SIMILAR.
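To answer the "who changed it" question, start by checking the current value and whether it still has the default; V$PARAMETER and V$SPPARAMETER are standard views:

-- current in-memory setting and whether it deviates from the default
SELECT name, value, isdefault, ismodified
  FROM v$parameter
 WHERE name = 'cursor_sharing';

-- value persisted in the spfile, if any
SELECT name, value
  FROM v$spparameter
 WHERE name = 'cursor_sharing';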

trigger to fire under certain conditions

Issue

I do not want a trigger to fire under certain conditions, for example while a particular stored procedure is running.
I cannot disable the trigger. For example, there is a table on which, if records are deleted, the deletions are written to an audit table. However, if records are deleted from the same table while a bill is being processed, which is normal behaviour, I do not want the trigger to fire. Please note that I cannot disable the trigger, because at that time someone else could be deleting records manually.

Solution
You may add a flag column to the table.

In the billing process (or any other process), flag the data it creates or modifies,

and use an IF statement or the WHEN clause of the trigger so that it only fires for certain pre-known values of that column.

CREATE TRIGGER TRG_XYZ
after insert on TBL_XYZ
for each row
WHEN (NEW.BILL_FLG IS NULL OR NEW.BILL_FLG != 'Y')
begin
  -- audit logic goes here, e.g. insert into the audit table
  null;
end;
/
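Since the original question is about deletes, here is a closer sketch along the same lines. The table, audit table and column names (TBL_XYZ_AUDIT, ID) are made up for illustration; rows deleted by the billing process are assumed to carry BILL_FLG = 'Y':

CREATE OR REPLACE TRIGGER TRG_XYZ_DEL_AUDIT
after delete on TBL_XYZ
for each row
WHEN (OLD.BILL_FLG IS NULL OR OLD.BILL_FLG != 'Y')
begin
  -- audit only manual deletes; billing deletes are flagged and skipped
  insert into TBL_XYZ_AUDIT (deleted_on, deleted_by, record_id)
  values (sysdate, user, :OLD.ID);
end;
/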


-------------
fine grained auditing oracle

SELECT policy_name, object_name, statement_type, os_user, db_user FROM dba_fga_audit_trail;

The following policy audits any queries of salaries greater than £50,000.

CONN sys/password AS sysdba

BEGIN
  DBMS_FGA.add_policy(
    object_schema   => 'AUDIT_TEST',
    object_name     => 'EMP',
    policy_name     => 'SALARY_CHK_AUDIT',
    audit_condition => 'SAL > 50000',
    audit_column    => 'SAL');
END;
/
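To see the policy in action, run a query as the audited user that touches the SAL column (assuming the table actually holds salaries above 50,000 - the password and the 60,000 figure below are just placeholders) and then check the audit trail as SYS:

CONN audit_test/audit_test
SELECT * FROM emp WHERE sal > 60000;

CONN sys/password AS SYSDBA
SELECT policy_name, db_user, sql_text
  FROM dba_fga_audit_trail
 WHERE policy_name = 'SALARY_CHK_AUDIT';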



Tuesday, April 24, 2012

Access Network Services (UTL_INADDR, UTL_TCP, UTL_HTTP, UTL_SMTP, UTL_MAIL) in Oracle 11g (ORA-24247: network access denied)

From 11g onwards, the built-in packages which access network resources, e.g. UTL_HTTP, UTL_SMTP, UTL_MAIL etc., require an access control list (ACL) to be in place. If you see this warning, it means there are objects in your database which use one of these packages. Once the upgrade is complete you need to configure an access control list for the users of those packages, otherwise your applications will fail.
/*
  To see if there are any objects depending upon network packages like UTL_TCP ,
  UTL_SMTP etc.
*/
SELECT owner , name , type , referenced_name FROM DBA_DEPENDENCIES
WHERE referenced_name IN ('UTL_TCP','UTL_SMTP','UTL_MAIL','UTL_HTTP','UTL_INADDR')
  AND owner NOT IN ('SYS','PUBLIC','ORDPLUGINS');



Run the SQL below as SYS. It applies to 11g only.


Exec dbms_network_acl_admin.create_acl ('utl_http_access.xml','Normal Access','DPCDSL',TRUE,'connect',NULL,NULL);
Exec dbms_network_acl_admin.add_privilege (acl => 'utl_http_access.xml', principal =>  'DPCDSL',is_grant => TRUE, privilege => 'resolve');
Exec dbms_network_acl_admin.assign_acl ('utl_http_access.xml', '*',NULL,NULL);
Commit ;
               
Exec dbms_network_acl_admin.create_acl ('utl_inaddr_access.xml','Normal Access','DPCDSL',TRUE,'resolve',NULL, NULL);
Exec dbms_network_acl_admin.add_privilege (acl => 'utl_inaddr_access.xml', principal =>  'DPCDSL',is_grant => TRUE, privilege => 'resolve');
Exec dbms_network_acl_admin.assign_acl ('utl_inaddr_access.xml', '*',NULL,NULL);
commit;

Exec dbms_network_acl_admin.create_acl ('utl_mail.xml','Allow mail to be send','DPCDSL',TRUE,'connect' );
Exec dbms_network_acl_admin.add_privilege ('utl_mail.xml','DPCDSL',TRUE,'resolve');
Exec dbms_network_acl_admin.assign_acl('utl_mail.xml','*',NULL,NULL);
commit ;


Exec dbms_network_acl_admin.create_acl ('utl_http.xml','HTTP Access','DPCDSL',TRUE,'connect',null,null);
Exec dbms_network_acl_admin.add_privilege ('utl_http.xml','DPCDSL',TRUE,'resolve',null,null);
Exec dbms_network_acl_admin.assign_acl ('utl_http.xml','*',NULL,NULL);
commit;

---------------------------
Exec dbms_network_acl_admin.create_acl ('utl_smtp.xml','SMTP Access','DPCDSL',TRUE,'connect',null,null);
Exec dbms_network_acl_admin.add_privilege ('utl_smtp.xml','DPCDSL',TRUE,'resolve',null,null);
Exec dbms_network_acl_admin.assign_acl ('utl_smtp.xml','*',NULL,NULL);
commit;

In addition to the above, run the Oracle mail scripts to install the UTL_MAIL package, which enables sending e-mails from inside the database.

SQL>@$ORACLE_HOME/rdbms/admin/utlmail.sql

SQL>@$ORACLE_HOME/rdbms/admin/prvtmail.plb

SQL>ALTER SYSTEM SET smtp_out_server='smtp.oracle.com' scope=BOTH;
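Once smtp_out_server points at a reachable SMTP server and the user has been granted execute on UTL_MAIL (a grant not shown above, so treat it as an assumption), a minimal test could look like this; the addresses are placeholders:

BEGIN
  UTL_MAIL.send(
    sender     => 'noreply@example.com',
    recipients => 'dba@example.com',
    subject    => 'UTL_MAIL test',
    message    => 'Mail sent from inside the database.');
END;
/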



Grant Execute on utl_inaddr to DPCDSL ;
Grant Execute on utl_http to DPCDSL ;


SELECT global_name,utl_inaddr.get_host_address FROM global_name;

SELECT UTL_INADDR.get_host_address ('www.oracle.com') FROM DUAL;

SELECT UTL_HTTP.request ('http://www.oracle.com') FROM DUAL;


---------------------------------Removing ACLs and privileges
Unassign ACL
begin
  dbms_network_acl_admin.unassign_acl(
    acl        => 'utl_http.xml',
    host       => '*',
    lower_port => NULL,
    upper_port => NULL
  );
end;
/

Delete Privilege

begin
  dbms_network_acl_admin.delete_privilege(
    'utl_http.xml', 'DPCDSL', NULL, 'connect'
  );
end;
/


Drop ACL

begin
  dbms_network_acl_admin.drop_acl(
    'utl_http.xml'
  );
end;
/


------------------------------testing----------------------
create or replace procedure getTitle(pUrl VARCHAR2)
is
  vResult CLOB;
begin
  vResult := replace(UTL_HTTP.REQUEST(pUrl),chr(10),' ');
  vResult := regexp_replace(vResult,'.*<title> ?(.+) ?</title>.*','\1',1,1,'i');
  dbms_output.put_line(vResult);
end;
/

/*
  This is just a dummy procedure and will only display
  the title if the title tag is defined in the first 2000
  characters in web page.
*/

set serveroutput on
execute getTitle('http://www.oracle.com');





Monday, April 23, 2012

Auto-Changing SQL Prompt

You have to insert the following lines of code into glogin.sql, which is usually found in $ORACLE_HOME/sqlplus/admin.


set termout off
set echo off
define X=NotConnected
define Y=DBNAME

Column Usr New_Value X
Column DBName New_Value Y


Select SYS_CONTEXT('USERENV','SESSION_USER' ) Usr From Dual;

Select Global_Name DBNAME from Global_Name;

set termout on
set sqlprompt '&X@&Y> '

Connect as an Oracle User Without Knowing the Password

SQL> alter user ldbo grant connect through ksh;

User altered.

SQL> connect ksh[ldbo]/ksh$1#@apx1112srv;
Connected.
SQL> show user
USER is "LDBO"


2. If we can allow ourselves to change the password, but don't want to change it permanently, there is a way to change it back.
The DBA_USERS view contains a PASSWORD column which holds the password hash of the user (note that from 11g this column is no longer populated; take the hash from SYS.USER$ instead).
Follow these steps to change the user's password and restore it afterwards:
Get the encrypted password (hash) of the user and save it.
Change the password by using the "alter user <user> identified by <pwd>" command.
To change the password back, use the "alter user <user> identified by values '<encrypted_password>'" command.
Use the encrypted password you saved, and pay attention to the "values" keyword in the command: it specifies that the password given is already encrypted.
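A compressed sketch of the second method (the temporary password is arbitrary and &saved_hash is a SQL*Plus substitution variable standing in for the hash you saved in the first step):

-- 1. save the current password hash (DBA_USERS on 10g, SYS.USER$ on 11g)
SELECT password FROM dba_users WHERE username = 'LDBO';

-- 2. temporarily change the password and do your work as LDBO
ALTER USER ldbo IDENTIFIED BY temp_pwd_123;

-- 3. put the original password back using the saved hash
ALTER USER ldbo IDENTIFIED BY VALUES '&saved_hash';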

EXP-00091: Exporting questionable statistics

Use STATISTICS=NONE in the exp command.

ORA-01092 ORACLE instance terminated. Disconnection forced ORA-00704 bootstrap process failure

shut immediate
startup upgrade

@D:\oracle\product\10.2.0\db_1\RDBMS\ADMIN\catupgrd.sql
@D:\oracle\product\10.2.0\db_1\RDBMS\ADMIN\catproc.sql
@D:\oracle\product\10.2.0\db_1\RDBMS\ADMIN\utlirp.sql
@D:\oracle\product\10.2.0\db_1\RDBMS\ADMIN\utlrp.sql

shut immediate
startup

SELECT * FROM dba_registry;

ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862]

Fri Apr 20 12:24:30 2012
Errors in file d:\oracle\product\10.2.0\admin\tss1112\bdump\tss5_mmon_2416.trc:
ORA-00600: internal error code, arguments: [kjhn_post_ha_alert0-862], [], [], [], [], [], [], []

BEGIN :success := dbms_ha_alerts_prvt.check_ha_resources; END;



To implement the workaround, please execute the following steps as the SYS user:

1. Collect the following information and spool it to a file for your records.

a. output of select * from v$instance
b. show parameter instance_name
c. set pages 1000
d. select * from recent_resource_incarnations$

2. Create a backup table of recent_resource_incarnations$.

SQL> create table recent_resource_inc$bk as select * from recent_resource_incarnations$;


3. Truncate recent_resource_incarnations$. Be sure to do this while the instance is up and running. Do not issue this statement if a shutdown is pending.

SQL> truncate table recent_resource_incarnations$;


4. Perform a clean shutdown, followed by a startup.

Job Scheduler for RMAN backup

1)
RMAN TARGET SYS/.....@KSH1213SRV
CONFIGURE RETENTION POLICY TO REDUNDANCY 1;
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO 'F:\RMANBACKUP\%F';
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET;
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT 'F:\RMANBACKUP\%U';

or

-----------------------------Advanced settings with RMAN maintenance commands----------------------------------------------------

# BACKUP.rcv
# Configure RMAN settings
CONFIGURE RETENTION POLICY TO REDUNDANCY 3;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE  DISK TO '%n_cf_%T_%s_%F.bck';
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO COMPRESSED BACKUPSET;
CONFIGURE DEFAULT DEVICE TYPE TO disk;
CONFIGURE CHANNEL 1 DEVICE TYPE DISK FORMAT '%n_df_%T_%s.bck';
CONFIGURE CHANNEL 2 DEVICE TYPE DISK FORMAT '%n_df_%T_%s.bck';
CONFIGURE CHANNEL 3 DEVICE TYPE DISK FORMAT '%n_df_%T_%s.bck';
CONFIGURE CHANNEL 4 DEVICE TYPE DISK FORMAT '%n_df_%T_%s.bck';
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '%n_sn_%T_%s.bck';

# Perform backup of database and archivelogs, deleting backed up archivelogs
BACKUP DATABASE PLUS ARCHIVELOG DELETE INPUT;

# Maintenance commands for crosschecks and deleting expired backups
ALLOCATE CHANNEL FOR MAINTENANCE DEVICE TYPE DISK;
CROSSCHECK BACKUP;
CROSSCHECK ARCHIVELOG ALL;
# Cleaning up to save space.
DELETE NOPROMPT EXPIRED BACKUP;
DELETE NOPROMPT OBSOLETE DEVICE TYPE DISK;
DELETE NOPROMPT EXPIRED ARCHIVELOG ALL;
exit

------------------------------------------------------------------------------------------------------------------------------------------------
2)
create user rman identified by .........
Temporary tablespace temporary
Default tablespace usr
Quota unlimited on usr;

Grant recovery_catalog_owner,connect, resource to rman;
Grant Create type to rman;

3)

 RMAN TARGET SYS/.....@KSH1213SRV CATALOG RMAN/....@KSH1213SRV
CREATE CATALOG;
REGISTER DATABASE;

CREATE SCRIPT RMANBACKUP
{
BACKUP DATABASE;
}


4)
BEGIN
  dbms_scheduler.create_job(
  job_name   => 'RMAN_BACKUP',
  job_type   => 'EXECUTABLE',
  job_action => 'rman target sys/.......@KSH1213SRV CATALOG RMAN/..........@KSH1213SRV SCRIPT RMANBACKUP',
  start_date      => '01-APR-12 07:00.00.00 PM ASIA/CALCUTTA',
  repeat_interval => 'freq=DAILY',
  enabled         => TRUE,
  comments   => 'BACKUP RMAN');
END;
/

EXEC dbms_scheduler.RUN_JOB('RMAN_BACKUP');
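One caveat with the job above: for a job of type EXECUTABLE, DBMS_SCHEDULER generally expects job_action to be the path of a program or script rather than a full RMAN command line with arguments. A more robust variant (just a sketch - /home/oracle/scripts/rman_backup.sh is a hypothetical wrapper script that simply runs rman with the connect strings and SCRIPT RMANBACKUP shown above) could look like this:

BEGIN
  dbms_scheduler.create_job(
    job_name        => 'RMAN_BACKUP_SH',                       -- hypothetical job name
    job_type        => 'EXECUTABLE',
    job_action      => '/home/oracle/scripts/rman_backup.sh',  -- wrapper script, assumed to exist
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'freq=DAILY; byhour=19',
    enabled         => TRUE,
    comments        => 'Daily RMAN backup via wrapper script');
  -- depending on the platform you may also need an OS credential, e.g.:
  -- dbms_scheduler.set_attribute('RMAN_BACKUP_SH', 'credential_name', 'OS_CREDENTIAL');
END;
/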
