
Friday, January 22, 2016

Standby database does not start automatically



What is the value of the REMOTE_LOGIN_PASSWORDFILE parameter?
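You can check it from SQL*Plus; for a standby that receives redo authenticated through a password file, the value is typically EXCLUSIVE:

SQL> show parameter remote_login_passwordfile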

Please change the owner of the Oracle services:
  • Start -> Settings -> Control Panel -> Services
  • Locate and highlight the "OracleServiceSID"
  • Click on the "Startup" button
  • In "Log On As", choose "This Account"
  • Use the "..." button to browse to the "OracleAdmin" or Administrator user
  • In "List Names From", choose your host
  • Provide the password of this user and confirm it
  • Repeat for the "OracleStartSID", "OracleTNSListener" and any other Oracle service you are using
  • Be sure that the service startup type is "Automatic"


Also check the Oracle home registry values (under HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE) that control automatic startup and shutdown of the instance:

ORA_<SID>_AUTOSTART =
ORA_<SID>_PFILE =
ORA_<SID>_SHUTDOWN =
ORA_<SID>_SHUTDOWNTYPE =
ORA_<SID>_SHUTDOWN_TIMEOUT =
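These can also be inspected from the command line; a hedged example, assuming a SID of ORCL and the first Oracle home (the hive name, e.g. HOME0 or KEY_<home name>, varies by installation):

C:\> reg query "HKLM\SOFTWARE\ORACLE\HOME0" /v ORA_ORCL_AUTOSTART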



Research:
=========
WIN: Automatic Startup of the Database when Using O/S Authentication (Doc ID 116979.1)
How to Configure Database Control to Start Automatically on Server Reboot / Shutdown (Doc ID 1282530.1)
Windows Service Not Starting Automatically At Server Reboot (Doc ID 1264404.1)




Friday, September 14, 2012

11g Active Data Guard - enabling Real-Time Query


Active Data Guard is a great new feature in 11g (although it requires a separate license) that enables us to query the standby database while redo logs are being applied to it. In earlier releases we had to stop the log apply, open the database in read-only mode, and then restart the log apply once the database was taken out of read-only mode.
With Oracle 11g Active Data Guard, we can use our standby site to offload reporting and query-type applications without compromising on high availability.
How do we enable Active Data Guard?
If we are not using the Data Guard Broker, we need to open the standby database in read-only mode and then start managed recovery, as shown below.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area 1069252608 bytes
Fixed Size 2154936 bytes
Variable Size 847257160 bytes
Database Buffers 213909504 bytes
Redo Buffers 5931008 bytes
Database mounted.
Database opened.
SQL> recover managed standby database using current logfile disconnect;
Media recovery complete.
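A quick way to confirm that real-time query is active without the broker is to check the open mode on the standby; on 11.2 it should report READ ONLY WITH APPLY while managed recovery is running:

SQL> select open_mode, database_role from v$database;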
If we are using the Data Guard Broker CLI, DGMGRL, the procedure is a bit different and is not explained very clearly in the documentation.
You need to stop redo apply first via the DGMGRL EDIT DATABASE ... SET STATE command, then open the database in read-only mode from a SQL*Plus session, and then start redo apply again from DGMGRL via the same SET STATE command.
Stop redo apply with the following command from Data Guard Broker CLI
DGMGRL> EDIT DATABASE 'PRODDB' SET STATE='APPLY-OFF';
Open standby read-only via SQL*Plus
SQL> alter database open read only;
Restart redo apply via broker CLI
DGMGRL> EDIT DATABASE 'PRODDB' SET STATE='APPLY-ON';
I tried to run the whole thing via DGMGRL alone and got this error:
DGMGRL> edit database PRODDB set state='APPLY-OFF';
Succeeded.
DGMGRL> edit database PRODDB set state='READ ONLY';
Error: ORA-16516: current state is invalid for the attempted operation
After we have enabled the Real-Time Query feature, we can confirm it via the DGMGRL SHOW DATABASE command:
DGMGRL> show database verbose PRODDB_DR
Database - PRODDB_DR
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds
Apply Lag: 0 seconds
Real Time Query: ON
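The lag figures reported by the broker can also be cross-checked directly on the standby:

SQL> select name, value from v$dataguard_stats where name in ('transport lag', 'apply lag');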
Note:
Even though we have enabled the Real-Time Query feature, the Data Guard page in the Enterprise Manager Grid Control GUI will show Real-Time Query as Disabled.
This is apparently a bug that affects OEM Grid Control 10.2.0.1 through 10.2.0.5 with an 11.2 target database.
Bug 7633734: DG ADMIN PAGE REAL TIME QUERY SHOWS DISABLED WHEN ENABLED FOR 11.2 DATABASES

Monday, July 23, 2012

Standby Database: Force a log switch every 30 minutes automatically


Controlling Archive Lag

You can force all enabled redo log threads to switch their current logs at regular time intervals. In a primary/standby database configuration, changes are made available to the standby database by archiving redo logs at the primary site and then shipping them to the standby database. The changes that are being applied by the standby database can lag behind the changes that are occurring on the primary database, because the standby database must wait for the changes in the primary database redo log to be archived (into the archived redo log) and then shipped to it. To limit this lag, you can set the ARCHIVE_LAG_TARGET initialization parameter. Setting this parameter lets you specify in seconds how long that lag can be.

Setting the ARCHIVE_LAG_TARGET Initialization Parameter

When you set the ARCHIVE_LAG_TARGET initialization parameter, you cause the database to examine the current redo log of the instance periodically. If the following conditions are met, then the instance will switch the log:
  • The current log was created prior to n seconds ago, and the estimated archival time for the current log is m seconds (proportional to the number of redo blocks used in the current log), where n + m exceeds the value of the ARCHIVE_LAG_TARGET initialization parameter.
  • The current log contains redo records.
In an Oracle Real Application Clusters environment, the instance also causes other threads to switch and archive their logs if they are falling behind. This can be particularly useful when one instance in the cluster is more idle than the other instances (as when you are running a 2-node primary/secondary configuration of Oracle Real Application Clusters).
The ARCHIVE_LAG_TARGET initialization parameter specifies the target of how many seconds of redo the standby could lose in the event of a primary shutdown or failure if the Oracle Data Guard environment is not configured in a no-data-loss mode. It also provides an upper limit of how long (in seconds) the current log of the primary database can span. Because the estimated archival time is also considered, this is not the exact log switch time.
The following initialization parameter setting sets the log switch interval to 30 minutes (a typical value).
ARCHIVE_LAG_TARGET = 1800

A value of 0 disables this time-based log switching functionality. This is the default setting.
You can set the ARCHIVE_LAG_TARGET initialization parameter even if there is no standby database. For example, the ARCHIVE_LAG_TARGET parameter can be set specifically to force logs to be switched and archived.
ARCHIVE_LAG_TARGET is a dynamic parameter and can be set with the ALTER SYSTEM SET statement.
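Because the parameter is dynamic, the 30-minute target can be put in place without a restart; for example (the sid='*' clause matters only in RAC):

SQL> alter system set archive_lag_target=1800 scope=both sid='*';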
Caution:
The ARCHIVE_LAG_TARGET parameter must be set to the same value in all instances of an Oracle Real Application Clusters environment. Failing to do so results in unpredictable behavior.

Factors Affecting the Setting of ARCHIVE_LAG_TARGET

Consider the following factors when deciding whether to set the ARCHIVE_LAG_TARGET parameter and when choosing its value.
  • Overhead of switching (as well as archiving) logs
  • How frequently normal log switches occur as a result of log full conditions
  • How much redo loss is tolerated in the standby database
Setting ARCHIVE_LAG_TARGET may not be very useful if natural log switches already occur more frequently than the interval specified. However, in the case of irregularities of redo generation speed, the interval does provide an upper limit for the time range each current log covers.
If the ARCHIVE_LAG_TARGET initialization parameter is set to a very low value, there can be a negative impact on performance. This can force frequent log switches. Set the parameter to a reasonable value so as not to degrade the performance of the primary database.
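To judge whether the parameter will have any effect, you can sample recent log switch times from v$log_history and compare their spacing with your target; a simple check:

SQL> select thread#, sequence#, to_char(first_time, 'DD-MON-YY HH24:MI:SS') switch_time from v$log_history where first_time > sysdate - 1 order by first_time;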

Friday, April 27, 2012

Building a failover database using Oracle Database 11g Standard Edition


We are all aware of Data Guard and the various disaster protection mechanisms that come with Oracle Database Enterprise Edition. In this article I will try to show you how to build a remote failover database, when all you have are Standard Edition databases at both ends.
If we want to keep an exact replica of a production database, we have to take care of three things. First, we have to ship the changes (archive logs) to the failover database. Second, we have to keep track of what was shipped, so we know what needs to be recovered if something goes wrong. Third, we have to apply the changes at the failover database.
Before 11gR2 came out, detecting and shipping log files between hosts had to be done outside of the database. In GNU/Linux environments most people use rsync or a similar program to do the job. I, on the other hand, always prefer to do such tasks within the database, and relying on the OS is not always an option (what if one of the databases runs on Windows?). In this tutorial I will show you how to pick up archivelogs by using File Watchers (introduced in 11gR2) and transfer them via FTP to a remote host.
Demonstration scenario
I will be using two hosts, both running Oracle Linux 5.5. The one with the production database is called el5-prd and the one that will host the failover database is called el5-backup. The production host has Database 11gR2 installed, with a default database configured that includes the sample schemas. The el5-backup host has only the Oracle software installed. Both installations reside in /u01/app/oracle/product/11.2.0/db_orcl and the software owner is user oracle.
We should perform the following steps to build the failover configuration:
  • Set the production database in archivelog mode
  • Perform a full database backup and create copies of the control and parameter files
  • Prepare the failover server for restore
  • Restore the backup on el5-backup
  • Install ftpd on el5-backup
  • Set up ACLs for FTP transfer and install the FTP packages on el5-prd
  • Set up the archivelog directory and test the FTP transfer
  • Set up a file watcher on el5-prd
  • Test that archivelogs are shipped
  • Set up a mechanism to apply and delete the shipped logs
The list is quite long, so let's begin.
Set the production database in archivelog mode
First we have to create an OS directory where the database will write the archivelogs.
Log in to the production host as the oracle software owner and create an empty directory. I will create a directory named archivelog in my FRA.
[oracle@el5-prd ~]$ mkdir /u01/app/oracle/fast_recovery_area/ORCL/archivelog
[oracle@el5-prd ~]$
Next, put the database in archivelog mode.
[oracle@el5-prd]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.3.0 Production on Wed Dec 14 07:24:02 2011
Copyright (c) 1982, 2011, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Release 11.2.0.3.0 - Production

SQL> alter system set log_archive_dest_1='LOCATION=/u01/app/oracle/fast_recovery_area/ORCL/archivelog/' scope=spfile;

System altered.

SQL> alter system set log_archive_format='%t_%s_%r.arc' scope=spfile;

System altered.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area  422670336 bytes
Fixed Size                  1345380 bytes
Variable Size             264243356 bytes
Database Buffers          150994944 bytes
Redo Buffers                6086656 bytes
Database mounted.
SQL> alter database archivelog;

Database altered.

SQL> alter database open;

Database altered.

SQL> alter database force logging;

Database altered.

SQL> exit
Disconnected from Oracle Database 11g Release 11.2.0.3.0 - Production
[oracle@el5-prd]$
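Before moving on, it is worth verifying the result; archive log list should now report "Archive Mode" and the archivelog LOCATION we just configured:

SQL> archive log list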
Perform a full database backup and create copies of the control and parameter files
We perform a full backup of the production database by using RMAN.
[oracle@el5-prd ~]$ rman target=/

Recovery Manager: Release 11.2.0.3.0 - Production on Sat Dec 10 14:24:16 2011

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCL (DBID=1297199097)

RMAN> backup database plus archivelog;

Starting backup at 10-DEC-11
current log archived
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=40 device type=DISK
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=6 RECID=1 STAMP=769530270
channel ORA_DISK_1: starting piece 1 at 10-DEC-11
channel ORA_DISK_1: finished piece 1 at 10-DEC-11
piece handle=/u01/app/oracle/fast_recovery_area/ORCL/backupset/2011_12_10/o1_mf_annnn_TAG20111210T142431_7g6mw03l_.bkp tag=TAG20111210T142431 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 10-DEC-11

Starting backup at 10-DEC-11
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u01/app/oracle/oradata/orcl/system01.dbf
input datafile file number=00002 name=/u01/app/oracle/oradata/orcl/sysaux01.dbf
input datafile file number=00005 name=/u01/app/oracle/oradata/orcl/example01.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/orcl/undotbs01.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/orcl/users01.dbf
channel ORA_DISK_1: starting piece 1 at 10-DEC-11
channel ORA_DISK_1: finished piece 1 at 10-DEC-11
piece handle=/u01/app/oracle/fast_recovery_area/ORCL/backupset/2011_12_10/o1_mf_nnndf_TAG20111210T142433_7g6mw1lw_.bkp tag=TAG20111210T142433 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:01:26
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 10-DEC-11
channel ORA_DISK_1: finished piece 1 at 10-DEC-11
piece handle=/u01/app/oracle/fast_recovery_area/ORCL/backupset/2011_12_10/o1_mf_ncsnf_TAG20111210T142433_7g6mytqr_.bkp tag=TAG20111210T142433 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 10-DEC-11

Starting backup at 10-DEC-11
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=7 RECID=2 STAMP=769530364
channel ORA_DISK_1: starting piece 1 at 10-DEC-11
channel ORA_DISK_1: finished piece 1 at 10-DEC-11
piece handle=/u01/app/oracle/fast_recovery_area/ORCL/backupset/2011_12_10/o1_mf_annnn_TAG20111210T142604_7g6myw91_.bkp tag=TAG20111210T142604 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 10-DEC-11

RMAN> exit

Recovery Manager complete.
[oracle@el5-prd ~]$
Next we create copies of the control and parameter files, placing them in the oracle user's home directory.
[oracle@el5-prd]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Sat Dec 10 14:28:45 2011

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Release 11.2.0.3.0 - Production

SQL> alter database create standby controlfile as '/home/oracle/orcl-backup.ctl'; 
Database altered.

SQL> create pfile='/home/oracle/initORCL-backup.ora' from spfile;

File created.

SQL>
Prepare the failover server for restore
After you have completed a "software only" installation of Database 11gR2, you have to create the following directories, which are needed to successfully restore the database backup:
[oracle@el5-backup ~]$ mkdir -p /u01/app/oracle/oradata/ORCL
[oracle@el5-backup ~]$ mkdir -p /u01/app/oracle/fast_recovery_area/ORCL
[oracle@el5-backup ~]$ mkdir -p /u01/app/oracle/admin/ORCL/adump
Next we take the control file copy from the production server.
[oracle@el5-backup ~]$ scp oracle@el5-prd:/home/oracle/orcl-backup.ctl /u01/app/oracle/oradata/ORCL/control01.ctl
oracle@el5-prd's password:
orcl-backup.ctl                               100% 9520KB   9.3MB/s   00:01
[oracle@el5-backup ~]$
Another copy of the control file goes to the FRA.
[oracle@el5-backup ~]$ cp /u01/app/oracle/oradata/ORCL/control01.ctl /u01/app/oracle/fast_recovery_area/ORCL/control02.ctl
[oracle@el5-backup ~]$
We also need the parameter and the password file.
[oracle@el5-backup ~]$ scp oracle@el5-prd:/home/oracle/initORCL-backup.ora /home/oracle/
oracle@el5-prd's password:
initORCL-backup.ora                           100%  945     0.9KB/s   00:00
[oracle@el5-backup ~]$ scp -r oracle@el5-prd:/u01/app/oracle/product/11.2.0/db_orcl/dbs/orapwORCL /u01/app/oracle/product/11.2.0/db_orcl/dbs/orapwORCL
oracle@el5-prd's password:
orapwORCL                                     100% 1536     1.5KB/s   00:00
[oracle@el5-backup ~]$
Let's copy the archivelogs and the backup as well.
[oracle@el5-backup ~]$ scp -r oracle@el5-prd:/u01/app/oracle/fast_recovery_area/ORCL/archivelog /u01/app/oracle/fast_recovery_area/ORCL/
oracle@el5-prd's password:
o1_mf_1_7_7g7hwtfw_.arc                       100%   23KB  22.5KB/s   00:00
o1_mf_1_6_7g7hs8tx_.arc                       100% 4085KB   4.0MB/s   00:00
[oracle@el5-backup ~]$
[oracle@el5-backup ~]$ scp -r oracle@el5-prd:/u01/app/oracle/fast_recovery_area/ORCL/backupset /u01/app/oracle/fast_recovery_area/ORCL/
oracle@el5-prd's password:
o1_mf_annnn_TAG20111210T222058_7g7hsbnq_.bkp  100% 4086KB   4.0MB/s   00:00
o1_mf_annnn_TAG20111210T222250_7g7hwv01_.bkp  100%   24KB  24.0KB/s   00:00
o1_mf_ncsnf_TAG20111210T222059_7g7hws7f_.bkp  100% 9600KB   9.4MB/s   00:00
o1_mf_nnndf_TAG20111210T222059_7g7hsdkp_.bkp  100% 1172MB  13.6MB/s   01:26
[oracle@el5-backup ~]$
The final files to copy are the redo logs.
[oracle@el5-backup ~]$ scp oracle@el5-prd:/u01/app/oracle/oradata/ORCL/redo* /u01/app/oracle/oradata/ORCL
oracle@el5-prd's password:
redo01.log                                    100%   50MB  16.7MB/s   00:03
redo02.log                                    100%   50MB  25.0MB/s   00:02
redo03.log                                    100%   50MB  10.0MB/s   00:05
[oracle@el5-backup ~]$
The last thing we have to do is create a listener.ora file.
[oracle@el5-backup ~]$ cat >> /u01/app/oracle/product/11.2.0/db_orcl/network/admin/listener.ora << EOF
> LISTENER =
>   (DESCRIPTION_LIST =
>     (DESCRIPTION =
>       (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
>       (ADDRESS = (PROTOCOL = TCP)(HOST = el5-backup)(PORT = 1521))
>     )
>   )
>
> ADR_BASE_LISTENER = /u01/app/oracle
> EOF
[oracle@el5-backup ~]$
As you can see, the listener for our failover database will use the default port 1521. Time to restore from the backup.
Restore the backup on el5-backup
Before running the restore we have to bring the failover database to mount state. The first step is to start the listener on el5-backup.
[oracle@el5-backup ~]$ lsnrctl start

LSNRCTL for Linux: Version 11.2.0.3.0 - Production on 10-DEC-2011 22:45:13

Copyright (c) 1991, 2011, Oracle.  All rights reserved.

Starting /u01/app/oracle/product/11.2.0/db_orcl/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 11.2.0.3.0 - Production
System parameter file is /u01/app/oracle/product/11.2.0/db_orcl/network/admin/listener.ora
Log messages written to /u01/app/oracle/diag/tnslsnr/el5-backup/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=el5-backup)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 11.2.0.3.0 - Production
Start Date                10-DEC-2011 22:45:15
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/11.2.0/db_orcl/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/el5-backup/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=el5-backup)(PORT=1521)))
The listener supports no services
The command completed successfully
[oracle@el5-backup ~]$
Next we have to set the SID and create an SPFILE from the parameter file that we have in our home directory.
[oracle@el5-backup ~]$ export ORACLE_SID=ORCL
[oracle@el5-backup ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Sat Dec 10 22:45:52 2011

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> create spfile from pfile='/home/oracle/initORCL-backup.ora';

File created.

SQL>
We can now restore the database by using RMAN.
[oracle@el5-backup ~]$ rman target=/

Recovery Manager: Release 11.2.0.3.0 - Production on Sat Dec 10 22:47:11 2011

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database (not started)

RMAN> startup mount;

Oracle instance started
database mounted

Total System Global Area     422670336 bytes

Fixed Size                     1345380 bytes
Variable Size                268437660 bytes
Database Buffers             146800640 bytes
Redo Buffers                   6086656 bytes

RMAN> restore database;

Starting restore at 10-DEC-11
Starting implicit crosscheck backup at 10-DEC-11
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=18 device type=DISK
Crosschecked 4 objects
Finished implicit crosscheck backup at 10-DEC-11

Starting implicit crosscheck copy at 10-DEC-11
using channel ORA_DISK_1
Finished implicit crosscheck copy at 10-DEC-11

searching for all files in the recovery area
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /u01/app/oracle/fast_recovery_area/ORCL/archivelog/2011_12_10/o1_mf_1_7_7g7hwtfw_.arc
File Name: /u01/app/oracle/fast_recovery_area/ORCL/archivelog/2011_12_10/o1_mf_1_6_7g7hs8tx_.arc

using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /u01/app/oracle/oradata/ORCL/system01.dbf
channel ORA_DISK_1: restoring datafile 00002 to /u01/app/oracle/oradata/ORCL/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00003 to /u01/app/oracle/oradata/ORCL/undotbs01.dbf
channel ORA_DISK_1: restoring datafile 00004 to /u01/app/oracle/oradata/ORCL/users01.dbf
channel ORA_DISK_1: restoring datafile 00005 to /u01/app/oracle/oradata/ORCL/example01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/ORCL/backupset/2011_12_10/o1_mf_nnndf_TAG20111210T222059_7g7hsdkp_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/ORCL/backupset/2011_12_10/o1_mf_nnndf_TAG20111210T222059_7g7hsdkp_.bkp tag=TAG20111210T222059
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:01:36
Finished restore at 10-DEC-11

RMAN> exit

Recovery Manager complete.
[oracle@el5-backup ~]$
We have successfully created an identical copy of the production database.
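Because the copy was restored under a standby control file, it identifies itself as a physical standby and remains in mount state; a quick sanity check:

SQL> select name, database_role, open_mode from v$database;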
Install ftpd on el5-backup
Installing the FTP daemon on Oracle Linux is pretty straightforward.
[root@el5-backup ~]# yum install vsftpd
Loaded plugins: security
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package vsftpd.i386 0:2.0.5-16.el5_4.1 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package        Arch         Version                  Repository           Size
================================================================================
Installing:
 vsftpd         i386         2.0.5-16.el5_4.1         el5_u5_base         140 k

Transaction Summary
================================================================================
Install       1 Package(s)
Upgrade       0 Package(s)

Total download size: 140 k
Is this ok [y/N]: Y
Downloading Packages:
vsftpd-2.0.5-16.el5_4.1.i386.rpm                         | 140 kB     00:01
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing     : vsftpd                                                   1/1

Installed:
  vsftpd.i386 0:2.0.5-16.el5_4.1

Complete!
[root@el5-backup ~]#
You should not forget to reconfigure the firewall on the failover server to allow FTP communication. First add ip_conntrack_ftp to the IPTABLES_MODULES line in /etc/sysconfig/iptables-config. The line in iptables-config should look like this:
IPTABLES_MODULES="ip_conntrack_netbios_ns ip_conntrack_ftp"
Next, edit /etc/sysconfig/iptables and add a rule for the FTP traffic (be sure to put the line before the REJECT rule). The line you have to add in iptables looks like this:
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 21 -j ACCEPT
Bounce iptables and set the FTP service to autostart with the server.
[root@el5-backup ~]# service iptables restart
Flushing firewall rules:                                   [  OK  ]
Setting chains to policy ACCEPT: filter                    [  OK  ]
Unloading iptables modules:                                [  OK  ]
Applying iptables firewall rules:                          [  OK  ]
Loading additional iptables modules: ip_conntrack_netbios_n[  OK  ]
[root@el5-backup ~]# chkconfig vsftpd on
[root@el5-backup ~]# service vsftpd start
Starting vsftpd for vsftpd:                                [  OK  ]
[root@el5-backup ~]#
You might want to test the access from el5-prd to the failover server.
[oracle@el5-prd ~]$ ftp el5-backup
Connected to el5-backup.
220 (vsFTPd 2.0.5)
530 Please login with USER and PASS.
530 Please login with USER and PASS.
KERBEROS_V4 rejected as an authentication type
Name (el5-backup:oracle): oracle
331 Please specify the password.
Password:
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> bye
221 Goodbye.
[oracle@el5-prd ~]$
Set up ACLs for FTP transfer and install the FTP packages on el5-prd
Our next task is to prepare the production server for communicating with el5-backup over FTP. We start by creating a dedicated database user that will be used for shipping and tracking the archivelog files. I will name it logship.
[oracle@el5-prd ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Wed Dec 14 07:24:02 2011

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Release 11.2.0.3.0 - Production

SQL> create user logship identified by logship;

User created.

SQL> grant connect, resource to logship;

Grant succeeded.

SQL>
Next we should configure an Access Control List (ACL) that will allow FTP connections to el5-backup for user logship. We have to use the CREATE_ACL, ADD_PRIVILEGE and ASSIGN_ACL procedures from the DBMS_NETWORK_ACL_ADMIN package. We will call the procedures with the following parameters:
DBMS_NETWORK_ACL_ADMIN.CREATE_ACL (
    acl          => 'ftp.xml',
    description  => 'Allow FTP connections',
    principal    => 'LOGSHIP',
    is_grant     => TRUE,
    privilege    => 'connect',
    start_date   => SYSTIMESTAMP,
    end_date     => NULL);

 DBMS_NETWORK_ACL_ADMIN.ADD_PRIVILEGE (
    acl         => 'ftp.xml',
    principal   => 'LOGSHIP',
    is_grant    => FALSE,
    privilege   => 'connect',
    position    => NULL,
    start_date  => NULL,
    end_date    => NULL);

  DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL (
    acl         => 'ftp.xml',
    host        => 'el5-backup',
    lower_port  => NULL,
    upper_port  => NULL);
Here is the output of their execution:
SQL> exec dbms_network_acl_admin.create_acl (acl => 'ftp.xml', description => 'Allow FTP connections', principal => 'LOGSHIP', is_grant => TRUE, privilege => 'connect', start_date => SYSTIMESTAMP,end_date => NULL);

PL/SQL procedure successfully completed.

SQL> exec dbms_network_acl_admin.add_privilege (acl => 'ftp.xml', principal => 'LOGSHIP', is_grant => FALSE, privilege => 'connect', position => NULL, start_date => NULL, end_date => NULL);

PL/SQL procedure successfully completed.

SQL> exec dbms_network_acl_admin.assign_acl (acl => 'ftp.xml', host => 'el5-backup', lower_port => NULL,upper_port => NULL);

PL/SQL procedure successfully completed.

SQL>
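Before moving on, you can verify the ACL assignment and privileges through the data dictionary views:

SQL> select host, acl from dba_network_acls;
SQL> select principal, privilege, is_grant from dba_network_acl_privileges where acl like '%ftp.xml';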
For connecting to el5-backup from the production database we will be using the FTP API developed by Tim Hall. You need to download the FTP package specification and body creation scripts and run them as user logship.
SQL> conn logship/logship;
Connected.
SQL> @ftp.pks;

Package created.

No errors.
SQL> @ftp.pkb;

Package body created.

No errors.
SQL>
Set up the archivelog directory and test the FTP transfer
We move on by creating a directory object within the production database that points to the location of the archivelog files.
[oracle@el5-prd ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Sun Dec 11 08:44:57 2011

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Release 11.2.0.3.0 - Production

SQL> create directory arc_dir as '/u01/app/oracle/fast_recovery_area/ORCL/archivelog';

Directory created.

SQL> grant read on directory arc_dir to logship;

Grant succeeded.

SQL>
It is a good idea to test the FTP communication from within the database. You can create a dummy test file in the archivelog directory:
[oracle@el5-prd ~]$ cat >> /u01/app/oracle/fast_recovery_area/ORCL/archivelog/testfile.txt << EOF
> FTP test file
> EOF
[oracle@el5-prd ~]$
You can then connect as user logship and run the following PL/SQL block:
declare
  l_conn utl_tcp.connection;
begin
  l_conn := ftp.login('el5-backup','21','oracle','welcome1');
  ftp.put(p_conn => l_conn, p_from_dir  => 'ARC_DIR', p_from_file => 'testfile.txt', p_to_file => '/u01/app/oracle/fast_recovery_area/ORCL/archivelog/testfile.txt');
  ftp.logout(l_conn);
end;
/
If everything goes fine, testfile.txt will appear on el5-backup.
[oracle@el5-backup ~]$ cd /u01/app/oracle/fast_recovery_area/ORCL/archivelog/
[oracle@el5-backup archivelog]$ cat testfile.txt
FTP test file
[oracle@el5-backup archivelog]$
Set up a file watcher on el5-prd
We will be using a database File Watcher to detect new archivelog files and trigger the FTP transfer.
First we log in as user logship and create a table for storing the detected archivelog files and the date of the attempted transfer. This table is needed only for our own convenience: it can be used to check for missing log files if anything goes wrong.
SQL> conn logship/logship;
Connected.
SQL> create sequence transfered_logs_seq start with 1 increment by 1 cache 20 nocycle;

Sequence created.

SQL> create table transfered_logs (id number, transfer_date date, file_name varchar2(4000), error char(1));

Table created.

SQL>
Next we set the file detection interval to 1 minute. Of course, you can tune this to match your archivelog generation interval more closely.
SQL> conn / as sysdba
Connected.
SQL> exec dbms_scheduler.set_attribute('file_watcher_schedule', 'repeat_interval', 'freq=minutely; interval=1');

PL/SQL procedure successfully completed.

SQL>
In order to have access to the archivelog directory, the file watcher needs an OS user account. We will create a credential that the watcher can use and provide it with the oracle username and password. For my demo installation the oracle password is welcome1.
SQL> exec dbms_scheduler.create_credential(credential_name => 'local_credential', username => 'oracle', password => 'welcome1');

PL/SQL procedure successfully completed.

SQL>
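The credential can be verified through the scheduler dictionary (the password itself is not exposed):

SQL> select credential_name, username from dba_scheduler_credentials;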
The final preparation is to create a PL/SQL procedure that the file watcher will call upon detecting a new archivelog file. The procedure I am using looks like this:
create or replace procedure transfer_arc_log(p_sched_result SYS.SCHEDULER_FILEWATCHER_RESULT) as
  v_transfer_id  number;
  v_file_name    varchar2(4000);
  v_ftp_conn     utl_tcp.connection;
begin
  v_transfer_id := transfered_logs_seq.nextval;
  v_file_name := p_sched_result.actual_file_name;
  -- ship the detected archivelog to the failover host over FTP
  v_ftp_conn := ftp.login('el5-backup','21','oracle','welcome1');
  ftp.put(p_conn => v_ftp_conn, p_from_dir => 'ARC_DIR', p_from_file => v_file_name, p_to_file => '/u01/app/oracle/fast_recovery_area/ORCL/archivelog/'||v_file_name);
  ftp.logout(v_ftp_conn);
  -- record the successful transfer
  insert into transfered_logs values (v_transfer_id, sysdate, v_file_name, null);
  commit;
exception when others then
  -- flag the transfer as failed so it can be investigated later
  insert into transfered_logs values (v_transfer_id, sysdate, v_file_name, 'Y');
  commit;
end;
/
This procedure will try to FTP the file that the watcher passes to it. If the operation is successful, the procedure inserts a record into the TRANSFERED_LOGS table with the file name and the date and time of the transfer. If an error occurs, the procedure sets the ERROR column of the record to "Y".
Let's create this procedure in the logship schema.
SQL> conn logship/logship
Connected.
SQL> create or replace procedure transfer_arc_log(p_sched_result SYS.SCHEDULER_FILEWATCHER_RESULT) as
  2  v_transfer_id number;
  3  v_file_name varchar2(4000);
  4  v_ftp_conn utl_tcp.connection;
  5  begin
  6  v_transfer_id := transfered_logs_seq.nextval;
  7  v_file_name := p_sched_result.actual_file_name;
  8  v_ftp_conn := ftp.login('el5-backup','21','oracle','welcome1');
  9  ftp.put(p_conn => v_ftp_conn, p_from_dir => 'ARC_DIR', p_from_file => v_file_name, p_to_file => '/u01/app/oracle/fast_recovery_area/ORCL/archivelog/'||v_file_name);
 10  ftp.logout(v_ftp_conn);
 11  insert into transfered_logs values (v_transfer_id, sysdate, v_file_name, null);
 12  commit;
 13  exception when others then
 14  insert into transfered_logs values (v_transfer_id, sysdate, v_file_name, 'Y');
 15  commit;
 16  end;
 17  /

Procedure created.

SQL> show errors
No errors.
SQL>
Time to create the file watcher. This is done by calling the CREATE_FILE_WATCHER procedure from the DBMS_SCHEDULER package. I call the procedure with the following parameters.
BEGIN
  DBMS_SCHEDULER.create_file_watcher(
    file_watcher_name => 'arc_watcher',
    directory_path    => '/u01/app/oracle/fast_recovery_area/ORCL/archivelog',
    file_name         => '*.arc',
    credential_name   => 'local_credential',
    destination       => NULL,
    enabled           => FALSE);
END;
/
Here is the execution:
SQL> conn / as sysdba
Connected.
SQL> exec dbms_scheduler.create_file_watcher(file_watcher_name => 'arc_watcher', directory_path => '/u01/app/oracle/fast_recovery_area/ORCL/archivelog', file_name => '*.arc', credential_name => 'local_credential', destination => NULL, enabled => FALSE);

PL/SQL procedure successfully completed.

SQL>
Next we create a program that will bind the file watcher and the TRANSFER_ARC_LOG PL/SQL procedure.
SQL> exec dbms_scheduler.create_program(program_name => 'arc_watcher_prog', program_type => 'stored_procedure', program_action => 'logship.transfer_arc_log', number_of_arguments => 1, enabled => FALSE);

PL/SQL procedure successfully completed.

SQL> exec dbms_scheduler.define_metadata_argument(program_name => 'arc_watcher_prog', metadata_attribute => 'event_message', argument_position => 1);

PL/SQL procedure successfully completed.

SQL>
The final touch is creating a job for the ARC_WATCHER_PROG.
SQL> exec dbms_scheduler.create_job(job_name => 'arc_watcher_job', program_name => 'arc_watcher_prog', event_condition => NULL, queue_spec => 'arc_watcher', auto_drop => FALSE, enabled => FALSE);

PL/SQL procedure successfully completed.

SQL>
An important step is to set the PARALLEL_INSTANCES attribute for our job. We will set it to TRUE to let the scheduler run multiple instances of the job concurrently. If you omit this step, the system will process archivelogs one at a time, and while it is busy with one file it will simply ignore any new archivelogs that appear in the meantime. You definitely do not want that to happen.
SQL> exec dbms_scheduler.set_attribute('arc_watcher_job','parallel_instances',TRUE);

PL/SQL procedure successfully completed.

SQL>
Now that everything is in place, we can enable the watcher, its program and the job. This is done by executing the DBMS_SCHEDULER.ENABLE procedure.
SQL> exec dbms_scheduler.enable('arc_watcher');

PL/SQL procedure successfully completed.

SQL> exec dbms_scheduler.enable('arc_watcher_prog');

PL/SQL procedure successfully completed.

SQL> exec dbms_scheduler.enable('arc_watcher_job');

PL/SQL procedure successfully completed.

SQL>
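At this point everything should report as enabled; a quick check against the 11.2 scheduler dictionary views:

SQL> select file_watcher_name, enabled from dba_scheduler_file_watchers;
SQL> select job_name, enabled, state from dba_scheduler_jobs where job_name = 'ARC_WATCHER_JOB';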
Test that archivelogs are shipped
To test whether archivelog transfers are happening, we take a look at the archivelog directory on the failover server.
[oracle@el5-backup ~]$ ls -la /u01/app/oracle/fast_recovery_area/ORCL/archivelog/
total 5664
drwxr-xr-x 2 oracle oinstall    4096 Dec 27 07:56 .
drwxr-xr-x 4 oracle oinstall    4096 Dec 27 07:22 ..
-rw-r----- 1 oracle oinstall 1043968 Dec 27 07:52 1_10_769951554.arc
-rw-r----- 1 oracle oinstall 1701888 Dec 27 07:52 1_7_769951554.arc
-rw-r----- 1 oracle oinstall  544256 Dec 27 07:52 1_8_769951554.arc
-rw-r----- 1 oracle oinstall 2481152 Dec 27 07:52 1_9_769951554.arc
[oracle@el5-backup ~]$
We then execute ALTER SYSTEM SWITCH LOGFILE on el5-prd.
SQL> alter system switch logfile;

System altered.

SQL>
We connect with the logship user and check the contents of TRANSFERED_LOGS.
SQL> conn logship/logship;
Connected.
SQL> select count(*) from transfered_logs;

  COUNT(*)
----------
         0

SQL>
OK, the archivelog directory is checked at a 60-second interval, so you might have to wait a little. Within a minute at most, the new file should be detected and transferred.
SQL> select count(*) from transfered_logs;

  COUNT(*)
----------
         1

SQL>
The new log was detected and a transfer attempt was made. Check the archivelog directory on el5-backup again to see if the file is there.
[oracle@el5-backup ~]$ ls -la /u01/app/oracle/fast_recovery_area/ORCL/archivelog/
total 6920
drwxr-xr-x 2 oracle oinstall    4096 Dec 27 08:00 .
drwxr-xr-x 4 oracle oinstall    4096 Dec 27 07:22 ..
-rw-r----- 1 oracle oinstall 1043968 Dec 27 07:52 1_10_769951554.arc
-rw-r--r-- 1 oracle oinstall 1282048 Dec 27 08:00 1_11_769951554.arc
-rw-r----- 1 oracle oinstall 1701888 Dec 27 07:52 1_7_769951554.arc
-rw-r----- 1 oracle oinstall  544256 Dec 27 07:52 1_8_769951554.arc
-rw-r----- 1 oracle oinstall 2481152 Dec 27 07:52 1_9_769951554.arc
[oracle@el5-backup ~]$
The logfile appears as expected. This concludes the detect and transfer part of our configuration.
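One more operational note: since the TRANSFER_ARC_LOG procedure flags failed transfers, it is worth checking the table for them periodically; any row returned here points to an archivelog that has to be re-shipped manually:

SQL> select id, to_char(transfer_date, 'DD-MON-YY HH24:MI') transfer_date, file_name from transfered_logs where error = 'Y';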
A mechanism to apply and delete the shipped logs
Having the log files transferred to the failover server is not enough. If you really want an identical copy that is ready to take over the primary role, you have to apply the database changes described in the logs. The easiest way is to simply start RMAN and apply the log files manually.
[oracle@el5-backup ~]$ rman target=/
Recovery Manager: Release 11.2.0.3.0 - Production on Sat Dec 27 11:24:16 2011

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCL (DBID=1297199097)

RMAN> recover database noredo;

Starting recover at 17-DEC-11
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=18 device type=DISK

starting media recovery

archived log for thread 1 with sequence 8 is already on disk as file /u01/app/oracle/fast_recovery_area/ORCL/archivelog/1_8_769951554.arc
archived log file name=/u01/app/oracle/fast_recovery_area/ORCL/archivelog/1_8_769951554.arc thread=1 sequence=8
archived log file name=/u01/app/oracle/fast_recovery_area/ORCL/archivelog/1_9_769951554.arc thread=1 sequence=9
archived log file name=/u01/app/oracle/fast_recovery_area/ORCL/archivelog/1_10_769951554.arc thread=1 sequence=10
archived log file name=/u01/app/oracle/fast_recovery_area/ORCL/archivelog/1_11_769951554.arc thread=1 sequence=11
unable to find archived log
archived log thread=1 sequence=12

Finished recover at 17-DEC-11
You can then open the failover database and use it in place of the production database by executing
alter database open resetlogs
The thing is, you probably want to automate the process. This automation cannot happen inside the failover database, as it is not really operational (it is not open). You will probably go with some kind of OS-level automation, but that will be platform dependent.
For GNU/Linux environments, you can create a simple shell script that looks like this:
#!/bin/sh
# Assumes ORACLE_HOME, PATH and ORACLE_SID are set for the failover
# instance (cron starts jobs with a minimal environment).
rman target / nocatalog << EOF
run {
  recover database noredo;
  delete noprompt force archivelog until time 'SYSDATE-7';
}
exit
EOF
This script calls RMAN, applies the received archivelogs and deletes all log files older than 7 days (I keep the rest just in case). You can then set up a cron job to run the script at an appropriate interval, and you will no longer have to manage the archivelogs manually.
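For example, assuming the script is saved as /home/oracle/apply_logs.sh (a path chosen here just for illustration) and marked executable, a crontab entry for the oracle user that applies the shipped logs every 30 minutes could look like this:

*/30 * * * * /home/oracle/apply_logs.sh >> /home/oracle/apply_logs.log 2>&1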
Final remarks
In this tutorial I showed you how to build a platform-independent archivelog shipping mechanism that does all the work from within the database. This approach has its limitations and is in no way a substitute for Data Guard and the other recovery features of Enterprise Edition. It is just a simple workaround for when you are forced to use Database SE and are looking for a simple way to be better protected from failures.
There are several areas for improvement in this mechanism, especially when it comes to security. Keep in mind that FTP is not really secure, so if you are dealing with sensitive data you might want to consider SFTP or something else that provides encryption. Another issue is the plaintext passwords kept in the TRANSFER_ARC_LOG procedure (you might want to wrap this one) and in the database dictionary.
