Tuesday, October 6, 2009

How to recover a deleted file from a network drive

Issue: How to recover a deleted file from a network drive

Solution:

Settings on the server

1. Go to the properties of the shared network drive.
2. Open the Shadow Copies tab.
3. Configure the settings as per your requirement.


Settings on Client

1. Locate the folder where the deleted file was stored (on the network), right-click the folder, and click Properties. The Properties dialog box will appear.

2. On the Previous Versions tab, double-click the most recent version of the folder that contains the file that you want to recover. A list of files that are stored in that previous version will appear.

3. Right-click the file that was deleted, click Copy, and then paste it back to the desired location.

Windows: terminate a process forcefully

Windows: terminate a process forcefully

taskkill /f /im process-name.exe

/f - Specifies that the process is to be forcefully terminated.

/im - Specifies the image name of the process to be terminated.

In order to kill all these processes, I made a batch file that contains the forceful-termination command for each of these programs and then added the batch file to Windows startup.

1. Open Notepad and paste the following commands, one per line:

taskkill /f /im wmpnscfg.exe
taskkill /f /im ctfmon.exe
taskkill /f /im mobsync.exe



2. Save the file as terminate.bat or with any other name but with .bat extension.

3. Now drag and drop the terminate.bat file into All Programs >> Startup.

4. Now restart Windows; all these unwanted programs that you used to kill manually will be terminated automatically.

Note: You can also place the terminate.bat file on the desktop and run it manually to kill all these processes.


Friday, September 25, 2009

Oracle: Which user process locks the other process

SELECT pr.username "O/S Id",
ss.username "Oracle User Id",
ss.status "status",
ss.sid "Session Id",
ss.serial# "Serial No",
lpad(pr.spid,7) "Process Id",
substr(sqa.sql_text,1,900) "Sql Text",
First_load_time "Load Time"
FROM v$process pr, v$session ss, v$sqlarea sqa
WHERE pr.addr=ss.paddr
AND ss.username is not null
AND ss.sql_address=sqa.address(+)
AND ss.sql_hash_value=sqa.hash_value(+)
AND ss.status='ACTIVE'
ORDER BY 1,2,7 ;
Spool Out ;
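A shorter cross-check, not part of the original script, that pairs each blocking session directly with its waiters from v$lock (a minimal sketch):

SELECT l1.sid blocker, l2.sid waiter
FROM v$lock l1, v$lock l2
WHERE l1.block = 1
AND l2.request > 0
AND l1.id1 = l2.id1
AND l1.id2 = l2.id2;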


=============

set lin 132
set pages 66
column "SID" format 999
column "SER" format 99999
column "Table" format A10
column "SPID" format A5
column "CPID" format A5
column "OS User" format A7
column "Table" format A10
column "SQL Text" format A40 wor
column "Mode" format A20
column "Node" format A10
column "Terminal" format A8



rem spool /tmp/locks.lst

select
s.sid "SID",
s.serial# "SER",
o.object_name "Table",
s.osuser "OS User",
s.machine "Node",
s.terminal "Terminal",
--p.spid "SPID",
--s.process "CPID",
decode (s.lockwait, null, 'Have Lock(s)', 'Waiting for <' || b.sid || '>') "Mode",
substr (c.sql_text, 1, 150) "SQL Text"
from v$lock l,
v$lock d,
v$session s,
v$session b,
v$process p,
v$transaction t,
sys.dba_objects o,
v$open_cursor c
where l.sid = s.sid
and o.object_id (+) = l.id1
and c.hash_value (+) = s.sql_hash_value
and c.address (+) = s.sql_address
and s.paddr = p.addr
and d.kaddr (+) = s.lockwait
and d.id2 = t.xidsqn (+)
and b.taddr (+) = t.addr
and l.type = 'TM'
group by
o.object_name,
s.osuser,
s.machine,
s.terminal,
p.spid,
s.process,
s.sid,
s.serial#,
decode (s.lockwait, null, 'Have Lock(s)', 'Waiting for <' || b.sid || '>'),
substr (c.sql_text, 1, 150)
order by
decode (s.lockwait, null, 'Have Lock(s)', 'Waiting for <' || b.sid || '>') desc,
o.object_name asc,
s.sid asc;
rem spool off;

Saturday, August 8, 2009

Linux YUM: download all RPMs and dependencies from the Internet automatically

Copy the file rpmforge.repo to /etc/yum.repos.d/

#rpmforge.repo


# Name: RPMforge RPM Repository for Red Hat Enterprise 5 - dag
# URL: http://rpmforge.net/
[rpmforge]
name = Red Hat Enterprise $releasever - RPMforge.net - dag
#baseurl = http://apt.sw.be/redhat/el5/en/$basearch/dag
mirrorlist = http://apt.sw.be/redhat/el5/en/mirrors-rpmforge
#mirrorlist = file:///etc/yum.repos.d/mirrors-rpmforge
enabled = 1
protect = 0
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rpmforge-dag
gpgcheck = 1

Wednesday, July 22, 2009

Linux DNS resolver

Not able to send mail to @jalindia.co.in because of a DNS issue with jalindia.co.in: they use their own DNS server to resolve jalindia.co.in.

They should use some other DNS server to resolve it. As a workaround, resolve it on our own DNS server (uniconindia.in).

============dns server 10.100.0.100============
Create a zone file for jalindia.co.in on the DNS server 10.100.0.10.
Add a zone entry for jalindia.co.in.zone to /etc/named.conf.


vi /var/named/chroot/etc/named.conf
zone "jalindia.co.in" IN {
type master;
file "jalindia.co.in.zone";
allow-update { none; };
};



#cd /var/named/chroot/var/named/
#cp uniconstocks.com.zone jalindia.co.in.zone

#vi jalindia.co.in.zone

jalindia.co.in. 86400 IN MX 1 jal-gate-svr1.jalindia.co.in.
jalindia.co.in. 86400 IN MX 2 jal-gate-svr2.jalindia.co.in.


jalindia.co.in. 86400 IN NS secondarydns.jalindia.co.in.
jalindia.co.in. 86400 IN NS primarydns.jalindia.co.in.


jal-gate-svr1.jalindia.co.in. 86400 IN A 115.119.16.172
jal-gate-svr2.jalindia.co.in. 86400 IN A 115.119.16.168
primarydns.jalindia.co.in. 86400 IN A 10.10.10.10
secondarydns.jalindia.co.in. 86400 IN A 10.10.10.11



===========mailserver 10.100.0.77============
host jalindia.co.in
nslookup jalindia.co.in
dig jalindia.co.in
dig mx jalindia.co.in
dig jalindia.co.in
host jal-gate-svr2.jalindia.co.in
host jal-gate-svr1.jalindia.co.in
host primarydns.jalindia.co.in
host secondarydns.jalindia.co.in
vi /etc/resolv.conf
ssh 10.100.0.100
telnet jal-gate-svr1.jalindia.co.in 25

==============================
named-checkconf /etc/named.conf
named-checkconf /etc/named.conf
host jalindia.co.in
dig mx jalindia.co.in
telnet jal-gate-svr1.jalindia.co.in.
route
telnet 59.160.230.70 25
telnet 115.119.16.172 25
telnet 115.119.16.168 25
=============

Saturday, July 18, 2009

RMAN Recovery with Previous Incarnation

rman>list incarnation of database;
sql>shutdown immediate
sql>startup mount
rman>reset database to incarnation 3;
rman>restore database until scn 4312345;
rman>recover database until scn 4312345;
rman>list incarnation;
rman> alter database open resetlogs;
rman>list incarnation;
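As an optional cross-check from SQL*Plus (a sketch; the v$database_incarnation view is available in 10g):

select incarnation#, resetlogs_change#, resetlogs_time, status
from v$database_incarnation
order by incarnation#;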

Friday, July 17, 2009

ASP Pages Cache problem

Don't Cache ASP pages

================
LD ASP error:
We hid the client, but its data is still being fetched at some branches.
On the LD ASP server itself, however, no data is fetched for that hidden client.

Reason
data is coming from cache.


=================IIS Settings================

iis > Http Header > http headers enable content expiration

==========
ISAPI applications (Active Server Pages web pages) can be cached on Internet Information Server. Use these steps to disable caching:

inetmgr > IIS > website > Home Directory > Virtual Directory > Configuration > Cache ISAPI application

Now IIS is configured so that it won't cache your ASP pages. But this alone is not enough. At the top of each .asp page that you do not want cached, add the following line: <% Response.Expires=0 %>

==========

Besides stopping and restarting both the web site and the application pool, we have also stopped and restarted the IIS Admin Service and the WWW Publishing Service.



====================ASP Page================
<%
Response.Expires = 0
Response.ExpiresAbsolute = Now() - 1
Response.AddHeader "pragma", "no-cache"
Response.AddHeader "cache-control", "private"
Response.CacheControl = "no-cache"
%>

===============Meta Tags===============

Typical no-cache meta tags for the page head:

<meta http-equiv="Pragma" content="no-cache">
<meta http-equiv="Cache-Control" content="no-cache">
<meta http-equiv="Expires" content="-1">

===================Registry Settings=================

HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters\UriEnableCache=0

================

If the problem is still not solved, it is due to something in the network path between the client browser and the server; it looks like a proxy or a browser-side cache.

=============Browser-side Cache===========
Retain server results in a browser-side cache. The cache holds query-result pairs: the queries are the cache keys and the server results are the cached values. So, whenever the browser performs an XMLHttpRequest call, it first checks the cache. If the query is held as a key in the cache, the corresponding value is used as the result, and there is no need to access the server.



==========

ipconfig /flushdns


Modify the behavior of the Microsoft Windows DNS caching algorithm by setting two registry entries in the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Dnscache\Parameters registry key.

The MaxCacheTtl value represents the maximum time that the results of a DNS lookup will be cached. The default value is 86,400 seconds. If you set this value to 1, DNS entries will only be cached for a single second.

MaxNegativeCacheTtl represents the maximum time that the results of a failed DNS lookup will be cached. The default value is 900 seconds. If you set this value to 0, failed DNS lookups will not be cached.

==========



Wednesday, July 8, 2009

Recover Database with Missing Archive Logs

Recover Database with Missing Archive Logs

I am trying to restore my old database, but because one archive log is missing I am not able to restore and recover it with RMAN.

Not able to open database

ORA-19870, ORA-19505, ORA-27041, OSD-04002

RMAN-00571, RMAN-00569, RMAN-03002, RMAN-06053, RMAN-06025


--------------A---------------------------------------
crosscheck copy of archivelog all;
crosscheck archivelog all;
resync catalog;
delete force obsolete;
delete expired archivelog all;


Note: the catalog could not be resynced because we could not connect to the recovery catalog while the database was not open.

---------------B----------------------------------------
Point in time Recovery
-------------
1)
restore database UNTIL TIME "TO_DATE('03/27/09 10:05:00','MM/DD/YY HH24:MI:SS')";
recover database UNTIL TIME "TO_DATE('03/27/09 10:05:00','MM/DD/YY HH24:MI:SS')";

RMAN-03002, RMAN-20207

2)
restore database until scn 1000;
recover database until scn 1000;

3)
restore database until sequence 923 thread 1;
recover database until sequence 923 thread 1;


Note: recovery fails because of the missing sequence.


---------------C----------------------------------------

list incarnation;
reset database to incarnation inc_key;

restore database until sequence 923 thread 1;
recover database until sequence 923 thread 1;


Note: recovery fails because of the missing sequence.

--------------D----------------------------------------
alter database backup controlfile to trace as 'c:\newcontrolfile.txt';

Create a new control file from the trace generated above.


--------------E-FINAL SOLUTION---------------------------------------

shutdown immediate;
add _allow_resetlogs_corruption=true to init.ora
startup mount;
sql>recover database until cancel using backup controlfile;

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}

CANCEL

alter database open resetlogs;
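If the instance uses an spfile rather than a plain init.ora, the hidden parameter can also be set from SQL*Plus (a sketch, assuming an spfile; remove the parameter again once the database is open):

ALTER SYSTEM SET "_allow_resetlogs_corruption" = TRUE SCOPE = SPFILE;
-- after the database has been opened with RESETLOGS:
ALTER SYSTEM RESET "_allow_resetlogs_corruption" SCOPE = SPFILE SID = '*';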








===============================================================


Monday, June 29, 2009

OEM Backup Notification

If a backup fails or stops, I will get a notification.

Steps :
- Open a database target (SNS0910)
- Click on User-Defined-Metrics
- Create
- Metric Name = Last datafile backup
- Type = Number
- Output = Single Value
- SQL Query: the time in hours since the oldest checkpoint time of the newest backup


select (sysdate-min(t))*24 from
(
select max(b.CHECKPOINT_TIME) t
from v$backup_datafile b, v$tablespace ts, v$datafile f
where INCLUDED_IN_DATABASE_BACKUP='YES'
and f.file#=b.file#
and f.ts#=ts.ts#
group by f.file#
)
- Credentials : dbsnmp/*****
- Threshold Operator > Warning 24 Critical 48
- Repeat every 1 hour
- OK



Do the same for the redo logs, with a metric named "Last redolog backup" and a query of:

select (sysdate-max(NEXT_TIME))*24 from v$BACKUP_REDOLOG


It is now possible to define alerts.
- Preferences
- Notification Rules
- Create
- Apply to specific targets : Add your production databases group
- Deselect Availability Down
- Metric: Add : Show all: Check User defined metric : Select : Last datafile backup , Last redolog backup
- Severity : Critical and Clear
- Policy : None
- Method : Email



======================

select
to_char(max(completion_time) ,'DDMMYYHH24MISS') lastbackup
from (SELECT completion_time
FROM v$backup_set
UNION
SELECT completion_time
FROM v$datafile_copy
union
select sysdate-365 from dual
)

Warning 0
Critical 320000000000


===========

SELECT
ELAPSED_SECONDS/60 minutes
FROM V$RMAN_BACKUP_JOB_DETAILS
ORDER BY SESSION_KEY desc;

Warning 30
Critical 60
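Since a user-defined metric with single-value output expects one row, a variant that returns only the most recent backup job's elapsed time (a sketch) is:

SELECT elapsed_seconds/60 minutes
FROM (SELECT elapsed_seconds
FROM v$rman_backup_job_details
ORDER BY session_key DESC)
WHERE ROWNUM = 1;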

Saturday, June 27, 2009

OEM Backup Error Solution

OEM Backup Error
--------------------


Recovery Manager: Release 10.2.0.3.0 - Production on Sat Jun 27 11:24:24 2009

Copyright (c) 1982, 2005, Oracle. All rights reserved.


RMAN>

connected to target database: SNS0910 (DBID=45805873)

RMAN>

connected to recovery catalog database

RMAN>

echo set on


RMAN> set command id to 'BACKUP_SNS0910.UNI_062709112403';

executing command: SET COMMAND ID


RMAN> backup device type disk tag 'BACKUP_SNS0910.UNI_062709112403' database;

Starting backup at 27-JUN-09

allocated channel: ORA_DISK_1

channel ORA_DISK_1: sid=509 devtype=DISK

channel ORA_DISK_1: starting compressed full datafile backupset

channel ORA_DISK_1: specifying datafile(s) in backupset

input datafile fno=00005 name=E:\SNSD0910\USERS01.ORA

input datafile fno=00004 name=E:\SNSD0910\INDEX01.ORA

input datafile fno=00002 name=E:\SNSD0910\UNDOTBS01.ORA

input datafile fno=00003 name=E:\SNSD0910\SYSAUX01.ORA

input datafile fno=00001 name=E:\SNSD0910\SYSTEM01.ORA

channel ORA_DISK_1: starting piece 1 at 27-JUN-09

channel ORA_DISK_1: finished piece 1 at 27-JUN-09

piece handle=E:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\SNS0910TEST\6UKIKHFC_1_1 tag=BACKUP_SNS0910.UNI_062709112403 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:19:15

Finished backup at 27-JUN-09



Starting Control File and SPFILE Autobackup at 27-JUN-09

piece handle=E:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\SNS0910TEST\CONTROL\C-45805873-20090627-01 comment=NONE

Finished Control File and SPFILE Autobackup at 27-JUN-09



RMAN> backup device type disk tag 'BACKUP_SNS0910.UNI_062709112403' archivelog all not backed up;

Starting backup at 27-JUN-09

current log archived

using channel ORA_DISK_1

RMAN-00571: ===========================================================

RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============

RMAN-00571: ===========================================================

RMAN-03002: failure of backup command at 06/27/2009 11:43:48

RMAN-06059: expected archived log not found, lost of archived log compromises recoverability

ORA-19625: error identifying file D:\ARCHIVE0910\ARC00928_0681409713.001

ORA-27041: unable to open file

OSD-04002: unable to open file

O/S-Error: (OS 2) The system cannot find the file specified.


RMAN> exit;

Recovery Manager complete.



---------------------------------------
Solutions:

-------------Solution Through OEM------

OEM > Maintenance > Manage Current Backup>
Backup Sets
Crosscheck > Validate
Catalog Additional Files > Crosscheck All > Delete All Obsolete > Delete All Expired

Image Copies
Crosscheck > Validate
Catalog Additional Files > Crosscheck All > Delete All Obsolete > Delete All Expired

-----------------Solution Through command prompt----------

RMAN > crosscheck archivelog all;

If error is still there then follow following steps

RMAN> crosscheck copy of archivelog all;
RMAN> crosscheck archivelog all;
RMAN> resync catalog;
RMAN> delete force obsolete;
RMAN> delete expired archivelog all;

---------------------------------------

Tuesday, June 23, 2009

ORACLE EXPORT/IMPORT UTILITY - DATA PUMP

==================ROLES======================================


SQLPLUSW sys/linux@sns0910srv as sysdba;

create user dpuser identified by dpuser;

grant connect, resource to dpuser;

CREATE DIRECTORY dpump_dir1 AS 'E:\oracle\product\10.2.0\flash_recovery_area\sns0910test\dp';

GRANT create session, create table to dpuser;

GRANT EXP_FULL_DATABASE,IMP_FULL_DATABASE to dpuser;

grant read, write on directory dpump_dir1 to dpuser;



=======================Init.ora parameters======================

Init.ora parameters that affect the performance of Data Pump:

Oracle recommends the following settings to improve performance.

Disk_Asynch_io= true

Db_block_checking=false

Db_block_checksum=false
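A minimal sketch of setting these from SQL*Plus, assuming the instance uses an spfile (DISK_ASYNCH_IO is a static parameter, so a restart is needed for it to take effect):

ALTER SYSTEM SET disk_asynch_io = TRUE SCOPE = SPFILE;
ALTER SYSTEM SET db_block_checking = FALSE SCOPE = SPFILE;
ALTER SYSTEM SET db_block_checksum = FALSE SCOPE = SPFILE;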

=========================================
FULL=Y

expdp dpuser/dpuser@sns0910srv full=Y directory=dpump_dir1 dumpfile=DB10G.dmp logfile=expdpDB10G.log

impdp dpuser/dpuser@sns0910srv full=Y directory=dpump_dir1 dumpfile=DB10G.dmp logfile=impdpDB10G.log

=========================================
SCHEMAS=schema, schema, schema…

expdp dpuser/dpuser@sns0910srv SCHEMAS=LDBO directory=dpump_dir1 dumpfile=DB10G.dmp logfile=expdpDB10G.log

impdp dpuser/dpuser@sns0910srv SCHEMAS=LDBO directory=dpump_dir1 dumpfile=DB10G.dmp logfile=impdpDB10G.log

============

TABLES=[schemas].tablename, [schemas].tablename,…

expdp dpuser/dpuser@sns0910srv TABLES=LDBO.ACCOUNTS directory=dpump_dir1 dumpfile=LDBO.dmp logfile=expdpLDBO.log

impdp dpuser/dpuser@sns0910srv TABLES=LDBO.ACCOUNTS directory=dpump_dir1 dumpfile=DB10G.dmp logfile=impdpDB10G.log


===============

TABLESPACES=tablespacename, tablespacename, tablespacename…

TRANSPORT TABLESPACES=tablespacename…

================

Data Pump performance can be improved by using the PARALLEL parameter. This should be used in conjunction with the "%U" wildcard in the DUMPFILE parameter to allow multiple dump files to be created or read:

expdp dpuser/dpuser@sns0910srv schemas=LDBO directory=dpump_dir1 parallel=4 dumpfile=LDBO_%U.dmp logfile=expdpLDBO.log

==============

expdp dpuser/dpuser@sns0910srv schemas=LDBO include=TABLE:"IN ('ACCOUNTS', 'ACCOUNTADDRESSDETAIL')" directory=dpump_dir1 dumpfile=LDBO.dmp logfile=expdpLDBO.log

expdp dpuser/dpuser@sns0910srv schemas=LDBO exclude=TABLE:"= ''" directory=dpump_dir1 dumpfile=LDBO.dmp logfile=expdpLDBO.log

=================

SELECT * FROM dba_datapump_jobs;
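To interact with a running job you can also attach to it by name (a sketch; DB_EXPORT is just an example job name taken from the API example further down, and for command-line jobs the actual name comes from the query above or from the JOB_NAME parameter):

expdp dpuser/dpuser@sns0910srv attach=DB_EXPORT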

==================ERROR=====

ORA-31631: privileges are required
ORA-39109: Unprivileged users may not operate upon other users' schemas

Solution:

SQL> GRANT create session, create table to dpuser;

SQL> GRANT EXP_FULL_DATABASE,IMP_FULL_DATABASE to dpuser;



====================Data Pump API======================

SET SERVEROUTPUT ON SIZE 1000000
DECLARE
l_dp_handle NUMBER;
l_last_job_state VARCHAR2(30) := 'UNDEFINED';
l_job_state VARCHAR2(30) := 'UNDEFINED';
l_sts KU$_STATUS;
BEGIN
l_dp_handle := DBMS_DATAPUMP.open(
operation => 'EXPORT',
job_mode => 'FULL',
remote_link => NULL,
job_name => 'DB_EXPORT',
version => 'COMPATIBLE');

DBMS_DATAPUMP.add_file(
handle => l_dp_handle,
filename => 'LDBO.dmp',
directory => 'dpump_dir1');

DBMS_DATAPUMP.add_file(
handle => l_dp_handle,
filename => 'LDBO.log',
directory => 'dpump_dir1',
filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

DBMS_DATAPUMP.metadata_filter(
handle => l_dp_handle,
name => 'SCHEMA_EXPR',
value => '= ''LDBO''');

DBMS_DATAPUMP.start_job(l_dp_handle);

DBMS_DATAPUMP.detach(l_dp_handle);
END;
/


----------------ERROR---------------------------
ORA-39001: invalid argument value
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 2926
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3162
ORA-06512: at line 14

Friday, June 19, 2009

Configure Oracle E-MAIL notification for DB shutdown or startup events Manually

Configure Oracle E-MAIL notification for DB shutdown or startup events Manually

conn sys/linux@sns0910srv as sysdba

@E:\oracle\product\10.2.0\db_1\RDBMS\ADMIN\utlmail.sql
@E:\oracle\product\10.2.0\db_1\RDBMS\ADMIN\prvtmail.plb;


alter system set smtp_out_server = 'mail.uniconindia.in' scope=spfile;

shutdown immediate;
startup

grant execute on utl_mail to ldbo;
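Before creating the triggers, it may help to verify that UTL_MAIL can send mail at all. A quick test using the same addresses as the triggers below (a sketch, run as a user with EXECUTE on UTL_MAIL):

begin
sys.utl_mail.send(
sender => 'dbanotification@uniconindia.in',
recipients => 'dbamonitoring@uniconindia.in',
subject => 'UTL_MAIL test',
message => 'Test message from the database server.');
end;
/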



create or replace trigger ldbo.db_shutdown
before shutdown on database
begin
sys.utl_mail.send (
sender =>'dbanotification@uniconindia.in',
recipients =>'dbamonitoring@uniconindia.in',
subject => 'Oracle Database Server DOWN',
message => 'May be LD Server Down for maintenance'||
' but also contact to DBA for further details. '
);
end;
/


create or replace trigger ldbo.db_startup
after startup on database
begin
sys.utl_mail.send (
sender =>'dbanotification@uniconindia.in',
recipients =>'dbamonitoring@uniconindia.in',
subject => 'Oracle Database Server UP',
message => 'LD Server OPEN for normal use.'
);
end;
/

Tuesday, June 9, 2009

Oracle 10g Data Guard

Oracle 10g Data Guard

Oracle Data Guard ensures high availability, data protection, and disaster recovery for enterprise data. Data Guard provides a comprehensive set of services that create, maintain, manage, and monitor one or more standby databases to enable production Oracle databases to survive disasters and data corruptions. Data Guard maintains these standby databases as transactionally consistent copies of the production database. Then, if the production database becomes unavailable because of a planned or an unplanned outage, Data Guard can switch any standby database to the production role, minimizing the downtime associated with the outage. Data Guard can be used with traditional backup, restoration, and cluster techniques to provide a high level of data protection and data availability.

Data Guard Configurations:

A Data Guard configuration consists of one production database and one or more standby databases. The databases in a Data Guard configuration are connected by Oracle Net and may be dispersed geographically. There are no restrictions on where the databases are located, provided they can communicate with each other. For example, you can have a standby database on the same system as the production database, along with two standby databases on other systems at remote locations.

You can manage primary and standby databases using the SQL command-line interfaces or the Data Guard broker interfaces, including a command-line interface (DGMGRL) and a graphical user interface that is integrated in Oracle Enterprise Manager.

Primary Database

A Data Guard configuration contains one production database, also referred to as the primary database, that functions in the primary role. This is the database that is accessed by most of your applications.

The primary database can be either a single-instance Oracle database or an Oracle Real Application Clusters database.

Standby Database

A standby database is a transactionally consistent copy of the primary database. Using a backup copy of the primary database, you can create up to nine standby databases and incorporate them in a Data Guard configuration. Once created, Data Guard automatically maintains each standby database by transmitting redo data from the primary database and then applying the redo to the standby database.

Similar to a primary database, a standby database can be either a single-instance Oracle database or an Oracle Real Application Clusters database.

A standby database can be either a physical standby database or a logical standby database:

Physical standby database

Provides a physically identical copy of the primary database, with on disk database structures that are identical to the primary database on a block-for-block basis. The database schema, including indexes, are the same. A physical standby database is kept synchronized with the primary database by recovering the redo data received from the primary database.

Logical standby database

Contains the same logical information as the production database, although the physical organization and structure of the data can be different. The logical standby database is kept synchronized with the primary database by transforming the data in the redo received from the primary database into SQL statements and then executing the SQL statements on the standby database. A logical standby database can be used for other business purposes in addition to disaster recovery requirements. This allows users to access a logical standby database for queries and reporting purposes at any time. Also, using a logical standby database, you can upgrade Oracle Database software and patch sets with almost no downtime. Thus, a logical standby database can be used concurrently for data protection, reporting, and database upgrades.

Data Guard Services

The following sections explain how Data Guard manages the transmission of redo data, the application of redo data, and changes to the database roles:

Log Transport Services

Control the automated transfer of redo data from the production database to one or more archival destinations.

Log Apply Services

Apply redo data on the standby database to maintain transactional synchronization with the primary database. Redo data can be applied either from archived redo log files, or, if real-time apply is enabled, directly from the standby redo log files as they are being filled, without requiring the redo data to be archived first at the standby database.

Role Management Services

Change the role of a database from a standby database to a primary database or from a primary database to a standby database using either a switchover or a failover operation.

A database can operate in one of the two mutually exclusive roles: primary or standby database.

  • Failover

During a failover, one of the standby databases takes the primary database role.

  • Switchover

Primary and standby databases can continue to alternate roles. The primary database can switch its role to that of a standby database, and one of the standby databases can switch roles to become the primary.
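The role transitions themselves are driven with SQL statements such as the following (a sketch of the usual sequence for a physical standby, not taken from this post; the failover variant assumes the primary is no longer available):

-- Switchover: first on the primary, then on the standby
ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

-- Failover: on the standby only
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH FORCE;
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;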

The main difference between physical and logical standby databases is the manner in which log apply services apply the archived redo data:

For physical standby databases, Data Guard uses Redo Apply technology, which applies redo data on the standby database using standard recovery techniques of an Oracle database,

For logical standby databases, Data Guard uses SQL Apply technology, which first transforms the received redo data into SQL statements and then executes the generated SQL statements on the logical standby database
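For reference, the two apply services are started on the standby with different statements (a sketch; both are standard commands, and real-time apply additionally requires standby redo logs):

-- Physical standby: start Redo Apply in the background
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

-- Logical standby: start SQL Apply (add IMMEDIATE for real-time apply)
ALTER DATABASE START LOGICAL STANDBY APPLY;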

Data Guard Interfaces

Oracle provides three ways to manage a Data Guard environment:

1. SQL*Plus and SQL Statements

Using SQL*Plus and SQL commands to manage the Data Guard environment. The following SQL statement initiates a switchover operation:

SQL> alter database commit to switchover to physical standby;

2. Data Guard Broker GUI Interface (Data Guard Manager)

Data Guard Manager is a GUI version of the Data Guard broker interface that allows you to automate many of the tasks involved in configuring and monitoring a Data Guard environment.

3. Data Guard Broker Command-Line Interface (CLI)

It is an alternative interface to using the Data Guard Manager. It is useful if you want to use the broker from batch programs or scripts. You can perform most of the activities required to manage and monitor the Data Guard environment using the CLI.

The Oracle Data Guard broker is a distributed management framework that automates and centralizes the creation, maintenance, and monitoring of Data Guard configurations. The following are some of the operations that the broker automates and simplifies:

  • Automated creation of Data Guard configurations incorporating a primary database, a new or existing (physical or logical) standby database, log transport services, and log apply services, where any of the databases could be Real Application Clusters (RAC) databases.
  • Adding up to 8 additional new or existing (physical or logical, RAC, or non-RAC) standby databases to each existing Data Guard configuration, for a total of one primary database, and from 1 to 9 standby databases in the same configuration.
  • Managing an entire Data Guard configuration, including all databases, log transport services, and log apply services, through a client connection to any database in the configuration.
  • Invoking switchover or failover with a single command to initiate and control complex role changes across all databases in the configuration.
  • Monitoring the status of the entire configuration, capturing diagnostic information, reporting statistics such as the log apply rate and the redo generation rate, and detecting problems quickly with centralized monitoring, testing, and performance tools.

You can perform all management operations locally or remotely through the broker’s easy-to-use interfaces: the Data Guard web pages of Oracle Enterprise Manager, which is the broker’s graphical user interface (GUI), and the Data Guard command-line interface (CLI) called DGMGRL.

Configuring Oracle DataGuard using SQL commands - Creating a physical standby database

Step 1) Getting the primary database ready (on Primary host)

We are assuming that you are using an SPFILE for your current (primary) instance. You can check whether your instance is using an SPFILE by looking at the spfile parameter.

SQL> show parameter spfile

NAME TYPE VALUE
———————————— ———– ——————————
spfile string

You can create spfile as shown below. (on primary host)

SQL> create spfile from pfile;

File created.

The primary database must meet two conditions before a standby database can be created from it:

  1. It must be in force logging mode, and
  2. It must be in archive log mode (automatic archiving must also be enabled and a local archiving destination must be defined).

Before putting the database in force logging mode, check whether it is already in force logging mode using:

SQL> select force_logging from v$database;

FOR

NO

Also the database is not in archive log mode as shown below.

SQL> archive log list
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination d:/oracle/product/10.2.0/dbs/arch
Oldest online log sequence 10
Current log sequence 12

Now we will start the database in archive log and force logging mode.

Starting the database in archive log mode:

We need to set the following two parameters to put the database in archive log mode.

Log_archive_dest_1='Location=d:/oracle/product/10.2.0/archive/orcl'
log_archive_format = "ARCH_%r_%t_%s.ARC"

If a database is in force logging mode, all changes, except those in temporary tablespaces, will be logged, independently of any NOLOGGING specification.

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 1073741824 bytes
Fixed Size 1984184 bytes
Variable Size 750786888 bytes
Database Buffers 314572800 bytes
Redo Buffers 6397952 bytes
Database mounted.

SQL> alter database archivelog;

Database altered.

SQL> alter database force logging;

Database altered.

SQL> alter database open;

Database altered.

So now our primary database is in archive log mode and in force logging mode.

SQL> select log_mode, force_logging from v$database;

LOG_MODE FOR
———— —
ARCHIVELOG YES

init.ora file for primary

control_files = d:/oracle/product/10.2.0/oradata_orcl/orclcontrol.ctl
db_name = orcl

db_domain = UNICON.COM

db_block_size = 8192
pga_aggregate_target = 250M

processes = 300
sessions = 300
open_cursors = 1024

undo_management = AUTO

undo_tablespace = undotbs
compatible = 10.2.0

sga_target = 600M

nls_language = AMERICAN
nls_territory = AMERICA
background_dump_dest=d:/oracle/product/10.2.0/db_1/admin/orcl/bdump
user_dump_dest=d:/oracle/product/10.2.0/db_1/admin/orcl/udump
core_dump_dest=d:/oracle/product/10.2.0/db_1/admin/orcl/cdump
db_unique_name='PRIMARY'
Log_archive_dest_1='Location=d:/oracle/product/10.2.0/archive/orcl'
Log_archive_dest_state_1=ENABLE

Step 2) Creating the standby database

Since we are creating a physical standby database, we have to copy all the datafiles of the primary database to the standby location. For that, you need to shut down the primary database, copy its files to the new location, and start the primary database again.
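To see exactly which datafiles have to be copied, a quick query on the primary (a sketch; temporary files can simply be recreated on the standby afterwards):

select name from v$datafile;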

Step 3) Creating a standby database control file

A control file needs to be created for the standby system. Execute the following on the primary system:

SQL> alter database create standby controlfile as 'd:/oracle/product/10.2.0/dbf/standby.ctl';

Database altered.

Step 4) Creating an init file

SQL> show parameters spfile

NAME TYPE VALUE
———————————— ———– ——————————
spfile string d:/oracle/product/10.2.0/dbs/s
pfiletest.ora

Step 5) Changing init.ora file for standby database

A pfile is created from the spfile. This pfile needs to be modified and then be used on the standby system to create an spfile from it. So create a pfile from spfile on primary database.

create pfile='/some/path/to/a/file' from spfile;

SQL> create pfile='d:/oracle/product/10.2.0/dbs/standby.ora' from spfile;

File created.

The following parameters must be modified or added:

  • control_files
  • standby_archive_dest
  • db_file_name_convert (only if directory structure is different on primary and standby server)
  • log_file_name_convert (only if directory structure is different on primary and standby server)
  • log_archive_format
  • log_archive_dest_1 — This value is used if a standby becomes primary during a switchover or a failover.
  • standby_file_management — Set to auto

init.ora parameters for standby

control_files = d:/oracle/product/10.2.0/oradata/orcl/standby_orcl.ctl
db_name = orcl
db_domain = UNICON.COM
db_block_size = 8192
pga_aggregate_target = 250M
processes = 300
sessions = 300
open_cursors = 1024
undo_management = AUTO

undo_tablespace = undotbs
compatible = 10.2.0
sga_target = 600M
nls_language = AMERICAN
nls_territory = AMERICA
background_dump_dest=d:/oracle/product/10.2.0/db_1/admin/orcl/bdump
user_dump_dest=d:/oracle/product/10.2.0/db_1/admin/orcl/udump
core_dump_dest=d:/oracle/product/10.2.0/db_1/admin/orcl/cdump
db_unique_name='STANDBY'
Log_archive_dest_1='Location=d:/oracle/product/10.2.0/archive/orcl'
Log_archive_dest_state_1=ENABLE
standby_archive_dest=d:/oracle/product/10.2.0/prim_archive

db_file_name_convert='d:/oracle/product/10.2.0/oradata','d:/oracle/product/10.2.0/oradata/orcl'
log_file_name_convert='d:/oracle/product/10.2.0/oradata','d:/oracle/product/10.2.0/oradata/orcl'

standby_file_management=auto

FAL_Client='to_standby'

Step 7) Creating the spfile on the standby database
set ORACLE_SID=orcl
sqlplus "/ as sysdba"

create spfile from pfile='/…/../modified-pfile';

Step 8) On the standby database
SQL> startup nomount pfile=standby.ora
ORACLE instance started.

Total System Global Area 1073741824 bytes
Fixed Size 1984184 bytes
Variable Size 754981192 bytes
Database Buffers 314572800 bytes
Redo Buffers 2203648 bytes

SQL> alter database mount standby database;

Database altered.

Add following parameters to standby side

FAL_Client='to_standby'
FAL_Server='to_primary'
Log_archive_dest_2='Service=to_primary VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) db_unique_name=primary'
Log_archive_dest_state_2=ENABLE
remote_login_passwordfile='SHARED'

Add following parameters to primary side

Log_archive_dest_2='Service=to_standby lgwr'
Log_archive_dest_state_2=ENABLE
Standby_File_Management='AUTO'
REMOTE_LOGIN_PASSWORDFILE='SHARED'

Create password file on both sides

orapwd file=d:/oracle/product/10.2.0/db_1/dbs/orapworcl.ora password=oracle entries=5 force=y

FTP the password file to standby location

Step 9) Configuring the listener

Creating net service names

Net service names must be created on both the primary and standby databases to be used by log transport services. That is, something like the following lines must be added to tnsnames.ora.

Setup listener configuration

On Primary:

ORCL=
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
)
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = ORANOUP02P.UNICON.COM)(PORT = 1520))
)
)
)

SID_LIST_ORCL=
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = d:/oracle/product/10.2.0/db_1)
(PROGRAM = extproc)
)
(SID_DESC =
(GLOBAL_DBNAME = orcl)
(ORACLE_HOME = d:/oracle/product/10.2.0/db_1)
(SID_NAME = orcl)
)
)

On Standby:

SID_LIST_orcl=
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = d:/oracle/product/10.2.0/db_1)
(PROGRAM = extproc)
)
(SID_DESC =
(GLOBAL_DBNAME = orcl)
(ORACLE_HOME = d:/oracle/product/10.2.0/db_1)
(SID_NAME = orcl)
)
)

orcl =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
(ADDRESS = (PROTOCOL = TCP)(HOST = ORANOUP02T)(PORT = 1538))
)
)

TNSNAMES settings

On Primary:

orcl =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = ORANOUP02P.UNICON.COM)(PORT = 1520))
)
(CONNECT_DATA =
(SERVICE_NAME = orcl)
)
)

TO_STANDBY =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = ORANOUP02T.UNICON.COM)(PORT = 1538))
)
(CONNECT_DATA =
(SERVICE_NAME = orcl)
)
)

On standby:

TO_PRIMARY =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = ORANOUP02P.UNICON.COM)(PORT = 1520))
)
(CONNECT_DATA =
(SERVICE_NAME = orcl)
)
)

orcl =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = ORANOUP02T.UNICON.COM)(PORT = 1538))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = orcl)
)
)

On the standby side, run the following commands:

SQL> startup nomount pfile=standby_orcl.ora
ORACLE instance started.

Total System Global Area 629145600 bytes
Fixed Size 1980744 bytes
Variable Size 171968184 bytes
Database Buffers 452984832 bytes
Redo Buffers 2211840 bytes

SQL> alter database mount standby database;

Database altered.

Try to connect to stand by database from primary database

Following connections should work now
From Primary host:

sqlplus sys/oracle@orcl as sysdba –> This will connect to primary database
sqlplus sys/oracle@to_standby as sysdba –> This will connect to standby database from primary host

From Standby host

sqlplus sys/oracle@orcl as sysdba –> This will connect to standby database
sqlplus sys/oracle@to_primary as sysdba –> This will connect to primary database from standby host

LOG SHIPPING

On PRIMARY site enable Log_archive_dest_state_2 to start shipping archived redo logs.

SQL> Alter system set Log_archive_dest_state_2=ENABLE scope=both;

System Altered.

Check the sequence # and the archiving mode by executing following command.

SQL> Archive Log List

Then switch the logfile on primary side

SQL> Alter system switch logfile;

System Altered.

Start the physical log apply service on the standby side.

SQL> Alter Database Recover Managed Standby Database Disconnect;

Database Altered.

Now the session will be available to you and MRP will work as a background process and apply the redo logs.

You can check whether the log is applied or not by querying V$ARCHIVED_LOG.

SQL> Select Name, Applied, Archived from v$Archived_log;

This query will return the name of archived files and their status of being archived and applied.
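Another quick check on the standby side (a sketch, not from the original post): v$managed_standby shows the RFS and MRP processes and the log sequence they are currently working on.

select process, status, thread#, sequence#, block#
from v$managed_standby;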

Once you complete the above step, you are done with the physical standby. We can verify which log files got applied using the following commands.

On Standby side

SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

SEQUENCE# FIRST_TIM NEXT_TIME
———- ——— ———
1 11-JUN-09 12-JUN-09
2 12-JUN-09 12-JUN-09
3 12-JUN-09 12-JUN-09
4 12-JUN-09 12-JUN-09
5 12-JUN-09 12-JUN-09
6 12-JUN-09 12-JUN-09
7 12-JUN-09 12-JUN-09
8 12-JUN-09 12-JUN-09
9 12-JUN-09 12-JUN-09

9 rows selected.

On Primary side

SQL> Select Status, Error from v$Archive_dest where dest_id=2;

STATUS ERROR
——— —————————————————————–
VALID

SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;

System altered.

On Standby side

SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

SEQUENCE# FIRST_TIM NEXT_TIME
———- ——— ———
1 11-JUN-09 12-JUN-09
2 12-JUN-09 12-JUN-09
3 12-JUN-09 12-JUN-09
4 12-JUN-09 12-JUN-09
5 12-JUN-09 12-JUN-09
6 12-JUN-09 12-JUN-09
7 12-JUN-09 12-JUN-09
8 12-JUN-09 12-JUN-09
9 12-JUN-09 12-JUN-09
10 12-JUN-09 12-JUN-09

10 rows selected.

As you can see, after switching the archive log on the primary side, another archive log file got applied to the standby database.

Again on primary

SQL> alter system switch logfile;

System altered.

SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

SEQUENCE# FIRST_TIM NEXT_TIME
———- ——— ———
1 11-JUN-09 12-JUN-09
2 12-JUN-09 12-JUN-09
3 12-JUN-09 12-JUN-09
4 12-JUN-09 12-JUN-09
5 12-JUN-09 12-JUN-09
6 12-JUN-09 12-JUN-09
7 12-JUN-09 12-JUN-09
8 12-JUN-09 12-JUN-09
9 12-JUN-09 12-JUN-09
10 12-JUN-09 12-JUN-09
11 12-JUN-09 12-JUN-09

11 rows selected.

Oracle- OEM - Export Database / Table / Schema / Tablespace (Data Pump)

1)
user: snsexport
password: snsexp
role: exp_full_database

2)
Now try to export the database by logging in as the above user.


Errors: ORA-31626: job does not exist ORA-31633: unable to create master table "SNSEXPORT.EXPORTTEST" ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95 ORA-06512: at "SYS.KUPV$FT", line 863 ORA-00955: name is already used by an existing object

Exception : ORA-31626: job does not exist ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79 ORA-06512: at "SYS.DBMS_DATAPUMP", line 911 ORA-06512: at "SYS.DBMS_DATAPUMP", line 4356 ORA-06512: at line 2


Solution:
Add the following privileges to the snsexport user (a single GRANT statement covering them all is shown after the list):


CREATE SESSION
BACKUP ANY TABLE
SELECT ANY TABLE
SELECT ANY SEQUENCE
EXECUTE ANY PROCEDURE
CREATE ANY DIRECTORY
EXECUTE ANY TYPE
ADMINISTER RESOURCE MANAGER
RESUMABLE
SELECT ANY DICTIONARY
READ ANY FILE GROUP
create table
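A sketch of the combined grant (assuming you are connected as SYS or another suitably privileged user):

grant create session, backup any table, select any table, select any sequence,
execute any procedure, create any directory, execute any type,
administer resource manager, resumable, select any dictionary,
read any file group, create table
to snsexport;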

-------------------------------------
3)
Now Following Error

ORA-20204: User does not exist: SNSEXPORT ORA-06512: at "SYSMAN.MGMT_USER", line 122 ORA-06512: at "SYSMAN.MGMT_JOBS", line 142 ORA-06512: at "SYSMAN.MGMT_JOBS", line 78 ORA-06512: at line 1


Solution
Add role MGMT_USER

--------------------------

4) ORA-20204: User does not exist: SNSEXPORT ORA-06512: at "SYSMAN.MGMT_USER", line 122 ORA-06512: at "SYSMAN.MGMT_JOBS", line 142 ORA-06512: at "SYSMAN.MGMT_JOBS", line 78 ORA-06512: at line 1

Solution:

add the user snsexport

-login as user SYSTEM (or user SYS) to the ‘Enterprise Manager 10g
Database Control’
- At the top right, click on the link ‘Setup’
- On the page ‘Administrators’, click on the button ‘Create’
- On the page ‘Create Administrator: Properties’, add the user snsexport
- Click on the button: ‘Finish’
- On the page ‘Create Administrator: Review’, click on the button: ‘Finish’
- On the page ‘Administrators’, confirm that the user has been added.
- At the top right, click on the link ‘Logout’
--------------------------

5) Now the problems are resolved.

Oracle - Create Standby / Duplicate / Test / Development / Clone Database

Steps to restore RMAN backup to different host, for example, RMAN backup from Production server to Test server.

Presumptions

* Production database: Server A.
* Standby and RMAN database: Server B.
* Test database: Server C
* tnsnames.ora

TEST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = test.unicon.com)(PORT = 1521))
)
(CONNECT_DATA =
(SID = test)
(SERVER = DEDICATED)
)
)

RMAN =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = stb.unicon.com)(PORT = 1521))
)
(CONNECT_DATA =
(SID = rman)
(SERVER = DEDICATED)
)
)

PRO =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = prod.unicon.com)(PORT = 1521))
)
(CONNECT_DATA =
(SID = prod)
(SERVER = DEDICATED)
)
)

STB =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = stb.unicon.com)(PORT = 1521))
)
(CONNECT_DATA =
(SID = stb)
(SERVER = DEDICATED)
)
)

* All servers have an identical directory structure.

1, Copy backup control file and init.ora file from Prod to Test.

2, Start Test server.

sqlplus /nolog
SQL>connect sys/linux@test as sysdba
SQL> startup nomount PFILE='D:\Oracle\admin\pfile\init.ora';



3. Copy RMAN backup files from Prod to Test, place them in the same directory as Prod.

4. Connect to RMAN

rman TARGET SYS/oracle@PROD CATALOG rman/rman@rman AUXILIARY SYS/linux@test

5, Duplicate database

RUN
{
allocate auxiliary channel c1 DEVICE TYPE disk;
DUPLICATE TARGET DATABASE to test nofilenamecheck ;
}

6, Duplicate database before current time

RUN
{
allocate auxiliary channel c1 DEVICE TYPE disk;
DUPLICATE TARGET DATABASE to test nofilenamecheck UNTIL TIME "TO_DATE('06/09/2009','MM/DD/YYYY')";
}

Saturday, June 6, 2009

Spool To Excel and Html

SPOOL to EXCEL

set head off;
set feed off;
set trimspool on;
set linesize 32767;
set pagesize 32767;
set echo off;
set termout off;
Spool c:\abc.xls
select * from dba_users;
Spool off
exit


SPOOL TO HTML


set pagesize 9999
set feedback off
SET TERMOUT on
SET NEWPAGE 1
SET UNDERLINE ON
set markup html on;
spool c:\abc.html
select * from dba_users;
Spool off
exit

Friday, June 5, 2009

OEM OS Host Credentials (User Authentication Error)

OEM OS Host Credentials (User Authentication Error)

Validation Error - Connection to host as user kgupta2 failed: ERROR: Wrong password for user

Solution
OEM > Preferences > Preferred Credentials > Target Type: Host.
Provide the host username and password as described in the following steps.


You have to provide the 'Log on as a batch job' privilege:

1. Go to control panel/administrative tools
a. click on "local security policy"
b. click on "local policies"
c. click on "user rights assignments"
d. double click on "log on as a batch job"
e. click on "add" and add the user that was entered in the "normal username" or "privileged username" section of the EM Console.

2. Go to the Preferences link in the EM GUI
a. click on Preferred Credentials (link on the left menu)
b. under "Target Type: Host" click on "set credentials"
c. enter the OS user who has logon as a batch job privilege into the "normal username" and "normal password" fields

3. Test the connection
a. while in the Set Credentials window, click on "Test"

Tuesday, June 2, 2009

Oracle-Rename database User

Rename Oracle Database User

1) exp owner=kgupta2
2) create user kshitij identified by rakesh;
3) DROP USER kgupta2 CASCADE;
4) imp FROMUSER=kgupta2 TOUSER=kshitij
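Spelled out as full commands (a sketch; the dump/log file names are placeholders and the password is masked):

exp system/***** owner=kgupta2 file=kgupta2.dmp log=exp_kgupta2.log
imp system/***** fromuser=kgupta2 touser=kshitij file=kgupta2.dmp log=imp_kshitij.log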

Oracle- Rename Database

Rename the Oracle Database

1) Full Database Backup

2) conn SYS/ORACLE AS SYSDBA

3) ALTER DATABASE BACKUP CONTROLFILE TO TRACE RESETLOGS;

4) Locate the latest trace file in your USER_DUMP_DEST directory (show parameter USER_DUMP_DEST) and rename it to something like dbrename.sql.

5) Edit dbrename.sql, remove all headers and comments, and change the database's name.

6) Change "CREATE CONTROLFILE REUSE ..." to "CREATE CONTROLFILE SET ...".

7) Shutdown the database (use SHUTDOWN NORMAL or IMMEDIATE, don't ABORT!)

8) Run dbrename.sql.

9) ALTER DATABASE RENAME GLOBAL_NAME TO new_db_name;

Saturday, May 23, 2009

Oracle - Change SID name after Creating Database

Change SID name after Creating Database

Recreate the Control file to achieve this .

1. SVRMGR>Alter Database backup controlfile to trace;
This will generate an ASCII trace file in the $USER_DUMP_DEST directory which will have the controlfile creation script.

2. Shut down the database and do a physical backup of all the datafiles, controlfiles, redo log files, archived redo log files, etc.

3. Rename the init<OLD_SID>.ora and config<OLD_SID>.ora files to init<NEW_SID>.ora and config<NEW_SID>.ora in $ORACLE_HOME/dbs. This is to prevent errors during database startup looking for the default 'pfile' names.

4. Rename the old controlfiles to, say, control01.old etc. This is so that a new controlfile is created rather than reusing the existing one.

5. Edit the controlfile creation script. It should read like:
Startup nomount;
Create Controlfile set Database 'NEW_SID' Resetlogs
......
;

6. Open your database:
alter database open resetlogs;
