Wednesday, November 11, 2009

Oracle File Formats

-------------------Backup Format-----
%c The copy number of the backup piece within a set of duplexed
backup pieces. If you did not duplex a backup, then this variable
is 1 for backup sets and 0 for proxy copies.
If duplexing is enabled (for example, with CONFIGURE ... BACKUP COPIES
or SET BACKUP COPIES), then the variable shows the copy number.
The maximum value for %c is 256.

%d The name of the database.

%D The current day of the month (in format DD)

%F Combination of DBID, day, month, year, and sequence into a unique
and repeatable generated name.

%M The month (format MM)

%n The name of the database, padded on the right with 'x' characters
to a total length of eight characters. (AKA: porn star alias name)
For example, if scott is the database name, then %n = scottxxx.

%p The piece number within the backup set. This value starts at 1
for each backup set and is incremented by 1 as each backup piece
is created. Note: If you specify PROXY, then the %p variable must
be included in the FORMAT string either explicitly or implicitly within %U.

%s The backup set number. This number is a counter in the control file that
is incremented for each backup set. The counter value starts at 1 and is
unique for the lifetime of the control file. If you restore a backup
control file, then duplicate values can result.
Also, CREATE CONTROLFILE initializes the counter back to 1.

%t The backup set time stamp, which is a 4-byte value derived as the
number of seconds elapsed since a fixed reference time.
The combination of %s and %t can be used to form a unique name for
the backup set.

%T The year, month, and day (YYYYMMDD)

%u An 8-character name constituted by compressed representations of
the backup set number and the time the backup set was created.

%U A convenient shorthand for %u_%p_%c that guarantees uniqueness in
generated backup filenames.
If you do not specify a format, RMAN uses %U by default.

%Y The year (YYYY)

%% Specifies the '%' character. e.g. %%Y translates to %Y.
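As a sketch of how these variables combine into a backup piece name (the substitution below is only illustrative; RMAN does the real expansion internally, and the database name, date, and set/piece/copy numbers here are made up):

```python
def expand_format(fmt, values):
    """Expand RMAN-style %x placeholders in a FORMAT string (illustration only)."""
    for key, value in values.items():
        fmt = fmt.replace("%" + key, str(value))
    return fmt

sample = {
    "d": "scott",     # database name
    "T": "20091111",  # year, month, day (YYYYMMDD)
    "s": 42,          # backup set number
    "p": 1,           # piece number within the set
    "c": 1,           # copy number (1 when not duplexed)
}

print(expand_format("%d_%T_s%s_p%p_c%c.bkp", sample))
# -> scott_20091111_s42_p1_c1.bkp
```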


------------ARCHIVELOG Format------

%s log sequence number

%S log sequence number, zero filled

%t thread number

%T thread number, zero filled

%a activation ID

%d database ID

%R resetlogs ID that ensures unique names are constructed for the archived log files across multiple incarnations of the database
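A sketch of how these variables combine into an archived log file name, assuming a hypothetical log_archive_format of 'ARC%S_%r.%T' and made-up thread/sequence/resetlogs values (the zero-fill widths here are just for illustration):

```python
def archived_log_name(thread, sequence, resetlogs_id):
    """Models log_archive_format = 'ARC%S_%r.%T':
    %S = zero-filled sequence, %r = resetlogs ID, %T = zero-filled thread."""
    return f"ARC{sequence:05d}_{resetlogs_id:010d}.{thread:03d}"

print(archived_log_name(1, 928, 681409713))
# -> ARC00928_0681409713.001
```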

---------------------------

Tuesday, October 6, 2009

how to recover deleted file from network drive

Issue: how to recover deleted file from network drive

Solution:

settings on Server

1. go the properties of Share Network Drive
2. Shadow Copies
3. Setting as per your requirement


Settings on Client

1. Locate the folder where the deleted file was stored (on the network), right-click the folder, and click Properties. The Properties dialog box will appear.

2. On the Previous Versions tab, double-click the most recent version of the folder that contains the file that you want to recover. A list of files that are stored in that previous version will appear.

3. Right-click the file that was deleted, click Copy, and then paste it to the folder where you want to restore it.

Windows terminate process forcefully


taskkill /f /im process-name.exe

/f - Forcefully terminates the process.

/im - Specifies the image name of the process to be terminated.

In order to kill all these processes, I made a batch file which contains the forceful termination command for each of these programs, and then I added the batch file to Windows startup.

1. Open Notepad and paste the following commands, one per line:

taskkill /f /im wmpnscfg.exe
taskkill /f /im ctfmon.exe
taskkill /f /im mobsync.exe



2. Save the file as terminate.bat or with any other name but with .bat extension.

3. Now Drag and Drop terminate.bat file in All Programs >> StartUp

4. Now restart Windows; all these useless programs which you used to kill manually will be terminated automatically.

Note: You can also place the terminate.bat file on the desktop and run it manually to kill all the useless processes.

Read more: http://www.technixupdate.com/terminate-programs-forcefully-in-windows/#ixzz0QE2POIGP

Friday, September 25, 2009

Oracle: Which user process locks the other process

SELECT pr.username "O/S Id",
ss.username "Oracle User Id",
ss.status "status",
ss.sid "Session Id",
ss.serial# "Serial No",
lpad(pr.spid,7) "Process Id",
substr(sqa.sql_text,1,900) "Sql Text",
First_load_time "Load Time"
FROM v$process pr, v$session ss, v$sqlarea sqa
WHERE pr.addr=ss.paddr
AND ss.username is not null
AND ss.sql_address=sqa.address(+)
AND ss.sql_hash_value=sqa.hash_value(+)
AND ss.status='ACTIVE'
ORDER BY 1,2,7 ;
Spool Out ;


=============

set lin 132
set pages 66
column "SID" format 999
column "SER" format 99999
column "Table" format A10
column "SPID" format A5
column "CPID" format A5
column "OS User" format A7
column "Table" format A10
column "SQL Text" format A40 word_wrapped
column "Mode" format A20
column "Node" format A10
column "Terminal" format A8



rem spool /tmp/locks.lst

select
s.sid "SID",
s.serial# "SER",
o.object_name "Table",
s.osuser "OS User",
s.machine "Node",
s.terminal "Terminal",
--p.spid "SPID",
--s.process "CPID",
decode (s.lockwait, null, 'Have Lock(s)', 'Waiting for <' || b.sid || '>') "Mode",
substr (c.sql_text, 1, 150) "SQL Text"
from v$lock l,
v$lock d,
v$session s,
v$session b,
v$process p,
v$transaction t,
sys.dba_objects o,
v$open_cursor c
where l.sid = s.sid
and o.object_id (+) = l.id1
and c.hash_value (+) = s.sql_hash_value
and c.address (+) = s.sql_address
and s.paddr = p.addr
and d.kaddr (+) = s.lockwait
and d.id2 = t.xidsqn (+)
and b.taddr (+) = t.addr
and l.type = 'TM'
group by
o.object_name,
s.osuser,
s.machine,
s.terminal,
p.spid,
s.process,
s.sid,
s.serial#,
decode (s.lockwait, null, 'Have Lock(s)', 'Waiting for <' || b.sid || '>'),
substr (c.sql_text, 1, 150)
order by
decode (s.lockwait, null, 'Have Lock(s)', 'Waiting for <' || b.sid || '>') desc,
o.object_name asc,
s.sid asc;
rem spool off;

Saturday, August 8, 2009

Linux YUM: download all RPMs and dependencies from the Internet automatically

COPY file rpmforge.repo to /etc/yum.repos.d/

#rpmforge.repo


# Name: RPMforge RPM Repository for Red Hat Enterprise 5 - dag
# URL: http://rpmforge.net/
[rpmforge]
name = Red Hat Enterprise $releasever - RPMforge.net - dag
#baseurl = http://apt.sw.be/redhat/el5/en/$basearch/dag
mirrorlist = http://apt.sw.be/redhat/el5/en/mirrors-rpmforge
#mirrorlist = file:///etc/yum.repos.d/mirrors-rpmforge
enabled = 1
protect = 0
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rpmforge-dag
gpgcheck = 1

Wednesday, July 22, 2009

Linux DNS resolver

Not able to send mail to @jalindia.co.in due to a jalindia.co.in DNS issue: they use their own DNS to resolve jalindia.co.in.

They should use some other DNS server to resolve it.
Workaround: resolve it on our own DNS server, uniconindia.in.

============dns server 10.100.0.100============
create a zone file for jalindia.co.in on DNS server 10.100.0.100
add a zone entry for jalindia.co.in.zone to /etc/named.conf


vi /var/named/chroot/etc/named.conf
zone "jalindia.co.in" IN {
type master;
file "jalindia.co.in.zone";
allow-update { none; };
};



#cd /var/named/chroot/var/named/
#cp uniconstocks.com.zone jalindia.co.in.zone

#vi jalindia.co.in.zone

$TTL 86400
; a zone file will not load without an SOA record; the serial and
; timer values below are illustrative
jalindia.co.in. 86400 IN SOA primarydns.jalindia.co.in. hostmaster.jalindia.co.in. (
        2009072201 ; serial
        3600       ; refresh
        900        ; retry
        604800     ; expire
        86400 )    ; minimum

jalindia.co.in. 86400 IN NS primarydns.jalindia.co.in.
jalindia.co.in. 86400 IN NS secondarydns.jalindia.co.in.

jalindia.co.in. 86400 IN MX 1 jal-gate-svr1.jalindia.co.in.
jalindia.co.in. 86400 IN MX 2 jal-gate-svr2.jalindia.co.in.

jal-gate-svr1.jalindia.co.in. 86400 IN A 115.119.16.172
jal-gate-svr2.jalindia.co.in. 86400 IN A 115.119.16.168
primarydns.jalindia.co.in. 86400 IN A 10.10.10.10
secondarydns.jalindia.co.in. 86400 IN A 10.10.10.11



===========mailserver 10.100.0.77============
host jalindia.co.in
nslookup jalindia.co.in
dig jalindia.co.in
dig mx jalindia.co.in
dig jalindia.co.in
host jal-gate-svr2.jalindia.co.in
host jal-gate-svr1.jalindia.co.in
host primarydns.jalindia.co.in
host secondarydns.jalindia.co.in
vi /etc/resolv.conf
ssh 10.100.0.100
telnet jal-gate-svr1.jalindia.co.in 25

==============================
named-checkconf /etc/named.conf
named-checkconf /etc/named.conf
host jalindia.co.in
dig mx jalindia.co.in
telnet jal-gate-svr1.jalindia.co.in.
route
telnet 59.160.230.70 25
telnet 115.119.16.172 25
telnet 115.119.16.168 25
=============

Saturday, July 18, 2009

RMAN Recovery with Previous Incarnation

rman>list incarnation of database;
sql>shutdown immediate
sql>startup mount
rman>reset database to incarnation 3;
rman>restore database until scn 4312345;
rman>recover database until scn 4312345;
rman>list incarnation;
rman> alter database open resetlogs;
rman>list incarnation;

Friday, July 17, 2009

ASP Pages Cache problem

Don't Cache ASP pages

================
LD ASP Error:
We hid the client, but its data is still fetched from some branches.
On the LD ASP server itself, however, no data is fetched for that hidden client.

Reason:
The data is coming from cache.


=================IIS Settings================

iis > Http Header > http headers enable content expiration

==========
ISAPI applications (Active Server Pages web pages) can be cached on Internet Information Server. Use these steps to disable caching:

inetmgr > iis > website >home directory > virtual directory > Configuration > cache isapi application

This stops IIS from caching your ASP pages. But this alone is not enough. At the top of each .asp page that you do not want cached, add the following line: <% Response.Expires=0 %>

==========

Besides stopping and restarting both the web site and the application pool, we've also stopped and restarted the IIS Admin Service and the WWW Publishing Service.



====================ASP Page================
<%
Response.Expires = 0
Response.Expiresabsolute = Now() - 1
Response.AddHeader "pragma","no-cache"
Response.AddHeader "cache-control","private"
Response.CacheControl = "no-cache"
%>

===============Meta Tags===============

<meta http-equiv="Expires" content="0">
<meta http-equiv="Pragma" content="no-cache">
<meta http-equiv="Cache-Control" content="no-cache">

===================Registry Settings=================

HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters\UriEnabledCache=0

================

If the problem is still not solved, then it is due to the networking stack between the client browser and the server: most likely a proxy or a browser-side cache.

=============Browser-side Cache===========
Retain server results in a browser-side cache. The cache holds query-result pairs: the queries are the cache keys and the server results are the cache values. So, whenever the browser is about to perform an XMLHttpRequest call, it first checks the cache. If the query is held as a key in the cache, the corresponding value is used as the result, and there is no need to access the server.
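A minimal sketch of that query-result cache (fetch_from_server is a made-up stand-in for the real XMLHttpRequest round-trip):

```python
cache = {}  # query -> server result

def fetch_from_server(query):
    # placeholder for the real server round-trip
    return f"result for {query}"

def cached_request(query):
    if query in cache:                   # cache hit: no server access
        return cache[query]
    result = fetch_from_server(query)    # cache miss: go to the server
    cache[query] = result                # remember it for next time
    return result

cached_request("clients?branch=LD")   # miss: hits the server
cached_request("clients?branch=LD")   # hit: served from the cache
```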



==========

ipconfig /flushdns


Modify the behavior of the Microsoft Windows DNS caching algorithm by setting two registry entries in the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Dnscache\Parameters registry key.

The MaxCacheTtl entry represents the maximum time that the results of a DNS lookup will be cached. The default value is 86,400 seconds (24 hours). If you set this value to 1, DNS entries will only be cached for a single second.

MaxNegativeCacheTtl represents the maximum time that the results of a failed DNS lookup will be cached. The default value is 900 seconds. If you set this value to 0, failed DNS lookups will not be cached.

==========



Wednesday, July 8, 2009

Recover Database with missing Archive Logs


I am trying to restore my old database but, due to one missing archive log, I am not able to restore and recover it from RMAN.

Not able to open database

ORA-19870, ORA-19505, ORA-27041, OSD-04002

RMAN-00571, RMAN-00569, RMAN-03002, RMAN-06053, RMAN-06025


--------------A---------------------------------------
crosscheck copy of archivelog all;
crosscheck archivelog all;
resync catalog;
delete force obsolete;
delete expired archivelog all;


Note: not able to resync, because we cannot connect to the recovery catalog while the database is not open.

---------------B----------------------------------------
Point in time Recovery
-------------
1)
restore database UNTIL TIME "TO_DATE('03/27/09 10:05:00','MM/DD/YY HH24:MI:SS')";
recover database UNTIL TIME "TO_DATE('03/27/09 10:05:00','MM/DD/YY HH24:MI:SS')";

RMAN-03002, RMAN-20207

2)
restore database until scn 1000;
recover database until scn 1000;

3)
restore database until sequence 923 thread 1;
recover database until sequence 923 thread 1;


Note: could not recover because of the missing sequence.


---------------C----------------------------------------

list incarnation;
reset database to incarnation inc_key;

restore database until sequence 923 thread 1;
recover database until sequence 923 thread 1;


Note: could not recover because of the missing sequence.

--------------D----------------------------------------
alter database backup controlfile to trace as 'c:\newcontrolfile.txt';

create a new control file from the trace generated above.


--------------E-FINAL SOLUTION---------------------------------------

shutdown immediate;
add _allow_resetlogs_corruption=true to init.ora (undocumented parameter; use only as a last resort)
startup mount;
sql>recover database until cancel using backup controlfile;

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}

CANCEL

alter database open resetlogs;








===============================================================


Monday, June 29, 2009

OEM Backup Notification

If the backup fails or stops, then I will get a notification.

Steps :
- Open a database target (SNS0910)
- Click on User-Defined-Metrics
- Create
- Metric Name = Last datafile backup
- Type = Number
- Output = Single Value
- SQL Query : the time in hours since the oldest checkpoint time of the newest backup


select (sysdate-min(t))*24 from
(
select max(b.CHECKPOINT_TIME) t
from v$backup_datafile b, v$tablespace ts, v$datafile f
where INCLUDED_IN_DATABASE_BACKUP='YES'
and f.file#=b.file#
and f.ts#=ts.ts#
group by f.file#
)
- Credentials : dbsnmp/*****
- Threshold Operator > Warning 24 Critical 48
- Repeat every 1 hour
- OK
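A sketch of how the metric value maps to OEM alert levels with the thresholds above (operator ">", warning 24, critical 48); the function name and sample values are just for illustration:

```python
def alert_level(hours_since_backup, warning=24, critical=48):
    """Classify the metric value (hours since the newest backup's
    oldest checkpoint time) against the warning/critical thresholds."""
    if hours_since_backup > critical:
        return "CRITICAL"
    if hours_since_backup > warning:
        return "WARNING"
    return "OK"

print(alert_level(12))   # backed up within a day -> OK
print(alert_level(30))   # more than 24 hours old -> WARNING
print(alert_level(50))   # more than 48 hours old -> CRITICAL
```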



Same for redo logs, with a metric name of "Last redolog backup" and a query of:

select (sysdate-max(NEXT_TIME))*24 from v$BACKUP_REDOLOG


It is now possible to define alerts.
- Preferences
- Notification Rules
- Create
- Apply to specific targets : Add your production databases group
- Deselect Availability Down
- Metric: Add : Show all: Check User defined metric : Select : Last datafile backup , Last redolog backup
- Severity : Critical and Clear
- Policy : None
- Method : Email



======================

select
to_char(max(completion_time) ,'DDMMYYHH24MISS') lastbackup
from (SELECT completion_time
FROM v$backup_set
UNION
SELECT completion_time
FROM v$datafile_copy
union
select sysdate-365 from dual
)

Warning 0
Critical 320000000000


===========

SELECT
ELAPSED_SECONDS/60 minutes
FROM V$RMAN_BACKUP_JOB_DETAILS
ORDER BY SESSION_KEY desc;

Warning 30
Critical 60

Saturday, June 27, 2009

OEM Backup Error Solution

OEM Backup Error
--------------------


Recovery Manager: Release 10.2.0.3.0 - Production on Sat Jun 27 11:24:24 2009

Copyright (c) 1982, 2005, Oracle. All rights reserved.


RMAN>

connected to target database: SNS0910 (DBID=45805873)

RMAN>

connected to recovery catalog database

RMAN>

echo set on


RMAN> set command id to 'BACKUP_SNS0910.UNI_062709112403';

executing command: SET COMMAND ID


RMAN> backup device type disk tag 'BACKUP_SNS0910.UNI_062709112403' database;

Starting backup at 27-JUN-09

allocated channel: ORA_DISK_1

channel ORA_DISK_1: sid=509 devtype=DISK

channel ORA_DISK_1: starting compressed full datafile backupset

channel ORA_DISK_1: specifying datafile(s) in backupset

input datafile fno=00005 name=E:\SNSD0910\USERS01.ORA

input datafile fno=00004 name=E:\SNSD0910\INDEX01.ORA

input datafile fno=00002 name=E:\SNSD0910\UNDOTBS01.ORA

input datafile fno=00003 name=E:\SNSD0910\SYSAUX01.ORA

input datafile fno=00001 name=E:\SNSD0910\SYSTEM01.ORA

channel ORA_DISK_1: starting piece 1 at 27-JUN-09

channel ORA_DISK_1: finished piece 1 at 27-JUN-09

piece handle=E:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\SNS0910TEST\6UKIKHFC_1_1 tag=BACKUP_SNS0910.UNI_062709112403 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:19:15

Finished backup at 27-JUN-09



Starting Control File and SPFILE Autobackup at 27-JUN-09

piece handle=E:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\SNS0910TEST\CONTROL\C-45805873-20090627-01 comment=NONE

Finished Control File and SPFILE Autobackup at 27-JUN-09



RMAN> backup device type disk tag 'BACKUP_SNS0910.UNI_062709112403' archivelog all not backed up;

Starting backup at 27-JUN-09

current log archived

using channel ORA_DISK_1

RMAN-00571: ===========================================================

RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============

RMAN-00571: ===========================================================

RMAN-03002: failure of backup command at 06/27/2009 11:43:48

RMAN-06059: expected archived log not found, lost of archived log compromises recoverability

ORA-19625: error identifying file D:\ARCHIVE0910\ARC00928_0681409713.001

ORA-27041: unable to open file

OSD-04002: unable to open file

O/S-Error: (OS 2) The system cannot find the file specified.


RMAN> exit;

Recovery Manager complete.



---------------------------------------
Solutions:

-------------Solution Through OEM------

OEM > Maintenance > Manage Current Backup>
Backup Sets
Crosscheck > Validate
Catalog Additional Files > Crosscheck All > Delete All Obsolete > Delete All Expired

Image Copies
Crosscheck > Validate
Catalog Additional Files > Crosscheck All > Delete All Obsolete > Delete All Expired

-----------------Solution Through command prompt----------

RMAN > crosscheck archivelog all;

If error is still there then follow following steps

RMAN> crosscheck copy of archivelog all;
RMAN> crosscheck archivelog all;
RMAN> resync catalog;
RMAN> delete force obsolete;
RMAN> delete expired archivelog all;

---------------------------------------

Tuesday, June 23, 2009

ORACLE EXPORT/IMPORT UTILITY - DATA PUMP

==================ROLES======================================


SQLPLUSW sys/linux@sns0910srv as sysdba;

create user dpuser identified by dpuser;

grant connect, resource to dpuser;

CREATE DIRECTORY dpump_dir1 AS 'E:\oracle\product\10.2.0\flash_recovery_area\sns0910test\dp';

GRANT create session, create table to dpuser;

GRANT EXP_FULL_DATABASE,IMP_FULL_DATABASE to dpuser;

grant read, write on directory dpump_dir1 to dpuser;



=======================Init.ora parameters======================

Init.ora parameters that affect the performance of Data Pump:

Oracle recommends the following settings to improve performance.

Disk_Asynch_io= true

Db_block_checking=false

Db_block_checksum=false

=========================================
FULL=Y

expdp dpuser/dpuser@sns0910srv full=Y directory=dpump_dir1 dumpfile=DB10G.dmp logfile=expdpDB10G.log

impdp dpuser/dpuser@sns0910srv full=Y directory=dpump_dir1 dumpfile=DB10G.dmp logfile=impdpDB10G.log

=========================================
SCHEMAS=schema, schema, schema…

expdp dpuser/dpuser@sns0910srv SCHEMAS=LDBO directory=dpump_dir1 dumpfile=DB10G.dmp logfile=expdpDB10G.log

impdp dpuser/dpuser@sns0910srv SCHEMAS=LDBO directory=dpump_dir1 dumpfile=DB10G.dmp logfile=impdpDB10G.log

============

TABLES=[schemas].tablename, [schemas].tablename,…

expdp dpuser/dpuser@sns0910srv TABLES=LDBO.ACCOUNTS directory=dpump_dir1 dumpfile=LDBO.dmp logfile=expdpLDBO.log

impdp dpuser/dpuser@sns0910srv TABLES=LDBO.ACCOUNTS directory=dpump_dir1 dumpfile=DB10G.dmp logfile=impdpDB10G.log


===============

TABLESPACES=tablespacename, tablespacename, tablespacename…

TRANSPORT TABLESPACES=tablespacename…

================

Data Pump performance can be improved by using the PARALLEL parameter. This should be used in conjunction with the "%U" wildcard in the DUMPFILE parameter to allow multiple dumpfiles to be created or read:

expdp dpuser/dpuser@sns0910srv schemas=LDBO directory=dpump_dir1 parallel=4 dumpfile=LDBO_%U.dmp logfile=expdpLDBO.log

==============

expdp dpuser/dpuser@sns0910srv schemas=LDBO include=TABLE:"IN ('ACCOUNTS', 'ACCOUNTADDRESSDETAIL')" directory=dpump_dir1 dumpfile=LDBO.dmp logfile=expdpLDBO.log

expdp dpuser/dpuser@sns0910srv schemas=LDBO exclude=TABLE:"= ''" directory=dpump_dir1 dumpfile=LDBO.dmp logfile=expdpLDBO.log

=================

SELECT * from dba_datapump_jobs

==================ERROR=====

ORA-31631: privileges are required
ORA-39109: Unprivileged users may not operate upon other users' schemas

solutions----

SQL> GRANT create session, create table to dpuser;

SQL> GRANT EXP_FULL_DATABASE,IMP_FULL_DATABASE to dpuser;



====================Data Pump API======================

SET SERVEROUTPUT ON SIZE 1000000
DECLARE
l_dp_handle NUMBER;
l_last_job_state VARCHAR2(30) := 'UNDEFINED';
l_job_state VARCHAR2(30) := 'UNDEFINED';
l_sts KU$_STATUS;
BEGIN
l_dp_handle := DBMS_DATAPUMP.open(
operation => 'EXPORT',
job_mode => 'FULL',
remote_link => NULL,
job_name => 'DB_EXPORT',
version => 'COMPATIBLE');

DBMS_DATAPUMP.add_file(
handle => l_dp_handle,
filename => 'LDBO.dmp',
directory => 'dpump_dir1');

DBMS_DATAPUMP.add_file(
handle => l_dp_handle,
filename => 'LDBO.log',
directory => 'dpump_dir1',
filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

DBMS_DATAPUMP.metadata_filter(
handle => l_dp_handle,
name => 'SCHEMA_EXPR',
value => '= ''LDBO''');

DBMS_DATAPUMP.start_job(l_dp_handle);

DBMS_DATAPUMP.detach(l_dp_handle);
END;
/


----------------ERROR---------------------------
ORA-39001: invalid argument value
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 2926
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3162
ORA-06512: at line 14

Friday, June 19, 2009

Configure Oracle E-MAIL notification for DB shutdown or startup events Manually


conn sys/linux@sns0910srv as sysdba

@E:\oracle\product\10.2.0\db_1\RDBMS\ADMIN\utlmail.sql
@E:\oracle\product\10.2.0\db_1\RDBMS\ADMIN\prvtmail.plb;


alter system set smtp_out_server = 'mail.uniconindia.in' scope=spfile;

shutdown immediate;
startup

grant execute on utl_mail to ldbo;



create or replace trigger ldbo.db_shutdown
before shutdown on database
begin
sys.utl_mail.send (
sender =>'dbanotification@uniconindia.in',
recipients =>'dbamonitoring@uniconindia.in',
subject => 'Oracle Database Server DOWN',
message => 'May be LD Server Down for maintenance'||
' but also contact to DBA for further details. '
);
end;
/


create or replace trigger ldbo.db_startup
after startup on database
begin
sys.utl_mail.send (
sender =>'dbanotification@uniconindia.in',
recipients =>'dbamonitoring@uniconindia.in',
subject => 'Oracle Database Server UP',
message => 'LD Server OPEN for normal use.'
);
end;
/

Tuesday, June 9, 2009

Oracle 10g Data Guard


Oracle Data Guard ensures high availability, data protection, and disaster recovery for enterprise data. Data Guard provides a comprehensive set of services that create, maintain, manage, and monitor one or more standby databases to enable production Oracle databases to survive disasters and data corruptions. Data Guard maintains these standby databases as transactionally consistent copies of the production database. Then, if the production database becomes unavailable because of a planned or an unplanned outage, Data Guard can switch any standby database to the production role, minimizing the downtime associated with the outage. Data Guard can be used with traditional backup, restoration, and cluster techniques to provide a high level of data protection and data availability.

Data Guard Configurations:

A Data Guard configuration consists of one production database and one or more standby databases. The databases in a Data Guard configuration are connected by Oracle Net and may be dispersed geographically. There are no restrictions on where the databases are located, provided they can communicate with each other. For example, you can have a standby database on the same system as the production database, along with two standby databases on other systems at remote locations.

You can manage primary and standby databases using the SQL command-line interfaces or the Data Guard broker interfaces, including a command-line interface (DGMGRL) and a graphical user interface that is integrated in Oracle Enterprise Manager.

Primary Database

A Data Guard configuration contains one production database, also referred to as the primary database, that functions in the primary role. This is the database that is accessed by most of your applications.

The primary database can be either a single-instance Oracle database or an Oracle Real Application Clusters database.

Standby Database

A standby database is a transactionally consistent copy of the primary database. Using a backup copy of the primary database, you can create up to nine standby databases and incorporate them in a Data Guard configuration. Once created, Data Guard automatically maintains each standby database by transmitting redo data from the primary database and then applying the redo to the standby database.

Similar to a primary database, a standby database can be either a single-instance Oracle database or an Oracle Real Application Clusters database.

A standby database can be either a physical standby database or a logical standby database:

Physical standby database

Provides a physically identical copy of the primary database, with on disk database structures that are identical to the primary database on a block-for-block basis. The database schema, including indexes, are the same. A physical standby database is kept synchronized with the primary database by recovering the redo data received from the primary database.

Logical standby database

Contains the same logical information as the production database, although the physical organization and structure of the data can be different. The logical standby database is kept synchronized with the primary database by transforming the data in the redo received from the primary database into SQL statements and then executing the SQL statements on the standby database. A logical standby database can be used for other business purposes in addition to disaster recovery requirements. This allows users to access a logical standby database for queries and reporting purposes at any time. Also, using a logical standby database, you can upgrade Oracle Database software and patch sets with almost no downtime. Thus, a logical standby database can be used concurrently for data protection, reporting, and database upgrades.

Data Guard Services

The following sections explain how Data Guard manages the transmission of redo data, the application of redo data, and changes to the database roles:

Log Transport Services

Control the automated transfer of redo data from the production database to one or more archival destinations.

Log Apply Services

Apply redo data on the standby database to maintain transactional synchronization with the primary database. Redo data can be applied either from archived redo log files, or, if real-time apply is enabled, directly from the standby redo log files as they are being filled, without requiring the redo data to be archived first at the standby database.

Role Management Services

Change the role of a database from a standby database to a primary database or from a primary database to a standby database using either a switchover or a failover operation.

A database can operate in one of the two mutually exclusive roles: primary or standby database.

  • Failover

During a failover, one of the standby databases takes the primary database role.

  • Switchover

Primary and standby database can continue to alternate roles. The primary database can switch the role to a standby database; and one of the standby databases can switch roles to become the primary.

The main difference between physical and logical standby databases is the manner in which log apply services apply the archived redo data:

For physical standby databases, Data Guard uses Redo Apply technology, which applies redo data on the standby database using standard recovery techniques of an Oracle database,

For logical standby databases, Data Guard uses SQL Apply technology, which first transforms the received redo data into SQL statements and then executes the generated SQL statements on the logical standby database

Data Guard Interfaces

Oracle provides three ways to manage a Data Guard environment:

1. SQL*Plus and SQL Statements

Using SQL*Plus and SQL commands to manage Data Guard environment.The following SQL statement initiates a switchover operation:

SQL> alter database commit to switchover to physical standby;

2. Data Guard Broker GUI Interface (Data Guard Manager)

Data Guard Manager is a GUI version of the Data Guard broker interface that allows you to automate many of the tasks involved in configuring and monitoring a Data Guard environment.

3. Data Guard Broker Command-Line Interface (CLI)

It is an alternative interface to using the Data Guard Manager. It is useful if you want to use the broker from batch programs or scripts. You can perform most of the activities required to manage and monitor the Data Guard environment using the CLI.

The Oracle Data Guard broker is a distributed management framework that automates and centralizes the creation, maintenance, and monitoring of Data Guard configurations. The following are some of the operations that the broker automates and simplifies:

  • Automated creation of Data Guard configurations incorporating a primary database, a new or existing (physical or logical) standby database, log transport services, and log apply services, where any of the databases could be Real Application Clusters (RAC) databases.
  • Adding up to 8 additional new or existing (physical or logical, RAC, or non-RAC) standby databases to each existing Data Guard configuration, for a total of one primary database, and from 1 to 9 standby databases in the same configuration.
  • Managing an entire Data Guard configuration, including all databases, log transport services, and log apply services, through a client connection to any database in the configuration.
  • Invoking switchover or failover with a single command to initiate and control complex role changes across all databases in the configuration.
  • Monitoring the status of the entire configuration, capturing diagnostic information, reporting statistics such as the log apply rate and the redo generation rate, and detecting problems quickly with centralized monitoring, testing, and performance tools.

You can perform all management operations locally or remotely through the broker’s easy-to-use interfaces: the Data Guard web pages of Oracle Enterprise Manager, which is the broker’s graphical user interface (GUI), and the Data Guard command-line interface (CLI) called DGMGRL.

Configuring Oracle DataGuard using SQL commands - Creating a physical standby database

Step 1) Getting the primary database ready (on Primary host)

We are assuming that you are using an SPFILE for your current (primary) instance. You can check whether your instance is using an SPFILE with the spfile parameter.

SQL> show parameter spfile

NAME      TYPE     VALUE
--------- -------- ------------------------------
spfile    string

You can create spfile as shown below. (on primary host)

SQL> create spfile from pfile;

File created.

The primary database must meet two conditions before a standby database can be created from it:

  1. It must be in force logging mode, and
  2. It must be in archive log mode (also, automatic archiving must be enabled and a local archiving destination must be defined).

Before putting the database in force logging mode, check whether it is already in force logging mode using:

SQL> select force_logging from v$database;

FOR

NO

Also the database is not in archive log mode as shown below.

SQL> archive log list
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination d:/oracle/product/10.2.0/dbs/arch
Oldest online log sequence 10
Current log sequence 12

Now we will start the database in archive log and force logging mode.

Starting database in archive log mode:-

We need to set the following two parameters to put the database in archive log mode:

log_archive_dest_1='Location=d:/oracle/product/10.2.0/archive/orcl'
log_archive_format='ARCH_%r_%t_%s.ARC'

If a database is in force logging mode, all changes except those in temporary tablespaces are logged, regardless of any NOLOGGING specification.
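As a quick illustration (the table name here is hypothetical), a NOLOGGING operation still generates redo once force logging is enabled:

```sql
ALTER DATABASE FORCE LOGGING;
-- NOLOGGING is requested, but redo is generated anyway
CREATE TABLE t_demo NOLOGGING AS SELECT * FROM all_objects;
```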

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 1073741824 bytes
Fixed Size 1984184 bytes
Variable Size 750786888 bytes
Database Buffers 314572800 bytes
Redo Buffers 6397952 bytes
Database mounted.

SQL> alter database archivelog;

Database altered.

SQL> alter database force logging;

Database altered.

SQL> alter database open;

Database altered.

So now our primary database is in archive log mode and in force logging mode.

SQL> select log_mode, force_logging from v$database;

LOG_MODE FOR
———— —
ARCHIVELOG YES

init.ora file for primary

control_files = d:/oracle/product/10.2.0/oradata_orcl/orclcontrol.ctl
db_name = orcl

db_domain = UNICON.COM

db_block_size = 8192
pga_aggregate_target = 250M

processes = 300
sessions = 300
open_cursors = 1024

undo_management = AUTO

undo_tablespace = undotbs
compatible = 10.2.0

sga_target = 600M

nls_language = AMERICAN
nls_territory = AMERICA
background_dump_dest=d:/oracle/product/10.2.0/db_1/admin/orcl/bdump
user_dump_dest=d:/oracle/product/10.2.0/db_1/admin/orcl/udump
core_dump_dest=d:/oracle/product/10.2.0/db_1/admin/orcl/cdump
db_unique_name='PRIMARY'
Log_archive_dest_1='Location=d:/oracle/product/10.2.0/archive/orcl'
Log_archive_dest_state_1=ENABLE

Step 2) Creating the standby database

Since we are creating a physical standby database, we have to copy all the datafiles of the primary database to the standby location. To do so, shut down the primary database, copy its files to the new location, and start the primary database again.

Step 3) Creating a standby database control file

A control file needs to be created for the standby system. Execute the following on the primary system:

SQL> alter database create standby controlfile as 'd:/oracle/product/10.2.0/dbf/standby.ctl';

Database altered.

Step 4) Creating an init file

SQL> show parameters spfile

NAME TYPE VALUE
———————————— ———– ——————————
spfile string d:/oracle/product/10.2.0/dbs/s
pfiletest.ora

Step 5) Changing init.ora file for standby database

A pfile is created from the spfile. This pfile needs to be modified and then used on the standby system to create an spfile from it. So create a pfile from the spfile on the primary database:

create pfile='/some/path/to/a/file' from spfile

SQL> create pfile='d:/oracle/product/10.2.0/dbs/standby.ora' from spfile;

File created.

The following parameters must be modified or added:

  • control_files
  • standby_archive_dest
  • db_file_name_convert (only if directory structure is different on primary and standby server)
  • log_file_name_convert (only if directory structure is different on primary and standby server)
  • log_archive_format
  • log_archive_dest_1: this value is used if the standby becomes primary during a switchover or a failover.
  • standby_file_management: set to AUTO

init.ora parameters for standby

control_files = d:/oracle/product/10.2.0/oradata/orcl/standby_orcl.ctl
db_name = orcl
db_domain = UNICON.COM
db_block_size = 8192
pga_aggregate_target = 250M
processes = 300
sessions = 300
open_cursors = 1024
undo_management = AUTO

undo_tablespace = undotbs
compatible = 10.2.0
sga_target = 600M
nls_language = AMERICAN
nls_territory = AMERICA
background_dump_dest=d:/oracle/product/10.2.0/db_1/admin/orcl/bdump
user_dump_dest=d:/oracle/product/10.2.0/db_1/admin/orcl/udump
core_dump_dest=d:/oracle/product/10.2.0/db_1/admin/orcl/cdump
db_unique_name='STANDBY'
Log_archive_dest_1='Location=d:/oracle/product/10.2.0/archive/orcl'
Log_archive_dest_state_1=ENABLE
standby_archive_dest=d:/oracle/product/10.2.0/prim_archive

db_file_name_convert='d:/oracle/product/10.2.0/oradata','d:/oracle/product/10.2.0/oradata/orcl'
log_file_name_convert='d:/oracle/product/10.2.0/oradata','d:/oracle/product/10.2.0/oradata/orcl'

standby_file_management=auto

FAL_Client='to_standby'

Step 7) Creating the spfile on the standby database
set ORACLE_SID=orcl
sqlplus “/ as sysdba”

create spfile from pfile='/…/../modified-pfile';

Step 8) On the standby database
SQL> startup nomount pfile=standby.ora
ORACLE instance started.

Total System Global Area 1073741824 bytes
Fixed Size 1984184 bytes
Variable Size 754981192 bytes
Database Buffers 314572800 bytes
Redo Buffers 2203648 bytes

SQL> alter database mount standby database;

Database altered.

Add the following parameters on the standby side:

FAL_Client='to_standby'
FAL_Server='to_primary'
Log_archive_dest_2='Service=to_primary VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) db_unique_name=primary'
Log_archive_dest_state_2=ENABLE
remote_login_passwordfile='SHARED'

Add the following parameters on the primary side:

Log_archive_dest_2='Service=to_standby lgwr'
Log_archive_dest_state_2=ENABLE
Standby_File_Management='AUTO'
REMOTE_LOGIN_PASSWORDFILE='SHARED'

Create a password file on both sides:

orapwd file=d:/oracle/product/10.2.0/db_1/dbs/orapworcl.ora password=oracle entries=5 force=y

FTP the password file to the standby location.

Step 9) Configuring the listener

Creating net service names

Net service names must be created on both the primary and standby databases for use by log transport services. That is, something like the following lines must be added to tnsnames.ora.

Setup listener configuration

On Primary:

ORCL=
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
)
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = ORANOUP02P.UNICON.COM)(PORT = 1520))
)
)
)

SID_LIST_ORCL=
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = d:/oracle/product/10.2.0/db_1)
(PROGRAM = extproc)
)
(SID_DESC =
(GLOBAL_DBNAME = orcl)
(ORACLE_HOME = d:/oracle/product/10.2.0/db_1)
(SID_NAME = orcl)
)
)

On Standby:

SID_LIST_orcl=
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = d:/oracle/product/10.2.0/db_1)
(PROGRAM = extproc)
)
(SID_DESC =
(GLOBAL_DBNAME = orcl)
(ORACLE_HOME = d:/oracle/product/10.2.0/db_1)
(SID_NAME = orcl)
)
)

orcl =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
(ADDRESS = (PROTOCOL = TCP)(HOST = ORANOUP02T)(PORT = 1538))
)
)

TNSNAMES settings

On Primary:

orcl =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = ORANOUP02P.UNICON.COM)(PORT = 1520))
)
(CONNECT_DATA =
(SERVICE_NAME = orcl)
)
)

TO_STANDBY =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = ORANOUP02T.UNICON.COM)(PORT = 1538))
)
(CONNECT_DATA =
(SERVICE_NAME = orcl)
)
)

On standby:

TO_PRIMARY =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = ORANOUP02P.UNICON.COM)(PORT = 1520))
)
(CONNECT_DATA =
(SERVICE_NAME = orcl)
)
)

orcl =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = ORANOUP02T.UNICON.COM)(PORT = 1538))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = orcl)
)
)

On the standby side, run the following commands:

SQL> startup nomount pfile=standby_orcl.ora
ORACLE instance started.

Total System Global Area 629145600 bytes
Fixed Size 1980744 bytes
Variable Size 171968184 bytes
Database Buffers 452984832 bytes
Redo Buffers 2211840 bytes

SQL> alter database mount standby database;

Database altered.

Try to connect to the standby database from the primary database.

The following connections should now work.

From the primary host:

sqlplus sys/oracle@orcl as sysdba -> connects to the primary database
sqlplus sys/oracle@to_standby as sysdba -> connects to the standby database from the primary host

From the standby host:

sqlplus sys/oracle@orcl as sysdba -> connects to the standby database
sqlplus sys/oracle@to_primary as sysdba -> connects to the primary database from the standby host

LOG SHIPPING

On PRIMARY site enable Log_archive_dest_state_2 to start shipping archived redo logs.

SQL> Alter system set Log_archive_dest_state_2=ENABLE scope=both;

System Altered.

Check the sequence number and the archiving mode by executing the following command.

SQL> Archive Log List

Then switch the logfile on primary side

SQL> Alter system switch logfile;

System Altered.

Start physical apply log service on standby side.

SQL> Alter Database Recover Managed Standby Database Disconnect;

Database Altered.

The session then returns to you, and the managed recovery process (MRP) runs in the background, applying the redo logs.
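You can confirm that MRP is running on the standby by querying V$MANAGED_STANDBY (output will vary by environment):

```sql
SELECT process, status, thread#, sequence#, block#
  FROM v$managed_standby
 WHERE process LIKE 'MRP%';
```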

You can check whether the log is applied or not by querying V$ARCHIVED_LOG.

SQL> Select Name, Applied, Archived from v$Archived_log;

This query returns the names of the archived files and their archived/applied status.

Once you complete the above step, the physical standby is in place. You can verify which log files have been applied using the following queries.

On Standby side

SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

SEQUENCE# FIRST_TIM NEXT_TIME
———- ——— ———
1 11-JUN-09 12-JUN-09
2 12-JUN-09 12-JUN-09
3 12-JUN-09 12-JUN-09
4 12-JUN-09 12-JUN-09
5 12-JUN-09 12-JUN-09
6 12-JUN-09 12-JUN-09
7 12-JUN-09 12-JUN-09
8 12-JUN-09 12-JUN-09
9 12-JUN-09 12-JUN-09

9 rows selected.

On Primary side

SQL> Select Status, Error from v$Archive_dest where dest_id=2;

STATUS ERROR
——— —————————————————————–
VALID

SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;

System altered.

On Standby side

SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

SEQUENCE# FIRST_TIM NEXT_TIME
———- ——— ———
1 11-JUN-09 12-JUN-09
2 12-JUN-09 12-JUN-09
3 12-JUN-09 12-JUN-09
4 12-JUN-09 12-JUN-09
5 12-JUN-09 12-JUN-09
6 12-JUN-09 12-JUN-09
7 12-JUN-09 12-JUN-09
8 12-JUN-09 12-JUN-09
9 12-JUN-09 12-JUN-09
10 12-JUN-09 12-JUN-09

10 rows selected.

As you can see, after switching the archive log on the primary side, one more archived log file was applied to the standby database.

Again on primary

SQL> alter system switch logfile;

System altered.

SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

SEQUENCE# FIRST_TIM NEXT_TIME
———- ——— ———
1 11-JUN-09 12-JUN-09
2 12-JUN-09 12-JUN-09
3 12-JUN-09 12-JUN-09
4 12-JUN-09 12-JUN-09
5 12-JUN-09 12-JUN-09
6 12-JUN-09 12-JUN-09
7 12-JUN-09 12-JUN-09
8 12-JUN-09 12-JUN-09
9 12-JUN-09 12-JUN-09
10 12-JUN-09 12-JUN-09
11 12-JUN-09 12-JUN-09

11 rows selected.

Oracle- OEM - Export Database / Table / Schema / Tablespace (Data Pump)

1)
user: snsexport
password: snsexp
role: exp_full_database

2)
Now try to export the database by logging in as the above user.


Errors: ORA-31626: job does not exist
ORA-31633: unable to create master table "SNSEXPORT.EXPORTTEST"
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT", line 863
ORA-00955: name is already used by an existing object

Exception: ORA-31626: job does not exist
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 911
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4356
ORA-06512: at line 2


Solution:
Grant the following privileges to the snsexport user:


CREATE SESSION
BACKUP ANY TABLE
SELECT ANY TABLE
SELECT ANY SEQUENCE
EXECUTE ANY PROCEDURE
CREATE ANY DIRECTORY
EXECUTE ANY TYPE
ADMINISTER RESOURCE MANAGER
RESUMABLE
SELECT ANY DICTIONARY
READ ANY FILE GROUP
CREATE TABLE
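As a sketch, the list above can be granted in a single statement (user name as created in step 1; adjust to your environment):

```sql
GRANT CREATE SESSION, BACKUP ANY TABLE, SELECT ANY TABLE,
      SELECT ANY SEQUENCE, EXECUTE ANY PROCEDURE, CREATE ANY DIRECTORY,
      EXECUTE ANY TYPE, ADMINISTER RESOURCE MANAGER, RESUMABLE,
      SELECT ANY DICTIONARY, READ ANY FILE GROUP, CREATE TABLE
   TO snsexport;
```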

-------------------------------------
3)
The following error now appears:

ORA-20204: User does not exist: SNSEXPORT
ORA-06512: at "SYSMAN.MGMT_USER", line 122
ORA-06512: at "SYSMAN.MGMT_JOBS", line 142
ORA-06512: at "SYSMAN.MGMT_JOBS", line 78
ORA-06512: at line 1


Solution:
Grant the MGMT_USER role.

--------------------------

4) ORA-20204: User does not exist: SNSEXPORT
ORA-06512: at "SYSMAN.MGMT_USER", line 122
ORA-06512: at "SYSMAN.MGMT_JOBS", line 142
ORA-06512: at "SYSMAN.MGMT_JOBS", line 78
ORA-06512: at line 1

Solution:

Add the user snsexport:

- Log in as user SYSTEM (or user SYS) to Enterprise Manager 10g Database Control
- At the top right, click on the link 'Setup'
- On the page 'Administrators', click on the button 'Create'
- On the page 'Create Administrator: Properties', add the user snsexport
- Click on the button 'Finish'
- On the page 'Create Administrator: Review', click on the button 'Finish'
- On the page 'Administrators', confirm that the user has been added
- At the top right, click on the link 'Logout'
--------------------------

5) Now the problems are resolved.

Oracle - Create Standby / Duplicate / Test / Development / Clone Database

Steps to restore an RMAN backup to a different host, for example from the production server to a test server.

Assumptions

* Production database: Server A.
* Standby and RMAN database: Server B.
* Test database: Server C
* tnsnames.ora

TEST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = test.unicon.com)(PORT = 1521))
)
(CONNECT_DATA =
(SID = test)
(SERVER = DEDICATED)
)
)

RMAN =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = stb.unicon.com)(PORT = 1521))
)
(CONNECT_DATA =
(SID = rman)
(SERVER = DEDICATED)
)
)

PRO =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = prod.unicon.com)(PORT = 1521))
)
(CONNECT_DATA =
(SID = prod)
(SERVER = DEDICATED)
)
)

STB =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = stb.unicon.com)(PORT = 1521))
)
(CONNECT_DATA =
(SID = stb)
(SERVER = DEDICATED)
)
)

* All servers have an identical directory structure.

1. Copy the backup control file and init.ora file from Prod to Test.

2. Start the Test instance:

sqlplus /nolog
SQL>connect sys/linux@test as sysdba
SQL> startup nomount PFILE='D:\Oracle\admin\pfile\init.ora';



3. Copy the RMAN backup files from Prod to Test, placing them in the same directory as on Prod.

4. Connect to RMAN

rman TARGET SYS/oracle@PROD CATALOG rman/rman@rman AUXILIARY SYS/linux@test

5. Duplicate the database:

RUN
{
allocate auxiliary channel c1 DEVICE TYPE disk;
DUPLICATE TARGET DATABASE to test nofilenamecheck ;
}

6. Duplicate the database to a point in time before the current time:

RUN
{
allocate auxiliary channel c1 DEVICE TYPE disk;
DUPLICATE TARGET DATABASE to test nofilenamecheck UNTIL TIME "TO_DATE('06/09/2009','MM/DD/YYYY')";
}

Saturday, June 6, 2009

Spool To Excel and Html

SPOOL to EXCEL

set head off;
set feed off;
set trimspool on;
set linesize 32767;
set pagesize 32767;
set echo off;
set termout off;
Spool c:\abc.xls
select * from dba_users;
Spool off
exit


SPOOL TO HTML


set pagesize 9999
set feedback off
SET TERMOUT on
SET NEWPAGE 1
SET UNDERLINE ON
set markup html on;
spool c:\abc.html
select * from dba_users;
Spool off
exit

Friday, June 5, 2009

OEM OS Host Credentials (User Authentication Error)


Validation Error - Connection to host as user kgupta2 failed: ERROR: Wrong password for user

Solution
OEM > Preferences > Preferred Credentials > Target Type: Host.
Provide the host username and password as described in the following steps.


You have to provide the 'Log on as a batch job' privilege:

1. Go to control panel/administrative tools
a. click on "local security policy"
b. click on "local policies"
c. click on "user rights assignments"
d. double click on "log on as a batch job"
e. click on "add" and add the user that was entered in the "normal username" or "privileged username" section of the EM Console.

2. Go to the Preferences link in the EM GUI
a. click on Preferred Credentials (link on the left menu)
b. under "Target Type: Host" click on "set credentials"
c. enter the OS user who has logon as a batch job privilege into the "normal username" and "normal password" fields

3. Test the connection
a. while in the Set Credentials window, click on "Test"

Tuesday, June 2, 2009

Oracle-Rename database User

Rename Oracle Database User

1) exp owner=kgupta2
2) create user kshitij identified by rakesh;
3) DROP USER kgupta2 CASCADE;
4) imp FROMUSER=kgupta2 TOUSER=kshitij

Oracle- Rename Database

Rename the Oracle Database

1) Full Database Backup

2) conn SYS/ORACLE AS SYSDBA

3) ALTER DATABASE BACKUP CONTROLFILE TO TRACE RESETLOGS;

4) Locate the latest dump file in your USER_DUMP_DEST directory (show parameter USER_DUMP_DEST) - rename it to something like

dbrename.sql.

5) Edit dbrename.sql, remove all headers and comments, and change the database's name.

6) Change "CREATE CONTROLFILE REUSE ..." to "CREATE CONTROLFILE SET ...".

7) Shutdown the database (use SHUTDOWN NORMAL or IMMEDIATE, don't ABORT!)

8) Run dbrename.sql.

9) ALTER DATABASE RENAME GLOBAL_NAME TO new_db_name;

Saturday, May 23, 2009

Oracle - Change SID name after Creating Database

Change SID name after Creating Database

Recreate the Control file to achieve this .

1. SVRMGR>Alter Database backup controlfile to trace;
This generates an ASCII trace file in the $USER_DUMP_DEST directory containing the control file creation script.

2. Shut down the database and take a physical backup of all the datafiles, control files, redo log files, archived redo log files, etc.

3. Rename the init<OLD_SID>.ora and config<OLD_SID>.ora files in $ORACLE_HOME/dbs to init<NEW_SID>.ora and config<NEW_SID>.ora. This prevents errors during database startup from looking for the default pfile names.

4. Rename the old control files to, say, control01.old, etc. This forces creation of new control files rather than reusing the existing ones.

5. Edit the control file creation script. It should read like:
Startup nomount;
Create Controlfile set Database 'NEW_SID' Resetlogs
......
;

6. Open your database:
alter database open resetlogs;

Friday, May 22, 2009

Oracle DB Data Dictionary Views

select * from DICTIONARY;

Use the following queries from the SYS user:

select kqfvinam name from x$kqfvi order by kqfvinam;
select * from PUBLICSYN where sname like 'V$%' order by 3,1;

Sql Server System Stored Procedures



--provide list of all database objects
exec sp_help


--provide list of all columns, their datatypes and some other important information

sp_help tablename_or_viewname

--Provide definition of the given object; object can be a function, SP or trigger

sp_helptext view_function_proc_triggername

-- provide list of current running process with some other important information

sp_who2
-- provide list of current running process, provide information less than sp_who2
sp_who
--provide list of tables and views
sp_tables

-- provide list of column names and some other useful information for the input object
sp_columns table_viewname

-- provide list of all databases with their sizes
sp_databases

-- Enable you to rename a database or database object

sp_rename
currentname,
newname,
objecttype /*optional*/
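For example (the object names here are hypothetical):

```sql
-- Rename a table
EXEC sp_rename 'Sales', 'SalesArchive';
-- Rename a column; the object type argument is required in this case
EXEC sp_rename 'SalesArchive.Amount', 'TotalAmount', 'COLUMN';
```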

--Enable you to set & view the database compatibility value , valid values for sql server 2005 are 60, 65, 70, 80, or 90

/*(90 for sql server 2005)*/

sp_dbcmptlevel
DatabaseName,
Value /*optional*/

--enable you to set values of different database options

sp_dboption dbname, optname, optvalue

--provide you the object dependent and type of dependent object for input object
sp_depends objectname

--enable you to acquire a lock on an active transaction

sp_getapplock Resource,
lockmode, --can be 'Shared', 'Update', 'Exclusive', 'IntentExclusive'and 'IntentShared'
LockOwner, --can be 'Transaction' and 'Session' -- default is 'Transaction'
LockTimeout, --default null
DbPrincipal -- default public

--provide you the detail of constraint on given objectname

sp_helpconstraint objectname

--provide you data and log file info for the current database; if you pass a file name to this proc, it will give you the DB name for that file
sp_helpfile
[filename] --optional

--provide you file group name for current database
sp_helpfilegroup
filegroupname --optional

--provide you list of indexes on given object
sp_helpindex objectname

--provide you list of stats on given object

sp_helpstats objectname

--provide you list of triggers on given object
sp_helptrigger objectname

--provide you the users for the current DB with their login names
sp_helpuser

--provide you detail of current locks
sp_lock
spId1, --optional
spId2 --optional

--Provide you overview of server performance
sp_monitor

--provide you list of database on which you have access rights
sp_mshasdbaccess

--provide you list of index on given object and space used by them
sp_msindexspace objectname


-----------------


SQL Server Extended Stored Procedures


sp_MSgetversion
This extended stored procedure can be used to get the current version of Microsoft SQL Server. To get the current SQL Server version, run:

EXEC master..sp_MSgetversion

Note. A more common way to retrieve the current SQL Server version (this way provides more information) is to use following SELECT statement:

SELECT @@version



xp_dirtree
This extended stored procedure can be used to get a list of all the folders for the folder named in the xp. To get a list of all the folders in the C:\MSSQL7 folder, run:

EXEC master..xp_dirtree 'C:\MSSQL7'


xp_subdirs
This extended stored procedure is used to get the list of folders for the folder named in the xp. In comparison with xp_dirtree, xp_subdirs returns only those directories whose depth = 1.

This is the example:

EXEC master..xp_subdirs 'C:\MSSQL7'




xp_enum_oledb_providers
This extended stored procedure is used to list all the available OLE DB providers. It returns the Provider Name, Parse Name and Provider Description. To get a list of all OLE DB providers for your SQL Server, run:

EXEC master..xp_enum_oledb_providers



xp_enumcodepages
This extended stored procedure can be used to list all code pages, character sets and their descriptions for your SQL Server. To see this list, run:

EXEC master..xp_enumcodepages



xp_enumdsn
This extended stored procedure returns a list of all system DSNs and their descriptions. To get the list of system DSNs, run:

EXEC master..xp_enumdsn



xp_enumerrorlogs
This extended stored procedure returns the list of all error logs with their last change date. To get the list of error logs, run:

EXEC master..xp_enumerrorlogs


xp_enumgroups
This extended stored procedure returns the list of Windows NT groups and their description. To get the list of the Windows NT groups, run:

EXEC master..xp_enumgroups



xp_fileexist
You can use this extended stored procedure to determine whether a particular file exists on the disk or not. The syntax for this xp is:

EXECUTE xp_fileexist filename [, file_exists INT OUTPUT]

For example, to check whether the file boot.ini exists on disk c: or not, run:

EXEC master..xp_fileexist 'c:\boot.ini'



xp_fixeddrives
This very useful extended stored procedure returns the list of all hard drives and the amount of free space in Mb for each hard drive. To see the list of drives, run:

EXEC master..xp_fixeddrives



xp_getnetname
This extended stored procedure returns the WINS name of the SQL Server that you're connected to. To view the name, run:

EXEC master..xp_getnetname



xp_readerrorlog
This extended stored procedure returns the content of the errorlog file. You can find the errorlog file in the C:\MSSQL7\Log directory, by default. To see the text of the errorlog file, run:

EXEC master..xp_readerrorlog

SQL Server DBCC (DataBase Console Commands)


DBCC CACHESTATS displays information about the objects currently in the buffer cache, such as hit rates, compiled objects and plans, etc.

Example:

DBCC CACHESTATS

Sample Results (abbreviated):

Object Name Hit Ratio
------------ -------------

Proc 0.86420054765378507
Prepared 0.99988494930394334
Adhoc 0.93237136647793051
ReplProc 0.0
Trigger 0.99843452831887947
Cursor 0.42319205924058612
Exec Cxt 0.65279111666076906
View 0.95740334726893905
Default 0.60895011346896522
UsrTab 0.94985969576133511
SysTab 0.0
Check 0.67021276595744683
Rule 0.0
Summary 0.80056155581812771

Here's what some of the key statistics from this command mean:

  • Hit Ratio: Displays the percentage of time that this particular object was found in SQL Server's cache. The bigger this number, the better.
  • Object Count: Displays the total number of objects of the specified type that are cached.
  • Avg. Cost: A value used by SQL Server that measures how long it takes to compile a plan, along with the amount of memory needed by the plan. This value is used by SQL Server to determine if the plan should be cached or not.
  • Avg. Pages: Measures the total number of 8K pages used, on average, for cached objects.
  • LW Object Count, LW Avg Cost, LW Avg Stay, LW Avg Use: All these columns indicate how many of the specified objects have been removed from the cache by the Lazy Writer. The lower the figure, the better.

DBCC DROPCLEANBUFFERS: Use this command to remove all the data from SQL Server's data cache (buffer) between performance tests to ensure fair testing. Keep in mind that this command only removes clean buffers, not dirty buffers. Because of this, you may want to run the CHECKPOINT command first: running CHECKPOINT writes all dirty buffers to disk, so that when you then run DBCC DROPCLEANBUFFERS, you can be assured that all data buffers are cleaned out, not just the clean ones.

Example:

DBCC DROPCLEANBUFFERS
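Putting the two commands together for a cold-cache test run:

```sql
CHECKPOINT;              -- write dirty buffers to disk first
DBCC DROPCLEANBUFFERS;   -- now the whole data cache can be emptied
```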

DBCC ERRORLOG: If you rarely restart the mssqlserver service, you may find that your server log gets very large and takes a long time to load and view. You can truncate (essentially create a new log) the Current Server log by running DBCC ERRORLOG. You might want to consider scheduling a regular job that runs this command once a week to automatically truncate the server log. As a rule, I do this for all of my SQL Servers on a weekly basis. Also, you can accomplish the same thing using this stored procedure: sp_cycle_errorlog.

Example:

DBCC ERRORLOG

DBCC FLUSHPROCINDB: Used to clear out the stored procedure cache for a specific database on a SQL Server, not the entire SQL Server. The database ID number to be affected must be entered as part of the command.

You may want to use this command before testing to ensure that previous stored procedure plans won't negatively affect testing results.

Example:

DECLARE @intDBID INTEGER SET @intDBID = (SELECT dbid FROM master.dbo.sysdatabases WHERE name = 'database_name')
DBCC FLUSHPROCINDB (@intDBID)

DBCC INDEXDEFRAG: In SQL Server 2000, Microsoft introduced DBCC INDEXDEFRAG to help reduce logical disk fragmentation. When this command runs, it reduces fragmentation and does not lock tables, allowing users to access the table when the defragmentation process is running. Unfortunately, this command doesn't do a great job of logical defragmentation.

The only way to truly reduce logical fragmentation is to rebuild your table's indexes. While this will remove all fragmentation, unfortunately it will lock the table, preventing users from accessing it during this process. This means that you will need to find a time when this will not present a problem to your users.

Of course, if you are unable to find a time to reindex your indexes, then running DBCC INDEXDEFRAG is better than doing nothing.

Example:

DBCC INDEXDEFRAG (Database_Name, Table_Name, Index_Name)

DBCC FREEPROCCACHE: Used to clear out the stored procedure cache for all SQL Server databases. You may want to use this command before testing to ensure that previous stored procedure plans won't negatively affect testing results.

Example:

DBCC FREEPROCCACHE

DBCC MEMORYSTATUS: Lists a breakdown of how the SQL Server buffer cache is divided up, including buffer activity. This is an undocumented command, and one that may be dropped in future versions of SQL Server.

Example:

DBCC MEMORYSTATUS

DBCC OPENTRAN: An open transaction can leave locks open, preventing others from accessing the data they need in a database. This command is used to identify the oldest open transaction in a specific database.

Example:

DBCC OPENTRAN('database_name')

DBCC PAGE: Use this command to look at contents of a data page stored in SQL Server.

Example:

DBCC PAGE ({dbid|dbname}, pagenum [,print option] [,cache] [,logical])

where:

Dbid or dbname: Enter either the dbid or the name of the database in question.

Pagenum: Enter the page number of the SQL Server page that is to be examined.

Print option: (Optional) Print option can be either 0, 1, or 2. 0 - (Default) This option causes DBCC PAGE to print out only the page header information. 1 - This option causes DBCC PAGE to print out the page header information, each row of information from the page, and the page's offset table. Each of the rows printed out will be separated from each other. 2 - This option is the same as option 1, except it prints the page rows as a single block of information rather than separating the individual rows. The offset and header will also be displayed.

Cache: (Optional) This parameter allows either a 1 or a 0 to be entered. 0 - This option causes DBCC PAGE to retrieve the page number from disk rather than checking to see if it is in cache. 1 - (Default) This option takes the page from cache if it is in cache rather than getting it from disk only.

Logical: (Optional) This parameter is for use if the page number to be retrieved is a virtual page rather than a logical page. It can be either 0 or 1. 0 - If the page is a virtual page number. 1 - (Default) If the page is the logical page number.
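A hypothetical invocation, following the syntax above; trace flag 3604 routes DBCC output to the client session so the page dump is visible:

```sql
DBCC TRACEON (3604);
-- Dump page 100 of the Northwind database with full row detail
DBCC PAGE ('Northwind', 100, 1);
```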

DBCC PINTABLE & DBCC UNPINTABLE: By default, SQL Server automatically brings into its data cache the pages it needs to work with. These data pages will stay in the data cache until there is no room for them, and assuming they are not needed, these pages will be flushed out of the data cache onto disk. At some point in the future when SQL Server needs these data pages again, it will have to go to disk in order to read them again into the data cache for use. If SQL Server somehow had the ability to keep the data pages in the data cache all the time, then SQL Server's performance would be increased because I/O could be reduced on the server.

The process of "pinning a table" is a way to tell SQL Server that we don't want it to flush out data pages for specific named tables once they are read into the cache. This in effect keeps these database pages in the data cache all the time, which saves SQL Server from having to read the data pages, flush them out, and reread them again when the time arrives. As you can imagine, this can reduce I/O for these pinned tables, boosting SQL Server's performance.

To pin a table, the command DBCC PINTABLE is used. For example, the script below can be run to pin a table in SQL Server:

DECLARE @db_id int, @tbl_id int
USE Northwind
SET @db_id = DB_ID('Northwind')
SET @tbl_id = OBJECT_ID('Northwind..categories')
DBCC PINTABLE (@db_id, @tbl_id)

While you can use DBCC PINTABLE directly, without the rest of the above script, you will find the script handy because DBCC PINTABLE's parameters are the database and table IDs that you want to pin, not their database and table names. This script makes it a little easier to pin a table. You must run this command for every table you want to pin.

Once a table is pinned in the data cache, this does not mean that the entire table is automatically loaded into the data cache. It only means that as data pages from that table are needed by SQL Server, they are loaded into the data cache and then stay there, never being flushed out to disk until you unpin the table using DBCC UNPINTABLE. It is possible that only part of a table, not all of it, ends up pinned.

When you are done with a table and you no longer want it pinned, you will want to unpin your table. To do so, run this example code:

DECLARE @db_id int, @tbl_id int
USE Northwind
SET @db_id = DB_ID('Northwind')
SET @tbl_id = OBJECT_ID('Northwind..categories')
DBCC UNPINTABLE (@db_id, @tbl_id)

DBCC PROCCACHE: Displays information about how the stored procedure cache is being used.

Example:

DBCC PROCCACHE

DBCC DBREINDEX: Periodically (weekly or monthly) perform a database reorganization on all the indexes on all the tables in your database. This will rebuild the indexes so that the data is no longer fragmented. Fragmented data can cause SQL Server to perform unnecessary data reads, slowing down SQL Server's performance.

If you perform a reorganization on a table with a clustered index, any non-clustered indexes on that same table will automatically be rebuilt.

Database reorganizations can be done by scheduling SQLMAINT.EXE to run using the SQL Server Agent, or by running your own custom script via the SQL Server Agent (see below).

Unfortunately, the DBCC DBREINDEX command will not automatically rebuild all of the indexes on all the tables in a database; it can only work on one table at a time. But if you run the following script, you can index all the tables in a database with ease.

Example:

DBCC DBREINDEX ('table_name', '', fillfactor)

or

--Script to automatically reindex all tables in a database

USE DatabaseName --Enter the name of the database you want to reindex

DECLARE @TableName varchar(255)

DECLARE TableCursor CURSOR FOR
SELECT table_name FROM information_schema.tables
WHERE table_type = 'BASE TABLE'

OPEN TableCursor

FETCH NEXT FROM TableCursor INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
PRINT 'Reindexing ' + @TableName
DBCC DBREINDEX (@TableName, '', 90)
FETCH NEXT FROM TableCursor INTO @TableName
END

CLOSE TableCursor

DEALLOCATE TableCursor

The script will automatically reindex every index in every table of any database you select, and provide a fillfactor of 90%. You can substitute any number you want for the 90 in the above script.

When DBCC DBREINDEX is used to rebuild indexes, keep in mind that while the indexes on a table are being rebuilt, the table becomes unavailable to your users. For example, when a non-clustered index is rebuilt, a shared table lock is put on the table, preventing all but SELECT operations from being performed on it. When a clustered index is rebuilt, an exclusive table lock is put on the table, preventing any table access by your users. Because of this, you should only run this command when users don't need access to the tables being reorganized.

DBCC SHOWCONTIG: Used to show how fragmented the data and indexes are in a specified table. If the data pages storing data or index information become fragmented, it takes more disk I/O to find and move the data into the SQL Server buffer cache, hurting performance. This command tells you how fragmented these data pages are. If you find that fragmentation is a problem, you can reindex the tables to eliminate it. Note: this is fragmentation of data pages within the SQL Server MDF data file, not of the physical file itself.

Since this command requires you to know the ID of both the table and the index being analyzed, you may want to run the following script so you don't have to look up the table ID and index ID manually.

Example:

DBCC SHOWCONTIG (Table_id, IndexID)

Or:

--Script to identify table fragmentation

--Declare variables
DECLARE
@ID int,
@IndexID int,
@IndexName varchar(128)

--Set the table and index to be examined
SELECT @IndexName = 'index_name' --enter name of index
SET @ID = OBJECT_ID('table_name') --enter name of table

--Get the Index Values
SELECT @IndexID = IndID
FROM sysindexes
WHERE id = @ID AND name = @IndexName

--Display the fragmentation
DBCC SHOWCONTIG (@ID, @IndexID)

While the DBCC SHOWCONTIG command provides several measurements, the key one is Scan Density. This figure should be as close to 100% as possible. If the scan density is less than 75%, then you may want to reindex the tables in your database.
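To apply that 75% rule of thumb across a whole database at once, one option is to capture the output into a temporary table. This is a sketch: the column list below matches the SQL Server 2000 TABLERESULTS output as best I recall it, so verify it against DBCC SHOWCONTIG WITH TABLERESULTS on your own build before relying on it.

```sql
-- Capture DBCC SHOWCONTIG output for every index in the current
-- database and list the ones below the 75% Scan Density threshold
CREATE TABLE #frag
 (ObjectName sysname, ObjectId int, IndexName sysname NULL, IndexId int,
  [Level] int, Pages int, [Rows] int, MinimumRecordSize int,
  MaximumRecordSize int, AverageRecordSize int, ForwardedRecords int,
  Extents int, ExtentSwitches int, AverageFreeBytes numeric(10,2),
  AveragePageDensity numeric(10,2), ScanDensity numeric(10,2),
  BestCount int, ActualCount int, LogicalFragmentation numeric(10,2),
  ExtentFragmentation numeric(10,2))

INSERT #frag
EXEC ('DBCC SHOWCONTIG WITH TABLERESULTS, ALL_INDEXES')

-- Candidates for DBCC DBREINDEX
SELECT ObjectName, IndexName, ScanDensity
FROM #frag
WHERE ScanDensity < 75
ORDER BY ScanDensity

DROP TABLE #frag
```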

DBCC SHOW_STATISTICS: Used to find out the selectivity of an index. Generally speaking, the higher the selectivity of an index, the greater the likelihood it will be used by the query optimizer. You have to specify both the table name and the index name you want to find the statistics on.

Example:

DBCC SHOW_STATISTICS (table_name, index_name)

DBCC SQLMGRSTATS: Used to produce three different values that can sometimes be useful when you want to find out how well caching is being performed on ad-hoc and prepared Transact-SQL statements.

Example:

DBCC SQLMGRSTATS

Sample Results:

Item Status
------------------------- -----------
Memory Used (8k Pages) 5446
Number CSql Objects 29098
Number False Hits 425490

Here's what the above means:

  • Memory Used (8k Pages): If the number of memory pages is very large, this may be an indication that some user connection is preparing many Transact-SQL statements but not un-preparing them.
  • Number CSql Objects: Measures the total number of cached Transact-SQL statements.
  • Number False Hits: Sometimes, false hits occur when SQL Server goes to match pre-existing cached Transact-SQL statements. Ideally, this figure should be as low as possible.
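If these counters suggest the cache is polluted, for example the false-hit count climbs steadily, one blunt option is to flush the procedure cache entirely. Note that this also throws away all compiled plans, so every query recompiles on its next use; treat this as a sketch for test systems, not a routine production step.

```sql
-- Flush all cached plans (ad-hoc, prepared, and stored procedure plans)
DBCC FREEPROCCACHE

-- Re-check the cache counters afterwards
DBCC SQLMGRSTATS
```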

DBCC SQLPERF(): This command includes both documented and undocumented options. Let's take a look at all of them and see what they do.

DBCC SQLPERF (LOGSPACE)

This option (documented) returns data about the transaction log for all of the databases on the SQL Server, including Database Name, Log Size (MB), Log Space Used (%), and Status.
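If you want to track log growth over time rather than eyeball the output, a sketch along these lines captures the result set into a table. The column names here are my own invention; INSERT...EXEC matches the result set by position, not by name.

```sql
-- Store the LOGSPACE output so log usage can be queried and trended
CREATE TABLE #logspace
 (DatabaseName sysname, LogSizeMB float,
  LogSpaceUsedPct float, Status int)

INSERT #logspace
EXEC ('DBCC SQLPERF (LOGSPACE)')

-- Flag databases whose log is more than 80% full
SELECT DatabaseName, LogSizeMB, LogSpaceUsedPct
FROM #logspace
WHERE LogSpaceUsedPct > 80

DROP TABLE #logspace
```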

DBCC SQLPERF (UMSSTATS)

This option (undocumented) returns data about SQL Server thread management.

DBCC SQLPERF (WAITSTATS)

This option (undocumented) returns data about wait types for SQL Server resources.

DBCC SQLPERF (IOSTATS)

This option (undocumented) returns data about outstanding SQL Server reads and writes.

DBCC SQLPERF (RASTATS)

This option (undocumented) returns data about SQL Server read-ahead activity.

DBCC SQLPERF (THREADS)

This option (undocumented) returns data about I/O, CPU, and memory usage per SQL Server thread.

DBCC SQLPERF (UMSSTATS): When you run this command, you get output like this. (Note, this example was run on a 4 CPU server. There is 1 Scheduler ID per available CPU.)

Statistic Value
-------------------------------- ------------------------
Scheduler ID 0.0
num users 18.0
num runnable 0.0
num workers 13.0
idle workers 11.0
work queued 0.0
cntxt switches 2.2994396E+7
cntxt switches(idle) 1.7793976E+7
Scheduler ID 1.0
num users 15.0
num runnable 0.0
num workers 13.0
idle workers 10.0
work queued 0.0
cntxt switches 2.4836728E+7
cntxt switches(idle) 1.6275707E+7
Scheduler ID 2.0
num users 17.0
num runnable 0.0
num workers 12.0
idle workers 11.0
work queued 0.0
cntxt switches 1.1331447E+7
cntxt switches(idle) 1.6273097E+7
Scheduler ID 3.0
num users 16.0
num runnable 0.0
num workers 12.0
idle workers 11.0
work queued 0.0
cntxt switches 1.1110251E+7
cntxt switches(idle) 1.624729E+7
Scheduler Switches 0.0
Total Work 3.1632352E+7

Below is an explanation of some of the key statistics above:

  • num users: This is the number of SQL Server threads currently in the scheduler.
  • num runnable: This is the number of actual SQL Server threads that are runnable.
  • num workers: This is the actual number of workers there are to process threads. This is the size of the thread pool.
  • idle workers: The number of workers that are currently idle.
  • cntxt switches: The number of context switches between runnable threads.
  • cntxt switches (idle): The number of context switches to the idle thread.

DBCC TRACEON & DBCC TRACEOFF: Used to turn trace flags on and off. Trace flags are often used to turn specific server behaviors or characteristics on and off temporarily. On rare occasions, they can be useful for troubleshooting SQL Server performance problems.

Example:

To use the DBCC TRACEON command to turn on a specified trace flag, use this syntax:

DBCC TRACEON (trace# [,...n])

To use the DBCC TRACEOFF command to turn off a specified trace flag, use this syntax:

DBCC TRACEOFF (trace# [,...n])

You can also use the DBCC TRACESTATUS command to find out which trace flags are currently turned on in your server using this syntax:

DBCC TRACESTATUS (trace# [,...n])
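As a concrete sketch, here is the on/check/off cycle using trace flag 3604, a commonly used flag that redirects the output of commands such as DBCC PAGE to the client session instead of the error log:

```sql
-- Send DBCC output to the current client session
DBCC TRACEON (3604)

-- Confirm the flag is active
DBCC TRACESTATUS (3604)

-- ...run the DBCC command of interest here...

-- Turn the flag back off when finished
DBCC TRACEOFF (3604)
```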

For specific information on the different kinds of trace flags available, search this website or look them up in Books Online.

DBCC UPDATEUSAGE: The official use for this command is to report and correct inaccuracies in the sysindexes table, which can result in incorrect space usage reports. Apparently, it can also fix the problem of unreclaimed data pages in SQL Server. You may want to consider running this command periodically to clean up potential problems. This command can take some time to run, and you should run it during off hours because it will negatively affect SQL Server's performance while running. When you run this command, you must specify the name of the database you want checked.

Example:

DBCC UPDATEUSAGE ('databasename')
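You can also scope the command to a single table and ask it to correct the stored row counts as well. This sketch uses the Northwind sample database; the WITH options follow the documented syntax:

```sql
-- Correct space-usage numbers for one table only, suppress
-- informational messages, and also fix the row count in sysindexes
DBCC UPDATEUSAGE ('Northwind', 'Categories') WITH NO_INFOMSGS, COUNT_ROWS
```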

DBCC Commands

List Available DBCC Commands

DBCC commands, or Database Consistency Checker commands, have been part of SQL Server since its earliest versions. This article lists the available DBCC commands as reported by the DBCC HELP command. Many in this list are undocumented and rarely used in practice, but they form the base for debugging complex problems. There are approximately 130 of them.
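To reproduce a list like this yourself, you can query DBCC HELP directly. On SQL Server 2000, the undocumented trace flag 2520 (an assumption; verify it on your own build) makes DBCC HELP describe the undocumented commands as well:

```sql
-- Syntax for a single documented command
DBCC HELP ('CHECKDB')

-- With trace flag 2520 on, DBCC HELP also covers undocumented commands
DBCC TRACEON (2520)
DBCC HELP ('?')          -- list every DBCC command the server knows
DBCC TRACEOFF (2520)
```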

DBCC activecursors [(spid)]

DBCC addextendedproc (function_name, dll_name)

DBCC addinstance (objectname, instancename)

DBCC adduserobject (name)

DBCC auditevent (eventclass, eventsubclass, success, loginname, rolename, dbusername, loginid)

DBCC autopilot (typeid, dbid, tabid, indid, pages [,flag])

DBCC balancefactor (variance_percent)

DBCC bufcount [(number_of_buffers)]

DBCC buffer ( {'dbname' | dbid} [, objid [, number [, printopt={0|1|2} ][, dirty | io | kept | rlock | ioerr | hashed ]]])

DBCC bytes ( startaddress, length )

DBCC cachestats

DBCC callfulltext

DBCC checkalloc [('database_name'[, NOINDEX | REPAIR])] [WITH NO_INFOMSGS[, ALL_ERRORMSGS][, ESTIMATEONLY]]

DBCC checkcatalog [('database_name')] [WITH NO_INFOMSGS]

DBCC checkconstraints [( 'tab_name' | tab_id | 'constraint_name' | constraint_id )] [WITH ALL_CONSTRAINTS | ALL_ERRORMSGS]

DBCC checkdb [('database_name'[, NOINDEX | REPAIR])] [WITH NO_INFOMSGS[, ALL_ERRORMSGS][, PHYSICAL_ONLY][, ESTIMATEONLY][, DBCC TABLOCK]

DBCC checkdbts (dbid, newTimestamp)]

DBCC checkfilegroup [( [ {'filegroup_name' | filegroup_id} ] [, NOINDEX] )] [WITH NO_INFOMSGS[, ALL_ERRORMSGS][, PHYSICAL_ONLY][, ESTIMATEONLY][, TABLOCK]]

DBCC checkident ('table_name'[, { NORESEED | {RESEED [, new_reseed_value] } } ] )

DBCC checkprimaryfile ( {'FileName'} [, opt={0|1|2|3} ])

DBCC checktable ('table_name'[, {NOINDEX | index_id | REPAIR}]) [WITH NO_INFOMSGS[, ALL_ERRORMSGS][, PHYSICAL_ONLY][, ESTIMATEONLY][, TABLOCK]]

DBCC cleantable ('database_name'|database_id, 'table_name'|table_id, [batch_size])

DBCC cacheprofile [( {actionid} [, bucketid])

DBCC clearspacecaches ('database_name'|database_id, 'table_name'|table_id, 'index_name'|index_id)

DBCC collectstats (on | off)

DBCC concurrencyviolation (reset | display | startlog | stoplog)

DBCC config

DBCC cursorstats ([spid [,'clear']])

DBCC dbinfo [('dbname')]

DBCC dbrecover (dbname [, IgnoreErrors])

DBCC dbreindex ('table_name' [, index_name [, fillfactor ]]) [WITH NO_INFOMSGS]

DBCC dbreindexall (db_name/db_id, type_bitmap)

DBCC dbrepair ('dbname', DROPDB [, NOINIT])

DBCC dbtable [({'dbname' | dbid})]

DBCC debugbreak

DBCC deleteinstance (objectname, instancename)

DBCC des [( {'dbname' | dbid} [, {'objname' | objid} ])]

DBCC detachdb [( 'dbname' )]

DBCC dropcleanbuffers

DBCC dropextendedproc (function_name)

DBCC dropuserobject ('object_name')

DBCC dumptrigger ({'BREAK', {0 | 1}} | 'DISPLAY' | {'SET', exception_number} | {'CLEAR', exception_number})

DBCC errorlog

DBCC extentinfo [({'database_name'| dbid | 0} [, {'table_name' | table_id} [, {'index_name' | index_id | -1}]])]

DBCC fileheader [( {'dbname' | dbid} [, fileid])

DBCC fixallocation [({'ADD' | 'REMOVE'}, {'PAGE' | 'SINGLEPAGE' | 'EXTENT' | 'MIXEDEXTENT'}, filenum, pagenum [, objectid, indid])

DBCC flush ('data' | 'log', dbid)

DBCC flushprocindb (database)

DBCC free dll_name (FREE)

DBCC freeproccache

DBCC freeze_io (db)

DBCC getvalue (name)

DBCC help ('DBCC_command' | '?')

DBCC icecapquery ('dbname', stored_proc_name [, #_times_to_icecap (-1 infinite, 0 turns off)]) Use 'DBCC icecapquery (printlist)' to see list of SP's to profile. Use 'DBCC icecapquery (icecapall)' to profile all SP's.

DBCC incrementinstance (objectname, countername, instancename, value)

DBCC ind ( { 'dbname' | dbid }, { 'objname' | objid }, { indid | 0 | -1 | -2 } )

DBCC indexdefrag ({dbid | dbname | 0}, {tableid | tablename}, {indid | indname})

DBCC inputbuffer (spid)

DBCC invalidate_textptr (textptr)

DBCC invalidate_textptr_objid (objid)

DBCC iotrace ( { 'dbname' | dbid | 0 | -1 }, { fileid | 0 }, bufsize, [ { numIOs | -1 } [, { timeout (sec) | -1 } [, printopt={ 0 | 1 }]]] )

DBCC latch ( address [, 'owners'] [, 'stackdumps'])

DBCC lock ([{'DUMPTABLE' | 'DUMPSTATS' | 'RESETSTATS' | 'HASH'}]|[{'STALLREPORTTHESHOLD', stallthreshold}])

DBCC lockobjectschema ('object_name')

DBCC log ([dbid[,{0|1|2|3|4}[,['lsn','[0x]x:y:z']|['numrecs',num]|['xdesid','x:y']|['extent','x:y']|['pageid','x:y']|['objid',{x,'y'}]|['logrecs',{'lop'|op}...]|['output',x,['filename','x']]...]]])

DBCC loginfo [({'database_name' | dbid})]

DBCC matview ({'PERSIST' | 'ENDPERSIST' | 'FREE' | 'USE' | 'ENDUSE'})

DBCC memobjlist [(memory object)]

DBCC memorymap

DBCC memorystatus

DBCC memospy

DBCC memusage ([IDS | NAMES], [Number of rows to output])

DBCC monitorevents ('sink' [, 'filter-expression'])

DBCC newalloc - please use checkalloc instead

DBCC no_textptr (table_id , max_inline)

DBCC opentran [({'dbname'| dbid})] [WITH TABLERESULTS[,NO_INFOMSGS]]

DBCC outputbuffer (spid)

DBCC page ( {'dbname' | dbid}, filenum, pagenum [, printopt={0|1|2|3} ][, cache={0|1} ])

DBCC perflog

DBCC perfmon

DBCC pglinkage (dbid, startfile, startpg, number, printopt={0|1|2}, targetfile, targetpg, order={1|0})

DBCC pintable (database_id, table_id)

DBCC procbuf [({'dbname' | dbid}[, {'objname' | objid}[, nbufs[, printopt = { 0 | 1 } ]]] )]

DBCC proccache

DBCC prtipage (dbid, objid, indexid [, [{{level, 0} | {filenum, pagenum}}] [,printopt]])

DBCC pss [(uid[, spid[, printopt = { 1 | 0 }]] )]

DBCC readpage ({ dbid, 'dbname' }, fileid, pageid, formatstr [, printopt = { 0 | 1} ])

DBCC rebuild_log (dbname [, filename])

DBCC renamecolumn (object_name, old_name, new_name)

DBCC resource

DBCC row_lock (dbid, tableid, set) - Not Needed

DBCC ruleoff ({ rulenum | rulestring } [, { rulenum | rulestring } ]+)

DBCC ruleon ( rulenum | rulestring } [, { rulenum | rulestring } ]+)

DBCC setcpuweight (weight)

DBCC setinstance (objectname, countername, instancename, value)

DBCC setioweight (weight)

DBCC show_statistics ('table_name', 'target_name')

DBCC showcontig (table_id | table_name [, index_id | index_name] [WITH FAST, ALL_INDEXES, TABLERESULTS [,ALL_LEVELS]])

DBCC showdbaffinity

DBCC showfilestats [(file_num)]

DBCC showoffrules

DBCC showonrules

DBCC showtableaffinity (table)

DBCC showtext ('dbname', {textpointer | {fileid, pageid, slotid [,option]}})

DBCC showweights

DBCC shrinkdatabase ({dbid | 'dbname'}, [freespace_percentage [, {NOTRUNCATE | TRUNCATEONLY}]])

DBCC shrinkdb is no longer supported. Please use shrinkdatabase instead

DBCC shrinkfile ({fileid | 'filename'}, [compress_size [, {NOTRUNCATE | TRUNCATEONLY | EMPTYFILE}]])

DBCC sqlmgrstats

DBCC sqlperf ({IOSTATS | LRUSTATS | NETSTATS | RASTATS [, CLEAR]} | {THREADS} | {LOGSPACE})

DBCC stackdump [( {uid[, spid[, ecid]} | {threadId, 'THREADID'}] )]

DBCC tab ( dbid, objid )

DBCC tape_control {'query' | 'release'}[,('\\.\tape')]

DBCC tec [( uid[, spid[, ecid]] )]

DBCC textall [({'database_name'|database_id}[, 'FULL' | FAST] )]

DBCC textalloc ({'table_name'|table_id}[, 'FULL' | FAST])

DBCC thaw_io (db)

DBCC traceoff [( tracenum [, tracenum ... ] )]

DBCC traceon [( tracenum [, tracenum ... ] )]

DBCC tracestatus (trace# [, ...trace#])

DBCC unpintable (dbid, table_id)

DBCC updateusage ({'database_name'| 0} [, 'table_name' [, index_id]]) [WITH [NO_INFOMSGS] [,] COUNT_ROWS]

DBCC upgradedb (db)

DBCC usagegovernor (command, value)

DBCC useplan [(number_of_plan)]

DBCC useroptions

DBCC wakeup (spid)

DBCC writepage ({ dbid, 'dbname' }, fileid, pageid, offset, length, data)
