Sunday, March 13, 2011

RMAN Configuration Channel Allocation

Automatic channel allocation:
CONFIGURE PARALLELISM FOR AUTOMATIC CHANNELS:
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 4;
CONFIGURE AUTOMATIC CHANNEL OPTIONS:
RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT='F:\BACKUP\%U';
CONFIGURING BACKUP PIECE SIZE:
RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT='F:\BACKUP\%U' MAXPIECESIZE 500M;
CHANGE THE DEFAULT DEVICE TYPE:
RMAN> CONFIGURE DEFAULT DEVICE TYPE TO DISK;
CONFIGURATION OF EXCLUDING TABLESPACE FROM DATABASE BACKUP:
RMAN> CONFIGURE EXCLUDE FOR TABLESPACE <tablespace_name>;
NOTE: when the following command is issued, it will skip the tablespace specified in the above command. The SYSTEM tablespace cannot be excluded from a whole database backup.
RMAN> BACKUP DATABASE;
NOTE: To override the exclusion for a single backup:
RMAN> BACKUP DATABASE NOEXCLUDE;
EXCLUDING READ ONLY AND OFFLINE TABLESPACE:
RMAN> BACKUP DATABASE SKIP READONLY SKIP OFFLINE;
TO LIST THE CURRENT CONFIGURATION OF THE TARGET DATABASE:
RMAN> REPORT SCHEMA;
RETENTION POLICY:
There are two options for implementing a backup retention policy.
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
RMAN> CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
NOTE: Default is redundancy 1.
RMAN> CONFIGURE RETENTION POLICY CLEAR;
Clear means it will take the default setting (i.e. Redundancy 1)
RMAN> CONFIGURE RETENTION POLICY TO NONE;
RMAN> REPORT OBSOLETE;
RMAN> DELETE OBSOLETE;
RMAN> SHOW RETENTION POLICY;
BACKUP OPTIMIZATION:
RMAN> SHOW BACKUP OPTIMIZATION;
When backup optimization is ON, the BACKUP command skips a file if an identical backup of that file already exists.
RMAN> CONFIGURE BACKUP OPTIMIZATION ON / OFF;
CONFIGURING DUPLEXED BACKUP SET:
RMAN> CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 2;
SHOW COMMAND:
RMAN> SHOW CHANNEL;
RMAN> SHOW DEFAULT DEVICE TYPE;
RMAN> SHOW BACKUP OPTIMIZATION;
RMAN> SHOW RETENTION POLICY;
RMAN> SHOW MAXSETSIZE;
RMAN> SHOW ALL;
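The MAXSETSIZE value reported above can also be set or reset explicitly; the 2G limit below is only an illustrative value:

RMAN> CONFIGURE MAXSETSIZE TO 2G;
RMAN> CONFIGURE MAXSETSIZE CLEAR;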
LIST COMMAND:
RMAN> LIST BACKUP OF DATABASE;
RMAN> LIST BACKUP OF TABLESPACE USERS;
RMAN> LIST BACKUP OF DATAFILE 1;
RMAN> LIST COPY;
RMAN> LIST COPY OF TABLESPACE USERS;
RMAN> LIST COPY OF DATAFILE 2;
REPORT COMMAND:
RMAN> REPORT SCHEMA;
RMAN> REPORT OBSOLETE;
RMAN> REPORT NEED BACKUP DAYS 5;
RMAN> REPORT NEED BACKUP REDUNDANCY 10;
RMAN> REPORT NEED BACKUP INCREMENTAL 3 DATABASE;
Manual channel allocation:
RMAN> RUN
{
ALLOCATE CHANNEL C1 TYPE DISK
FORMAT='E:\DATAFILE3.BAK';
BACKUP DATAFILE 3;
}
RMAN> RUN
{
ALLOCATE CHANNEL C1 TYPE DISK
FORMAT='C:\BACKUP\%U','E:\BACKUP\%U','F:\BACKUP\%U';
BACKUP COPIES 3 DATAFILE 9;
}
Creating different backup pieces in different locations:
RMAN> RUN
{
ALLOCATE CHANNEL C1 TYPE DISK FORMAT='F:\BACKUP\%U';
ALLOCATE CHANNEL C2 TYPE DISK FORMAT='F:\BACKUP\%U';
ALLOCATE CHANNEL C3 TYPE DISK FORMAT='F:\BACKUP\%U';
BACKUP DATAFILE 8;
}
RMAN> RUN
{
ALLOCATE CHANNEL C1 TYPE DISK
FORMAT='F:\BACKUP\DATAFILE8_%c.BAK';
BACKUP COPIES 3 DATAFILE 8;
}
NOTE: In the FORMAT string, %c (the copy number) is mandatory when making multiple copies.
RMAN>RUN {
BACKUP AS BACKUPSET
FORMAT '/u01/db01/backup/%d_%s_%p'
DURATION 10:00 MINIMIZE LOAD
(DATABASE);
SQL 'alter system archive log current';
}
RMAN BACKUP:
RMAN> BACKUP AS BACKUPSET
FORMAT '/BACKUP/df_%d_%s_%p.bus'
TABLESPACE hr_data;
DATABASE BACKUP:
RUN
{
ALLOCATE CHANNEL C1 TYPE DISK
FORMAT='F:\BACKUP\%U';
BACKUP DATABASE;
}
TABLESPACE BACKUP:
RMAN> RUN
{
ALLOCATE CHANNEL C1 TYPE DISK
FORMAT='F:\BACKUP\%U';
BACKUP TABLESPACE USERS,UNDOTBS1;
}
DATAFILE BACKUP:
RMAN> RUN
{
ALLOCATE CHANNEL C1 TYPE DISK
FORMAT='F:\BACKUP\%U';
BACKUP DATAFILE 7,8,9;
}
ARCHIVED REDOLOG BACKUP:
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
RMAN> BACKUP TABLESPACE USERS PLUS ARCHIVELOG;
RMAN> BACKUP DATAFILE 7 PLUS ARCHIVELOG;
RMAN>BACKUP
FORMAT '/disk1/backup/ar_%t_%s_%p'
ARCHIVELOG FROM SEQUENCE=234
DELETE INPUT;
Multiplexed Backup Sets
RMAN>RUN {
ALLOCATE CHANNEL c1 DEVICE TYPE disk;
ALLOCATE CHANNEL c2 DEVICE TYPE disk;
ALLOCATE CHANNEL c3 DEVICE TYPE disk;
BACKUP
INCREMENTAL LEVEL = 0
FORMAT '%u.bak'
(DATAFILE 1 CHANNEL c1)
(DATAFILE 2 CHANNEL c2) (DATAFILE 3 CHANNEL c3);
SQL 'ALTER SYSTEM ARCHIVE LOG CURRENT';
}
RMAN>RUN {
ALLOCATE CHANNEL c1 type disk;
COPY INCREMENTAL LEVEL = 0
DATAFILE 1 tag=DF3,
ARCHIVELOG all;
}
RMAN>BACKUP COPY OF DATABASE;
RMAN>ALTER DATABASE ENABLE
BLOCK CHANGE TRACKING
USING FILE '/mydir/rman_change_track.f'
REUSE;
RMAN>RECOVER COPY OF
DATAFILE {n|'file_name'}
WITH TAG 'incr_upd_df1';
RMAN> RUN
{
ALLOCATE CHANNEL C1 TYPE DISK MAXOPENFILES=3;
BACKUP FILESPERSET 6 FORMAT 'F:\BACKUP\%U' DATABASE;
}
DUPLEXED BACKUP SET:
RMAN> RUN
{
ALLOCATE CHANNEL C1 TYPE DISK
FORMAT='C:\BACKUP\%U','C:\NEW\%U';
BACKUP COPIES 2 DATAFILE 2;
}
PARALLELIZATION OF BACKUP SETS:
RMAN> RUN
{
ALLOCATE CHANNEL C1 TYPE DISK;
ALLOCATE CHANNEL C2 TYPE DISK;
ALLOCATE CHANNEL C3 TYPE DISK;
BACKUP
FORMAT 'C:\BACKUP\%U'
(DATAFILE 1,2,3 CHANNEL C1)
(DATAFILE 4,5,6 CHANNEL C2)
(DATAFILE 7,8,9 CHANNEL C3);
}
BACKUP PIECE SIZE:
RMAN> RUN
{
ALLOCATE CHANNEL C1 TYPE DISK MAXPIECESIZE=200M;
BACKUP FORMAT 'C:\BACKUP\%U' (TABLESPACE SYSTEM);
}
ARCHIVED REDOLOG BACKUP SETS:
RMAN> BACKUP
FORMAT 'C:\BACKUP\%U'
ARCHIVELOG ALL;
RMAN> BACKUP
FORMAT 'C:\BACKUP\%U'
ARCHIVELOG ALL DELETE ALL INPUT;
CONTROLFILE SPFILE BACKUP:
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> BACKUP DATAFILE 5 INCLUDE CURRENT CONTROLFILE;
RMAN> RUN
{
ALLOCATE CHANNEL C1 TYPE DISK
FORMAT='C:\BACKUP\SPCONTROL.BAK';
BACKUP CURRENT CONTROLFILE;
}
RMAN> RUN
{
ALLOCATE CHANNEL C1 TYPE DISK
FORMAT='C:\BACKUP\SPCONTROL.BAK';
BACKUP SPFILE;
}
TAGS FOR BACKUP AND IMAGE COPY:
RMAN> RUN
{
ALLOCATE CHANNEL C1 TYPE DISK
FORMAT='C:\BACKUP\DF7.BAK';
BACKUP DATAFILE 7 TAG='DF7';
}
RMAN> RUN
{
COPY
DATAFILE 6 TO 'C:\BACKUP\DF6.BAK' TAG='DF6';
}

SQL format

column <column_name> format a35

set lines 9999
set pages 9999
set colsep "|"



------

Wednesday, March 9, 2011

Why 10.2.0.4 patch set?

bugs fixed in this patch set:

Higher CPU / Higher "cache buffer chains" latch gets / Higher "consistent gets" after truncate/Rebuild.

SYSTEM RESET of an spfile parameter can corrupt the SPFILE
/*append parallel*/ can corrupt an index

deadlock hang on "cursor: pin S wait on X" (typically from DBMS_STATS)
Wrong results with bind variables/CURSOR_SHARING

5079978 High US enqueue contention in RAC
5387030 Automatic tuning of undo_retention causes unusual extra space allocation
5439554 "buffer busy wait" timeouts with automatic undo management and in memory undo
5442919 Expired extents not being reused (ORA-30036)
5512921 Instance crash caused by SMON OERI[kcblus_1] / dump

----------------------

Sunday, March 6, 2011

Incremental Backup

RMAN Incremental Backup

Level 0 (equivalent to a full backup, but it can serve as the base for level 1 incremental backups)
backup incremental level 0 as compressed backupset database FORMAT 'D:\archive1011\%U';

  • A differential backup, which backs up all blocks changed after the most recent incremental backup at level 1 or 0

  • A cumulative backup, which backs up all blocks changed after the most recent incremental backup at level 0



backup incremental level 1 as compressed backupset database FORMAT 'D:\archive1011\%U';

Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.

cumulative incremental backup (backup after level 0)
backup cumulative incremental level 1 as compressed backupset database FORMAT 'D:\archive1011\%U';

Wednesday, February 23, 2011

Wait Events

http://www.scribd.com/doc/3321687/09-enqueues

SQL*Net message from client

The server process (foreground process) waits for a message from the client process to arrive.




db file scattered read

The db file scattered Oracle metric event signifies that the user process is reading buffers into the SGA buffer cache and is waiting for a physical I/O call to return.

A db file scattered read issues a scattered read to read the data into multiple noncontiguous memory locations. A scattered read is usually a multiblock read. It can occur for a fast full scan (of an index) in addition to a full table scan.

* db file sequential read—A single-block read (i.e., index fetch by ROWID)

* db file scattered read—A multiblock read (a full-table scan, OPQ, sorting)


read by other session
read by other session occurs when two sessions need access to the same block of data. The first session reads the data from disk and places it in the buffer cache; the second session has to wait for the first session's operation to complete, so it is placed into a wait. This is when the read by other session wait occurs. Unfortunately, this is one of those events we need to "catch in the act" to resolve properly.
http://www.rampant-books.com/art_read_by_other_session.htm
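The hot block behind a read by other session wait can be identified from the wait parameters; a common diagnostic query (a sketch):

SQL> SELECT p1 file#, p2 block#, p3 class#
     FROM v$session_wait
     WHERE event = 'read by other session';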



log file sync
When a user session commits, the session's redo information needs to be flushed to the redo logfile. The user session will post the LGWR to write the log buffer to the redo log file. When the LGWR has finished writing, it will post the user session.
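The cumulative time spent on this event since instance startup can be checked from the system-wide wait statistics (a sketch):

SQL> SELECT event, total_waits, time_waited
     FROM v$system_event
     WHERE event = 'log file sync';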

log file parallel write
Writing redo records to the redo log files from the log buffer.


db file parallel write
The db file parallel write Oracle metric occurs when the process, typically DBWR, has issued multiple I/O requests in parallel to write dirty blocks from the buffer cache to disk, and is waiting for all requests to complete.




PX Deq Credit: send blkd
A parallel execution (PX) server process waits for a message credit before it can send more data to a consumer process; sustained waits here usually mean the receiving side cannot keep up.



direct path read
direct path read waits occur when blocks are read directly into the session's PGA, bypassing the buffer cache; they are typically seen during parallel full scans.



enq: RO - fast object reuse
Typically seen when a session dropping or truncating an object waits for DBWR to write out and invalidate that object's buffers in the cache.



Buffer Busy Waits
A buffer busy wait occurs if multiple processes want to access a buffer in the buffer cache concurrently.
The main way to reduce buffer busy waits is to reduce the total I/O on the system. This can be done by tuning the SQL to access rows with fewer block reads (i.e., by adding indexes). Even if we have a huge db_cache_size, we may still see buffer busy waits, and increasing the buffer size won't help.


The most common remedies for high buffer busy waits include database writer (DBWR) contention tuning, adding freelists to a table and index, implementing Automatic Segment Space Management (ASSM, a.k.a. bitmap freelists), and, of course, adding a missing index to reduce buffer touches.
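v$waitstat breaks buffer busy waits down by block class (data block, segment header, undo header, etc.), which helps choose between these remedies (a sketch):

SQL> SELECT * FROM v$waitstat;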






rdbms ipc message

The background processes (LGWR, DBWR, LMS0) use this event to indicate that they are idle and are waiting for the foreground processes to send them an IPC message to do some work.




Streams AQ: waiting for messages in the queue

The session is waiting on an empty OLTP queue (Advanced Queuing) for a message to arrive so that the session can dequeue that message.





library cache lock
Oracle's library cache is nothing more than an area in memory, specifically one of three parts inside the shared pool. The library cache is composed of shared SQL areas, PL/SQL packages and procedures, various locks & handles, and in the case of a shared server configuration, stores private SQL areas. Whenever an application wants to execute SQL or PL/SQL (collectively called code), that code must first reside inside Oracle's library cache. When applications run and reference code, Oracle will first search the library cache to see if that code already exists in memory.


1. situation
A library cache lock/pin wait happens when an object is pinned in memory (being executed or compiled) and another session wants to operate on it (compilation, grant, etc.).
2. situation
The first session runs a long DML operation, and later a second session tries DDL (e.g. ALTER TABLE) on the same object.
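Library cache health can be checked from v$librarycache; high reloads or invalidations point to the kind of contention described above (a sketch):

SQL> SELECT namespace, gets, gethitratio, reloads, invalidations
     FROM v$librarycache;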



Time Model Statistics

The goal of a DBA would be to reduce the DB time number to be as low as possible for any given time period. Obviously DBAs constantly try to reduce this number by eliminating wait events, but now we have a bit more incentive to reduce DB time by tuning SQL, applications, architecture, database design, instance layout, etc., realizing that if we can produce a result set faster, then DB time will also be reduced.

AWR Sections

AWR report is broken into multiple parts.

1)Instance information:-
This provides the instance name, number, snapshot IDs, the total time the report covers, and the database time during that elapsed time.

Elapsed time = end snapshot time - start snapshot time
Database time = work done by the database during the elapsed time (both CPU and I/O add to database time). If this is much less than the elapsed time, the database is relatively idle. Database time does not include time spent by the background processes.
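DB time and its main components can also be read directly from the time model views (a sketch):

SQL> SELECT stat_name, value
     FROM v$sys_time_model
     WHERE stat_name IN ('DB time', 'DB CPU', 'sql execute elapsed time');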

2)Cache Sizes : This shows the size of each SGA region after AMM has changed them. This information
can be compared to the original init.ora parameters at the end of the AWR report.

3)Load Profile: This important section shows key rates expressed in units of per second and transactions per second. This is very important for understanding how the instance is behaving, and it has to be compared to a baseline report to understand the expected load on the machine and the delta during bad times.

4)Instance Efficiency Percentages (Target 100%): This section shows how close the vital ratios are, like buffer cache hit, library cache hit, parses etc. These can be taken as indicators, but should not be a cause of worry if they are low, as the ratios could be low or high based on database activity and not due to a real performance problem. Hence these are not stand-alone statistics and should be read only for a high-level view.

5)Shared Pool Statistics: This summarizes changes to the shared pool during the snapshot
period.

6)Top 5 Timed Events: This is the section most relevant for analysis. It shows what percentage of database time each wait event accounted for. Up to 9i, this was the way to backtrack the total database time for the report, as there was no database time statistic in 9i.

7)RAC Statistics: This part is seen only in case of a cluster instance. It provides important indications on the average time taken for block transfers, block receives, messages, etc., which can point to performance problems in the cluster instead of the database.

8)Wait Class: This depicts which wait class was the area of contention and where we need to focus: network, concurrency, cluster, I/O, application, configuration, etc.

9)Wait Events Statistics Section: This section shows a breakdown of the main wait events in the
database including foreground and background database wait events as well as time model, operating
system, service, and wait classes statistics.

10)Wait Events: This AWR report section provides more detailed wait event information for foreground
user processes which includes Top 5 wait events and many other wait events that occurred during
the snapshot interval.

11)Background Wait Events: This section is relevant to the background process wait events.

12)Time Model Statistics: Time model statistics report how database-processing time is spent. This section contains detailed timing information on particular components participating in database processing. It also gives timing information for the background processes, which is not included in database time.

13)Operating System Statistics: This section is important from the OS/server contention point of view. It shows the main external resources including I/O, CPU, memory, and network usage.

14)Service Statistics: The service statistics section gives information about services and their load in terms of CPU seconds, I/O seconds, number of buffer reads, etc.

15)SQL Section: This section displays top SQL, ordered by important SQL execution metrics.

a)SQL Ordered by Elapsed Time: Includes SQL statements that took significant execution
time during processing.

b)SQL Ordered by CPU Time: Includes SQL statements that consumed significant CPU time
during its processing.

c)SQL Ordered by Gets: These SQLs performed a high number of logical reads while
retrieving data.

d)SQL Ordered by Reads: These SQLs performed a high number of physical disk reads while
retrieving data.

e)SQL Ordered by Parse Calls: These SQLs experienced a high number of reparsing operations.

f)SQL Ordered by Sharable Memory: Includes SQL statements cursors which consumed a large
amount of SGA shared pool memory.

g)SQL Ordered by Version Count: These SQLs have a large number of versions in shared pool
for some reason.

16)Instance Activity Stats: This section contains statistical information describing how the database
operated during the snapshot period.

17)I/O Section: This section shows the all-important I/O activity. It provides the time it took to make one I/O (Av Rd(ms)) and the I/Os per second (Av Rd/s). This should be compared to the baseline to see whether the rate of I/O has always been like this or there is a deviation now.

18)Advisory Section: This section show details of the advisories for the buffer, shared pool, PGA and
Java pool.

19)Buffer Wait Statistics: This important section shows buffer cache waits statistics.

20)Enqueue Activity: This important section shows how enqueue operates in the database. Enqueues are
special internal structures which provide concurrent access to various database resources.

21)Undo Segment Summary: This section gives a summary about how undo segments are used by the database.
Undo Segment Stats: This section shows detailed history information about undo segment activity.

22)Latch Activity: This section shows details about latch statistics. Latches are a lightweight serialization mechanism used to single-thread access to internal Oracle structures. A latch should be judged by its sleeps: the sleepiest latch is the one under contention, not the latch with the highest number of requests. Hence run through the sleep breakdown part of this section to find the latch under highest contention.
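The sleep-ordered view of latches can be obtained directly (a sketch):

SQL> SELECT name, gets, misses, sleeps
     FROM v$latch
     ORDER BY sleeps DESC;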

23)Segment Section: This portion is important for guessing in which segment, and which segment type, the contention could be. Tally this with the Top 5 wait events.

Segments by Logical Reads: Includes top segments which experienced high number of
logical reads.

Segments by Physical Reads: Includes top segments which experienced high number of disk
physical reads.

Segments by Buffer Busy Waits: These segments have the largest number of buffer waits
caused by their data blocks.

Segments by Row Lock Waits: Includes segments that had a large number of row locks on
their data.

Segments by ITL Waits: Includes segments that had a large contention for Interested
Transaction List (ITL). The contention for ITL can be reduced by increasing INITRANS storage
parameter of the table.
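A sketch of raising INITRANS (the object names and value below are illustrative; the new setting applies to newly formatted blocks, so a table MOVE / index REBUILD may be needed for existing blocks):

SQL> ALTER TABLE scott.emp INITRANS 5;
SQL> ALTER INDEX scott.pk_emp REBUILD INITRANS 5;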

24)Dictionary Cache Stats: This section exposes details about how the data dictionary cache is
operating.

25)Library Cache Activity: Includes library cache statistics, which are needed in case you see library cache waits in the Top 5 events. You might want to see whether reloads/invalidations are causing the contention or there is some other issue with the library cache.

26)SGA Memory Summary: This tells us the difference in the respective pools at the start and end of the report. This can be an indicator for setting a minimum value for each pool when sga_target is being used.

27)init.ora Parameters: This section shows the original init.ora parameters for the instance during
the snapshot period.

Tuesday, February 22, 2011

delete listener services in windows

how to delete listener services in windows

regedt32-->hkey_local_machine-->system-->currentcontrolset-->services-->oracle and delete it by delete key.

Sunday, February 20, 2011

RMAN :Restore Different Server, Database folder on different Directory, Backup Piece on different location

Scenario
Restore on Different Server, Database folders are on different Directory, Backup Piece on different location
Previous server (Prod): E:\snsd1011\, E:\archive1011\
New server (UAT): D:\snsd1011\, D:\archive1011\

-----install Oracle server 10.2.0.3 without creating a starter database

------
mkdir D:\oracle\product\10.2.0\admin\sns1011\adump
mkdir D:\oracle\product\10.2.0\admin\sns1011\bdump
mkdir D:\oracle\product\10.2.0\admin\sns1011\cdump
mkdir D:\oracle\product\10.2.0\admin\sns1011\dpdump
mkdir D:\oracle\product\10.2.0\admin\sns1011\pfile
mkdir D:\oracle\product\10.2.0\admin\sns1011\udump
mkdir D:\archive1011\sns1011\arch
copy initsns1011.ora, tnsnames.ora and listener.ora to the destination location and change the parameters accordingly.

------
D:\>
oradim -new -sid sns1011 -SRVC OracleServicesns1011 -intpwd oracle -MAXUSERS 5 -STARTMODE auto -PFILE D:\oracle\product\10.2.0\db_1\database\initsns1011.ORA

----
lsnrctl stop
lsnrctl start
lsnrctl services
tnsping sns1011
----
sqlplusw sys/oracle@sns1011srv as sysdba

SQL>startup nomount pfile='D:\oracle\product\10.2.0\db_1\database\initsns6.ora';

-----
cmd
c:>
SET ORACLE_SID=sns6
RMAN TARGET SYS/linux@SNS1011SRV
shutdown immediate;
startup nomount;

RMAN>RESTORE CONTROLFILE FROM 'D:\archive1011\SNS1011\C-3554091374-20100603-00';

RMAN > SET DBID=3554091374

alter database MOUNT;

---------
RMAN>
list backup;
CROSSCHECK backup of database;
delete backup of database;
delete expired backup;
list backup;
delete backupset 146;
CROSSCHECK backup of controlfile;
delete backup of controlfile;
CROSSCHECK archivelog all;
delete force obsolete;
delete expired archivelog all;

---------

RMAN>CATALOG START WITH 'D:\archive1011\sns1011';

or

catalog backuppiece
'D:\archive1011\sns1011\RMANBACKUP_DB_SNS1011_S_156_P_1_T_721740034',
'D:\archive1011\sns1011\RMANBACKUP_DB_SNS1011_S_156_P_2_T_721740034',
'D:\archive1011\sns1011\RMANBACKUP_DB_SNS1011_S_156_P_3_T_721740034',
'D:\archive1011\sns1011\RMANBACKUP_DB_SNS1011_S_156_P_4_T_721740034',
'D:\archive1011\sns1011\RMANBACKUP_DB_SNS1011_S_156_P_5_T_721740034',
'D:\archive1011\sns1011\RMANBACKUP_DB_SNS1011_S_156_P_6_T_721740034',
'D:\archive1011\sns1011\RMANBACKUP_DB_SNS1011_S_156_P_7_T_721740034'
;

---------

RUN{
set newname for datafile 1 TO 'D:\SNSD1011\SYSTEM01.ORA';
set newname for datafile 2 TO 'D:\SNSD1011\UNDOTBS01.ORA';
set newname for datafile 3 TO 'D:\SNSD1011\SYSAUX01.ORA';
set newname for datafile 4 TO 'D:\SNSD1011\INDX01.ORA';
set newname for datafile 5 TO 'D:\SNSD1011\USERS01.ORA';
set newname for tempfile 5 TO 'D:\SNSD1011\TEMP01.ORA';
}

---------
SQL>
alter database rename file 'e:\snsd1011\system01.ora' to 'd:\snsd1011\system01.ora';
alter database rename file 'e:\snsd1011\users01.ora' to 'd:\snsd1011\users01.ora';
alter database rename file 'e:\snsd1011\UNDOTBS01.ora' to 'd:\snsd1011\UNDOTBS01.ora';
alter database rename file 'e:\snsd1011\SYSAUX01.ora' to 'd:\snsd1011\SYSAUX01.ora';
alter database rename file 'e:\snsd1011\INDX01.ora' to 'd:\snsd1011\INDX01.ora';
alter database rename file 'e:\snsd1011\TEMP01.ora' to 'd:\snsd1011\TEMP01.ora';

alter database rename file 'e:\snsd1011\redo01.ora' to 'd:\snsd1011\redo01.ora';
alter database rename file 'e:\snsd1011\redo02.ora' to 'd:\snsd1011\redo02.ora';
alter database rename file 'e:\snsd1011\redo03.ora' to 'd:\snsd1011\redo03.ora';

------------
RESTORE DATABASE;
RECOVER DATABASE;
ALTER DATABASE OPEN RESETLOGS;

----------------------------

Restore Error to different server and different directory (RMAN ORA-01180 ORA-01110)


--------Error--------RMAN ORA-01180 ORA-01110----------

I have taken a database backup (location: E drive) as backup pieces and tried to restore it onto a test server (D drive), but the restore fails.

If the controlfile has datafile locations on the E drive, the restore fails on the D drive.
If I recreate the controlfile, I get a DBID mismatch error.


SET ORACLE_SID=sns6
RMAN TARGET SYS/linux@SNS1011SRV
shutdown immediate;
startup nomount;

RMAN>RESTORE CONTROLFILE FROM 'D:\archive1011\SNS1011\C-3554091374-20100603-00';
RMAN > SET DBID=3554091374

alter database MOUNT;

RMAN>
list backup;
CROSSCHECK backup of database;
delete backup of database;
delete expired backup;
list backup;
delete backupset 146;
CROSSCHECK backup of controlfile;
delete backup of controlfile;
CROSSCHECK archivelog all;
delete force obsolete;
delete expired archivelog all;
list backup;
RMAN> delete backuppiece 'E:\BACKUP\RMAN\SNS1011\RMANBACKUP_DB_SNS1011_S_157_P_1_T_721741509';

RMAN-06207: WARNING: 7 objects could not be deleted for DISK channel(s) due to mismatched status.


RMAN>CATALOG START WITH 'D:\archive1011\sns1011';

or

catalog backuppiece
'D:\archive1011\sns1011\RMANBACKUP_DB_SNS1011_S_156_P_1_T_721740034',
'D:\archive1011\sns1011\RMANBACKUP_DB_SNS1011_S_156_P_2_T_721740034',
'D:\archive1011\sns1011\RMANBACKUP_DB_SNS1011_S_156_P_3_T_721740034',
'D:\archive1011\sns1011\RMANBACKUP_DB_SNS1011_S_156_P_4_T_721740034',
'D:\archive1011\sns1011\RMANBACKUP_DB_SNS1011_S_156_P_5_T_721740034',
'D:\archive1011\sns1011\RMANBACKUP_DB_SNS1011_S_156_P_6_T_721740034',
'D:\archive1011\sns1011\RMANBACKUP_DB_SNS1011_S_156_P_7_T_721740034'
;



RUN{
set newname for datafile 1 TO 'D:\SNSD1011\SYSTEM01.ORA';
set newname for datafile 2 TO 'D:\SNSD1011\UNDOTBS01.ORA';
set newname for datafile 3 TO 'D:\SNSD1011\SYSAUX01.ORA';
set newname for datafile 4 TO 'D:\SNSD1011\INDX01.ORA';
set newname for datafile 5 TO 'D:\SNSD1011\USERS01.ORA';
set newname for tempfile 5 TO 'D:\SNSD1011\TEMP01.ORA';
}

RESTORE DATABASE;



Starting restore at 20-FEB-11
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to E:\SNSD1011\SYSTEM01.ORA
restoring datafile 00002 to E:\SNSD1011\UNDOTBS01.ORA
restoring datafile 00003 to E:\SNSD1011\SYSAUX01.ORA
restoring datafile 00004 to E:\SNSD1011\INDX01.ORA
restoring datafile 00005 to E:\SNSD1011\USERS01.ORA
channel ORA_DISK_1: reading from backup piece D:\ARCHIVE1011\SNS1011\RMANBACKUP_
DB_SNS1011_S_156_P_1_T_721740034
ORA-19870: error reading backup piece D:\ARCHIVE1011\SNS1011\RMANBACKUP_DB_SNS10
11_S_156_P_1_T_721740034
ORA-19504: failed to create file "E:\SNSD1011\USERS01.ORA"
ORA-27040: file create error, unable to create file
OSD-04002: unable to open file
O/S-Error: (OS 21) The device is not ready.
failover to previous backup

creating datafile fno=1 name=E:\SNSD1011\SYSTEM01.ORA
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 02/20/2011 20:48:10
ORA-01180: can not create datafile 1
ORA-01110: data file 1: 'E:\SNSD1011\SYSTEM01.ORA'


----------------Solution-----------------------

SQL>
alter database rename file 'e:\snsd1011\system01.ora' to 'd:\snsd1011\system01.ora';
alter database rename file 'e:\snsd1011\users01.ora' to 'd:\snsd1011\users01.ora';
alter database rename file 'e:\snsd1011\UNDOTBS01.ora' to 'd:\snsd1011\UNDOTBS01.ora';
alter database rename file 'e:\snsd1011\SYSAUX01.ora' to 'd:\snsd1011\SYSAUX01.ora';
alter database rename file 'e:\snsd1011\INDX01.ora' to 'd:\snsd1011\INDX01.ora';
alter database rename file 'e:\snsd1011\TEMP01.ora' to 'd:\snsd1011\TEMP01.ora';

alter database rename file 'e:\snsd1011\redo01.ora' to 'd:\snsd1011\redo01.ora';
alter database rename file 'e:\snsd1011\redo02.ora' to 'd:\snsd1011\redo02.ora';
alter database rename file 'e:\snsd1011\redo03.ora' to 'd:\snsd1011\redo03.ora';

----------------------

Wednesday, February 16, 2011

Data Pump

USING DATA PUMP
TABLE EXPORT IMPORT
expdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=expdpEMP_DEPT.log

impdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=impdpEMP_DEPT.log



1. The following SQL statements create a user, create a directory object named dpump_dir1, and grant the required permissions to the user.

SQLPLUS ldbo/linux@sns0809srv as sysdba
SQL> create user dpuser identified by dpuser;
SQL> grant connect, resource to dpuser;
SQL> CREATE DIRECTORY dpump_dir1 AS 'e:\dpdirectory';
SQL> grant read, write on directory dpump_dir1 to dpuser;


$ expdp dpuser/dpuser@sns0809srv schemas=dpuser directory=dpump_dir1 dumpfile=dpuser.dmp logfile=dpuser.log
$expdp dpuser/dpuser@sns0809srv schemas=dpuser directory=dpump_dir1 dumpfile=dpuser2.dmp logfile=dpuser.log
$expdp dpuser/dpuser@TDB10G schemas=dpuser directory=dpump_dir1 parallel=4 dumpfile=dpuser_%U.dmp logfile=dpuser.log



---------------difference between traditional exp/imp and data pump-------

The main differences are listed below:
1)Expdp/Impdp access files on the server rather than on the client.

2)Expdp/Impdp operate on a group of files called a dump file set rather than on a single sequential dump file.

3)To improve performance, Impdp/Expdp use parallel execution rather than a single stream of execution.
Other advantages:
1) Ability to estimate job times
2) Ability to restart failed jobs
3) Perform fine-grained object selection
4) Monitor running jobs
5) Directly load a database from a remote instance via the network
6) Remapping capabilities
7) Improved performance using parallel executions
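For example (point 2), a stopped or failed job can be re-attached and restarted from the command line (the job name below is illustrative):

$ expdp dpuser/dpuser attach=SYS_EXPORT_SCHEMA_01
Export> STATUS
Export> START_JOB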

-------

DBMS_DATAPUMP package


SQLPLUS system/manager@TDB10G as sysdba



SQL> create user dpuser identified by dpuser;
grant connect, resource to dpuser;
CREATE DIRECTORY dpump_dir1 AS 'E:\app\kshitij';
grant read, write on directory dpump_dir1 to dpuser;

========
$ expdp dpuser/dpuser@orcl schemas=dpuser include=TABLE:\"IN (\'EMP\', \'DEPT\')\" directory=dpump_dir1 dumpfile=dpuser.dmp logfile=dpuser.log



$expdp dpuser/dpuser@TDB10G schemas=dpuser exclude=TABLE:\"= \'EMP_DETAILS\'\" directory=dpump_dir1 dumpfile=dpuser2.dmp logfile=dpuser.log
==============
The following steps list the basic activities involved in using Data Pump API.

1. Execute DBMS_DATAPUMP.OPEN procedure to create job.

2. Define parameters for the job like adding file and filters etc.

3. Start the job.

4. Optionally monitor the job until it completes.

5. Optionally detach from job and attach at later time.

6. Optionally, stop the job

7. Restart the job that was stopped.

Example of the above steps:



DECLARE
P_handle NUMBER;                -- Data Pump job handle
P_last_job_state VARCHAR2(45);  -- To keep track of job state
P_job_state VARCHAR2(45);
P_status ku$_Status;            -- The status object returned by get_status
BEGIN
P_handle := DBMS_DATAPUMP.OPEN ('EXPORT','SCHEMA', NULL,'EXAMPLE','LATEST');



-- Specify a single dump file for the job (using the handle just returned)

-- and a directory object, which must already be defined and accessible

-- to the user running this procedure



DBMS_DATAPUMP.ADD_FILE (p_handle,'example.dmp','DMPDIR');



-- A metadata filter is used to specify the schema that will be exported.



DBMS_DATAPUMP.METADATA_FILTER (p_handle,'SCHEMA_EXPR','IN (''dpuser'')');



-- Start the job. An exception will be generated if something is not set up

-- Properly.



DBMS_DATAPUMP.START_JOB (p_handle);
END;
/

-- The export job should now be running.

The status of the job can be checked by writing a separate procedure and capturing the errors and status until it is completed. Overall job status can also be obtained by querying "SELECT * FROM dba_datapump_jobs".



=======================

Data Pump ( expdp / impdp )

Now in 10g we can have total control over the job running (stop it, pause it, check it, restart it). Data pump is a server side technology and it can transfer large amounts of data very quickly using parallel streams to achieve maximum throughput, they can be 15-45% faster than the older import/export utilities.

Advantages using data pump

1) Ability to estimate job times
2) Ability to restart failed jobs
3) Perform fine-grained object selection
4) Monitor running jobs
5) Directly load a database from a remote instance via the network
6) Remapping capabilities
7) Improved performance using parallel executions
Note (1): You can only export to disk, not to a tape device, and import only works with Oracle version 10.1 or greater.

Note (2): expdp and impdp are command-line tools and are run from the operating system.

Use of data pump

1) Migrating databases
2) Copying databases
3) Transferring Oracle databases between different operating systems
4) Backing up important tables before you change them
5) Moving database objects from one tablespace to another
6) Transporting tablespaces between databases
7) Reorganizing fragmented table data
8) Extracting the DDL for tables and other objects such as stored procedures and packages

Data Pump components

1) dbms_datapump - the main engine for driving data dictionary metadata loading and unloading
2) dbms_metadata - used to extract the appropriate metadata
3) command-line - expdp and impdp are the import/export equivalents

Data Access methods

1) Direct path

2) External table path

Direct Path
Bypasses the database buffer cache and writes beyond the high-water mark; when finished it adjusts the high-water mark. No undo is generated, and redo can be switched off as well. There is minimal impact on users since it does not use the SGA. Triggers on the tables must be disabled before use.
External Table Path
Uses the database buffer cache and acts like a SELECT statement into a dump file; during import it reconstructs the data as INSERT statements, so the whole process is like a normal SELECT/INSERT job. Both undo and redo are generated, and a normal COMMIT is used just as with a DML statement.
Oracle will use the external table path if any of the following are in use:
1) clustered tables
2) active triggers in the table
3) a single partition in a table with a global index
4) referential integrity constraints
5) domain indexes on LOB columns
6) tables with fine-grained access control enabled in the insert mode
7) tables with BFILE or opaque type columns

Data Pump files

All files will be created on the server.

1) Dump files - holds the data and metadata
2) log files - the resulting output from the data pump command
3) sql files - contain the DDL statements describing the objects included in the job, but do not contain data

Master data pump table - when using Data Pump, it creates a master table within the schema running the job; this table is used for controlling the Data Pump job and is removed when the job finishes.

Data Pump privileges

1) exp_full_database 2) imp_full_database


How Data Pump works


The Master Control Process (MCP) has the process name DMnn. Only one master process runs per job, controlling the whole Data Pump job, and it performs the following:
creates the job and controls it
creates and manages the worker processes
monitors the job and logs the progress
maintains the job state and restart information in the master table (created in the schema of the user running the job)
manages the necessary files, including the dump file set
The master process creates a master table which contains job details (state, restart info); this table is created in the schema of the user running the Data Pump job. Once the job has finished, it dumps the table contents into the data pump file and deletes the table. When you import the data pump file, it re-creates the table and reads it to verify the correct sequence in which the various database objects should be imported.
The worker process is named DWnn and is the process that actually performs the work; you can have a number of worker processes running on the same job (parallelism). The worker processes update the master table with the various job statuses.
The shadow process is created when the client logs in to the Oracle server; it services Data Pump API requests and creates the job, consisting of the master table and the master process.
The client processes are the expdp and impdp commands.

Examples

Exporting database
expdp vallep/password directory=datapump full=y dumpfile=data.dmp filesize=2G parallel=2 logfile=full.log
Note: increase the parallel option based on the number of CPUs you have

Exporting schema
expdp sys/password schemas=testuser dumpfile=data.dmp logfile=schema.log

Exporting tables
expdp vallep/password tables=accounts,employees dumpfile=data.dmp content=metadata_only

Exporting tablespaces
expdp vallep/password tablespaces=users dumpfile=data.dmp logfile=tablespace.log

Importing database
impdp system/password full=y dumpfile=data.dmp nologfile=y

Importing with a schema remap
impdp system/password schemas='HR' remap_schema='HR:HR_TEST' content=data_only

impdp system/passwd remap_schema='TEST:TEST3' tables=test logfile=… dumpfile=… directory=…

Other Options

directory
specifies an Oracle directory object

filesize
split the dump file into specific sizes (could be used if filesystem has 2GB limit)

parfile
specify the parameter file

content
contents option can be ALL, METADATA_ONLY or DATA_ONLY

compression
compression is used by default but you can stop it

exclude/include
metadata filtering

query
selectively export table data using a SQL statement

estimate
Calculate job estimates, where the valid keywords are BLOCKS and STATISTICS

estimate_only
Calculate job estimates without performing the export

network_link
you can perform an export across the network

encryption
you can encrypt data within the data pump file

parallel
increase worker processes to increase throughput, base it on number of CPU's

remap_schema
move objects from one schema to another

remap_datafile
change the name of the datafile when moving across different systems

remap_tablespace
move from one tablespace to another
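The options above combine as key=value arguments on a single expdp/impdp command line (or in a parfile). As a rough sketch, the hypothetical helper below shows how a set of options maps onto that command line; build_expdp_args is illustrative only and not part of any Oracle tool, though the option names themselves are real expdp parameters.

```python
# Sketch: render a dict of Data Pump options as expdp key=value arguments.
# build_expdp_args is a hypothetical helper for illustration; the option names
# (directory, dumpfile, parallel, content) are real expdp parameters.

def build_expdp_args(options):
    """Turn {'parallel': 4, ...} into ['parallel=4', ...]."""
    return ["{}={}".format(key, value) for key, value in options.items()]

opts = {
    "directory": "dpump_dir1",
    "dumpfile": "par_exp%u.dmp",
    "parallel": 4,
    "content": "METADATA_ONLY",
}
print("expdp username/password " + " ".join(build_expdp_args(opts)))
```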

Useful Views
DBA_DATAPUMP_JOBS

summary information of all currently running data pump jobs
DBA_DATAPUMP_SESSIONS

displays the users currently running data pump jobs
V$SESSION_LONGOPS

displays progress information such as TOTALWORK, SOFAR (the units of work done so far), UNITS and OPNAME
Privileges
IMP_FULL_DATABASE
required if using advanced features
EXP_FULL_DATABASE
required if using advanced features

DBMS_DATAPUMP package
The package dbms_datapump can be used for the following
starting/stopping/restarting a job
monitoring a job
detaching from a job

exporting

declare
  d1 number;
begin
  d1 := dbms_datapump.open('export','schema',null, 'test1', 'latest');
  dbms_datapump.add_file(d1, 'test1.dmp', 'dmpdir');
  dbms_datapump.metadata_filter(d1, 'schema_expr', 'in (''OE'')');
  dbms_datapump.start_job(d1);
  dbms_datapump.detach(d1);
end;
/


importing
declare
  d1 number;
begin
  d1 := dbms_datapump.open('import','full',null, 'test1');
  dbms_datapump.add_file(d1, 'test1.dmp', 'dmpdir');
  dbms_datapump.metadata_remap(d1, 'remap_schema', 'oe', 'hr');
  dbms_datapump.start_job(d1);
  dbms_datapump.detach(d1);
end;
/

============================
REMAP

REMAP_TABLESPACE – This allows you to easily import a table into a different tablespace from which it was originally exported. The databases have to be 10.1 or later.

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY =dpumpdir1 DUMPFILE=employees.dmp

REMAP_DATAFILE – This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

The parameter file, payroll.par, has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command:

> impdp username/password PARFILE=payroll.par
================

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump, there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database). The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).

For best performance, you should do the following:

1. Make sure your system is well balanced across CPU, memory, and I/O.
2. Have at least one dump file for each degree of parallelism. If there aren’t enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.
3. Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.
4. For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=par_exp%u.dmp PARALLEL=4
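The rule in step 2 above (at least one dump file per degree of parallelism) can be sketched as a quick sanity check. The function below is a hypothetical illustration only; in practice Data Pump itself expands the %U template to generate file names as needed.

```python
# Sketch: check that the dump file count matches the parallel degree, per
# best-practice #2 above. dumpfiles_sufficient is a hypothetical helper; the
# real %U substitution variable lets Data Pump create files automatically.

def dumpfiles_sufficient(dumpfile_count, parallel):
    """Each worker needs its own dump file to avoid contention on one file."""
    return dumpfile_count >= parallel

# Names a %U template such as par_exp%u.dmp might expand to for PARALLEL=4:
names = ["par_exp{:02d}.dmp".format(n) for n in range(1, 5)]
print(names)
print(dumpfiles_sufficient(len(names), 4))
```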

-------

General Errors

http://www.articles.freemegazone.com/oracleErrors.php

ORA-12518: TNS:listener could not hand off client connection
ORA-04030: out of process memory when trying to allocate
ORA-00060: deadlock detected while waiting for resource
ORA-00054: resource busy and acquire with NOWAIT specified
ORA-00600: internal error code, arguments: [num], [?], [?], [?], [?], [?]
ORA-00376 / ORA-01110: recovery from lost datafile
ORA-01925: maximum of 148 enabled roles exceeded
ORA-01000: maximum open cursors exceeded
ORA-01180 / ORA-01110: RMAN recover on different location
ORA-19870 / ORA-19505 / ORA-27041: RMAN recover database with missing archive logs
ORA-12545: Connect failed because target host or object does not exist
ORA-00942: table or view does not exist
ORA-03113: end-of-file on communication channel
ORA-06502: PL/SQL: numeric or value error
ORA-04031: unable to allocate num bytes of shared memory num, num, num
ORA-01756: quoted string not properly terminated
ORA-29283: invalid file operation
ORA-00020: maximum number of processes num exceeded
ORA-12203: TNS:unable to connect to destination
ORA-12154: TNS:could not resolve the connect identifier specified
ORA-01017: invalid username/password; logon denied
ORA-01403: no data found


----------ORA-12518: TNS:listener could not hand off client connection------------------
ping 10.100.0.65 -t

tnsping 10.100.0.65 10

lsnrctl status

Check that the database server has enough memory for new connections.

Check virtual memory.

--solution---
The listener.log file had grown to 1 GB in size; rename or truncate it.

select * from v$resource_limit order by 2 desc;
Kill sniped user sessions.


Turn On Listener Tracing
LOGGING_LISTENER = on
TRACE_LEVEL_LISTENER=16
TRACE_FILE_LISTENER=listener.trc
TRACE_DIRECTORY_LISTENER=d:\oracle\product\10.2.0\db_1\network\trace


--------------------------------ORA-04030: out of process memory when trying to allocate

Do session, memory monitoring


Windows Server 2003 SP2
Oracle 10g
/PAE /3GB
RAM 8GB
instances:6

Memory usage of oracle.exe for the current FY goes up to 2.5 GB.

1) Exclude database folders from virus scanning.

2) Decrease sga_max_size for all other instances.

3) Schedule a job to kill sniped sessions.


----------------------ORA-00060: deadlock detected while waiting for resource

A deadlock is the situation where you have two, or more, Oracle "sessions" (well, transactional "states") competing for mutually locked resources. Oracle deals with deadlocks pretty much immediately by raising an exception (ORA-00060) in one of the sessions.


Trying to execute a statement, but your session was deadlocked because another session had the same resource locked. The statement(s) that you tried to execute have been rolled back.

1. You can wait a few minutes and try to re-execute the statement(s) that were rolled back.
2. You can execute a ROLLBACK and re-execute all statements since the last COMMIT was executed.



select do.object_name,
row_wait_obj#, do.data_object_id, row_wait_file#, row_wait_block#, row_wait_row#,
dbms_rowid.rowid_create ( 1, do.data_object_id, ROW_WAIT_FILE#, ROW_WAIT_BLOCK#, ROW_WAIT_ROW# )
from v$session s, dba_objects do
where sid=543
and s.ROW_WAIT_OBJ# = do.OBJECT_ID ;



select l1.sid, ' IS BLOCKING ', l2.sid
from v$lock l1, v$lock l2
where l1.block =1 and l2.request > 0
and l1.id1=l2.id1
and l1.id2=l2.id2;


select session_id,oracle_username,process,
decode(locked_mode,
2, 'row share',
3, 'row exclusive',
4, 'share',
5, 'share row exclusive',
6, 'exclusive', 'unknown') "Lockmode"
from V$LOCKED_OBJECT;


Session 551 is blocking 2 other sessions

select * from v$session where sid='551';

select * from v$lock;

select sid, serial#,status from v$session where username = 'USER';

select serial#,status from v$session where sid='Session id';

alter system kill session 'SID,SERIAL#';


The session should now be killed and the lock SHOULD release.

Rechecking v$locked_object will tell you this. If the lock does not
release immediately, a rollback may be occurring.

To check for rollback:

select used_ublk from v$transaction where ADDR=;


-------------------------------- ORA-00054: resource busy and acquire with NOWAIT specified

Trying to execute a LOCK TABLE or SELECT FOR UPDATE command with the NOWAIT keyword but the resource was unavailable.
1. Wait and try the command again after a few minutes.
2. Execute the command without the NOWAIT keyword.



---------------------------------------------ORA-00600: internal error code, arguments: [num], [?], [?], [?], [?], [?]
ORA-600 is an internal error generated by the generic kernel code of the Oracle RDBMS software. It is different from other Oracle errors in many ways. The following is a list of these differences:

1. An ORA-600 error may or may not be displayed on the screen. Therefore, screen output should not be relied on for capturing information on this error. Information on ORA-600 errors are found in the database alert and trace files. We recommend that you check these files frequently for database errors. (See the Alert and Trace Files section for more information.)

2. Each ORA-600 error comes with a list of arguments. They are usually enclosed in square brackets and follow the error on the same line, for example:

ORA-00600 [14000][51202][1][51200][][]

Each argument has a specific meaning which can only be interpreted by an Oracle support analyst. The arguments may also change meaning from version to version therefore customers are not advised to memorize them.

3. Every occurrence of an ORA-600 should be reported to Oracle Support. Unlike other errors, you cannot find help text for these errors. Only Oracle technical support should diagnose and take actions to prevent or resolve damage to the database.

4. Each ORA-600 error generates a database trace file.

Possible causes

Possible causes include:

* time-outs
* file corruption
* failed data checks in memory; hardware, memory, or I/O errors
* incorrectly restored files
* a SELECT FROM DUAL statement in PL/SQL within Oracle Forms (you have to use SELECT FROM SYS.DUAL instead!)

How to fix it

Contact Oracle Support with the following information:

* events that led up to the error
* the operations that were attempted that led to the error
* the conditions of the operating system and database at the time of the error
* any unusual circumstances that occurred prior to receiving the ORA-00600 message.
* contents of any trace files generated by the error
* the relevant portions of the Alert file
* in Oracle Forms PL/SQL, use SELECT FROM SYS.DUAL to access the system "dual" table

------------------ORA-00376---------ORA-01110-------recovery from lost datafile
ORA-00376: file 4 cannot be read at this time
ORA-01110: data file 4: 'D:\VIKRAM\ORADATA\TEST2\USERS01.DBF'


sql>startup
ORA-01157: cannot identify/lock data file 4 – see DBWR trace file
ORA-01110: data file 4: ‘D:\ORACLE_DATA\DATAFILES\ORCL\USERS01.DBF’
RMAN> restore datafile 4;
RMAN> recover datafile 4;
RMAN> alter database open;

-----If the database is already open when datafile corruption is detected, you can recover the datafile without shutting down the database. The only additional step is to take the relevant tablespace offline before starting recovery. In this case you would perform recovery at the tablespace level.
RMAN> sql 'alter tablespace USERS offline immediate';
RMAN> recover tablespace USERS;
RMAN> sql 'alter tablespace USERS online';









-----------------------ORA-01925: maximum of 148 enabled roles exceeded ----------------

Increase max_enabled_roles and warm start the database.

or revoke roles from user
-------------------------------ORA-01000: maximum open cursors exceeded
Cause: A host language program attempted to open too many cursors. The initialization parameter OPEN_CURSORS determines the maximum number of cursors per user.
Action: Modify the program to use fewer cursors. If this error occurs often, shut down Oracle, increase the value of OPEN_CURSORS, and then restart Oracle.


------------RMAN recover on different location---------ORA-01180--ORA-01110--------------------------
RMAN> restore database;


ORA-01180: can not create datafile 1
ORA-01110: data file 1: 'E:\SNSD1011\SYSTEM01.ORA'

crosscheck backup;
delete force obsolete;
delete expired archivelog all;
crosscheck backup of database;
delete expired backup;

RMAN> catalog backuppiece 'E:\archive1011\sns1011\0ULAP4RC_1_1';
SQL> alter database rename file 'E:\SNSD1011\REDO01.ORA' TO 'D:\SNSD1011\REDO01.ORA' ;

run {
set until sequence
set newname for datafile 1 to 'D:\SNSD1011\SYSTEM01.ORA';
set newname for datafile 2 to 'D:\SNSD1011\UNDOTBS01.ORA';
set newname for datafile 3 to 'D:\SNSD1011\SYSAUX01.ORA';
set newname for datafile 4 to 'D:\SNSD1011\INDX01.ORA';
set newname for datafile 5 to 'D:\SNSD1011\USERS01.ORA';
restore database;
switch datafile all;
recover database;
}




---------------------RMAN--------Recover Database with missing Archive Logs----------ORA-19870 ORA-19505 ORA-27041---------

I am trying to restore my old database, but because one archive log is missing I am not able to restore and recover from RMAN.

Not able to open database

ORA-19870 ORA-19505 ORA-27041 OSD-04002

RMAN-00571 RMAN-00569 RMAN-03002 RMAN-06053 RMAN-06025


Solution
shutdown immediate;
Add _allow_resetlogs_corruption=true to the init.ora file.
startup mount;
SQL> recover database until cancel using backup controlfile;

Specify log: {=suggested | filename | AUTO | CANCEL}

CANCEL

alter database open resetlogs;



------------------ORA-01034: Oracle not available

Oracle is not started up. Possible causes may be that either the SGA requires more space than was allocated for it or the operating-system variable pointing to the instance is improperly defined.

1. Refer to accompanying messages for possible causes and correct the problem mentioned in the other messages.
2. If Oracle has been initialized, then on some operating systems, verify that Oracle was linked correctly.
3. See the platform specific Oracle documentation.

--------------------ORA-12545: Connect failed because target host or object does not exist

The address specified is not valid, or the program being connected to does not exist.
1. Ensure the ADDRESS parameters have been entered correctly.
2. Ensure that the executable for the server exists.
3. If the protocol is TCP/IP, edit the TNSNAMES.ORA file to change the host name to a numeric IP address and try again.


------------------------ORA-00942: table or view does not exist

1. SQL statement is executed that references a table or view that either does not exist.
2. You do not have access to the table or view, or the table or view belongs to another schema and you didn't reference the table by the schema name.


1. If this error occurred because the table or view does not exist, you will need to create the table or view.
2. If this error occurred because you do not have access to the table or view, you will need to have the owner of the table/view, or a DBA grant you the appropriate privileges to this object.
3. If this error occurred because the table/view belongs to another schema and you didn't reference the table by the schema name, you will need to rewrite your SQL to include the schema name.

---------------ORA-03113: end-of-file on communication channel

You encountered an unexpected end-of-file on the communication channel.

1. Check for network problems and review the SQL*Net setup.
2. Look in the alert.log file for any errors.
3. Test to see whether the server process is dead and whether a trace file was generated at failure time.



------------------------------------------ORA-06502: PL/SQL: numeric or value error
The executed statement resulted in an arithmetic, numeric, string, conversion, or constraint error. Change the data, how it is manipulated, or how it is declared so that values do not violate constraints.


------------------------------------------ORA-04031: unable to allocate num bytes of shared memory num, num, num
You tried to use more shared memory than was available; SGA private memory has been exhausted.

1. Reduce your use of shared memory.
2. Increase the SHARED_POOL_SIZE initialization parameter in the initialization file.
3. Use the DBMS_SHARED_POOL package to pin large packages.


------------------------------------------ORA-01756: quoted string not properly terminated
A quoted string is not terminated with a single quote mark ('). Insert the closing quote and retry the statement.


------------------------------------------ORA-29283: invalid file operation
An attempt was made to read from a file or directory that does not exist, or file or directory access was denied by the operating system. Verify file and directory access privileges on the file system, and if reading, verify that the file exists.

------------------------------------------ORA-00020: maximum number of processes num exceeded
All process state objects are in use.

1. Wait a few minutes and try to re-execute the statement(s).
2. Shut down Oracle, increase the PROCESSES parameter in the initialization parameter file, and restart Oracle.

------------------------------------------ORA-12203: TNS:unable to connect to destination

1. Invalid address specified or destination is not listening.
2. This error can also occur because of underlying network or network transport problems.


1. Verify that the net service name you entered was correct.
2. Verify that the ADDRESS portion of the connect descriptor which corresponds to the net service name is correct.
3. Ensure that the destination process (for example the listener) is running at the remote node.

----------------------------------------ORA-12154: TNS:could not resolve the connect identifier specified

You tried to connect to Oracle, but the service name is either missing from the TNSNAMES.ORA file or is incorrectly defined.

1. Make sure that the TNSNAMES.ORA file exists and is in the correct directory.
2. Make sure that the service name that you are connecting to is included in the TNSNAMES.ORA file and that it is correctly defined.
3. Make sure that there are no syntax errors in the TNSNAMES.ORA file. For example, if there are unmatched brackets in the file, the file will be rendered unusable.




------------------------------------------ORA-01017: invalid username/password; logon denied
You logged into Oracle with an invalid username/password combination. Enter a valid username and password combination in the correct format; if the username and password are entered together, the format is username/password.



------------------------------------------ORA-01403: no data found

1. Executing a SELECT INTO statement and no rows were returned.
2. Referencing an uninitialized row in a table.
3. Reading past the end of file with the UTL_FILE package.

Terminate processing of the data.


------------------------------------------ORA-01033: ORACLE initialization or shutdown in progress
An attempt was made to log on while Oracle is being started up or shut down. Wait a few minutes, then retry the operation.




----------------------------------------
alter database open resetlogs;
ORA-01153: an incompatible media recovery is active
SQL> alter database recover cancel;
Now:
ORA-01112: media recovery not started


-------------------

http://oracle.ittoolbox.com/groups/technical-functional/oracle-db-l/ora24324-service-handle-not-initialized-ora01041-internal-error-hostdef-extension-doesnt-exist-2771079

ORA-01092: ORACLE instance terminated. Disconnection forced

----------------------------------------------

ORA-24324: service handle not initialized
ORA-01041: internal error. hostdef extension doesn't exist
ORA-24324 ORA-01041

--------
ORA-01033: ORACLE initialization or shutdown in progress
-----------------

Common Oracle error codes

ORA-00001 Unique constraint violated. (Invalid data has been rejected)

ORA-00600 Internal error (contact support)

ORA-03113 End-of-file on communication channel (Network connection lost)

ORA-03114 Not connected to ORACLE

ORA-00942 Table or view does not exist

ORA-01017 Invalid Username/Password

ORA-01031 Insufficient privileges

ORA-01034 Oracle not available (the database is down)

ORA-01403 No data found

ORA-01555 Snapshot too old (Rollback has been overwritten)


ORA-12154 TNS:could not resolve service name
ORA-12203 TNS:unable to connect to destination
ORA-12500 TNS:listener failed to start a dedicated server process
ORA-12545 TNS:name lookup failure
ORA-12560 TNS:protocol adapter error
ORA-02330 Package error raised with DBMS_SYS_ERROR.RAISE_SYSTEM_ERROR



Database fresh creation using a cold backup (physical folder)

Database fresh creation using a cold backup (physical folder) with the same database file locations

Install the Oracle software without creating a database.

mkdir C:\oracle\product\10.2.0\admin\ins2\adump
mkdir C:\oracle\product\10.2.0\admin\ins2\bdump
mkdir C:\oracle\product\10.2.0\admin\ins2\cdump
mkdir C:\oracle\product\10.2.0\admin\ins2\dpdump
mkdir C:\oracle\product\10.2.0\admin\ins2\pfile
mkdir C:\oracle\product\10.2.0\admin\ins2\udump
mkdir c:\archive1011\ins2\arch

Copy initins2.ora, tnsnames.ora and listener.ora to the destination location and change the parameters accordingly.

Create Instance and service

oradim -new -sid ins2 -SRVC OracleServiceins2 -intpwd oracle -MAXUSERS 5 -STARTMODE auto -PFILE c:\oracle\product\10.2.0\db_1\database\initins2.ORA


lsnrctl stop
lsnrctl start


sqlplusw sys/oracle@ins2srv as sysdba

startup


-------------

-------In case of a different database file location-----
On the source server:
alter database backup controlfile to trace as 'c:/controlfilereadable';

On the destination:
startup nomount
Recreate the control file with the changed database file locations.
alter database mount;
alter database open;

-----------------------------------------

Trace files generated in Bulk in BDUMP

The same error appears in every trace file, and a new trace file is generated every minute:

Error in Alert log ORA-07445 ORA-12012 ORA-00604 ORA-01427 ORA-06512
-----------------------------------------
Errors in file c:\oracle\product\10.2.0\admin\ari0708\bdump\ari5_j000_5996.trc:
ORA-07445: exception encountered: core dump [ACCESS_VIOLATION] [kghalp+288] [PC:0x4F0F13E] [ADDR:0x1E] [UNABLE_TO_READ] []
ORA-12012: error on auto execute of job 1
ORA-00604: error occurred at recursive SQL level 1
ORA-01427: single-row subquery returns more than one row
ORA-06512: at line 2
-----------------------------------------
Disabling tracing in init.ora does not stop bdump trace generation. bdump traces are generated for the following reasons:

1. Software bugs
2. High RAM usage
3. OS limits
4. Shared pool corruption due to I/O slave errors (refer to Metalink note 1404681)
5. SGA corruption (refer to Metalink notes 5202899/5736850)
6. Library cache lock by query coordinator (refer to Metalink note 3271112)
7. Hardware errors
8. Oracle block corruptions
9. Program errors

-----------------------------------
ORA-12012: error on auto execute of job 1

select * from dba_jobs;
select * from dba_jobs_running;
emd_maintenance has job_id=1 and runs every few seconds

-------Solution--------
sqlplus sys/oracle@ari0708srv as sysdba

exec sysman.emd_maintenance.remove_em_dbms_jobs;

--------------------------

Friday, December 31, 2010

ORA-04030: out of process memory when trying to allocate

Do session, memory monitoring


Windows Server 2003 SP2
Oracle 10g
/PAE /3GB
RAM 8GB
instances:6

Memory usage of oracle.exe for the current FY goes up to 2.5 GB.

1) Exclude database folders from virus scanning.

2) Decrease sga_max_size for all other instances.

3) Schedule a job to kill sniped sessions:
http://systemnetworkdatabaseadministrator.blogspot.com/2010/09/idle-session-sniped-session-kill.html

Now memory usage of oracle.exe for the current FY goes down to 2.1 GB.


-------------------------

Thursday, November 11, 2010

Clear Temporary tablespace

1)
CREATE TEMPORARY TABLESPACE temp2
TEMPFILE 'E:\SNSD1011\TEMP02.ORA' SIZE 5M REUSE
AUTOEXTEND ON NEXT 1M MAXSIZE unlimited
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

2)
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp2;

3)
DROP TABLESPACE temporary INCLUDING CONTENTS AND DATAFILES;

4)
CREATE TEMPORARY TABLESPACE temporary
TEMPFILE 'E:\SNSD1011\TEMP01.ORA' SIZE 500M REUSE
AUTOEXTEND ON NEXT 100M MAXSIZE unlimited
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

5)
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temporary;

6)
DROP TABLESPACE temp2 INCLUDING CONTENTS AND DATAFILES;
7)
SELECT tablespace_name, file_name, bytes
FROM dba_temp_files WHERE tablespace_name = 'TEMPORARY';

Wednesday, October 6, 2010

When to Rebuild an Index?

When to Rebuild an Index?

It is important to periodically examine your indexes to determine if they have become skewed and might need to be rebuilt.

When an index is skewed, parts of the index are accessed more frequently than others. As a result, disk contention may occur, creating a bottleneck in performance. Here is a sample procedure for identifying such indexes:

1. Gather statistics on your indexes. For large indexes (over one hundred thousand records in the underlying table), use ESTIMATE STATISTICS instead of COMPUTE STATISTICS.


For example:

SQL> analyze index emp_empno_pk compute statistics;
Index analyzed.

2. Run the query below to find out how skewed each index is. This query checks on all indexes that are on emp table.


select index_name, blevel,
       decode(blevel, 0,'OK BLEVEL', 1,'OK BLEVEL', 2,'OK BLEVEL', 3,'OK BLEVEL', 4,'OK BLEVEL', 'BLEVEL HIGH') OK
from user_indexes;

select * from dba_indexes where blevel > 4;



3. The BLEVEL (or branch level) is part of the B-tree index format and relates to the number of times Oracle has to narrow its search on the index while searching for a particular record. In some cases, a separate disk hit is requested for each BLEVEL. If the BLEVEL were to be more than 4, it is recommended to rebuild the index.

Note: If you do not analyze the index, the index_check.sql script will show "BLEVEL HIGH" for such an index.

4. Gather more index statistics using the VALIDATE STRUCTURE option of the ANALYZE command to populate the INDEX_STATS virtual table.



SQL> analyze index emp_empno_pk validate structure;
Index analyzed.

5. Run the following query to find out PCT_DELETED ratio.
SQL> select DEL_LF_ROWS*100/decode(LF_ROWS, 0, 1, LF_ROWS) PCT_DELETED,
2 (LF_ROWS-DISTINCT_KEYS)*100/ decode(LF_ROWS,0,1,LF_ROWS) DISTINCTIVENESS
3 from index_stats
4 where NAME='EMP_EMPNO_PK';



The PCT_DELETED column shows the percent of leaf entries (i.e. index entries) that have been deleted and remain unfilled. The more deleted entries exist on an index, the more unbalanced the index becomes. If the PCT_DELETED is 20% or higher, the index is candidate for rebuilding. If you can afford to rebuild indexes more frequently, then do so if the value is higher than 10%. Leaving indexes with high PCT_DELETED without rebuild might cause excessive redo allocation on some systems.

The DISTINCTIVENESS column shows how often a value for the column(s) of the index is repeated on average. For example, if a table has 10000 records and 9000 distinct SSN values, the formula would result in (10000-9000) x 100 / 10000 = 10. This shows a good distribution of values. If, however, the table has 10000 records and only 2 distinct SSN values, the formula would result in (10000-2) x 100 /10000 = 99.98. This shows that there are very few distinct values as a percentage of total records in the column. Such columns are not candidates for a rebuild but good candidates for bitmapped indexes.
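Both INDEX_STATS ratios above are plain arithmetic, so they can be checked outside the database. The Python sketch below mirrors the two formulas; the function names are made up for illustration, while LF_ROWS, DEL_LF_ROWS and DISTINCT_KEYS are the real INDEX_STATS columns.

```python
# Sketch of the two INDEX_STATS ratios discussed above. The function names are
# hypothetical; lf_rows, del_lf_rows and distinct_keys mirror the LF_ROWS,
# DEL_LF_ROWS and DISTINCT_KEYS columns of INDEX_STATS.

def pct_deleted(del_lf_rows, lf_rows):
    """Percent of deleted leaf entries; 20% or higher suggests a rebuild."""
    return del_lf_rows * 100 / (lf_rows or 1)  # mirrors decode(LF_ROWS,0,1,LF_ROWS)

def distinctiveness(lf_rows, distinct_keys):
    """How repetitive the key values are; very high values suggest a bitmap index."""
    return (lf_rows - distinct_keys) * 100 / (lf_rows or 1)

# The worked examples from the text:
print(distinctiveness(10000, 9000))  # 10.0  -> good distribution of values
print(distinctiveness(10000, 2))     # 99.98 -> very few distinct values
print(pct_deleted(2500, 10000))      # 25.0  -> candidate for rebuilding
```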




In general, indexes and tables should be rebuilt when they become too fragmented.
In practice, you will probably not need to do this with Oracle 10g. Fragmentation occurs on tables and indexes that undergo many structural changes (adding/removing columns) and heavy data changes (insert, update, delete).

From 10g onward, Oracle has a number of automated processes that look after database performance. One of them is the Segment Advisor, which runs automatically.
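The automatic Segment Advisor's output can be inspected through the advisor dictionary views; a minimal sketch (the task-name filter is an assumption and may vary by version):

```sql
-- Findings (shrink/reorg candidates) from automatic Segment Advisor runs.
SELECT task_name, type, message
FROM   dba_advisor_findings
WHERE  task_name LIKE 'SYS_AUTO%';
```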


-----------------------

Friday, September 24, 2010

Corrupted OEM Recreation Error

emca -deconfig dbcontrol db -repos drop

The drop can take a long time and may appear to hang the server.



OEM cannot be recreated because the SYSMAN schema already exists:
emca -config dbcontrol db -repos create

Use the following to drop and recreate instead:

emca -deconfig dbcontrol db
emca -config dbcontrol db

The following is mandatory (alternatively, restart the OracleDBConsole Windows service):
emctl stop dbconsole
emctl start dbconsole

--------------------

Wednesday, September 22, 2010

IDLE Session > Sniped Session > Kill > Release Resource

Making Idle Session SNIPED:

An idle session can be set up to become SNIPED after x minutes by setting the initialization parameter resource_limit = true and setting idle_time in the user profile. For example, you can make a user session become SNIPED after 30 minutes of idle time by running the commands below:


select * from v$parameter where name like '%resource_limit%';

select * from dba_profiles where resource_name like 'IDLE_TIME';


alter system set resource_limit=true scope=spfile;

alter profile LD_BACKOFFICE limit IDLE_TIME 30;


After 30 minutes of idle time, sessions with STATUS = 'INACTIVE' become 'SNIPED'.


--------------------Finding Sniped Idle Session--------------------------

SELECT
p.spid "Thread ID",
s.sid "Session ID",
s.serial# "Serial Num",
b.name "Background Process",
s.sql_id "SQL ID",
s.username "Session User",
s.osuser "OS User",
s.status "Status",
s.program "Program"
FROM
v$process p,
v$bgprocess b,
v$session s
WHERE
s.status = 'SNIPED'
AND
s.paddr = p.addr
AND b.paddr(+) = p.addr
AND b.name IS NULL
ORDER BY s.username ;


-----------------release resources from idle/inactive, SNIPED, or killed sessions (Windows)-------

orakill <ORACLE_SID> <spid>

(The first argument is the instance name; the second is the thread ID, taken from v$process.spid.)



-------------script---Sniped.sql--------------------------

store set c:\prevsqlsettings.sql replace

set pagesize 0

set feedback off

set trimspool on

set termout off

set verify off

spool c:\killSniped.bat

select 'orakill ' || i.instance_name || ' ' || p.spid
from v$instance i, v$process p, v$session s, v$bgprocess b
where p.addr = s.paddr
  and b.paddr(+) = p.addr
  and b.name is null
  and s.status = 'SNIPED'
  and s.username not in ('SYSTEM','SYS','DBSNMP','REPADMIN','WMSYS','TSMSYS',
                         'OUTLN','ORACLE_OCM','LDBO','SNSEXPORT','RENEXPORT','CLLVL');

spool off

host c:\killSniped.bat

@c:\prevsqlsettings

host del c:\killSniped.bat

host del c:\prevsqlsettings.sql

exit


------------Job Schedule-----sniped.bat--------------

@echo off
set oracle_sid=sns6
sqlplus ldbo/ldbo@sns1011srv @c:\sniped.sql
exit

--------------------------------------------------------Just for Knowledge-------------------------------------------



When IDLE_TIME is set in the users' profiles or the default profile, sessions that exceed it are killed in the database (their status in v$session becomes SNIPED) and they will eventually disconnect. This does not always clean up the Unix session (LOCAL=NO sessions). At that point all Oracle resources are released, but the shadow process remains and OS resources are not released. This shadow process still counts towards init.ora parameters such as PROCESSES.
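Because those shadow processes still count toward the PROCESSES limit, it is worth comparing the current process count against the parameter; a minimal sketch:

```sql
-- Current server processes versus the configured PROCESSES limit.
SELECT COUNT(*) current_processes FROM v$process;
SELECT value max_processes FROM v$parameter WHERE name = 'processes';
```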

The process is killed and its entry released from v$session only when the user next tries to do something. Another way of forcing a disconnect (if your users come in via SQL*Net) is to set the parameter SQLNET.EXPIRE_TIME in the sqlnet.ora file on the database server to force closure of dead SQL*Net sessions.


SQLNET.EXPIRE_TIME =10


sqlnet.expire_time

The sqlnet.expire_time parameter makes the database detect dead TCP connections, not idle client applications.
If you kill an idle session, you do not have a dead connection. A dead connection occurs when the client is switched off or disconnected such that it cannot send a proper FIN to close the TCP/IP connection.


Sqlnet.expire_time instructs the server to send a probe packet to the client every set number of minutes; if it finds a terminated connection, or a connection that is no longer in use, it causes the associated server process to terminate on the server.
A valid database connection that is idle responds to the probe packet, causing no action on the part of the server, whereas resource_limit snipes the session when idle_time is exceeded. The SNIPED session gets disconnected when the user (or the user process) tries to communicate with the server again.
But again, as mentioned, expire_time works globally while the idle_time profile limit works per user. You can use both to make sure that the client not only gets sniped but also gets disconnected if the user process terminates abnormally.



------------manual kill session----------------

alter system kill session '526,67';


-------------------------Last activity / work time ----------

select username,to_char(logon_time, 'DD-Mon-YYYY HH24:MI:SS') Logon_time,last_call_et,
to_char(sysdate-(last_call_et/(60*60*24)),'hh24:mi:ss') last_work_time
from v$session
where username is not null;


------------find sessions which are running for more than 2 hours----------------

SELECT
S.STATUS "Status",
S.SERIAL# "Serial#",
S.TYPE "Type",
S.USERNAME "DB User",
S.OSUSER "Client User",
S.SERVER "Server",
S.MACHINE "Machine",
S.MODULE "Module",
S.CLIENT_INFO "Client Info",
S.TERMINAL "Terminal",
S.PROGRAM "Program",
P.PROGRAM "O.S. Program",
s.logon_time "Connect Time",
lockwait "Lock Wait",
si.physical_reads "Physical Reads",
si.block_gets "Block Gets",
si.consistent_gets "Consistent Gets",
si.block_changes "Block Changes",
si.consistent_changes "Consistent Changes",
s.process "Process",
p.spid, p.pid, si.sid, s.audsid,
s.sql_address "Address", s.sql_hash_value "Sql Hash", s.Action,
sysdate - (s.LAST_CALL_ET / 86400) "Last Call"
FROM
V$SESSION S,
V$PROCESS P,
sys.V_$SESS_IO si
WHERE
S.paddr = P.addr(+)
and si.sid(+)=s.sid and S.USERNAME like 'APL%'
AND s.LAST_CALL_ET/3600 >= 2
ORDER BY 5
DESC ;


----------------Kill all user sessions except me----------------


declare
  sql_stmt varchar2(200);
  cursor c1 is
    select sid, serial# from v$session where username is not null;
begin
  for sessions in c1 loop
    sql_stmt := 'alter system kill session '''
                || sessions.sid || ',' || sessions.serial# || '''';
    dbms_output.put_line(sql_stmt);
    begin  -- sub-block so the loop continues after the error raised
           -- when it tries to kill my own session
      execute immediate sql_stmt;
    exception
      when others then null;
    end;
  end loop;
end;
/



----------------------------alter kill session --job schedule-------------------

Creating a procedure to kill idle sessions and scheduling it nightly may be the best solution:

create or replace procedure KillSession(idleTime in number)
as
  cursor c1 is
    select sid, serial#, trunc(last_call_et/60) tmin  -- total idle minutes
    from v$session
    where status = 'INACTIVE' and username not in ('SYS');
  vstring varchar2(100);
begin
  for c1rec in c1 loop
    if c1rec.tmin > idleTime then
      vstring := 'alter system kill session '
                 || chr(39) || c1rec.sid || ',' || c1rec.serial# || chr(39);
      execute immediate vstring;
    end if;
  end loop;
end;
/
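The procedure above can then be scheduled nightly with DBMS_SCHEDULER; a minimal sketch (the job name, hour, and 30-minute threshold are assumptions):

```sql
-- Run KillSession(30) every night at 02:00.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'KILL_IDLE_SESSIONS_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN KillSession(30); END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=2;BYMINUTE=0',
    enabled         => TRUE);
END;
/
```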

------------------------------

Monday, September 20, 2010

User Session Monitoring

-------------------Sniped idle session--------------


SELECT DECODE(TRUNC(SYSDATE - s.logon_time), 0, NULL,
              TRUNC(SYSDATE - s.logon_time) || ' Days' || ' + ')
       || TO_CHAR(TO_DATE(TRUNC(MOD(SYSDATE - s.logon_time, 1) * 86400), 'SSSSS'),
                  'HH24:MI:SS') LOGON_before,
       s.sid, s.serial#, p.spid OS_PROCESS, s.username, s.status,
       s.osuser, s.machine, s.program, s.module
FROM v$session s, v$process p
WHERE s.paddr = p.addr
  AND s.status = 'SNIPED'
ORDER BY s.logon_time ASC;






--------- Monitor Per Session UGA, PGA UGA Max, PGA Max Usage----------------

SELECT
s.sid sid
, lpad(s.username,12) oracle_username
, lpad(s.osuser,9) os_username
, s.program session_program
, lpad(s.machine,8) session_machine
, (select ss.value from v$sesstat ss, v$statname sn
where ss.sid = s.sid and
sn.statistic# = ss.statistic# and
sn.name = 'session pga memory') session_pga_memory
, (select ss.value from v$sesstat ss, v$statname sn
where ss.sid = s.sid and
sn.statistic# = ss.statistic# and
sn.name = 'session pga memory max') session_pga_memory_max
, (select ss.value from v$sesstat ss, v$statname sn
where ss.sid = s.sid and
sn.statistic# = ss.statistic# and
sn.name = 'session uga memory') session_uga_memory
, (select ss.value from v$sesstat ss, v$statname sn
where ss.sid = s.sid and
sn.statistic# = ss.statistic# and
sn.name = 'session uga memory max') session_uga_memory_max
FROM
v$session s
ORDER BY session_pga_memory DESC;


--------- Monitor Per Session UGA (User Global Area), PGA(Process Global Area) Usage----------------
SELECT
e.SID,
e.username,
e.status,
a.UGA_MEMORY,
b.PGA_MEMORY
FROM
(select y.SID, TO_CHAR(ROUND(y.value/1024),'99999999') || ' KB' UGA_MEMORY from v$sesstat y, v$statname z where y.STATISTIC# = z.STATISTIC# and NAME = 'session uga memory') a,
(select y.SID, TO_CHAR(ROUND(y.value/1024),'99999999') || ' KB' PGA_MEMORY from v$sesstat y, v$statname z where y.STATISTIC# = z.STATISTIC# and NAME = 'session pga memory') b,
v$session e
WHERE
e.sid=a.sid
AND
e.sid=b.sid
ORDER BY
e.status,
a.UGA_MEMORY desc;


--------- Monitor Per Session UGA, PGA and Cursor Usage----------------
set pages500 lines110 trims on
clear col
col name format a30
col username format a20
break on username nodup skip 1

select vses.username||':'||vsst.sid||','||vses.serial# username, vstt.name, max(vsst.value) value
from v$sesstat vsst, v$statname vstt, v$session vses
where vstt.statistic# = vsst.statistic# and vsst.sid = vses.sid and vstt.name in
('session pga memory','session pga memory max','session uga memory','session uga memory max',
'session cursor cache count','session cursor cache hits','session stored procedure space',
'opened cursors current','opened cursors cumulative') and vses.username is not null
group by vses.username, vsst.sid, vses.serial#, vstt.name
order by vses.username, vsst.sid, vses.serial#, vstt.name;

--------------------Determine PGA and Process Memory in use by Process------------------------

set lines 110
col unm format a30 hea "USERNAME (SID,SERIAL#)"
col pus format 999,990.9 hea "PROC KB|USED"
col pal format 999,990.9 hea "PROC KB|MAX ALLOC"
col pgu format 99,999,990.9 hea "PGA KB|USED"
col pga format 99,999,990.9 hea "PGA KB|ALLOC"
col pgm format 99,999,990.9 hea "PGA KB|MAX MEM"

select s.username||' ('||s.sid||','||s.serial#||')' unm, round((sum(m.used)/1024),1) pus,
round((sum(m.max_allocated)/1024),1) pal, round((sum(p.pga_used_mem)/1024),1) pgu,
round((sum(p.pga_alloc_mem)/1024),1) pga, round((sum(p.pga_max_mem)/1024),1) pgm
from v$process_memory m, v$session s, v$process p
where m.serial# = p.serial# and p.pid = m.pid and p.addr=s.paddr and
s.username is not null group by s.username, s.sid, s.serial# order by unm;



-------------Session I/O By User---------------------------

select nvl(ses.USERNAME,'ORACLE PROC') username,
OSUSER os_user,
PROCESS pid,
ses.SID sid,
SERIAL#,
PHYSICAL_READS,
BLOCK_GETS,
CONSISTENT_GETS,
BLOCK_CHANGES,
CONSISTENT_CHANGES
from v$session ses,
v$sess_io sio
where ses.SID = sio.SID
order by PHYSICAL_READS, ses.USERNAME;


----------------CPU Usage By Session-------------------------

select nvl(ss.USERNAME,'ORACLE PROC') username,
se.SID,
VALUE cpu_usage
from v$session ss,
v$sesstat se,
v$statname sn
where se.STATISTIC# = sn.STATISTIC#
and NAME like '%CPU used by this session%'
and se.SID = ss.SID
order by VALUE desc;

----------------Resource Usage By User-------------------------

select ses.SID,
nvl(ses.USERNAME,'ORACLE PROC') username,
sn.NAME statistic,
sest.VALUE
from v$session ses,
v$statname sn,
v$sesstat sest
where ses.SID = sest.SID
and sn.STATISTIC# = sest.STATISTIC#
and sest.VALUE is not null
and sest.VALUE != 0
order by ses.USERNAME, ses.SID, sn.NAME;

--------------Session Stats By Session-------------


select nvl(ss.USERNAME,'ORACLE PROC') username,
se.SID,
sn.NAME statistic,
VALUE usage
from v$session ss,
v$sesstat se,
v$statname sn
where se.STATISTIC# = sn.STATISTIC#
and se.SID = ss.SID
and se.VALUE > 0
order by sn.NAME, se.SID, se.VALUE desc;

----------Cursor Usage By Session-------------

select user_process username,
"Recursive Calls",
"Opened Cursors",
"Current Cursors"
from (
select nvl(ss.USERNAME,'ORACLE PROC')||'('||se.sid||') ' user_process,
sum(decode(NAME,'recursive calls',value)) "Recursive Calls",
sum(decode(NAME,'opened cursors cumulative',value)) "Opened Cursors",
sum(decode(NAME,'opened cursors current',value)) "Current Cursors"
from v$session ss,
v$sesstat se,
v$statname sn
where se.STATISTIC# = sn.STATISTIC#
and (NAME like '%opened cursors current%'
or NAME like '%recursive calls%'
or NAME like '%opened cursors cumulative%')
and se.SID = ss.SID
and ss.USERNAME is not null
group by nvl(ss.USERNAME,'ORACLE PROC')||'('||se.SID||') '
)
orasnap_user_cursors
order by USER_PROCESS,"Recursive Calls" ;


---------------------------User Hit Ratios---------------------

select USERNAME,
CONSISTENT_GETS,
BLOCK_GETS,
PHYSICAL_READS,
((CONSISTENT_GETS+BLOCK_GETS-PHYSICAL_READS) / (CONSISTENT_GETS+BLOCK_GETS)) Ratio
from v$session, v$sess_io
where v$session.SID = v$sess_io.SID
and (CONSISTENT_GETS+BLOCK_GETS) > 0
and USERNAME is not null
order by ((CONSISTENT_GETS+BLOCK_GETS-PHYSICAL_READS) / (CONSISTENT_GETS+BLOCK_GETS));





--------------------------------generate sql to kill a session. See genDisconnect.sql
accept killme prompt 'User to Kill > '
set ver off
select 'alter system disconnect session '''||s.sid||','||s.serial#||''' immediate;' as sql_to_run from V$PROCESS p, V$SESSION s
where s.paddr = p.addr and s.username = upper ('&killme');

User to Kill > UIBADUSER
SQL_TO_RUN
---------------------------------------------------------
alter system disconnect session '296,3841' immediate;

--list old sessions to "disconnect", then generate sql to disconnect them
set linesize 180
col username for a25
col osuser for a12
col serial# for 9999999
col sid for 9999
col spid for a8
col module for a10 trunc
col start_time for a20
col machine for a20 trunc
select s.osuser,p.spid,s.username,s.sid,s.serial#,to_char(s.logon_time,'Dy dd Mon HH24:MI:SS') start_time,s.status,s.machine,s.MODULE
from V$PROCESS p,V$SESSION s where s.paddr = p.addr and s.username is not null and s.status = 'INACTIVE' and s.osuser = 'oracle'
and s.machine like 'banformprod5%' and s.logon_time < sysdate-1;
set echo off feedback off heading off
select '--run sql below to disconnect sessions above' from dual;
select 'alter system disconnect session '||chr(39)||sid||','||s.serial#||chr(39)||' immediate;' as sql_to_run from V$PROCESS p,V$SESSION s
where s.paddr = p.addr and s.username is not null and s.status = 'INACTIVE' and s.osuser = 'oracle'
and s.machine like 'banformprod5%' and s.logon_time < sysdate-1;
select '' from dual;
exit

OSUSER SPID USERNAME SID SERIAL# START_TIME STATUS MACHINE MODULE
------------ -------- ------------------------- ----- -------- -------------------- -------- -------------------- ----------
oracle 11911 AWILS1 1317 54857 Thu 25 Aug 07:18:55 INACTIVE banformprod5.admin.u SHACRSE


----------------------run sql below to disconnect sessions above

alter system disconnect session '1317,54857' immediate;

------------------------show current users. See showUsers.sql
set echo off feedback off heading off
set pagesize 0
set linesize 180
col username for a25
col osuser for a12
col serial# for 9999999
col sid for 9999
col spid for a8
col module for a10 trunc
col start_time for a20
col machine for a20 trunc
select 'Show users ordered by logon_time, username' from dual;

select 'OSUSER OSPID USERNAME SID SERIAL# LOGON_TIME STATUS MACHINE MODULE' from dual;

select '------------ -------- ------------------------ ----- -------- ------------------- --------- -------------------- ----------' from dual;

select s.osuser,p.spid,s.username,s.sid,s.serial#,to_char(s.logon_time,'Dy dd Mon HH24:MI:SS') start_time,s.status,s.machine,s.MODULE
from V$PROCESS p,V$SESSION s where s.paddr = p.addr and s.username is not null order by logon_time,1;

Show users ordered by logon_time, username
OSUSER OSPID USERNAME SID SERIAL# LOGON_TIME STATUS MACHINE MODULE
------------ -------- ------------------------ ----- -------- ------------------- --------- -------------------- ----------
submit 29273 JOBRUNNER 2 19 332 Sat 20 Aug 04:01:07 ACTIVE servername.dns.netw SFRPIPE
oracle 28413 SYS 26 452 Mon 22 Aug 15:31:55 INACTIVE servername.dns.netw sqlplus@se
johnny 14223 TLJOHN 37 710 Tue 23 Aug 09:03:36 INACTIVE TREE\USER1 TOAD 7.6.0
janedoey 22345 USER01 28 1462 Tue 23 Aug 12:44:00 INACTIVE TREE\USER1 designer.e
oracle 29986 OPS$ORACLE 18 219 Tue 23 Aug 13:01:01 ACTIVE servername.dns.netw SQL*Plus



-------------------generate SQL to disconnect blocking users.
SELECT 'alter system disconnect session '''||s.sid||','||s.serial#||'''immediate;' as "run_this_sql" FROM sys.x$kglpn p, sys.v_$session s,
(select kglhdadr, kglnaobj FROM sys.x$kglob WHERE kglhdadr in
(SELECT p1raw FROM sys.v_$session_wait WHERE event = 'library cache pin' AND state = 'WAITING')) k
WHERE p.kglpnuse = s.saddr AND kglpnhdl in
(SELECT p1raw FROM sys.v_$session_wait WHERE event = 'library cache pin' AND state = 'WAITING')
and k.kglhdadr = kglpnhdl and kglpnmod=3 order by kglpnmod;

run_this_sql
--------------------------------------------------------------------------------------
alter system disconnect session '853,44482'immediate;



------------------find the number of commits currently occurring (V$SESSION.COMMAND code 44 = COMMIT).
select command, count(command) from V$SESSION where command = 44 group by command order by 1;


-----------------------show "stale" connections. showStale.sql---------------------------
set recsep off
set feedback off
set pages 1000
set wrap off
define l1='Stale Sessions (showStale.sql)'
define l2='==================================='
ttitle left l1 skip 1 left l2 skip 2
col sid_serial format a12 heading 'SID,SERIAL#'
col username format a10 heading 'User'
col osuser format a10 heading 'OS User'
col stats format a8 heading 'Status'
col program format a20 heading 'Program'
col module format a30 heading 'Module'
column last_call format 9999 heading 'Last|Call|(Days)'
select to_char(logon_time,'Dy dd Mon HH24:MI') logon_time,sid || ',' || serial# sid_serial,
username,osuser,trunc(last_call_et/60/60/24) last_call,status,program,module from V$SESSION
where type != 'BACKGROUND' and username is not null and trunc(last_call_et/60/60/24) > 1 order by 5;
exit


Last
Call
LOGON_TIME SID,SERIAL# User OS User (Days) STATUS Program Module
---------------- ------------ ---------- ---------- ------ -------- -------------------- ------------------------------







-----------------------show how users are logged on------------


select program,count(program) from V$SESSION group by program order by program;


PROGRAM COUNT(PROGRAM)
-------------------------------------------------- --------------

Wednesday, September 8, 2010

Listener.log Cleanup

-------------------
@echo off
lsnrctl stop
rename d:\oracle\product\10.2.0\db_1\network\log\listener.log listener1.log
lsnrctl start
move d:\oracle\product\10.2.0\db_1\network\log\listener1.log d:\oracle\product\10.2.0\db_1\network\log\backup

-----------------------

Or manually, using the lsnrctl utility:


C:\> lsnrctl
LSNRCTL> show log_file
LSNRCTL> set log_file listenernew.log
LSNRCTL> show log_file
LSNRCTL> exit

LOG_ARCHIVE_MAX_PROCESSES

The Oracle LOG_ARCHIVE_MAX_PROCESSES parameter specifies the number of active archiver processes (ARC0-ARCn) initially invoked by Oracle. Its value is evaluated at instance startup if the LOG_ARCHIVE_START initialization parameter is set to true.


LOG_ARCHIVE_START=true

LOG_ARCHIVE_MAX_PROCESSES


The number of log_archive_max_processes needed depends on the number of active log groups:
select * from v$log;

If you anticipate a heavy workload for archiving, such as during bulk loading of data, you can increase the maximum number of archiver processes with the LOG_ARCHIVE_MAX_PROCESSES initialization parameter. The ALTER SYSTEM statement can change the value of this parameter dynamically to increase or decrease the number of ARCn processes.
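For example, the archiver process count can be raised before a bulk load and lowered again afterwards (the values below are illustrative):

```sql
-- Temporarily raise the archiver process count for a heavy archiving window.
ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES = 4;

-- ...bulk load...

-- Return to the normal setting afterwards.
ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES = 2;
```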


Sometimes an application slows down under heavy network traffic, and the temptation is to increase the LOG_ARCHIVE_MAX_PROCESSES parameter; however, this is not always the best solution. When you face an application slowdown due to network traffic, instead of increasing LOG_ARCHIVE_MAX_PROCESSES you should first try to increase bandwidth, or decrease network round trips or exchanged bytes.



----------------
