Wednesday, February 23, 2011

Wait Events

http://www.scribd.com/doc/3321687/09-enqueues

SQL*Net message from client

The server process (foreground process) waits for a message from the client process to arrive.




db file scattered read

The db file scattered Oracle metric event signifies that the user process is reading buffers into the SGA buffer cache and is waiting for a physical I/O call to return.

A db file scattered read issues a scatter-read to read the data into multiple discontinuous memory locations. A scattered read is usually a multiblock read. It can occur for a fast full scan (of an index) in addition to a full table scan.

* db file sequential read—A single-block read (i.e., index fetch by ROWID)

* db file scattered read—A multiblock read (a full-table scan, OPQ, sorting)
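As a quick check, the cumulative cost of these two read events can be pulled from v$system_event (a sketch; the time_waited_micro column is available from 10g):

```sql
-- Compare single-block vs multiblock read waits since instance startup
SELECT event,
       total_waits,
       time_waited_micro / 1e6 AS seconds_waited,
       time_waited_micro / GREATEST(total_waits, 1) / 1000 AS avg_wait_ms
FROM   v$system_event
WHERE  event IN ('db file sequential read', 'db file scattered read');
```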


read by other session
read by other session occurs when two sessions need access to the same block of data. The first session reads the data from disk and places it in the buffer cache; the second session must wait for the first session's operation to complete, so it is placed into a wait. This is when the read by other session wait occurs. Unfortunately, this is one of those events we need to "catch in the act" to resolve properly.
http://www.rampant-books.com/art_read_by_other_session.htm
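To catch it in the act, the contended file# and block# (P1/P2 for this event) can be mapped to a segment (a sketch; the join against dba_extents can be slow on large databases):

```sql
-- Find the segment behind an in-flight 'read by other session' wait
SELECT w.p1 AS file_id, w.p2 AS block_id,
       e.owner, e.segment_name, e.segment_type
FROM   v$session_wait w,
       dba_extents    e
WHERE  w.event = 'read by other session'
AND    e.file_id = w.p1
AND    w.p2 BETWEEN e.block_id AND e.block_id + e.blocks - 1;
```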



log file sync
When a user session commits, the session's redo information needs to be flushed to the redo logfile. The user session will post the LGWR to write the log buffer to the redo log file. When the LGWR has finished writing, it will post the user session.

log file parallel write
Writing redo records to the redo log files from the log buffer.


db file parallel write
The db file parallel write Oracle metric occurs when the process, typically DBWR, has issued multiple I/O requests in parallel to write dirty blocks from the buffer cache to disk, and is waiting for all requests to complete.




PX Deq Credit: send blkd

A parallel execution process (or the query coordinator) is waiting for a flow-control credit before it can send a message to another PX process; the receiver has not yet consumed the previous message.



direct path read
direct path read waits occur when a session reads blocks from disk directly into its PGA, bypassing the buffer cache; classically this is seen during a parallel full scan.



enq: RO - fast object reuse

This enqueue is typically waited on when a session dropping or truncating an object must wait for DBWR to flush the object's dirty buffers from the cache before the space can be reused.



Buffer Busy Waits
A buffer busy wait occurs if multiple processes want to access a buffer in the buffer cache concurrently.
The main way to reduce buffer busy waits is to reduce the total I/O on the system. This can be done by tuning the SQL to access rows with fewer block reads (i.e., by adding indexes). Even if we have a huge db_cache_size, we may still see buffer busy waits, and increasing the buffer size won't help.


The most common remedies for high buffer busy waits include database writer (DBWR) contention tuning, adding freelists to a table and index, implementing Automatic Segment Storage Management (ASSM, a.k.a. bitmap freelists), and, of course, adding a missing index to reduce buffer touches.
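v$waitstat shows which class of block the waits are against, which points at the right remedy (a sketch):

```sql
-- Buffer busy waits by block class: 'segment header' suggests freelist
-- contention, 'data block' suggests hot blocks or missing indexes
SELECT class, count, time
FROM   v$waitstat
ORDER  BY count DESC;
```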






rdbms ipc message

The background processes (LGWR, DBWR, LMS0) use this event to indicate that they are idle and are waiting for the foreground processes to send them an IPC message to do some work.




Streams AQ: waiting for messages in the queue

The session is waiting on an empty OLTP queue (Advanced Queuing) for a message to arrive so that the session can dequeue that message.





library cache lock
Oracle's library cache is nothing more than an area in memory, specifically one of three parts inside the shared pool. The library cache is composed of shared SQL areas, PL/SQL packages and procedures, various locks & handles, and in the case of a shared server configuration, stores private SQL areas. Whenever an application wants to execute SQL or PL/SQL (collectively called code), that code must first reside inside Oracle's library cache. When applications run and reference code, Oracle will first search the library cache to see if that code already exists in memory.


1. situation
A library cache lock/pin wait happens when an object is pinned in memory (being executed or compiled) and another session wants to operate on it (compilation, GRANT, etc.).
2. situation
The first session runs a long DML operation, and a second session then attempts DDL (e.g., ALTER TABLE) on the same object.
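Sessions currently stuck on these events are easy to spot in v$session (a sketch for 10g, where the wait event columns are part of v$session):

```sql
-- Who is waiting on library cache locks/pins right now
SELECT sid, serial#, username, event, seconds_in_wait, blocking_session
FROM   v$session
WHERE  event IN ('library cache lock', 'library cache pin');
```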



Time Model Statistics

The goal of a DBA is to make the DB time number as low as possible for any given time period. DBAs already try to reduce this number by eliminating wait events, but now we have a bit more incentive to reduce DB time by tuning SQL, applications, architecture, database design, instance layout, etc., realizing that if we can produce a result set faster, then DB time will also be reduced.

AWR Sections

The AWR report is broken into multiple parts.

1) Instance information:
This provides the instance name and number, the snapshot IDs, the total time the report covers, and the database time during this elapsed time.

Elapsed time = end snapshot time - start snapshot time
Database time = work done by the database during the elapsed time (both CPU and I/O add to database time). If this is much smaller than the elapsed time, the database is relatively idle. Database time does not include time spent by the background processes.
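The same DB time figure can be computed from the AWR repository itself (a sketch using dba_hist_sys_time_model; :begin_snap and :end_snap are the report's snapshot IDs):

```sql
-- DB time (seconds) accumulated between two AWR snapshots
SELECT (e.value - b.value) / 1e6 AS db_time_seconds
FROM   dba_hist_sys_time_model b,
       dba_hist_sys_time_model e
WHERE  b.stat_name = 'DB time'
AND    e.stat_name = 'DB time'
AND    b.dbid            = e.dbid
AND    b.instance_number = e.instance_number
AND    b.snap_id         = :begin_snap
AND    e.snap_id         = :end_snap;
```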

2)Cache Sizes : This shows the size of each SGA region after AMM has changed them. This information
can be compared to the original init.ora parameters at the end of the AWR report.

3) Load Profile: This important section shows key rates expressed in units of per second and per transaction. It is very important for understanding how the instance is behaving, and has to be compared to a baseline report to understand the expected load on the machine and the delta during bad times.

4) Instance Efficiency Percentages (Target 100%): This section shows how close the vital ratios are to ideal: buffer cache hit, library cache hit, parses, etc. These can be taken as indicators, but low values should not by themselves be a cause for worry, since the ratios can be low or high depending on database activity rather than a real performance problem. Hence these are not stand-alone statistics and should be read only for a high-level view.

5)Shared Pool Statistics: This summarizes changes to the shared pool during the snapshot
period.

6) Top 5 Timed Events: This is the section most relevant for analysis. It shows what percentage of database time each wait event accounted for. Up to 9i, this was the way to back-track the total database time for the report, as there was no database time column in 9i.

7) RAC Statistics: This part is seen only in the case of a cluster instance. It provides important indications of the average time taken for block transfers, block receives, messages, etc., which can point to performance problems in the cluster rather than the database.

8) Wait Class: This depicts which wait class was the area of contention and where we need to focus: network, concurrency, cluster, I/O, application, configuration, etc.

9)Wait Events Statistics Section: This section shows a breakdown of the main wait events in the
database including foreground and background database wait events as well as time model, operating
system, service, and wait classes statistics.

10)Wait Events: This AWR report section provides more detailed wait event information for foreground
user processes which includes Top 5 wait events and many other wait events that occurred during
the snapshot interval.

11)Background Wait Events: This section is relevant to the background process wait events.

12) Time Model Statistics: Time model statistics report how database processing time is spent. This section contains detailed timing information on the particular components participating in database processing, including background process timings, which are not part of database time.

13) Operating System Statistics: This section is important from the OS/server contention point of view. It shows the main external resources including I/O, CPU, memory, and network usage.

14) Service Statistics: The service statistics section gives information about services and their load in terms of CPU seconds, I/O seconds, number of buffer reads, etc.

15)SQL Section: This section displays top SQL, ordered by important SQL execution metrics.

a)SQL Ordered by Elapsed Time: Includes SQL statements that took significant execution
time during processing.

b)SQL Ordered by CPU Time: Includes SQL statements that consumed significant CPU time
during its processing.

c)SQL Ordered by Gets: These SQLs performed a high number of logical reads while
retrieving data.

d)SQL Ordered by Reads: These SQLs performed a high number of physical disk reads while
retrieving data.

e)SQL Ordered by Parse Calls: These SQLs experienced a high number of reparsing operations.

f)SQL Ordered by Sharable Memory: Includes SQL statements cursors which consumed a large
amount of SGA shared pool memory.

g)SQL Ordered by Version Count: These SQLs have a large number of versions in shared pool
for some reason.

16)Instance Activity Stats: This section contains statistical information describing how the database
operated during the snapshot period.

17) I/O Section: This section shows the all-important I/O activity. It provides the time it took to perform one I/O (Av Rd(ms)) and the I/O rate (Av Rd/s). This should be compared to the baseline to see whether the rate of I/O has always been like this or there is a deviation now.

18)Advisory Section: This section show details of the advisories for the buffer, shared pool, PGA and
Java pool.

19)Buffer Wait Statistics: This important section shows buffer cache waits statistics.

20)Enqueue Activity: This important section shows how enqueue operates in the database. Enqueues are
special internal structures which provide concurrent access to various database resources.

21)Undo Segment Summary: This section gives a summary about how undo segments are used by the database.
Undo Segment Stats: This section shows detailed history information about undo segment activity.

22) Latch Activity: This section shows details about latch statistics. Latches are a lightweight serialization mechanism used to single-thread access to internal Oracle structures. A latch should be judged by its sleeps: the sleepiest latch is the one under contention, not the latch with the most requests. Hence run through the sleep-breakdown part of this section to find the latch under the highest contention.

23) Segment Section: This portion is important for guessing in which segment, and which segment type, the contention could be. Tally this with the top 5 wait events.

Segments by Logical Reads: Includes top segments which experienced high number of
logical reads.

Segments by Physical Reads: Includes top segments which experienced high number of disk
physical reads.

Segments by Buffer Busy Waits: These segments have the largest number of buffer waits
caused by their data blocks.

Segments by Row Lock Waits: Includes segments that had a large number of row locks on
their data.

Segments by ITL Waits: Includes segments that had a large contention for Interested
Transaction List (ITL). The contention for ITL can be reduced by increasing INITRANS storage
parameter of the table.
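For example (table and index names are illustrative; note that ALTER TABLE ... INITRANS only affects newly formatted blocks, so existing blocks need a rebuild):

```sql
-- Raise INITRANS to reduce ITL waits on a hot segment
ALTER TABLE orders INITRANS 5;
-- Rewrite existing blocks so they pick up the new setting
ALTER TABLE orders MOVE;
-- Indexes are left UNUSABLE by the move and must be rebuilt
ALTER INDEX orders_pk REBUILD;
```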

24)Dictionary Cache Stats: This section exposes details about how the data dictionary cache is
operating.

25) Library Cache Activity: Includes library cache statistics, which are needed in case you see the library cache in the top 5 wait events. You might want to see whether reloads/invalidations are causing the contention or there is some other issue with the library cache.

26) SGA Memory Summary: This tells us the difference in the respective pools at the start and end of the report. It can be an indicator for setting a minimum value for each pool when sga_target is in use.

27)init.ora Parameters: This section shows the original init.ora parameters for the instance during
the snapshot period.

Tuesday, February 22, 2011

delete listener services in windows

how to delete listener services in windows

regedt32 --> hkey_local_machine --> system --> currentcontrolset --> services --> oracle, and delete it with the Delete key.

Sunday, February 20, 2011

RMAN :Restore Different Server, Database folder on different Directory, Backup Piece on different location

Scenario
Restore on Different Server, Database folders are on different Directory, Backup Piece on different location
Previous Server Prod, E:\snsd1011\, E:\archive1011\
New Server UAT D:\snsd1011\, D:\archive1011\

-----install Oracle server 10.2.0.3 without creating a starter database

------
mkdir D:\oracle\product\10.2.0\admin\sns1011\adump
mkdir D:\oracle\product\10.2.0\admin\sns1011\bdump
mkdir D:\oracle\product\10.2.0\admin\sns1011\cdump
mkdir D:\oracle\product\10.2.0\admin\sns1011\dpdump
mkdir D:\oracle\product\10.2.0\admin\sns1011\pfile
mkdir D:\oracle\product\10.2.0\admin\sns1011\udump
mkdir D:\archive1011\sns1011\arch
copy initsns1011.ora, tnsnames.ora and listener.ora to the destination location and change the parameters accordingly.

------
D:\>
oradim -new -sid sns1011 -SRVC OracleServicesns1011 -intpwd oracle -MAXUSERS 5 -STARTMODE auto -PFILE D:\oracle\product\10.2.0\db_1\database\initsns1011.ORA

----
lsnrctl stop
lsnrctl start
lsnrctl services
tnsping sns1011
----
sqlplusw sys/oracle@sns1011srv as sysdba

SQL>startup nomount pfile='D:\oracle\product\10.2.0\db_1\database\initsns1011.ora';

-----
cmd
c:>
SET ORACLE_SID=sns1011
RMAN TARGET SYS/linux@SNS1011SRV
shutdown immediate;
startup nomount;

RMAN>RESTORE CONTROLFILE FROM 'D:\archive1011\SNS1011\C-3554091374-20100603-00';

RMAN > SET DBID=3554091374

alter database MOUNT;

---------
RMAN>
list backup;
CROSSCHECK backup of database;
delete backup of database;
delete expired backup;
list backup;
delete backupset 146;
CROSSCHECK backup of controlfile;
delete backup of controlfile;
CROSSCHECK archivelog all;
delete force obsolete;
delete expired archivelog all;

---------

RMAN>CATALOG START WITH 'D:\archive1011\sns1011';

or

catalog backuppiece
'D:\archive1011\sns1011\RMANBACKUP_DB_SNS1011_S_156_P_1_T_721740034',
'D:\archive1011\sns1011\RMANBACKUP_DB_SNS1011_S_156_P_2_T_721740034',
'D:\archive1011\sns1011\RMANBACKUP_DB_SNS1011_S_156_P_3_T_721740034',
'D:\archive1011\sns1011\RMANBACKUP_DB_SNS1011_S_156_P_4_T_721740034',
'D:\archive1011\sns1011\RMANBACKUP_DB_SNS1011_S_156_P_5_T_721740034',
'D:\archive1011\sns1011\RMANBACKUP_DB_SNS1011_S_156_P_6_T_721740034',
'D:\archive1011\sns1011\RMANBACKUP_DB_SNS1011_S_156_P_7_T_721740034'
;

---------

RUN{
set newname for datafile 1 TO 'D:\SNSD1011\SYSTEM01.ORA';
set newname for datafile 2 TO 'D:\SNSD1011\UNDOTBS01.ORA';
set newname for datafile 3 TO 'D:\SNSD1011\SYSAUX01.ORA';
set newname for datafile 4 TO 'D:\SNSD1011\INDX01.ORA';
set newname for datafile 5 TO 'D:\SNSD1011\USERS01.ORA';
set newname for tempfile 5 TO 'D:\SNSD1011\TEMP01.ORA';
}

---------
SQL>
alter database rename file 'e:\snsd1011\system01.ora' to 'd:\snsd1011\system01.ora';
alter database rename file 'e:\snsd1011\users01.ora' to 'd:\snsd1011\users01.ora';
alter database rename file 'e:\snsd1011\UNDOTBS01.ora' to 'd:\snsd1011\UNDOTBS01.ora';
alter database rename file 'e:\snsd1011\SYSAUX01.ora' to 'd:\snsd1011\SYSAUX01.ora';
alter database rename file 'e:\snsd1011\INDX01.ora' to 'd:\snsd1011\INDX01.ora';
alter database rename file 'e:\snsd1011\TEMP01.ora' to 'd:\snsd1011\TEMP01.ora';

alter database rename file 'e:\snsd1011\redo01.ora' to 'd:\snsd1011\redo01.ora';
alter database rename file 'e:\snsd1011\redo02.ora' to 'd:\snsd1011\redo02.ora';
alter database rename file 'e:\snsd1011\redo03.ora' to 'd:\snsd1011\redo03.ora';

------------
RESTORE DATABASE;
RECOVER DATABASE;
ALTER DATABASE OPEN RESETLOGS;

----------------------------

Restore Error to different server and different directory (RMAN ORA-01180 ORA-01110)


--------Error--------RMAN ORA-01180 ORA-01110----------

I took the database backup (on the E drive) as backup pieces and tried to restore it onto a test server (D drive), but was not able to restore.

If the controlfile has datafile locations on the E drive, the restore onto the D drive fails. If I recreate the controlfile instead, I get a DBID-mismatch error.


SET ORACLE_SID=sns6
RMAN TARGET SYS/linux@SNS1011SRV
shutdown immediate;
startup nomount;

RMAN>RESTORE CONTROLFILE FROM 'D:\archive1011\SNS1011\C-3554091374-20100603-00';
RMAN > SET DBID=3554091374

alter database MOUNT;

RMAN>
list backup;
CROSSCHECK backup of database;
delete backup of database;
delete expired backup;
list backup;
delete backupset 146;
CROSSCHECK backup of controlfile;
delete backup of controlfile;
CROSSCHECK archivelog all;
delete force obsolete;
delete expired archivelog all;
list backup;
RMAN> delete backuppiece 'E:\BACKUP\RMAN\SNS1011\RMANBACKUP_DB_SNS1011_S_157_P_1_T_721741509';

RMAN-06207: WARNING: 7 objects could not be deleted for DISK channel(s) due to mismatched status.


RMAN>CATALOG START WITH 'D:\archive1011\sns1011';

or

catalog backuppiece
'D:\archive1011\sns1011\RMANBACKUP_DB_SNS1011_S_156_P_1_T_721740034',
'D:\archive1011\sns1011\RMANBACKUP_DB_SNS1011_S_156_P_2_T_721740034',
'D:\archive1011\sns1011\RMANBACKUP_DB_SNS1011_S_156_P_3_T_721740034',
'D:\archive1011\sns1011\RMANBACKUP_DB_SNS1011_S_156_P_4_T_721740034',
'D:\archive1011\sns1011\RMANBACKUP_DB_SNS1011_S_156_P_5_T_721740034',
'D:\archive1011\sns1011\RMANBACKUP_DB_SNS1011_S_156_P_6_T_721740034',
'D:\archive1011\sns1011\RMANBACKUP_DB_SNS1011_S_156_P_7_T_721740034'
;



RUN{
set newname for datafile 1 TO 'D:\SNSD1011\SYSTEM01.ORA';
set newname for datafile 2 TO 'D:\SNSD1011\UNDOTBS01.ORA';
set newname for datafile 3 TO 'D:\SNSD1011\SYSAUX01.ORA';
set newname for datafile 4 TO 'D:\SNSD1011\INDX01.ORA';
set newname for datafile 5 TO 'D:\SNSD1011\USERS01.ORA';
set newname for tempfile 5 TO 'D:\SNSD1011\TEMP01.ORA';
}

RESTORE DATABASE;



Starting restore at 20-FEB-11
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to E:\SNSD1011\SYSTEM01.ORA
restoring datafile 00002 to E:\SNSD1011\UNDOTBS01.ORA
restoring datafile 00003 to E:\SNSD1011\SYSAUX01.ORA
restoring datafile 00004 to E:\SNSD1011\INDX01.ORA
restoring datafile 00005 to E:\SNSD1011\USERS01.ORA
channel ORA_DISK_1: reading from backup piece D:\ARCHIVE1011\SNS1011\RMANBACKUP_
DB_SNS1011_S_156_P_1_T_721740034
ORA-19870: error reading backup piece D:\ARCHIVE1011\SNS1011\RMANBACKUP_DB_SNS10
11_S_156_P_1_T_721740034
ORA-19504: failed to create file "E:\SNSD1011\USERS01.ORA"
ORA-27040: file create error, unable to create file
OSD-04002: unable to open file
O/S-Error: (OS 21) The device is not ready.
failover to previous backup

creating datafile fno=1 name=E:\SNSD1011\SYSTEM01.ORA
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 02/20/2011 20:48:10
ORA-01180: can not create datafile 1
ORA-01110: data file 1: 'E:\SNSD1011\SYSTEM01.ORA'


----------------Solution-----------------------

SQL>
alter database rename file 'e:\snsd1011\system01.ora' to 'd:\snsd1011\system01.ora';
alter database rename file 'e:\snsd1011\users01.ora' to 'd:\snsd1011\users01.ora';
alter database rename file 'e:\snsd1011\UNDOTBS01.ora' to 'd:\snsd1011\UNDOTBS01.ora';
alter database rename file 'e:\snsd1011\SYSAUX01.ora' to 'd:\snsd1011\SYSAUX01.ora';
alter database rename file 'e:\snsd1011\INDX01.ora' to 'd:\snsd1011\INDX01.ora';
alter database rename file 'e:\snsd1011\TEMP01.ora' to 'd:\snsd1011\TEMP01.ora';

alter database rename file 'e:\snsd1011\redo01.ora' to 'd:\snsd1011\redo01.ora';
alter database rename file 'e:\snsd1011\redo02.ora' to 'd:\snsd1011\redo02.ora';
alter database rename file 'e:\snsd1011\redo03.ora' to 'd:\snsd1011\redo03.ora';

----------------------

Wednesday, February 16, 2011

Data Pump

USING DATA PUMP
TABLE EXPORT IMPORT
expdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=expdpEMP_DEPT.log

impdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=impdpEMP_DEPT.log



1. The following SQL statements create a user and a directory object named dpump_dir1, and grant the necessary permissions to the user.

SQLPLUS ldbo/linux@sns0809srv as sysdba
SQL> create user dpuser identified by dpuser;
SQL> grant connect, resource to dpuser;
SQL> CREATE DIRECTORY dpump_dir1 AS 'e:\dpdirectory';
SQL> grant read, write on directory dpump_dir1 to dpuser;


$ expdp dpuser/dpuser@sns0809srv schemas=dpuser directory=dpump_dir1 dumpfile=dpuser.dmp logfile=dpuser.log
$expdp dpuser/dpuser@sns0809srv schemas=dpuser directory=dpump_dir1 dumpfile=dpuser2.dmp logfile=dpuser.log
$expdp dpuser/dpuser@TDB10G schemas=dpuser directory=dpump_dir1 parallel=4 dumpfile=dpuser_%U.dmp logfile=dpuser.log



---------------difference between traditional exp/imp and data pump-------

The main differences are listed below:
1)Expdp/Impdp access files on the server rather than on the client.

2) Expdp/Impdp operate on a group of files called a dump file set rather than on a single sequential dump file.

3) To improve performance, Expdp/Impdp use parallel execution rather than a single stream of execution.

Other advantages:
1) Ability to estimate job times
2) Ability to restart failed jobs
3) Perform fine-grained object selection
4) Monitor running jobs
5) Directly load a database from a remote instance via the network
6) Remapping capabilities
7) Improved performance using parallel executions

-------

DBMS_DATAPUMP package


SQLPLUS system/manager@TDB10G as sysdba



SQL> create user dpuser identified by dpuser;
grant connect, resource to dpuser;
CREATE DIRECTORY dpump_dir1 AS 'E:\app\kshitij';
grant read, write on directory dpump_dir1 to dpuser;

========
$ expdp dpuser/dpuser@orcl schemas=dpuser include=TABLE:\"IN (\'EMP\', \'DEPT\')\" directory=dpump_dir1 dumpfile=dpuser.dmp logfile=dpuser.log



$ expdp dpuser/dpuser@TDB10G schemas=dpuser exclude=TABLE:\"= \'EMP_DETAILS\'\" directory=dpump_dir1 dumpfile=dpuser2.dmp lo
==============
The following steps list the basic activities involved in using Data Pump API.

1. Execute DBMS_DATAPUMP.OPEN procedure to create job.

2. Define parameters for the job like adding file and filters etc.

3. Start the job.

4. Optionally monitor the job until it completes.

5. Optionally detach from job and attach at later time.

6. Optionally, stop the job

7. Restart the job that was stopped.

Example of the above steps:



DECLARE

  p_handle         NUMBER;        -- Data Pump job handle
  p_last_job_state VARCHAR2(45);  -- To keep track of job state
  p_job_state      VARCHAR2(45);
  p_status         ku$_Status;    -- The status object returned by get_status

BEGIN

  p_handle := DBMS_DATAPUMP.OPEN ('EXPORT','SCHEMA', NULL,'EXAMPLE','LATEST');

  -- Specify a single dump file for the job (using the handle just returned)
  -- and a directory object, which must already be defined and accessible
  -- to the user running this procedure.
  DBMS_DATAPUMP.ADD_FILE (p_handle,'example.dmp','DMPDIR');

  -- A metadata filter is used to specify the schema that will be exported.
  DBMS_DATAPUMP.METADATA_FILTER (p_handle,'SCHEMA_EXPR','IN (''dpuser'')');

  -- Start the job. An exception will be generated if something is not set up properly.
  DBMS_DATAPUMP.START_JOB (p_handle);

END;
/

-- The export job should now be running.

The status of the job can be checked by writing a separate procedure and capturing the errors and status until it is completed. Overall job status can also be obtained by querying “SELECT * from dba_datapump_jobs”.
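For example, a running job and its attached sessions can be checked like this (a sketch):

```sql
-- Overall state of Data Pump jobs
SELECT owner_name, job_name, operation, job_mode, state, attached_sessions
FROM   dba_datapump_jobs;
```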



=======================

Data Pump ( expdp / impdp )

Now in 10g we have total control over a running job (stop it, pause it, check it, restart it). Data Pump is a server-side technology that can transfer large amounts of data very quickly, using parallel streams to achieve maximum throughput; it can be 15-45% faster than the older import/export utilities.

Advantages using data pump

1) Ability to estimate job times
2) Ability to restart failed jobs
3) Perform fine-grained object selection
4) Monitor running jobs
5) Directly load a database from a remote instance via the network
6) Remapping capabilities
7) Improved performance using parallel executions
Note (1) You cannot export to a tape device, only to disk, and the import will only work with Oracle version 10.1 or greater.

Note (2) The expdp and impdp are command line tools and run from within the Operating System.

Use of data pump

1) Migrating databases
2) Copying databases
3) Transferring oracle databases between different operating systems
4) Backing up important tables before you change them
5) Moving database objects from one tablespace to another
6) Transporting tablespaces between databases
7) Reorganizing fragmented table data
8) Extracting the DDL for tables and other objects such as stored procedures and packages

Data Pump components

1) dbms_datapump - the main engine for driving data dictionary metadata loading and unloading
2) dbms_metadata - used to extract the appropriate metadata
3) command-line - expdp and impdp are the import/export equivalents

Data Access methods

1) Direct path

2) External table path

Direct Path
Bypasses the database buffer cache and writes beyond the high-water mark; when finished, it adjusts the high-water mark. No undo is generated, and redo can be switched off as well; there is minimal impact on users because the SGA is not used. Triggers on the tables must be disabled before use.

External Path
Uses the database buffer cache; export acts as a SELECT statement into a dump file, and during import the rows are reconstructed as INSERT statements, so the whole process is like a normal SELECT/INSERT job. Both undo and redo are generated, and a normal COMMIT is used just as a DML statement would.

Oracle will use the external path if any of the following are in use:
1) clustered tables
2) active triggers in the table
3) a single partition in a table with a global index
4) referential integrity constraints
5) domain indexes on LOB columns
6) tables with fine-grained access control enabled in the insert mode
7) tables with BFILE or opaque type columns

Data Pump files

All files will be created on the server.

1) Dump files - holds the data and metadata
2) log files - the resulting output from the data pump command
3) sql files - contain the DDL statements describing the objects included in the job, but do not contain data

Master data pump table - when using Data Pump, a master table is created within the schema running the job; it is used for controlling the Data Pump job and is removed when the job finishes.

Data Pump privileges

1) exp_full_database
2) imp_full_database


How Data Pump works


The Master Control Process (MCP), with process name DMnn, controls the whole Data Pump job (one master process per job). It performs the following:
creates jobs and controls them
creates and manages the worker processes
monitors the job and logs the progress
maintains the job state and restart information in the master table (created in the schema of the user running the job)
manages the necessary files, including the dump file set

The master process creates a master table containing the job details (state, restart info) in the schema of the user running the Data Pump job. Once the job has finished, it dumps the table contents into the data pump file and drops the table. When you import the data pump file, the table is re-created and read to verify the correct sequence in which the various database objects should be imported.

The worker processes are named DWnn and actually perform the work; a number of worker processes can run on the same job (parallelism). The worker processes update the master table with the various job statuses.

The shadow process is created when the client logs in to the Oracle server; it services Data Pump API requests and creates the job, consisting of the master table and the master process.

The client processes are the expdp and impdp commands.

Examples

Exporting database
expdp vallep/password directory=datapump full=y dumpfile=data.dmp filesize=2G parallel=2 logfile=full.log
Note: increase the parallel option based on the number of CPU's you have

Exporting schema
expdp sys/password schemas=testuser dumpfile=data.dmp logfile=schema.log

Exporting tables
expdp vallep/password tables=accounts,employees dumpfile=data.dmp content=metadata_only

Exporting a tablespace
expdp vallep/password tablespaces=users dumpfile=data.dmp logfile=tablespace.log

Importing database
impdp system/password full=y dumpfile=data.dmp nologfile=y

Importing schema change
impdp system/password schemas='HR' remap_schema='HR:HR_TEST' content=data_only

impdp system/passwd remap_schema='TEST:TEST3' tables=test log=… dumpfile=… directory=…

Other Options

directory
specifies a oracle directory object

filesize
split the dump file into specific sizes (could be used if filesystem has 2GB limit)

parfile
specify the parameter file

content
contents option can be ALL, METADATA_ONLY or DATA_ONLY

compression
compression is used by default but you can stop it

exclude/include
metadata filtering

query
selectively export table data using a SQL statement

estimate
Calculate job estimates; the valid keywords are blocks and statistics

estimate_only
Calculate job estimates without performing the export

network link
you can perform an export across a network

encryption
you can encrypt data within the data pump file

parallel
increase worker processes to increase throughput, base it on number of CPU's

remap_schema
move objects from one schema to another

remap_datafile
change the name of the datafile when moving across different systems

remap_tablespace
move from one tablespace to another

Useful Views
DBA_DATAPUMP_JOBS

summary information of all currently running data pump jobs
DBA_DATAPUMP_SESSIONS

displays the users attached to currently running data pump jobs
V$SESSION_LONGOPS

displays progress information such as totalwork, sofar (the units of work done so far), units, and opname
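A typical progress query (a sketch):

```sql
-- Percent complete for long-running operations such as Data Pump
SELECT sid, opname, sofar, totalwork,
       ROUND(sofar / totalwork * 100, 1) AS pct_done
FROM   v$session_longops
WHERE  totalwork > 0
AND    sofar <> totalwork;
```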
Privileges
IMP_FULL_DATABASE
required if using advanced features
EXP_FULL_DATABASE
required if using advanced features

DBMS_DATAPUMP package
The package dbms_datapump can be used for the following
starting/stopping/restarting a job
monitoring a job
detaching from a job

exporting

declare
  d1 number;
begin
  d1 := dbms_datapump.open('export','schema',null,'test1','latest');
  dbms_datapump.add_file(d1, 'test1.dmp', 'dmpdir');
  dbms_datapump.metadata_filter(d1, 'schema_expr', 'in (''OE'')');
  dbms_datapump.start_job(d1);
  dbms_datapump.detach(d1);
end;
/


importing
declare
  d1 number;
begin
  d1 := dbms_datapump.open('import','full',null,'test1');
  dbms_datapump.add_file(d1, 'test1.dmp', 'dmpdir');
  dbms_datapump.metadata_remap(d1, 'remap_schema', 'oe', 'hr');
  dbms_datapump.start_job(d1);
  dbms_datapump.detach(d1);
end;
/

============================
REMAP

REMAP_TABLESPACE – This allows you to easily import a table into a different tablespace from which it was originally exported. The databases have to be 10.1 or later.

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY =dpumpdir1 DUMPFILE=employees.dmp

REMAP_DATAFILE – This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it’s best to specify the parameter within a parameter file.

The parameter file, payroll.par, has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command:

> impdp username/password PARFILE=payroll.par
================

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump, there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Now Data Pump operations can take advantage of the server’s parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database). The number of parallel processes can be changed on the fly using Data Pump’s interactive command-line mode. You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).
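For instance, the degree can be changed by attaching to the running job from interactive command-line mode (the job name hr is illustrative; ATTACH, PARALLEL, and CONTINUE_CLIENT are documented interactive-mode commands):

```
> expdp username/password ATTACH=hr

Export> PARALLEL=8
Export> CONTINUE_CLIENT
```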

For best performance, you should do the following:

1. Make sure your system is well balanced across CPU, memory, and I/O.
2. Have at least one dump file for each degree of parallelism. If there aren’t enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.
3. Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.
4. For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=par_exp%U.dmp PARALLEL=4
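As mentioned above, the degree of parallelism can be changed while the job runs. A sketch of the interactive-mode session, attaching by the JOB_NAME from the export above (the worker count 8 is just an example):

> expdp username/password ATTACH=hr

Export> PARALLEL=8
Export> STATUS
Export> CONTINUE_CLIENT

PARALLEL raises (or lowers) the degree on the fly, STATUS shows the active workers, and CONTINUE_CLIENT returns the session to logging mode.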

-------

General Errors

http://www.articles.freemegazone.com/oracleErrors.php

ORA-12518: TNS:listener could not hand off client connection
ORA-04030: out of process memory when trying to allocate
ORA-00060: deadlock detected while waiting for resource
ORA-00054: resource busy and acquire with NOWAIT specified
ORA-00600: internal error code, arguments: [num], [?], [?], [?], [?], [?]
ORA-00376 / ORA-01110: recovery from lost datafile
ORA-01925: maximum of 148 enabled roles exceeded
ORA-01000: maximum open cursors exceeded
ORA-01180 / ORA-01110: RMAN recover on different location
ORA-19870 / ORA-19505 / ORA-27041: RMAN recover database with missing archive logs
ORA-12545: Connect failed because target host or object does not exist
ORA-00942: table or view does not exist
ORA-03113: end-of-file on communication channel
ORA-06502: PL/SQL: numeric or value error
ORA-04031: unable to allocate num bytes of shared memory (num, num, num)
ORA-01756: quoted string not properly terminated
ORA-29283: invalid file operation
ORA-00020: maximum number of processes num exceeded
ORA-12203: TNS:unable to connect to destination
ORA-12154: TNS:could not resolve the connect identifier specified
ORA-01017: invalid username/password; logon denied
ORA-01403: no data found


----------ORA-12518: TNS:listener could not hand off client connection------------------
ping 10.100.0.65 -t

tnsping 10.100.0.65 10

lsnrctl status

Check that the database server has enough free memory for new connections.

Check virtual memory usage.

--solution---
In this case the listener.log file had grown to 1 GB; renaming it and reloading the listener cleared the hand-off failures.

select * from v$resource_limit order by 2 desc;

Kill sniped user sessions.
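To find the sniped sessions before killing them, a query along these lines works (SNIPED is the status V$SESSION reports once a session exceeds its profile's IDLE_TIME):

select sid, serial#, username, status
from v$session
where status = 'SNIPED';

Then kill each one with: alter system kill session 'SID,SERIAL#';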


Turn On Listener Tracing
LOGGING_LISTENER = on
TRACE_LEVEL_LISTENER=16
TRACE_FILE_LISTENER=listener.trc
TRACE_DIRECTORY_LISTENER=d:\oracle\product\10.2.0\db_1\network\trace


--------------------------------ORA-04030: out of process memory when trying to allocate

Monitor sessions and per-process memory usage.


Environment:
Windows Server 2003 SP2
Oracle 10g
/PAE /3GB switches enabled
RAM: 8 GB
Instances: 6

Memory usage of oracle.exe for the current FY instance goes up to 2.5 GB.

1. Exclude the database folders from virus scanning.

2. Decrease sga_max_size for all other instances.

3. Schedule a job to kill sniped sessions.
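To see which sessions are actually consuming process memory, a sketch joining V$PROCESS to V$SESSION (the PGA columns are in bytes):

select s.sid, s.username, p.spid,
       round(p.pga_used_mem/1024/1024, 1)  as pga_used_mb,
       round(p.pga_alloc_mem/1024/1024, 1) as pga_alloc_mb
from v$process p, v$session s
where p.addr = s.paddr
order by p.pga_alloc_mem desc;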


----------------------ORA-00060: deadlock detected while waiting for resource

A deadlock is the situation where you have two, or more, Oracle "sessions" (well, transactional "states") competing for mutually locked resources. Oracle deals with deadlocks pretty much immediately by raising an exception (ORA-00060) in one of the sessions.


Trying to execute a statement, but your session was deadlocked because another session had the same resource locked. The statement(s) that you tried to execute have been rolled back.

1. You can wait a few minutes and try to re-execute the statement(s) that were rolled back.
2. You can execute a ROLLBACK and re-execute all statements since the last COMMIT was executed.



select do.object_name,
       s.row_wait_obj#, do.data_object_id, s.row_wait_file#, s.row_wait_block#, s.row_wait_row#,
       dbms_rowid.rowid_create(1, do.data_object_id, s.row_wait_file#, s.row_wait_block#, s.row_wait_row#) as row_id
from v$session s, dba_objects do
where s.sid = 543
  and s.row_wait_obj# = do.object_id;



select l1.sid, ' IS BLOCKING ', l2.sid
from v$lock l1, v$lock l2
where l1.block =1 and l2.request > 0
and l1.id1=l2.id1
and l1.id2=l2.id2;


select session_id,oracle_username,process,
decode(locked_mode,
2, 'row share',
3, 'row exclusive',
4, 'share',
5, 'share row exclusive',
6, 'exclusive', 'unknown') "Lockmode"
from V$LOCKED_OBJECT;


Session 551 is blocking 2 other sessions:

select * from v$session where sid = 551;

select * from v$lock;

select sid, serial#, status from v$session where username = 'USER';

select serial#, status from v$session where sid = 'Session id';

alter system kill session 'SID,SERIAL#';


The session should now be killed and the lock SHOULD release.

Rechecking "v$locked_object" will tell you this. If the lock does not
immediately release, there may be a rollback occuring.

To check for rollback:

select t.used_ublk
from v$transaction t, v$session s
where s.sid = 'SID'
  and t.addr = s.taddr;


-------------------------------- ORA-00054: resource busy and acquire with NOWAIT specified

Trying to execute a LOCK TABLE or SELECT FOR UPDATE command with the NOWAIT keyword but the resource was unavailable.
1. Wait and try the command again after a few minutes.
2. Execute the command without the NOWAIT keyword.



---------------------------------------------ORA-00600: internal error code, arguments: [num], [?], [?], [?], [?], [?]
ORA-600 is an internal error generated by the generic kernel code of the Oracle RDBMS software. It is different from other Oracle errors in many ways. The following is a list of these differences:

1. An ORA-600 error may or may not be displayed on the screen. Therefore, screen output should not be relied on for capturing information on this error. Information on ORA-600 errors are found in the database alert and trace files. We recommend that you check these files frequently for database errors. (See the Alert and Trace Files section for more information.)

2. Each ORA-600 error comes with a list of arguments. They are usually enclosed in square brackets and follow the error on the same line, for example:

ORA-00600 [14000][51202][1][51200][][]

Each argument has a specific meaning which can only be interpreted by an Oracle support analyst. The arguments may also change meaning from version to version, so customers are not advised to memorize them.

3. Every occurrence of an ORA-600 should be reported to Oracle Support. Unlike other errors, you can not find help text for these errors. Only Oracle technical support should diagnose and take actions to prevent or resolve damage to the database.

4. Each ORA-600 error generates a database trace file.

Possible causes

Possible causes include:

* time-outs,
* file corruption,
* failed data checks in memory; hardware, memory, or I/O errors,
* incorrectly restored files
* a SELECT FROM DUAL statement in PL/SQL within Oracle Forms (you have to use SELECT FROM SYS.DUAL instead!)

How to fix it

Contact Oracle Support with the following information:

* events that led up to the error
* the operations that were attempted that led to the error
* the conditions of the operating system and database at the time of the error
* any unusual circumstances that occurred prior to receiving the ORA-00600 message.
* contents of any trace files generated by the error
* the relevant portions of the Alert file
* in Oracle Forms PL/SQL, use SELECT FROM SYS.DUAL to access the system "dual" table

------------------ORA-00376---------ORA-01110-------recovery from lost datafile
ORA-00376: file 4 cannot be read at this time
ORA-01110: data file 4: 'D:\VIKRAM\ORADATA\TEST2\USERS01.DBF'


sql>startup
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: 'D:\ORACLE_DATA\DATAFILES\ORCL\USERS01.DBF'
RMAN> restore datafile 4;
RMAN> recover datafile 4;
RMAN> alter database open;

-----If the database is already open when datafile corruption is detected, you can recover the datafile without shutting down the database. The only additional step is to take the relevant tablespace offline before starting recovery. In this case you would perform recovery at the tablespace level.
RMAN> sql 'alter tablespace USERS offline immediate';
RMAN> recover tablespace USERS;
RMAN> sql 'alter tablespace USERS online';









-----------------------ORA-01925: maximum of 148 enabled roles exceeded ----------------

Increase the MAX_ENABLED_ROLES initialization parameter and restart the database,

or revoke unneeded roles from the user.
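A sketch of the parameter change when running from an spfile (the value 200 is an arbitrary example; MAX_ENABLED_ROLES is a static parameter, so the restart is required):

SQL> alter system set max_enabled_roles = 200 scope = spfile;
SQL> shutdown immediate
SQL> startup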
-------------------------------ORA-01000: maximum open cursors exceeded
Cause: A host language program attempted to open too many cursors. The initialization parameter OPEN_CURSORS determines the maximum number of cursors per user.
Action: Modify the program to use fewer cursors. If this error occurs often, shut down Oracle, increase the value of OPEN_CURSORS, and then restart Oracle.
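Before raising OPEN_CURSORS, it helps to see which sessions are near the limit. A sketch using V$SESSTAT (statistic name as shown in 10g):

select s.sid, s.username, st.value as open_cursors
from v$session s, v$sesstat st, v$statname sn
where sn.name = 'opened cursors current'
  and st.statistic# = sn.statistic#
  and st.sid = s.sid
order by st.value desc;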


------------RMAN recover on different location---------ORA-01180--ORA-01110--------------------------
RMAN> restore database;


ORA-01180: can not create datafile 1
ORA-01110: data file 1: 'E:\SNSD1011\SYSTEM01.ORA'

crosscheck archivelog all;
delete force obsolete;
delete expired archivelog all;
crosscheck backup of database;
delete expired backup;

RMAN> catalog backuppiece 'E:\archive1011\sns1011\0ULAP4RC_1_1';
SQL> alter database rename file 'E:\SNSD1011\REDO01.ORA' TO 'D:\SNSD1011\REDO01.ORA' ;

Each datafile needs its own file number (check V$DATAFILE for the numbers; 2-5 below are examples):

run {
set until sequence
set newname for datafile 1 to 'D:\SNSD1011\SYSTEM01.ORA' ;
set newname for datafile 2 to 'D:\SNSD1011\UNDOTBS01.ORA' ;
set newname for datafile 3 to 'D:\SNSD1011\SYSAUX01.ORA' ;
set newname for datafile 4 to 'D:\SNSD1011\INDX01.ORA' ;
set newname for datafile 5 to 'D:\SNSD1011\USERS01.ORA' ;
restore database;
switch datafile all;
recover database;
}




---------------------RMAN--------Recover Database with missing Archive Logs----------ORA-19870 ORA-19505 ORA-27041---------

I am trying to restore an old database, but because one archive log is missing I am not able to restore and recover it from RMAN.

Not able to open the database.

ORA-19870 ORA-19505 ORA-27041 OSD-04002

RMAN-00571 RMAN-00569 RMAN-03002 RMAN-06053 RMAN-06025


Solution
shutdown immediate;
Add to init.ora: _allow_resetlogs_corruption=true (an undocumented parameter; use only as a last resort, ideally under Oracle Support guidance)
startup mount;
SQL> recover database until cancel using backup controlfile;

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}

CANCEL

alter database open resetlogs;



------------------ORA-01034: Oracle not available

Oracle is not started up. Possible causes may be that either the SGA requires more space than was allocated for it or the operating-system variable pointing to the instance is improperly defined.

1. Refer to accompanying messages for possible causes and correct the problem mentioned in the other messages.
2. If Oracle has been initialized, then on some operating systems, verify that Oracle was linked correctly.
3. See the platform specific Oracle documentation.

--------------------ORA-12545: Connect failed because target host or object does not exist

The address specified is not valid, or the program being connected to does not exist.
1. Ensure the ADDRESS parameters have been entered correctly.
2. Ensure that the executable for the server exists.
3. If the protocol is TCP/IP, edit the TNSNAMES.ORA file to change the host name to a numeric IP address and try again.


------------------------ORA-00942: table or view does not exist

1. A SQL statement was executed that references a table or view that does not exist.
2. You do not have access to the table or view, or the table or view belongs to another schema and you did not reference it by the schema name.


1. If this error occurred because the table or view does not exist, you will need to create the table or view.
2. If this error occurred because you do not have access to the table or view, you will need to have the owner of the table/view, or a DBA grant you the appropriate privileges to this object.
3. If this error occurred because the table/view belongs to another schema and you didn't reference the table by the schema name, you will need to rewrite your SQL to include the schema name.

---------------ORA-03113: end-of-file on communication channel

You encountered an unexpected end-of-file on the communication channel.

1. Check for network problems and review the SQL*Net setup.
2. Look in the alert.log file for any errors.
3. Test to see whether the server process is dead and whether a trace file was generated at failure time.



------------------------------------------ORA-06502: PL/SQL: numeric or value error
The executed statement resulted in an arithmetic, numeric, string, conversion, or constraint error. Change the data, how it is manipulated, or how it is declared so that values do not violate constraints.


------------------------------------------ORA-04031: unable to allocate num bytes of shared memory (num, num, num)

Tried to use more shared memory than was available. SGA private memory has been exhausted.

1. Reduce your use of shared memory.
2. Increase the SHARED_POOL_SIZE initialization parameter in the initialization file.
3. Use the DBMS_SHARED_POOL package to pin large packages.


------------------------------------------ORA-01756: quoted string not properly terminated

A quoted string is not terminated with a single quote mark ('). Insert the closing quote and retry the statement.


------------------------------------------ORA-29283: invalid file operation

An attempt was made to read from a file or directory that does not exist, or file or directory access was denied by the operating system. Verify file and directory access privileges on the file system, and if reading, verify that the file exists.

------------------------------------------ORA-00020: maximum number of processes num exceeded

All process state objects are in use.

1. Wait a few minutes and try to re-execute the statement(s).
2. Shut down Oracle, increase the PROCESSES parameter in the initialization parameter file, and restart Oracle.
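Before raising PROCESSES, check how close the instance actually gets to the limit; V$RESOURCE_LIMIT keeps the high-water mark:

select resource_name, current_utilization, max_utilization, limit_value
from v$resource_limit
where resource_name in ('processes', 'sessions');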

------------------------------------------ORA-12203: TNS:unable to connect to destination

1. Invalid address specified or destination is not listening.
2. This error can also occur because of underlying network or network transport problems.


1. Verify that the net service name you entered was correct.
2. Verify that the ADDRESS portion of the connect descriptor which corresponds to the net service name is correct.
3. Ensure that the destination process (for example the listener) is running at the remote node.

----------------------------------------ORA-12154: TNS:could not resolve the connect identifier specified

You tried to connect to Oracle, but the service name is either missing from the TNSNAMES.ORA file or is incorrectly defined.

1. Make sure that the TNSNAMES.ORA file exists and is in the correct directory.
2. Make sure that the service name that you are connecting to is included in the TNSNAMES.ORA file and that it is correctly defined.
3. Make sure that there are no syntax errors in the TNSNAMES.ORA file. For example, if there are unmatched brackets in the file, the file will be rendered unusable.
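For reference, a minimal TNSNAMES.ORA entry looks like this (the service name, host, and SID below are taken from elsewhere in these notes and are only an example):

INS2SRV =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 10.100.0.65)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ins2)
    )
  )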




------------------------------------------ORA-01017: invalid username/password; logon denied

Logging into Oracle with an invalid username/password combination. Enter a valid username and password combination in the correct format. If the username and password are entered together, the format is: username/password



------------------------------------------ORA-01403: no data found

1. Executing a SELECT INTO statement and no rows were returned.
2. Referencing an uninitialized row in a table.
3. Reading past the end of file with the UTL_FILE package.

Terminate processing of the data.


------------------------------------------ORA-01033: ORACLE initialization or shutdown in progress

An attempt was made to log on while Oracle is being started up or shut down. Wait a few minutes, then retry the operation.




----------------------------------------ORA-01153 / ORA-01112
alter database open resetlogs;
ORA-01153: an incompatible media recovery is active
SQL> alter database recover cancel;
now
ORA-01112: media recovery not started


-------------------

http://oracle.ittoolbox.com/groups/technical-functional/oracle-db-l/ora24324-service-handle-not-initialized-ora01041-internal-error-hostdef-extension-doesnt-exist-2771079

ORA-01092: ORACLE instance terminated. Disconnection forced

----------------------------------------------

ORA-24324: service handle not initialized
ORA-01041: internal error. hostdef extension doesn't exist
ORA-24324 ORA-01041

--------
ORA-01033: ORACLE initialization or shutdown in progress
-----------------

Common Oracle error codes

ORA-00001 Unique constraint violated. (Invalid data has been rejected)

ORA-00600 Internal error (contact support)

ORA-03113 End-of-file on communication channel (Network connection lost)

ORA-03114 Not connected to ORACLE

ORA-00942 Table or view does not exist

ORA-01017 Invalid Username/Password

ORA-01031 Insufficient privileges

ORA-01034 Oracle not available (the database is down)

ORA-01403 No data found

ORA-01555 Snapshot too old (Rollback has been overwritten)


ORA-12154 TNS:could not resolve service name
ORA-12203 TNS:unable to connect to destination
ORA-12500 TNS:listener failed to start a dedicated server process
ORA-12545 TNS:name lookup failure
ORA-12560 TNS:protocol adapter error
ORA-02330 Package error raised with DBMS_SYS_ERROR.RAISE_SYSTEM_ERROR


-

Database fresh creation using cold backup (physical folder)

Database fresh creation using cold backup (physical folder) with the same database file locations

Install oracle software without database creation

mkdir C:\oracle\product\10.2.0\admin\ins2\adump
mkdir C:\oracle\product\10.2.0\admin\ins2\bdump
mkdir C:\oracle\product\10.2.0\admin\ins2\cdump
mkdir C:\oracle\product\10.2.0\admin\ins2\dpdump
mkdir C:\oracle\product\10.2.0\admin\ins2\pfile
mkdir C:\oracle\product\10.2.0\admin\ins2\udump
mkdir c:\archive1011\ins2\arch

Copy initins2.ora, tnsnames.ora, and listener.ora to the destination location and change the parameters accordingly.

Create Instance and service

oradim -new -sid ins2 -SRVC OracleServiceins2 -intpwd oracle -MAXUSERS 5 -STARTMODE auto -PFILE c:\oracle\product\10.2.0\db_1\database\initins2.ORA


lsnrctl stop
lsnrctl start


sqlplusw sys/oracle@ins2srv as sysdba

startup


-------------

-------In case of different database file location-----
From the source server:
alter database backup controlfile to trace as 'c:/controlfilereadable';

On the destination:
startup nomount
Re-create the control file with the changed datafile locations
alter database mount
alter database open

-----------------------------------------

Trace files generated in bulk in BDUMP

The same error appears in every trace file, and a new trace file is generated every minute.

Errors in the alert log: ORA-07445 ORA-12012 ORA-00604 ORA-01427 ORA-06512
-----------------------------------------
Errors in file c:\oracle\product\10.2.0\admin\ari0708\bdump\ari5_j000_5996.trc:
ORA-07445: exception encountered: core dump [ACCESS_VIOLATION] [kghalp+288] [PC:0x4F0F13E] [ADDR:0x1E] [UNABLE_TO_READ] []
ORA-12012: error on auto execute of job 1
ORA-00604: error occurred at recursive SQL level 1
ORA-01427: single-row subquery returns more than one row
ORA-06512: at line 2
-----------------------------------------
Disabling tracing in init.ora does not stop the bdump trace generation. Bdump traces are generated for reasons such as:

1. Software bugs
2. High RAM usage
3. OS limits
4. Shared pool corruption due to I/O slave errors (refer Metalink note 1404681)
5. SGA corruption (refer Metalink notes 5202899/5736850)
6. Library cache lock by the query coordinator (refer Metalink note 3271112)
7. Hardware errors
8. Oracle block corruption
9. Program errors

-----------------------------------
ORA-12012: error on auto execute of job 1

select * from dba_jobs;
select * from dba_jobs_running;
The emd_maintenance job has job_id = 1 and runs every few seconds.

-------Solution--------
sqlplus sys/oracle@ari0708srv as sysdba

exec sysman.emd_maintenance.remove_em_dbms_jobs;

--------------------------
