
Friday, October 21, 2011

Data Pump EXPDP

expdp userid="'sys/oracle@ari1112srv as sysdba'" FULL=y directory=dpump_dir2 dumpfile=ari1112.DMP LOGFILE=ari1112.LOG



expdp ldbo/ldbo$123@ari1112srv directory=dpump_dir1 schemas=ldbo dumpfile=ari1112_tab1.DMP LOGFILE=ari1112_tab.LOG INCLUDE=TABLE:\"IN \(\'BILLCHARGESFILE2\',\'BILLINCLUSIVECHARGESFILE2\',\'CBILLFILE2\',\'DBILLFILE2\',\'FALEDDRCR2\',\'FALEDGERDESCRIPTION2\',\'FOCBILLFILE2\',\'FODBILLFILE2\',\'FOTRANSACTIONS2\',\'TRANSACTION2\'\)\"



expdp ldbo/ldbo$123@ari1112srv full=Y directory=dpump_dir1 dumpfile=ari1112_meta.DMP LOGFILE=ari1112_meta.LOG content=metadata_only EXCLUDE=TABLE:\"IN \(\'CLIENTSCANNEDIMAGE\',\'BILLCHARGESFILE\',\'BILLINCLUSIVECHARGESFILE\',\'CBILLFILE\',\'DBILLFILE\',\'FALEDDRCR\',\'FALEDGERDESCRIPTION\',\'FOCBILLFILE\',\'FODBILLFILE\',\'FOTRANSACTIONS\',\'TRANSACTIONS\',\'MV_CAPABILITIES_TABLE\',\'MV_RKCASHCLISCRIPTRANSACTION\',\'MV_RKCASHTRANSACTION\',\'MV_RKCLIENT\',\'MV_RKEXCHANGETURNOVER\',\'MV_RKFOCLISCRIPTRANSACTION\',\'MV_RKFOTRANSACTION\'\)\"



expdp ldbo/ldbo$123@ari1112srv directory=dpump_dir1 dumpfile=ari1112_package.DMP LOGFILE=ari1112_package.LOG INCLUDE=PACKAGE_BODIES,PACKAGE,PROCEDURE,FUNCTION

Wednesday, February 16, 2011

Data Pump

USING DATA PUMP
TABLE EXPORT IMPORT
expdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=expdpEMP_DEPT.log

impdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=impdpEMP_DEPT.log



1. The following SQL statements create a user, a directory object named dpump_dir1, and grant the required permissions to the user.

SQLPLUS ldbo/linux@sns0809srv as sysdba
SQL> create user dpuser identified by dpuser;
SQL> grant connect, resource to dpuser;
SQL> CREATE DIRECTORY dpump_dir1 AS 'e:\dpdirectory';
SQL> grant read, write on directory dpump_dir1 to dpuser;


$ expdp dpuser/dpuser@sns0809srv schemas=dpuser directory=dpump_dir1 dumpfile=dpuser.dmp logfile=dpuser.log
$ expdp dpuser/dpuser@sns0809srv schemas=dpuser directory=dpump_dir1 dumpfile=dpuser2.dmp logfile=dpuser.log
$ expdp dpuser/dpuser@TDB10G schemas=dpuser directory=dpump_dir1 parallel=4 dumpfile=dpuser_%U.dmp logfile=dpuser.log



---------------difference between traditional exp/imp and data pump-------

The main differences are listed below:
1) Expdp/Impdp access files on the server rather than on the client.

2) Expdp/Impdp operate on a group of files called a dump file set rather than on a single sequential dump file.

3) To improve performance, Expdp/Impdp use parallel execution rather than a single stream of execution.

In addition, Data Pump provides the following capabilities (an interactive-mode sketch of items 2 and 4 follows this list):
1) Ability to estimate job times
2) Ability to restart failed jobs
3) Perform fine-grained object selection
4) Monitor running jobs
5) Directly load a database from a remote instance via the network
6) Remapping capabilities
7) Improved performance using parallel execution
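
Restarting and monitoring (items 2 and 4 above) are exposed through Data Pump's interactive command-line mode. A minimal sketch follows; the job name SYS_EXPORT_SCHEMA_01 is an assumption (it is the default name Oracle gives a user's first schema-mode export job), and the connect string reuses the dpuser account from the examples above.

$ expdp dpuser/dpuser@sns0809srv attach=SYS_EXPORT_SCHEMA_01
Export> STATUS
Export> STOP_JOB=IMMEDIATE

$ expdp dpuser/dpuser@sns0809srv attach=SYS_EXPORT_SCHEMA_01
Export> START_JOB
Export> CONTINUE_CLIENT

STOP_JOB leaves the master table in place so the job can later be resumed with START_JOB; CONTINUE_CLIENT returns the session to logging mode.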

-------

DBMS_DATAPUMP package


SQLPLUS system/manager@TDB10G as sysdba



SQL> create user dpuser identified by dpuser;
SQL> grant connect, resource to dpuser;
SQL> CREATE DIRECTORY dpump_dir1 AS 'E:\app\kshitij';
SQL> grant read, write on directory dpump_dir1 to dpuser;

========
$ expdp dpuser/dpuser@orcl schemas=dpuser include=TABLE:\"IN (\'EMP\', \'DEPT\')\" directory=dpump_dir1 dumpfile=dpuser.dmp logfile=dpuser.log

$ expdp dpuser/dpuser@TDB10G schemas=dpuser exclude=TABLE:\"= \'EMP_DETAILS\'\" directory=dpump_dir1 dumpfile=dpuser2.dmp logfile=dpuser2.log
==============
The following steps list the basic activities involved in using the Data Pump API.

1. Execute the DBMS_DATAPUMP.OPEN procedure to create a job.

2. Define parameters for the job, such as adding files and filters.

3. Start the job.

4. Optionally, monitor the job until it completes.

5. Optionally, detach from the job and attach again at a later time.

6. Optionally, stop the job.

7. Restart a job that was stopped.

Example of the above steps:



DECLARE
  p_handle          NUMBER;         -- Data Pump job handle
  p_last_job_state  VARCHAR2(45);   -- To keep track of job state
  p_job_state       VARCHAR2(45);
  p_status          ku$_Status;     -- The status object returned by GET_STATUS
BEGIN
  p_handle := DBMS_DATAPUMP.OPEN('EXPORT', 'SCHEMA', NULL, 'EXAMPLE', 'LATEST');

  -- Specify a single dump file for the job (using the handle just returned)
  -- and a directory object, which must already be defined and accessible
  -- to the user running this procedure.
  DBMS_DATAPUMP.ADD_FILE(p_handle, 'example.dmp', 'DMPDIR');

  -- A metadata filter is used to specify the schema that will be exported.
  DBMS_DATAPUMP.METADATA_FILTER(p_handle, 'SCHEMA_EXPR', 'IN (''dpuser'')');

  -- Start the job. An exception is raised if something is not set up properly.
  DBMS_DATAPUMP.START_JOB(p_handle);

  -- The export job should now be running.
END;
/

The status of the job can be checked by writing a separate procedure that captures the errors and status until the job completes. Overall job status can also be obtained by querying the DBA_DATAPUMP_JOBS view.
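
A minimal monitoring query against that view might look like the following:

SQL> select owner_name, job_name, operation, job_mode, state, degree from dba_datapump_jobs;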



=======================

Data Pump ( expdp / impdp )

New in 10g, we have full control over a running job (stop it, pause it, check it, restart it). Data Pump is a server-side technology that can transfer large amounts of data very quickly, using parallel streams to achieve maximum throughput; it can be 15-45% faster than the older export/import utilities.

Advantages using data pump

1) Ability to estimate job times
2) Ability to restart failed jobs
3) Perform fine-grained object selection
4) Monitor running jobs
5) Directly load a database from a remote instance via the network
6) Remapping capabilities
7) Improved performance using parallel executions
Note (1): You can export only to disk, not to a tape device, and import will only work with Oracle version 10.1 or greater.

Note (2): expdp and impdp are command-line tools and run from within the operating system.

Use of data pump

1) Migrating databases
2) Copying databases
3) Transferring oracle databases between different operating systems
4) Backing up important tables before you change them
5) Moving database objects from one tablespace to another
6) Transporting tablespaces between databases
7) Reorganizing fragmented table data
8) Extracting the DDL for tables and other objects such as stored procedures and packages (see the SQLFILE sketch after this list)
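
Item 8 above can be done with the impdp SQLFILE parameter, which writes the DDL stored in a dump file to a script without importing anything. A hypothetical sketch reusing the directory and dump file from the earlier examples (the output file name is an assumption):

impdp dpuser/dpuser@orcl directory=dpump_dir1 dumpfile=dpuser.dmp sqlfile=dpuser_ddl.sql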

Data Pump components

1) dbms_datapump - the main engine for driving data dictionary metadata loading and unloading
2) dbms_metadata - used to extract the appropriate metadata
3) command-line - expdp and impdp are the import/export equivalents

Data Access methods

1) Direct path

2) External table path

Direct Path
Bypasses the database buffer cache and writes beyond the high water mark; when finished it adjusts the high water mark. No undo is generated, and redo can be switched off as well; impact on users is minimal because the SGA is not used. Triggers on the tables must be disabled before use.
External Table Path
Uses the database buffer cache. The export acts like a SELECT statement into a dump file, and during import the data is reconstructed into INSERT statements, so the whole process is like a normal SELECT/INSERT job. Both undo and redo are generated, and a normal COMMIT is used just as it would be for a DML statement.
Oracle will use the external table path if any of the following are in use:
1) clustered tables
2) active triggers in the table
3) a single partition in a table with a global index
4) referential integrity constraints
5) domain indexes on LOB columns
6) tables with fine-grained access control enabled in the insert mode
7) tables with BFILE or opaque type columns

Data Pump files

All files will be created on the server.

1) Dump files - holds the data and metadata
2) log files - the resulting output from the data pump command
3) sql files - contain the DDL statements describing the objects included in the job, but do not contain data

Master data pump table - when a Data Pump job runs, it creates a master table in the schema of the user running the job; this table is used to control the job and is removed when the job finishes.
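
While a job is running, this master table is visible in the job owner's schema. A quick sketch, assuming the default job naming (SYS_EXPORT_... / SYS_IMPORT_...):

SQL> select table_name from user_tables where table_name like 'SYS_EXPORT%' or table_name like 'SYS_IMPORT%';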

Data Pump privileges

1) exp_full_database
2) imp_full_database
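
These roles are granted like any other role; for example, to give the dpuser account from the earlier examples full export/import privileges:

SQL> grant exp_full_database, imp_full_database to dpuser;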


How Data Pump works


The Master Control Process (MCP) has the process name DMnn. Only one master process runs per job, and it controls the whole Data Pump job. It performs the following:
create jobs and controls them
creates and manages the worker processes
monitors the jobs and logs the process
maintains the job state and restart information in the master table (created in the schema of the user running the job)
manages the necessary files including the dump file set
The master process creates a master table that contains the job details (state, restart information); this table is created in the schema of the user running the Data Pump job. Once the job has finished, the table contents are dumped into the Data Pump file and the table is deleted. When you import the Data Pump file, the table is re-created and read to determine the correct sequence in which the various database objects should be imported.
The worker processes are named DWnn and perform the actual work; a number of worker processes can run on the same job (parallelism). The worker processes update the master table with the job status.
The shadow process is created when the client logs in to the Oracle server; it services Data Pump API requests and creates the job, consisting of the master table and the master process.
The client processes are the expdp and impdp commands.
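
While a job is running, the master (DMnn) and worker (DWnn) sessions belonging to it can be listed by joining DBA_DATAPUMP_SESSIONS to V$SESSION; a minimal sketch:

SQL> select s.sid, s.serial#, s.program, d.job_name
from dba_datapump_sessions d, v$session s
where d.saddr = s.saddr;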

Examples

Exporting database
expdp vallep/password directory=datapump full=y dumpfile=data.dmp filesize=2G parallel=2 logfile=full.log
Note: increase the parallel option based on the number of CPUs you have

Exporting schema
expdp sys/password schemas=testuser dumpfile=data.dmp logfile=schema.log

Exporting table
expdp vallep/password tables=accounts,employees dumpfile=data.dmp content=metadata_only

Exporting tablespace
expdp vallep/password tablespaces=users dumpfile=data.dmp logfile=tablespace.log

Importing database
impdp system/password full=y dumpfile=data.dmp nologfile=y

Importing schema change
impdp system/password schemas='HR' remap_schema='HR:HR_TEST' content=data_only

impdp system/passwd remap_schema='TEST:TEST3' tables=test logfile=… dumpfile=… directory=…

Other Options

directory
specifies an Oracle directory object

filesize
split the dump file into specific sizes (could be used if filesystem has 2GB limit)

parfile
specify the parameter file

content
contents option can be ALL, METADATA_ONLY or DATA_ONLY

compression
compression is used by default but you can stop it

exclude/include
metadata filtering

query
selectively export table data using a SQL statement (see the parameter-file sketch after this list)

estimate
Calculate job estimates; the valid keywords are BLOCKS and STATISTICS

estimate_only
Calculate job estimates without performing the export

network_link
you can perform an export across a network

encryption
you can encrypt data within the data pump file

parallel
increase worker processes to increase throughput; base it on the number of CPUs

remap_schema
move objects from one schema to another

remap_datafile
change the name of the datafile when moving across different systems

remap_tablespace
move from one tablespace to another
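
As a sketch of how the query option above combines with the other parameters, the filter can be placed in a parameter file to avoid shell-quoting problems. All object and file names here are assumptions:

emp_extract.par (hypothetical parameter file):

DIRECTORY=dpump_dir1
DUMPFILE=emp10.dmp
LOGFILE=emp10.log
TABLES=EMP
QUERY=EMP:"WHERE deptno = 10"

> expdp dpuser/dpuser@orcl parfile=emp_extract.par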

Useful Views

DBA_DATAPUMP_JOBS - summary information of all currently running Data Pump jobs
DBA_DATAPUMP_SESSIONS - displays the users currently running Data Pump jobs
V$SESSION_LONGOPS - displays progress information such as totalwork, sofar (the units of work done so far), units and opname (see the progress query after this list)

Privileges

IMP_FULL_DATABASE - required if using advanced features
EXP_FULL_DATABASE - required if using advanced features
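
A minimal progress query against V$SESSION_LONGOPS might look like this; the OPNAME filter assumes the default Data Pump job naming:

SQL> select opname, sofar, totalwork, round(sofar/totalwork*100, 1) pct_done
from v$session_longops
where opname like 'SYS_EXPORT%' and totalwork > 0 and sofar <> totalwork;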

DBMS_DATAPUMP package
The package dbms_datapump can be used for the following
starting/stopping/restarting a job
monitoring a job
detaching from a job

exporting

declare
  d1 number;
begin
  d1 := dbms_datapump.open('export', 'schema', null, 'test1', 'latest');
  dbms_datapump.add_file(d1, 'test1.dmp', 'dmpdir');
  dbms_datapump.metadata_filter(d1, 'schema_expr', 'in (''OE'')');
  dbms_datapump.start_job(d1);
  dbms_datapump.detach(d1);
end;
/


importing
declare
  d1 number;
begin
  d1 := dbms_datapump.open('import', 'full', null, 'test1');
  dbms_datapump.add_file(d1, 'test1.dmp', 'dmpdir');
  dbms_datapump.metadata_remap(d1, 'remap_schema', 'oe', 'hr');
  dbms_datapump.start_job(d1);
  dbms_datapump.detach(d1);
end;
/

============================
REMAP

REMAP_TABLESPACE – This allows you to easily import a table into a tablespace different from the one it was originally exported from. The databases have to be 10.1 or later.

> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpumpdir1 DUMPFILE=employees.dmp

REMAP_DATAFILE – This is a very useful feature when you move databases between platforms that have different file naming conventions. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. Because the REMAP_DATAFILE value uses quotation marks, it's best to specify the parameter within a parameter file.

The parameter file, payroll.par, has the following content:

DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"

You can then issue the following command:

> impdp username/password PARFILE=payroll.par
================

Maximizing the Power of Oracle Data Pump

Data Pump works great with default parameters, but once you are comfortable with Data Pump, there are new capabilities that you will want to explore.

Parallelism

Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import. Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously (PARALLEL is only available in the Enterprise Edition of Oracle Database). The number of parallel processes can be changed on the fly using Data Pump's interactive command-line mode (see the sketch after the example below). You may have a certain number of processes running during the day and decide to change that number if more system resources become available at night (or vice versa).

For best performance, you should do the following:

1. Make sure your system is well balanced across CPU, memory, and I/O.
2. Have at least one dump file for each degree of parallelism. If there aren’t enough dump files, performance will not be optimal because multiple threads of execution will be trying to access the same dump file.
3. Put files that are members of a dump file set on separate disks so that they will be written and read in parallel.
4. For export operations, use the %U variable in the DUMPFILE parameter so multiple dump files can be automatically generated.

> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=par_exp%u.dmp PARALLEL=4
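
Once such a job is running, the degree of parallelism mentioned above can be changed from interactive mode; a minimal sketch, attaching by the JOB_NAME given in the command above:

> expdp username/password ATTACH=hr
Export> PARALLEL=8
Export> CONTINUE_CLIENT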

-------

Thursday, May 20, 2010

Oracle Export Backup Job Email Notification

---------------------------------------------Export.bat---------------------------


@ECHO off
SETLOCAL
IF [%1]==[] goto s_start

ECHO GETDATE.cmd
ECHO Returns the date independent of regional settings
ECHO Creates the environment variables %v_year% %v_month% %v_day%
ECHO.
ECHO SYNTAX
ECHO GETDATE
ECHO.
ECHO.
GOTO :eof

:s_start

FOR /f "tokens=2-4 skip=1 delims=(-)" %%G IN ('echo.^|date') DO (
FOR /f "tokens=2 delims= " %%A IN ('date /t') DO (
SET v_first=%%G
SET v_second=%%H
SET v_third=%%I
SET v_all=%%A
)
)

SET %v_first%=%v_all:~0,2%
SET %v_second%=%v_all:~3,2%
SET %v_third%=%v_all:~8,2%

rem ECHO Today is Year: [%yy%] Month: [%mm%] Day: [%dd%]

ENDLOCAL & SET v_year=%yy%& SET v_month=%mm%& SET v_day=%dd%
ren C:\00\1.txt USED66665_%v_month%%v_day%%v_year%.txt

exp snsexport/snsexp@sns1011srv owner=(ldbo) file=G:\EXPORT\sns1011ora_%v_day%%v_month%%v_year%.DMP LOG=G:\EXPORT\sns1011ora_%v_day%%v_month%%v_year%.LOG

if exist G:\EXPORT\sns1011ora_%v_day%%v_month%%v_year%.DMP GOTO COMPLETED

if not exist G:\EXPORT\sns1011ora_%v_day%%v_month%%v_year%.DMP GOTO FAILED

:COMPLETED
Cscript.exe C:\Email_export.vbs
exit
:FAILED
Cscript.exe C:\Emailfail_export.vbs

---------------------------------------------Email_export.vbs---------------------------


Set wshShell = WScript.CreateObject( "WScript.Shell" )
strComputerName = wshShell.ExpandEnvironmentStrings( "%COMPUTERNAME%" )
'WScript.Echo "Computer Name: " & strComputerName
Set objMessage = CreateObject("CDO.Message")
objMessage.Subject = "LD Oracle Export Backup Job Completed"
objMessage.From = "dbanotification@uniconindia.in"
objMessage.To = "dbamonitoring@uniconindia.in"
objMessage.TextBody = "LD Oracle Export Backup Job Completed Successfully on " & strComputerName & " at " & FormatDateTime(Date,1) & " " & Time

'==This section provides the configuration information for the remote SMTP server.
'==Normally you will only change the server name or IP.
objMessage.Configuration.Fields.Item _
("http://schemas.microsoft.com/cdo/configuration/sendusing") = 2

'Name or IP of Remote SMTP Server
objMessage.Configuration.Fields.Item _
("http://schemas.microsoft.com/cdo/configuration/smtpserver") = "10.100.0.94"

'Server port (typically 25)
objMessage.Configuration.Fields.Item _
("http://schemas.microsoft.com/cdo/configuration/smtpserverport") = 25

objMessage.Configuration.Fields.Update

'==End remote SMTP server configuration section==

objMessage.Send


---------------------------------------------Emailfail_export.vbs---------------------------


Set objMessage = CreateObject("CDO.Message")
Set wshShell = CreateObject( "WScript.Shell" )
strComputer = wshShell.ExpandEnvironmentStrings( "%COMPUTERNAME%" ) ' Name of the computer
objMessage.Subject = "LD Oracle Job Failed ORANOP02P"
objMessage.From = "dbanotification@uniconindia.in"
objMessage.To = "kgupta2@uniconindia.in"
objMessage.TextBody = "LD Oracle Job Failed on ORANOP02P !" & strComputer

'==This section provides the configuration information for the remote SMTP server.
'==Normally you will only change the server name or IP.
objMessage.Configuration.Fields.Item _
("http://schemas.microsoft.com/cdo/configuration/sendusing") = 2

'Name or IP of Remote SMTP Server
objMessage.Configuration.Fields.Item _
("http://schemas.microsoft.com/cdo/configuration/smtpserver") = "10.100.0.94"

'Server port (typically 25)
objMessage.Configuration.Fields.Item _
("http://schemas.microsoft.com/cdo/configuration/smtpserverport") = 25

objMessage.Configuration.Fields.Update

'==End remote SMTP server configuration section==

objMessage.Send

---------------------------------
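
To run Export.bat automatically each night, it can be registered with the Windows Task Scheduler. A hypothetical schtasks command (task name, path, and start time are assumptions; the /st time format varies slightly between Windows versions):

schtasks /create /tn "LD_Oracle_Export" /tr "C:\Export.bat" /sc daily /st 22:00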
