
Thursday, February 14, 2013

Migration Using a Database Link: Performance


alter database noarchivelog;

Table mode   Insert mode   Archive log mode   Redo
---------    -----------   ----------------   --------------
NOLOGGING    APPEND        NOARCHIVELOG       no redo
LOGGING      no append     NOARCHIVELOG       redo generated
NOLOGGING    no append     NOARCHIVELOG       redo generated
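
For example, a direct-path insert skips redo for the data when the target table is NOLOGGING and the database is in NOARCHIVELOG mode (table names here are illustrative, not from the procedure below):

ALTER TABLE target_table NOLOGGING;

INSERT /*+ APPEND */ INTO target_table   -- direct-path load, no redo for the data
SELECT * FROM source_table;

COMMIT;   -- required before the loaded table can be queried in this session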



DB1: LD
DB2: DPNSDL
A database link (DPLDLINK) is created to the LD database.
The processes run on the DPNSDL database.

Process:

Old procedure: we select the data from DB1 and DB2, manipulate it, and insert it into DB1 over the database link. It takes 5-6 hours:

INSERT INTO LEDGER@DPLDLINK
INSERT INTO JOURNDC@DPLDLINK
INSERT INTO FALEDDESCRIPT@DPLDLINK

New procedure: we create temp tables on DPNSDL with the same structure as the LD tables into which the data has to be inserted. We manipulate and store the data in the temp tables, and only at the end insert the data into LD using the database link.
It takes 20 minutes.

INSERT INTO TBLTEMPLEDGERSUM
       (COOWNCODE, NBALANCE)
SELECT OOWNCODE, NVL(SUM(CAMOUNT - DAMOUNT), 0) AS BAL
  FROM LEDGER@DPLDLINK
 WHERE FIRMNUMBER = ILDFIRMNO
   AND DTOFTRAN <= TO_DATE(ILDBALDATE, 'dd/mm/yyyy')
 GROUP BY OOWNCODE;

Similar statements populate the other temp tables:

INSERT INTO TBLTEMPLEDGERSUM
INSERT INTO TBLTEMPLDLEDGER
INSERT INTO TBLTEMPJOURNDC
INSERT INTO TBLTEMPFALEDDESCRIPT




Changes

Create the LD ledger table structure in the NSDL database, stage the data at NSDL, and later insert it into the LD database.
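
A minimal sketch of the final push, assuming the staging table TBLTEMPLDLEDGER mirrors the columns of LEDGER on LD:

INSERT INTO LEDGER@DPLDLINK     -- one set-based statement over the link
SELECT * FROM TBLTEMPLDLEDGER;

COMMIT;

One such statement per table replaces the row-by-row remote inserts of the old procedure.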



The NSDL financial transmission, which was taking around 5 hours, has been brought down to only 20 minutes at Aditya Birla. All programmers need to remember one thing:

1) A problem that arises at a client's site does not necessarily mean there was a recent change in the program. The problem could be that the data size has increased considerably and a change is required in the query or PL/SQL execution to complete it.

The above is very important: once LD moves to Silverlight, the onus will be on the programming team, as I will be moving out of programming. Remember, LD is the flagship product of the company.

Kudos to the programming and support team for tackling this and solving it on a priority basis.




Generally speaking, dblink performance is limited by network speed, but there are some pitfalls that lead to performance issues:

unnecessary joins between local and remote tables, which lead to transferring large amounts of data;
lack of parallelism built into the query (unions help in this case);
implicit sorting on the remote database side;
failure to comply with Oracle recommendations, such as using collocated views and hints (mainly DRIVING_SITE and NO_MERGE); see the sketch below.
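
A minimal sketch of the DRIVING_SITE hint (table names are illustrative): it asks the optimizer to execute the join on the remote database, so the small local row set is shipped there instead of the large remote table being pulled across the link.

SELECT /*+ DRIVING_SITE(r) */ l.code, r.amount
  FROM small_local_table l, big_remote_table@remotedb r
 WHERE r.code = l.code;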




Other hints worth trying: NO_UNNEST and USE_NL.


http://www.unitask.com/oracledaily/2012/03/01/tuning-query-with-database-link-using-use_nl-hint/
https://cn.forums.oracle.com/forums/thread.jspa?threadID=2256982


Recreating sequences with a larger CACHE value can help import speed. Check the current settings:

select * from dba_sequences where sequence_name = '<sequence name>';
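
For example (the sequence name and cache size here are assumptions):

alter sequence ledger_seq cache 1000;   -- fewer dictionary updates while rows are inserted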



I was in a very large volume shop pulling very large volumes of data from a collector system to be rolled into reporting tables. They selected from a materialized view on the remote collector system, as that was the more efficient way to build the result set; the result set was then inserted (no APPEND) into a local temp table with no constraints, from which the data was rolled into the local tables.


http://msutic.blogspot.in/2012/03/tuning-distributed-query-using-usenl.html
http://msutic.blogspot.in/2009/08/sql-tuning-using-usehash-hint-dblink.html


I once solved a similar issue using the USE_HASH hint (SQL Tuning - using USE_HASH hint - dblink issue), so I tried the USE_NL hint in this case.
It helped! I received the result in under a second.

With the USE_HASH hint I resolved a network bottleneck, as much less data was sent over the network.

SELECT /*+ use_nl(cc cfp) */ *
  FROM TAB1 cc, TAB2@DB2 cfp
 WHERE cc.C_ID = cfp.B_ID AND cc.CODE = '1234567890';

Use the aliases of both the local and the remote table in the hint.


The following, however, may be slow:

/*+ use_nl(CLDP LDFIBS) OPTIMIZER_FEATURES_ENABLE('10.2.0.4') */


What is the OPAQUE_TRANSFORM hint and how to control it (MOS Doc ID 780503.1)


Remote tables have a number of performance implications:

If all of the tables from which you are selecting are on the same remote database, then Oracle will get the remote database to run the query and send back the results. The only problems are the cost of shipping the results back, and the fact that you cannot see how the remote database is executing the query in Explain Plan (from Oracle 10g onwards, Explain Plan will get the remote database to plan its bit of the query and pass back the results to your Explain Plan). Make sure that the whole query is being performed remotely - the Explain Plan output should just have the one step - "REMOTE" - or in 10g+, every line of the plan contains the word "REMOTE".

If some tables are local - or you are joining tables over two or more remote databases - then Oracle will need to ship all of the data to one database before it can perform a join.

If you want to use an indexed nested loop (low volume) join, then the outer (2nd) table cannot be remote; if it is, you will get a full table scan and a hash join. You could use the DRIVING_SITE hint to make the remote database the driving site of the query. Then the inner (1st) table will be sent over there, and an index can be used on the outer table. The remote database will then send back the results.

Even better, speak to your DBA about having the remote table(s) replicated on your database so that you can avoid the remote join altogether.
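
A minimal sketch of such replication with a materialized view (names are illustrative; the refresh method and schedule would need to fit the application):

CREATE MATERIALIZED VIEW ledger_local
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
  AS SELECT * FROM ledger@dpldlink;

Local queries can then join ledger_local without touching the link.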



Update on Large Table


http://jonathanlewis.wordpress.com/2008/07/28/big-update/
UPDATE is the heaviest DML operation, and a big update will take a long time regardless of whether you COMMIT after each update operation (for one table) or only at the end.



=============

1. create table tabc as select * from tabb;

2. truncate table tabb;

2.1 drop the indexes on tabb;

3. insert into tabb (new_id, old_id, col1, ..., col99)
   select decode(a.old_id, null, c.new_id, a.new_id) as new_id,
          c.old_id, col1, ..., col99
     from tabc c,
          (select old_id, new_id from taba group by old_id, new_id) a
    where c.old_id = a.old_id(+);


4. create indexes on tabb;


===========

At a high level:

1. Create a new table by selecting everything from the old table, performing the calculation in the select clause:
   create table TABLE_NEW nologging as select <perform calculation here> from TABLE_OLD;
2. Apply all constraints, indexes, grants, etc. to the new table.
3. Drop the old table.
4. Rename TABLE_NEW to TABLE_OLD.

Note: the NOLOGGING option bypasses redo generation for the data load and will result in a significant performance improvement.





Use CTAS in lieu of large updates

When you are updating the majority of rows in a table, using Create Table As Select (CTAS) often performs far better than a standard update. For example, assume that the following update changes 75% of the table rows:

update
   mytab
set
   status = 'new'
where
   status = 'old';

In this case, a parallelized CTAS may perform far faster (Note: Make sure that you have an SMP server before using the parallel degree option):

create table new_mytab NOLOGGING as
select /*+ full(mytab) parallel(mytab,35) */
       decode(status, 'old', 'new', status) status,
       col2, col3, col4
  from mytab;

-- rebuild indexes, triggers and constraints to new_mytab

rename mytab to bkup_mytab;
rename new_mytab to mytab;


Monday, November 5, 2012

11g Migration using RMAN


The steps for 10g database:

1- Run the utlu112i.sql pre-upgrade script. You can find this script under $ORACLE_HOME/rdbms/admin/; it must be copied from the 11g database software.

SQL> @$ORACLE_HOME/rdbms/admin/utlu112i.sql

This script adds a column named tz_version to the table registry$database. The pre-upgrade script updates this column with the value of the following query:

SQL> select version from v$timezone_file;

So it performs the following operations:

SQL> ALTER TABLE registry$database ADD (tz_version NUMBER);
SQL> UPDATE registry$database SET tz_version = 4;

SQL> ALTER PACKAGE "SYS"."DBMS_REGISTRY" COMPILE BODY;
SQL> ALTER VIEW "SYS"."DBA_REGISTRY_DATABASE" COMPILE;

2- Connect to the 10g database and take an RMAN full backup.

#rman target /
RMAN> backup as backupset database;
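
The archived logs are needed for the recovery step below; if they are not already backed up, something like the following (standard RMAN syntax, not in the original post) includes them:

RMAN> backup as backupset database plus archivelog;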

3- Copy the 10g database backup files and archived log files to the 11g database server.

The steps for 11g database:

1- Create a temporary pfile in $ORACLE_HOME/dbs:

*.audit_file_dest='/oracle/admin/TALIPDB/adump'
*.compatible='11.2.0.0.0'
*.control_files='+DATA/talipdb/controlfile/current.257.787742981','+DATA/talipdb/controlfile/current.258.787742983'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_create_online_log_dest_1='+RECO'
*.db_domain=''
*.db_name='TALIPDB'
*.diagnostic_dest='/oracle'
*.job_queue_processes=0
*.open_cursors=300
*.pga_aggregate_target=1G
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=2G
*.undo_tablespace='UNDOTBS1'

2- Start the instance in NOMOUNT mode.

# rman target /

RMAN> startup nomount;

3- Restore controlfile from backup.

RMAN> restore controlfile from '/oracle/ora11g/talipdb/backup/c-784951186-20120620-02';

4- Mount the database.

RMAN> alter database mount;

5- Catalog RMAN backup files and archive log files.

RMAN> catalog start with '/oracle/ora11g/talipdb/backup';
RMAN> catalog start with '/oracle/ora11g/talipdb/archive';

6- Restore 10g database backup to +DATA diskgroup and perform incomplete recovery.

RMAN> run
{
allocate channel c1 device type disk;
SET NEWNAME FOR DATAFILE 1 TO '+DATA';
SET NEWNAME FOR DATAFILE 2 TO '+DATA';
SET NEWNAME FOR DATAFILE 3 TO '+DATA';
SET NEWNAME FOR DATAFILE 4 TO '+DATA';
restore database until sequence 4;
switch datafile all;
recover database until sequence 4;
}

7-  Open the database with RESETLOGS UPGRADE.

# sqlplus / as sysdba
SQL> alter database open resetlogs upgrade;

8- Run the upgrade script.

SQL> SPOOL upgrade.log
SQL> @$ORACLE_HOME/rdbms/admin/catupgrd.sql
SQL> SPOOL off

9- If the 10g and 11g OS platforms differ in word size (for example, changing from 32-bit to 64-bit), then you must run the utlmmig.sql script.

--------------Changing 32-bit to 64-bit--------------
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP UPGRADE
SQL> SPOOL migrate.log
SQL> @$ORACLE_HOME/rdbms/admin/utlmmig.sql
SQL> SPOOL off
------------------------------------------

10- Now, you can open the database.

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP

11- Run the post-upgrade script to check for problems.

SQL> @$ORACLE_HOME/rdbms/admin/utlu112s.sql

12- Compile invalid objects.

SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql

13- Drop the old tempfile and create a new one in the +DATA diskgroup.

SQL> alter tablespace temp drop tempfile '/data_TALIPDB/temp01.dbf';
SQL> alter tablespace temp add tempfile '+DATA' size 1024M;
