Thursday, February 14, 2013

Migration Using a Database Link: Performance


alter database noarchivelog;

NOLOGGING + APPEND   (noarchivelog mode)  ->  no redo
LOGGING,   no APPEND (noarchivelog mode)  ->  redo generated
NOLOGGING, no APPEND (noarchivelog mode)  ->  redo generated
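
Of the three combinations above, only NOLOGGING together with a direct-path (APPEND) insert avoids redo generation. A minimal sketch (table, link, and column names are illustrative):

-- mark the table NOLOGGING so direct-path loads skip redo (in noarchivelog mode)
ALTER TABLE target_tab NOLOGGING;

-- direct-path insert: the APPEND hint loads above the high-water mark with minimal redo
INSERT /*+ APPEND */ INTO target_tab
SELECT * FROM source_tab@some_dblink;
COMMIT;  -- a direct-path insert requires a commit before the table can be re-read in the same session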



DB1 : LD
DB2: DPNSDL
The database link is created on LD.
The processes run on the DPNSDL database.

Process:

In the old procedure we simply selected the data from DB1 and DB2, manipulated it, and inserted it into DB1. It took 5-6 hours.

INSERT INTO LEDGER@DPLDLINK
INSERT INTO JOURNDC@DPLDLINK
INSERT INTO FALEDDESCRIPT@DPLDLINK
INSERT INTO JOURNDC@DPLDLINK

In the new procedure, we create temp tables on DPNSDL with the same structure as the LD tables the data will be inserted into, manipulate and store the data in the temp tables, and finally insert the data into LD over the database link.
It takes 20 minutes.

INSERT INTO TBLTEMPLEDGERSUM (COOWNCODE, NBALANCE)
SELECT OOWNCODE, NVL(SUM(CAMOUNT - DAMOUNT), 0) AS BAL
FROM LEDGER@DPLDLINK
WHERE FIRMNUMBER = ILDFIRMNO
AND DTOFTRAN <= TO_DATE(ILDBALDATE, 'dd/mm/yyyy')
GROUP BY OOWNCODE;

INSERT INTO TBLTEMPLEDGERSUM
INSERT INTO TBLTEMPLDLEDGER
INSERT INTO TBLTEMPJOURNDC
INSERT INTO TBLTEMPFALEDDESCRIPT




Changes

Create the LD ledger table in the NSDL database, populate it at NSDL, and later insert the data into the LD DB.



The NSDL financial transmission, which was taking around 5 hours, has been brought down to only 20 minutes at Aditya Birla. All programmers need to remember one thing:

1) A problem arising at a client's site does not necessarily mean there was a recent change in the program. It may be that the data size has increased considerably and the query or PL/SQL needs to be changed to complete in time.

The above is very important, as once LD moves to Silverlight the onus will be on the programming team; I will be coming out of programming. Remember, LD is the flagship product of the company.

Kudos to the programming and support team for tackling this and solving it on a priority basis.




Generally speaking, dblink performance is limited by network speed, but there are some pitfalls that lead to performance issues:

unnecessary joins between local and remote tables, which lead to transferring large amounts of data;
lack of parallelism built into the query (unions help in this case);
implicit sorting on the remote database side;
failure to follow Oracle recommendations such as the use of collocated views and hints (mainly DRIVING_SITE and NO_MERGE).
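
As a sketch of the hint-based recommendation above (table and link names are illustrative), DRIVING_SITE asks the optimizer to execute the join on the remote database so that only the result set crosses the link:

-- without the hint, Oracle may pull the whole remote table across the link;
-- DRIVING_SITE(r) makes the remote DB the driving site, so the small local
-- row set is shipped there and the join runs remotely
SELECT /*+ DRIVING_SITE(r) */ l.code, r.amount
FROM   local_tab l, remote_tab@some_dblink r
WHERE  l.id = r.id
AND    l.code = '1234567890';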




NO_UNNEST hint


USE_NL


http://www.unitask.com/oracledaily/2012/03/01/tuning-query-with-database-link-using-use_nl-hint/
https://cn.forums.oracle.com/forums/thread.jspa?threadID=2256982


Check (and if necessary increase) the sequence cache to improve import speed:

select * from dba_sequences where sequence_name like '%&seq_name%';
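
A sequence with a small cache (or NOCACHE) forces a data-dictionary update on almost every NEXTVAL during a bulk load; raising the cache is a common fix. A sketch (the sequence name is illustrative):

-- a larger cache preallocates values in memory, cutting the recursive SQL
-- generated when bulk inserts call seq.NEXTVAL for every row
ALTER SEQUENCE my_seq CACHE 1000;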



I was in a very large-volume shop pulling very large volumes of data from a collector system to be rolled into reporting tables. We selected from a materialized view on the remote collector system, as that was the more efficient way to build the result set; the results were then inserted (no APPEND) into a local temp table with no constraints, from which the data was rolled into the local tables.


http://msutic.blogspot.in/2012/03/tuning-distributed-query-using-usenl.html
http://msutic.blogspot.in/2009/08/sql-tuning-using-usehash-hint-dblink.html


I once solved a similar issue using the USE_HASH hint (SQL Tuning - using USE_HASH hint - dblink issue), so I tried the USE_NL hint in this case.
It helped! I received the result in under a second.

Using the USE_HASH hint I resolved a network bottleneck, as much less data was being sent over the network.

SELECT /*+ use_nl(cc cfp) */ *
  FROM TAB1 cc, TAB2@DB2 cfp
 WHERE cc.C_ID = cfp.B_ID AND cc.CODE = '1234567890'

Use both the local and the remote table aliases in the hint.


The following may be slow:

/*+ use_nl(CLDP LDFIBS) OPTIMIZER_FEATURES_ENABLE('10.2.0.4') */


----------What is OPAQUE_TRANSFORM Hint and how to Control it (Doc ID 780503.1)


-----------------------------------------Remote tables have a number of performance implications:

If all of the tables from which you are selecting are on the same remote database, then Oracle will get the remote database to run the query and send back the results. The only problems are the cost of shipping the results back, and the fact that you cannot see how the remote database is executing the query in Explain Plan (from Oracle 10g onwards, Explain Plan will get the remote database to plan its bit of the query and pass back the results to your Explain Plan). Make sure that the whole query is being performed remotely - the Explain Plan output should just have the one step - "REMOTE" - or in 10g+, every line of the plan contains the word "REMOTE".

If some tables are local - or you are joining tables over two or more remote databases - then Oracle will need to ship all of the data to one database before it can perform a join.

If you want to use an Indexed Nested Loop (low volume) join, then the outer (2nd) table cannot be remote - if it is you will get a full table scan and a hash join. You could use the DRIVING_SITE hint to make the remote database the driving site of the query. Then the inner (1st) table will be sent over there, and an index can be used on the outer table. The remote database will then send back the results.

Even better, speak to your DBA about having the remote table(s) replicated on your database so that you can avoid the remote join altogether.



Public synonym performance



How to reference a different schema without hard coding it


Oracle resolves all names while parsing, and the query execution plan generated is the same whether or not those names were resolved via synonyms. So there can be no difference in query execution performance. There is a small difference in parsing, but it goes the other way. A reference via a public synonym requires additional library cache and dictionary cache lookups. However, the performance impact should not be noticeable.

What may be more significant is that public synonym usage clutters the library cache and dictionary cache with information needed to track the negative dependencies on non-existent objects. If JONES and KING both refer to SCOTT.EMP and SCOTT.DEPT via public synonyms, then cache entries are needed to represent the non-existent tables JONES.EMP, KING.EMP, JONES.DEPT and KING.DEPT, and all dependent SQL statements must have negative dependencies on all these objects. With say 1000 users, 200 tables, and 1000 SQL statements each accessing 2 tables on average, you would have 200,000 non-existent object records and 2,000,000 negative dependency records. This clutter can cause latch contention in the library cache, dictionary cache and shared pool.


http://www.techrepublic.com/article/choose-alternatives-for-public-and-private-synonyms/5693565

In many large applications, a common way of separating the end user from the schema owner is to use private synonyms for application schema objects. For example, user A and user B both have their own login accounts. Schema user X has two tables, T1 and T2, so the application designers set up private synonyms for both A and B that reference X.T1 and X.T2, like this:

connect X/X
 create table T1(…);
 create table T2(…);
 connect A/A
 create synonym T1 for X.T1;
 create synonym T2 for X.T2;
 connect B/B
 create synonym T1 for X.T1;
 create synonym T2 for X.T2;
With two schema objects and two users, you have only four private synonyms. When you have a large number of users and application objects, you'll probably see a slow degradation of database performance. The performance problem will mostly be during the parse phase.

Every time a user wants to query T1, the parser must query across a large set of synonyms and put each synonym in the library cache along with its dependency structure. This would even affect queries such as select * from dual. You should avoid synonyms and use alternatives that depend on your application design.

The best alternative is to always fully qualify a table with its schema name. This alternative is only effective if you can guarantee the schema name will not change between installations and that there's only one schema with the given name in the database. In most applications where the designers carefully hide the SQL from the end user, there is no real benefit to coding select * from T1 in the code when select * from X.T1 would work in your situation.

In cases where there may be multiple schemas but each end user only accesses one schema at a time, the ALTER SESSION SET CURRENT_SCHEMA command is better than a synonym. This command makes the default schema for unqualified tables go to a particular schema. For example:

connect A/A
alter session set current_schema = X;
select * from T1;

This final query will select values from X.T1 and doesn't require any synonyms, while still obeying granted privileges on database objects. Even when it isn't possible to update an application so it always issues the alter session after a connect, you can code a database LOGON trigger to automatically set the current schema for a user.
==========================
create or replace trigger logon_trg
   after logon on database
 begin
    if user in ('A','B') then
        execute immediate 'alter session set current_schema=X';
    end if;
 end;
=======================

CREATE OR REPLACE TRIGGER LOGON_TRG
  AFTER LOGON ON SCHEMA
BEGIN
     EXECUTE IMMEDIATE 'ALTER SESSION SET CURRENT_SCHEMA = LDBO';
EXCEPTION
  when others
    then null;
END;
/

============================

Following Alex's suggestion, here is a logon trigger that checks the role rather than a username:

CREATE OR REPLACE TRIGGER LOGON_TRG
  AFTER LOGON ON DATABASE
declare
  has_role boolean;
BEGIN

    has_role := dbms_session.is_role_enabled('FOOBAR_ROLE');

    if (has_role) then
      EXECUTE IMMEDIATE 'ALTER SESSION SET CURRENT_SCHEMA = foobar';
    end if;
exception
   when others
      then null; -- prevent a login failure due to an exception  
END logon_trg;
/



============================
When user A or B logs in, his default schema will be set automatically to X. The query select * from T1 will choose X.T1 automatically without requiring a synonym lookup or a lock in the library cache.

Even in cases where the end user is accessing multiple schemas, or the names of database objects change between the end user and the application, you can still use views, which users can share in a central account:

create view T1 as select * from X.T1;
All of these alternatives rely on privileges the user is granted to access a specific table. However, this may be more access than an application needs to give: a user who gets SELECT on a table in another schema can view all the columns and rows in that table or view via any utility that can execute generic SQL.

An alternative that restricts exactly what data and operations the user can perform against the data is to define PL/SQL functions, procedure, and packages to wrap up specific access to an object. The PL/SQL code runs with the owner's privilege, and unqualified names resolve according to the code owner's schema. If the code owner is also the schema object owner, this eliminates the need for synonyms or even hard-coded schema names.

create or replace function t1_get_count return integer
 as
    l_count integer;
 begin
    select count(*) into l_count from t1;
    return l_count;
 exception
    when others then return 0;
 end;
 /
If you use "invoker rights" with AUTHID CURRENT_USER, then unqualified tables will resolve in the current user's schema, and you'll have to use another way of resolving those names at runtime.

Top Buffer Gets


set serverout on size 1000000

declare
top5 number;
text1 varchar2(4000);
x number;
len1 number;
Cursor c1 is
select buffer_gets,substr(sql_text,1,4000)
from v$sqlarea
order by buffer_gets desc;
begin
dbms_output.put_line('Reads'||'  '||'                          Text');
dbms_output.put_line ('-----'||'  '||'---------------------------------------------------');
dbms_output.put_line('      ');
open c1;
for i in 1 .. 5 loop
fetch c1 into top5, text1;
dbms_output.put_line(rpad(to_char(top5),9)|| ' '||substr(text1,1,66));
len1 :=length(text1);
x := 66;
while len1 > x-1 loop
dbms_output.put_line('"         '||substr(text1,x,64));
x := x+64;
end loop;
end loop;
end;
/

Update on Large Table


http://jonathanlewis.wordpress.com/2008/07/28/big-update/
UPDATE is the heaviest DML operation, and a big update will take a long time regardless of whether you COMMIT after each update operation (for one table) or only at the end.



=============

1. create table tabc as select * from tabb;

2. truncate table tabb;

2.1 drop indexes from tabb;

3. insert into tabb (new_id,old_id, col1 ..... col 99)
select decode(a.old_id,null,c.new_id,a.new_id) as new_id, old_id, col1
........ col99
from tabc c, (select old_id,new_id from taba group by old_id,
new_id) a
where c.old_id = a.old_id(+);


4. create indexes on tabb;


===========

At a high level…
Create a new table by selecting everything from the old table and performing the calculation in the select clause
Create table TABLE_NEW nologging as select <perform calculation here> from TABLE_OLD;
Apply all constraints, indexes, grants, etc. to the new table
Drop the old table
Rename TABLE_NEW to TABLE_OLD;
Note: the NOLOGGING option bypasses generating redo for the CTAS and will result in a significant performance improvement; since the new data is not in the redo stream, take a fresh backup afterwards.





Use CTAS in lieu of large updates

When you are updating the majority of rows in a table, using Create Table As Select (CTAS) often performs better than a standard update.  For example, assume that the following update changes 75% of the table rows:

update
   mytab
set
   status = 'new'
where
   status = 'old';

In this case, a parallelized CTAS may perform far faster (Note: Make sure that you have an SMP server before using the parallel degree option):

create table new_mytab NOLOGGING as
select  /*+ full(mytab) parallel(mytab,35) */
   decode(status, 'old', 'new', status) status,
   col2, col3, col4
from mytab;

-- rebuild indexes, triggers and constraints to new_mytab

rename mytab to bkup_mytab;
rename new_mytab to mytab;


CPU taking SQL


If you notice that a session is using too much CPU, you can identify the SQL executed by that session using top and Explain Plan.
First, use top to identify the OS process using high CPU and take note of its PID.

set linesize 140
set pagesize 100
col username format a15
col machine  format a20
ACCEPT Process_ID prompt 'Pid : '
select s.inst_id,p.spid,s.sid,s.serial#,s.username,s.machine
from gv$session s, gv$process p
where s.paddr=p.addr
and p.spid=&Process_ID;

Once you got the SID associated to that PID, then you can use it with explain plan:
set lines 140
set pages 10000
set long 1000000
ACCEPT Process_SID prompt 'Sid : '
SELECT a.sql_id, a.sql_fulltext
FROM v$sqlarea a, v$session s
WHERE a.address = s.sql_address
 AND s.sid = &Process_SID;


set lines 150
set pages 40000
col operation format a55
col object format a25
ACCEPT sentencia prompt 'SQL_ID to explain : '
select lpad(' ',2*depth)||operation||' '||options||decode(id, 0, substr(optimizer,1,6)||' Cost='||to_char(cost)) operation,
object_name object, cpu_cost, io_cost
from v$sql_plan where sql_id='&sentencia';



Synonyms: even after DBA privileges are granted, the user cannot access the object by its unqualified name; only the schema owner can.


create or replace trigger logon_trg
   after logon on database
 begin
    if user in ('A','B') then
        execute immediate 'alter session set current_schema=X';
    end if;
 end;
/
----------------------

The variant below does NOT work as intended: IN compares USER against the single concatenated string returned by WM_CONCAT, and WM_CONCAT itself overflows its VARCHAR2 buffer on a database with many users, as the error that follows shows:

create or replace trigger logon_trg
   after logon on database
declare
usr varchar2(1000);
 begin
select wm_concat(''''|| username || '''') into usr from dba_users;
    if user in (usr) then
        execute immediate 'alter session set current_schema=LDBO';
    end if;
 end;
/

SYS@NBS11G> select wm_concat(''''|| username || '''') from dba_users;
select wm_concat(''''|| username || '''') from dba_users
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at "WMSYS.WM_CONCAT_IMPL", line 30

Who is using database link


Select /*+ ORDERED */
substr(s.ksusemnm,1,10)||'-'|| substr(s.ksusepid,1,10)      "ORIGIN",
substr(g.K2GTITID_ORA,1,35) "GTXID",
substr(s.indx,1,4)||'.'|| substr(s.ksuseser,1,5) "LSESSION" ,
s2.username,
substr(
   decode(bitand(ksuseidl,11),
      1,'ACTIVE',
      0, decode( bitand(ksuseflg,4096) , 0,'INACTIVE','CACHED'),
      2,'SNIPED',
      3,'SNIPED',
      'KILLED'
   ),1,1
) "S",
substr(w.event,1,10) "WAITING"
from  x$k2gte g, x$ktcxb t, x$ksuse s, v$session_wait w, v$session s2
where  g.K2GTDXCB =t.ktcxbxba
and   g.K2GTDSES=t.ktcxbses
and  s.addr=g.K2GTDSES
And  W.Sid=S.Indx
and s2.sid = W.Sid;


partition in a partitioned table truncated by mistake


Import partition data
A partition in a partitioned table was truncated by mistake and data needs to be imported from an export dump.

However instead of importing the data directly into the original table, the plan is to import the partition data into the temporary user and ratify the data before importing it to the original table using partition exchange.

Steps -

1. Create a temporary user.
2. Import partition data only into table in the temporary user
3. Ratify data.
4. Move segments in temporary table into new non-partitioned table.
5. Move this non-partitioned table into the original user's tablespace.
6. Exchange partition between temporary and original partition and clean up.

NOTE -

ORIGINAL OWNER is PART_OWNER
TABLESPACE_NAME IS TEST_TBS
TABLE_NAME is TEST
PARTITION IS TEST_PART_DEC

1. Create temporary user with separate tablespace -

Temporary user called PART_RESTORE.

2. Import partition data only into table in the temporary user

imp userid/password file=test_export.dmp log=test_import.log fromuser=PART_OWNER touser=PART_RESTORE tables=TEST:TEST_PART_DEC feedback=10000 buffer = 64000 ignore=y &

3. RATIFY DATA IN PART_RESTORE SCHEMA.

4. Move segments from the temporary table into a new non-partitioned table.

CREATE TABLE PART_RESTORE.TEST_TEMP
AS SELECT * FROM PART_RESTORE.TEST
WHERE ROWNUM<1;

5. Move this non-partitioned table into the original user's tablespace.

ALTER TABLE PART_RESTORE.TEST_TEMP MOVE TABLESPACE TEST_TBS;

6. Exchange partition between temporary and original partition and clean up.

-- set it as a NOLOGGING table

ALTER TABLE PART_RESTORE.TEST_TEMP NOLOGGING;

-- move the data into the temp table

INSERT /*+ APPEND */INTO PART_RESTORE.TEST_TEMP
SELECT *
FROM PART_RESTORE.TEST;
COMMIT;

-- exchange the partition into the final PART_OWNER table

ALTER TABLE PART_OWNER.TEST
EXCHANGE PARTITION TEST_PART_DEC
WITH TABLE PART_RESTORE.TEST_TEMP
UPDATE GLOBAL INDEXES;

-- rebuild any unusable local indexes
ALTER TABLE PART_OWNER.TEST
MODIFY PARTITION TEST_PART_DEC
REBUILD UNUSABLE LOCAL INDEXES;

-- gather stats on the new partition
begin
DBMS_STATS.GATHER_TABLE_STATS (ownname => 'PART_OWNER', tabname => 'TEST', partname => 'TEST_PART_DEC', estimate_percent => 5, degree => 1, granularity => 'ALL', cascade => FALSE);
end;
/

-- drop the PART_RESTORE temp table

DROP TABLE PART_RESTORE.TEST_TEMP ;

temporary tablespace usage


-- Listing of temp segments --

SELECT A.tablespace_name tablespace, D.mb_total,
SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_used,
D.mb_total - SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_free
FROM v$sort_segment A,
(
SELECT B.name, C.block_size, SUM (C.bytes) / 1024 / 1024 mb_total
FROM v$tablespace B, v$tempfile C
WHERE B.ts#= C.ts#
GROUP BY B.name, C.block_size
) D
WHERE A.tablespace_name = D.name
GROUP by A.tablespace_name, D.mb_total;

-- Temp segment usage per session --

SELECT S.sid || ',' || S.serial# sid_serial, S.username, S.osuser, P.spid, S.module,
P.program, SUM (T.blocks) * TBS.block_size / 1024 / 1024 mb_used, T.tablespace,
COUNT(*) statements
FROM v$sort_usage T, v$session S, dba_tablespaces TBS, v$process P
WHERE T.session_addr = S.saddr
AND S.paddr = P.addr
AND T.tablespace = TBS.tablespace_name
GROUP BY S.sid, S.serial#, S.username, S.osuser, P.spid, S.module,
P.program, TBS.block_size, T.tablespace
ORDER BY sid_serial;

remove two char from string


select substr(oowncode,1,length(oowncode)-2) from ldfibs where oowncode='25TAV014M';

Top 30 I/O Intensive SQL Statements Identification


-- Top 30 I/O Intensive SQL Statements Identification : top30iosql.sql
set linesize 80
set pagesize 58
set heading on
set serveroutput on
spool c:\temp\top30iosql.txt
declare cursor curs1 is
select executions, disk_reads, buffer_gets, first_load_time, sql_text from v$sqlarea order by disk_reads / decode(executions,0,1,executions) desc;
stmnt_ctr number(3);
wrt1 number(3);
begin dbms_output.enable(50000); stmnt_ctr := 0;
for inrec in curs1 loop stmnt_ctr := stmnt_ctr + 1;
if stmnt_ctr >= 31 then
exit;
end if;
 dbms_output.put_line('--------------------------------------' || '--------------------------------------');
dbms_output.put_line('SQL Stmnt Number: ' || to_char(stmnt_ctr)); dbms_output.put_line('--------------------------------------' || '--------------------------------------');
 dbms_output.put_line('Executions : ' || to_char(inrec.executions));
 dbms_output.put_line('Disk Reads : ' || to_char(inrec.disk_reads) || ' Buffer Gets : ' || to_char(inrec.buffer_gets));
dbms_output.put_line('First Load Time: ' || inrec.first_load_time);
dbms_output.put_line('SQL Statement-------->');
dbms_output.put_line(substr(inrec.sql_text,1,250));
end loop;
end;
/

Who called me


http://asktom.oracle.com/tkyte/who_called_me
How Can I find out who called me or what my name is

Many times we are asked "in a procedure/function, can I find out who called me" or "can I dynamically figure out the name of the procedure or package that is currently executing".

You can find it in the call stack returned by dbms_utility.format_call_stack. I wrote a small routine called who_called_me that returns this sort of information (it doesn't tell you who you are, it lets you know who called you). If you wrap who_called_me with a function who_am_i, you'll get what you need. If you create the who_called_me/who_am_i routines, you'll be able to:



SQL> create or replace procedure demo
2  as
3  begin
4     dbms_output.put_line( who_am_i );
5  end;
6  /

Procedure created.

SQL> exec demo;
TKYTE.DEMO



In current releases of the database, this code has been incorporated into the OWA_UTIL package - you probably already have it in your database. If not, you can use this really old version that might need a tweak or two to work in your database release:


create or replace procedure who_called_me( owner      out varchar2,
                        name       out varchar2,
                        lineno     out number,
                        caller_t   out varchar2 )
as
   call_stack  varchar2(4096) default dbms_utility.format_call_stack;
   n           number;
   found_stack BOOLEAN default FALSE;
   line        varchar2(255);
   cnt         number := 0;
begin
--
   loop
       n := instr( call_stack, chr(10) );
       exit when ( cnt = 3 or n is NULL or n = 0 );
--
       line := substr( call_stack, 1, n-1 );
       call_stack := substr( call_stack, n+1 );
--
       if ( NOT found_stack ) then
           if ( line like '%handle%number%name%' ) then
               found_stack := TRUE;
           end if;
       else
           cnt := cnt + 1;
           -- cnt = 1 is ME
           -- cnt = 2 is MY Caller
           -- cnt = 3 is Their Caller
           if ( cnt = 3 ) then
               lineno := to_number(substr( line, 13, 6 ));
               line   := substr( line, 21 );
               if ( line like 'pr%' ) then
                   n := length( 'procedure ' );
               elsif ( line like 'fun%' ) then
                   n := length( 'function ' );
               elsif ( line like 'package body%' ) then
                   n := length( 'package body ' );
               elsif ( line like 'pack%' ) then
                   n := length( 'package ' );
               elsif ( line like 'anonymous%' ) then
                   n := length( 'anonymous block ' );
               else
                   n := null;
               end if;
               if ( n is not null ) then
                  caller_t := ltrim(rtrim(upper(substr( line, 1, n-1 ))));
               else
                  caller_t := 'TRIGGER';
               end if;

               line := substr( line, nvl(n,1) );
               n := instr( line, '.' );
               owner := ltrim(rtrim(substr( line, 1, n-1 )));
               name  := ltrim(rtrim(substr( line, n+1 )));
           end if;
       end if;
   end loop;
end;
/

create or replace function who_am_i return varchar2
is
   l_owner        varchar2(30);
   l_name      varchar2(30);
   l_lineno    number;
   l_type      varchar2(30);
begin
  who_called_me( l_owner, l_name, l_lineno, l_type );
  return l_owner || '.' || l_name;
end;
/

Find string into Database


create or replace procedure sp_ldsearchengine
(val varchar2)
is
  v_old_table user_tab_columns.table_name%type;
  v_where     Varchar2(32766);
  v_first_col boolean := true;
  type rc     is ref cursor;
  c           rc;
  v_rowid     varchar2(20);

begin
  for r in (
    select
      t.*
    from
      user_tab_cols t, user_all_tables a
    where t.table_name = a.table_name
    and t.data_type like '%CHAR%'
    order by t.table_name) loop

    if v_old_table is null then
      v_old_table := r.table_name;
    end if;

    if v_old_table <> r.table_name then
      v_first_col := true;

      -- dbms_output.put_line('searching ' || v_old_table);

      open c for 'select rowid from "' || v_old_table || '" ' || v_where;

      fetch c into v_rowid;
      loop
        exit when c%notfound;
        dbms_output.put_line('  select * from  ' || v_old_table || ' where rowid=''' || v_rowid ||''';');
        fetch c into v_rowid;
      end loop;

      v_old_table := r.table_name;
    end if;

    if v_first_col then
        v_where := ' where ' || r.column_name || ' like ''%' || val || '%''';
      v_first_col := false;
    else
      v_where := v_where || ' or ' || r.column_name || ' like ''%' || val || '%''';
    end if;

  end loop;
end;
/

UTL_FILE_DIR Checking


I found some sample code provided by Oracle for testing utl_file_dir

Metalink Note 45327.1 gives some good procedures to test whether PL/SQL code calling the UTL_FILE package is able to write to the directories defined in utl_file_dir

Metalink Note 1016653.4 gives this code to test to verify setup for UTL_FILE Package



-----------The following will create a file, write to it, read from it, then create another file and write to it--------------------


SET SERVEROUTPUT ON
DECLARE
fid UTL_FILE.FILE_TYPE;
v VARCHAR2(32767);
PROCEDURE recNgo (str IN VARCHAR2)
IS
BEGIN
DBMS_OUTPUT.PUT_LINE ('UTL_FILE error ' || str);

UTL_FILE.FCLOSE (fid);
END;
BEGIN
/* Change the directory name to one to which you at least
|| THINK you have read/write access.
*/

fid:= UTL_FILE.FOPEN('d:\ldoutput', 'utl_file_test','W');
UTL_FILE.put_line(fid,' Attempt to write to test the file...Kshitij');
UTL_FILE.FCLOSE(fid);

fid := UTL_FILE.FOPEN ('d:\ldoutput', 'utl_file_test', 'R');
UTL_FILE.GET_LINE (fid, v);
dbms_output.put_line (v);
UTL_FILE.FCLOSE (fid);

fid := UTL_FILE.FOPEN ('d:\ldoutput', 'utl_file_test_10', 'W');
UTL_FILE.PUT_LINE (fid, v);
UTL_FILE.FCLOSE (fid);
EXCEPTION
WHEN UTL_FILE.INVALID_PATH
THEN recNgo ('invalid_path');
WHEN UTL_FILE.INVALID_MODE
THEN recNgo ('invalid_mode');
WHEN UTL_FILE.INVALID_FILEHANDLE
THEN recNgo ('invalid_filehandle');
WHEN UTL_FILE.INVALID_OPERATION
THEN recNgo ('invalid_operation');
WHEN UTL_FILE.READ_ERROR
THEN recNgo ('read_error');
WHEN UTL_FILE.WRITE_ERROR
THEN recNgo ('write_error');
WHEN UTL_FILE.INTERNAL_ERROR
THEN recNgo ('internal_error');
END;
/

Purge Queue


BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'purge_boqueue',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'DECLARE  po_t dbms_aqadm.aq$_purge_options_t;
    BEGIN dbms_aqadm.purge_queue_table(''LDBO.TBLDIGITALBOQUEUE'', NULL, po_t); END;',
        start_date      => '28-APR-13 04.20.00 AM ASIA/CALCUTTA',
    repeat_interval => 'freq=DAILY',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'purge queue table');
END;
/


BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'purge_mainboqueue',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'DECLARE  po_t dbms_aqadm.aq$_purge_options_t;
    BEGIN dbms_aqadm.purge_queue_table(''LDBO.TBLMAINBOQUEUE'', NULL, po_t); END;',
        start_date      => '28-APR-13 04.30.00 AM ASIA/CALCUTTA',
    repeat_interval => 'freq=DAILY',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'purge queue table');
END;
/

Job Analyze Temp tables


CREATE OR REPLACE PROCEDURE Analyzetemp AS
BEGIN
 FOR CUR_REC IN (SELECT DISTINCT OWNER,TABLE_NAME
                  FROM   DBA_TABLES where table_name like '%TEMP%') LOOP
    BEGIN
      EXECUTE IMMEDIATE 'ANALYZE TABLE '  || CUR_REC.OWNER || '.' || CUR_REC.TABLE_NAME ||' COMPUTE STATISTICS' ;
    EXCEPTION
      WHEN OTHERS THEN
        NULL;
    END;
  END LOOP;
 FOR CUR_REC IN (SELECT DISTINCT OWNER,INDEX_NAME
                  FROM   DBA_INDEXES where table_name like '%TEMP%') LOOP
    BEGIN
      EXECUTE IMMEDIATE 'ANALYZE INDEX ' || CUR_REC.OWNER || '.' || CUR_REC.INDEX_NAME ||' COMPUTE STATISTICS' ;
    EXCEPTION
      WHEN OTHERS THEN
        NULL;
    END;
  END LOOP;
END;
/


Analyze Full


CREATE OR REPLACE PROCEDURE AnalyzeFull AS
BEGIN
 FOR CUR_REC IN (SELECT DISTINCT OWNER,TABLE_NAME
                  FROM   DBA_TABLES) LOOP
    BEGIN
      EXECUTE IMMEDIATE 'ANALYZE TABLE '  || CUR_REC.OWNER || '.' || CUR_REC.TABLE_NAME ||' COMPUTE STATISTICS' ;
    EXCEPTION
      WHEN OTHERS THEN
        NULL;
    END;
  END LOOP;
 FOR CUR_REC IN (SELECT DISTINCT OWNER, INDEX_NAME
                  FROM   DBA_INDEXES) LOOP
    BEGIN
      EXECUTE IMMEDIATE 'ANALYZE INDEX ' || CUR_REC.OWNER || '.' || CUR_REC.INDEX_NAME ||' COMPUTE STATISTICS' ;
    EXCEPTION
      WHEN OTHERS THEN
        NULL;
    END;
  END LOOP;
END;
/


User Creation With job


host mkdir c:\yep1314

spool c:\yep1314\yep1314.log

--------------User Creation

CREATE USER LDBO PROFILE "DEFAULT" IDENTIFIED BY ldbo DEFAULT TABLESPACE "USR" TEMPORARY TABLESPACE "TEMPORARY" ACCOUNT UNLOCK ;


BEGIN
DBMS_WM.GrantSystemPriv('ACCESS_ANY_WORKSPACE', 'LDBO', 'NO');
END;
/

GRANT ADMINISTER ANY SQL TUNING SET TO "LDBO" ;
GRANT ADMINISTER DATABASE TRIGGER TO "LDBO" ;

BEGIN
dbms_resource_manager_privs.grant_system_privilege(privilege_name=>'ADMINISTER_RESOURCE_MANAGER', grantee_name=>'LDBO', admin_option=>FALSE);
END;
/
GRANT ADMINISTER SQL TUNING SET TO "LDBO" ;
GRANT ADVISOR TO "LDBO" ;
GRANT ALTER ANY CLUSTER TO "LDBO" ;
GRANT ALTER ANY DIMENSION TO "LDBO" ;
GRANT ALTER ANY EVALUATION CONTEXT TO "LDBO" ;
GRANT ALTER ANY INDEX TO "LDBO" ;
GRANT ALTER ANY INDEXTYPE TO "LDBO" ;
GRANT ALTER ANY LIBRARY TO "LDBO" ;
GRANT ALTER ANY MATERIALIZED VIEW TO "LDBO" ;
GRANT ALTER ANY OUTLINE TO "LDBO" ;
GRANT ALTER ANY PROCEDURE TO "LDBO" ;
GRANT ALTER ANY ROLE TO "LDBO" ;
GRANT ALTER ANY RULE TO "LDBO" ;
GRANT ALTER ANY RULE SET TO "LDBO" ;
GRANT ALTER ANY SEQUENCE TO "LDBO" ;
GRANT ALTER ANY SQL PROFILE TO "LDBO" ;
GRANT ALTER ANY TABLE TO "LDBO" ;
GRANT ALTER ANY TRIGGER TO "LDBO" ;
GRANT ALTER ANY TYPE TO "LDBO" ;
GRANT ALTER DATABASE TO "LDBO" ;
GRANT ALTER PROFILE TO "LDBO" ;
GRANT ALTER RESOURCE COST TO "LDBO" ;
GRANT ALTER ROLLBACK SEGMENT TO "LDBO" ;
GRANT ALTER SESSION TO "LDBO" ;
GRANT ALTER SYSTEM TO "LDBO" ;
GRANT ALTER TABLESPACE TO "LDBO" ;
GRANT ALTER USER TO "LDBO" ;
GRANT ANALYZE ANY TO "LDBO" ;
GRANT ANALYZE ANY DICTIONARY TO "LDBO" ;
GRANT AUDIT ANY TO "LDBO" ;
GRANT AUDIT SYSTEM TO "LDBO" ;
GRANT BACKUP ANY TABLE TO "LDBO" ;
GRANT BECOME USER TO "LDBO" ;
GRANT CHANGE NOTIFICATION TO "LDBO" ;
GRANT COMMENT ANY TABLE TO "LDBO" ;
GRANT CREATE ANY CLUSTER TO "LDBO" ;
GRANT CREATE ANY CONTEXT TO "LDBO" ;
GRANT CREATE ANY DIMENSION TO "LDBO" ;
GRANT CREATE ANY DIRECTORY TO "LDBO" ;
GRANT CREATE ANY EVALUATION CONTEXT TO "LDBO" ;
GRANT CREATE ANY INDEX TO "LDBO" ;
GRANT CREATE ANY INDEXTYPE TO "LDBO" ;
GRANT CREATE ANY JOB TO "LDBO" ;
GRANT CREATE ANY LIBRARY TO "LDBO" ;
GRANT CREATE ANY MATERIALIZED VIEW TO "LDBO" ;
GRANT CREATE ANY OPERATOR TO "LDBO" ;
GRANT CREATE ANY OUTLINE TO "LDBO" ;
GRANT CREATE ANY PROCEDURE TO "LDBO" ;
GRANT CREATE ANY RULE TO "LDBO" ;
GRANT CREATE ANY RULE SET TO "LDBO" ;
GRANT CREATE ANY SEQUENCE TO "LDBO" ;
GRANT CREATE ANY SQL PROFILE TO "LDBO" ;
GRANT CREATE ANY SYNONYM TO "LDBO" ;
GRANT CREATE ANY TABLE TO "LDBO" ;
GRANT CREATE ANY TRIGGER TO "LDBO" ;
GRANT CREATE ANY TYPE TO "LDBO" ;
GRANT CREATE ANY VIEW TO "LDBO" ;
GRANT CREATE CLUSTER TO "LDBO" ;
GRANT CREATE DATABASE LINK TO "LDBO" ;
GRANT CREATE DIMENSION TO "LDBO" ;
GRANT CREATE EVALUATION CONTEXT TO "LDBO" ;
GRANT CREATE EXTERNAL JOB TO "LDBO" ;
GRANT CREATE INDEXTYPE TO "LDBO" ;
GRANT CREATE JOB TO "LDBO" ;
GRANT CREATE LIBRARY TO "LDBO" ;
GRANT CREATE MATERIALIZED VIEW TO "LDBO" ;
GRANT CREATE OPERATOR TO "LDBO" ;
GRANT CREATE PROCEDURE TO "LDBO" ;
GRANT CREATE PROFILE TO "LDBO" ;
GRANT CREATE PUBLIC DATABASE LINK TO "LDBO" ;
GRANT CREATE PUBLIC SYNONYM TO "LDBO" ;
GRANT CREATE ROLE TO "LDBO" ;
GRANT CREATE ROLLBACK SEGMENT TO "LDBO" ;
GRANT CREATE RULE TO "LDBO" ;
GRANT CREATE RULE SET TO "LDBO" ;
GRANT CREATE SEQUENCE TO "LDBO" ;
GRANT CREATE SESSION TO "LDBO" ;
GRANT CREATE SYNONYM TO "LDBO" ;
GRANT CREATE TABLE TO "LDBO" ;
GRANT CREATE TABLESPACE TO "LDBO" ;
GRANT CREATE TRIGGER TO "LDBO" ;
GRANT CREATE TYPE TO "LDBO" ;
GRANT CREATE USER TO "LDBO" ;
GRANT CREATE VIEW TO "LDBO" ;
BEGIN
DBMS_WM.GrantSystemPriv('CREATE_ANY_WORKSPACE', 'LDBO', 'NO');
END;
/
GRANT DEBUG ANY PROCEDURE TO "LDBO" ;
GRANT DEBUG CONNECT SESSION TO "LDBO" ;
GRANT DELETE ANY TABLE TO "LDBO" ;
BEGIN
dbms_aqadm.grant_system_privilege(privilege=>'DEQUEUE_ANY', grantee=>'LDBO', admin_option=>FALSE);
COMMIT;
END;
/
GRANT DROP ANY CLUSTER TO "LDBO" ;
GRANT DROP ANY CONTEXT TO "LDBO" ;
GRANT DROP ANY DIMENSION TO "LDBO" ;
GRANT DROP ANY DIRECTORY TO "LDBO" ;
GRANT DROP ANY EVALUATION CONTEXT TO "LDBO" ;
GRANT DROP ANY INDEX TO "LDBO" ;
GRANT DROP ANY INDEXTYPE TO "LDBO" ;
GRANT DROP ANY LIBRARY TO "LDBO" ;
GRANT DROP ANY MATERIALIZED VIEW TO "LDBO" ;
GRANT DROP ANY OPERATOR TO "LDBO" ;
GRANT DROP ANY OUTLINE TO "LDBO" ;
GRANT DROP ANY PROCEDURE TO "LDBO" ;
GRANT DROP ANY ROLE TO "LDBO" ;
GRANT DROP ANY RULE TO "LDBO" ;
GRANT DROP ANY RULE SET TO "LDBO" ;
GRANT DROP ANY SEQUENCE TO "LDBO" ;
GRANT DROP ANY SQL PROFILE TO "LDBO" ;
GRANT DROP ANY SYNONYM TO "LDBO" ;
GRANT DROP ANY TABLE TO "LDBO" ;
GRANT DROP ANY TRIGGER TO "LDBO" ;
GRANT DROP ANY TYPE TO "LDBO" ;
GRANT DROP ANY VIEW TO "LDBO" ;
GRANT DROP PROFILE TO "LDBO" ;
GRANT DROP PUBLIC DATABASE LINK TO "LDBO" ;
GRANT DROP PUBLIC SYNONYM TO "LDBO" ;
GRANT DROP ROLLBACK SEGMENT TO "LDBO" ;
GRANT DROP TABLESPACE TO "LDBO" ;
GRANT DROP USER TO "LDBO" ;
BEGIN
dbms_aqadm.grant_system_privilege(privilege=>'ENQUEUE_ANY', grantee=>'LDBO', admin_option=>FALSE);
COMMIT;
END;
/
GRANT EXECUTE ANY CLASS TO "LDBO" ;
GRANT EXECUTE ANY EVALUATION CONTEXT TO "LDBO" ;
GRANT EXECUTE ANY INDEXTYPE TO "LDBO" ;
GRANT EXECUTE ANY LIBRARY TO "LDBO" ;
GRANT EXECUTE ANY OPERATOR TO "LDBO" ;
GRANT EXECUTE ANY PROCEDURE TO "LDBO" ;
GRANT EXECUTE ANY PROGRAM TO "LDBO" ;
GRANT EXECUTE ANY RULE TO "LDBO" ;
GRANT EXECUTE ANY RULE SET TO "LDBO" ;
GRANT EXECUTE ANY TYPE TO "LDBO" ;
GRANT EXPORT FULL DATABASE TO "LDBO" ;
GRANT FLASHBACK ANY TABLE TO "LDBO" ;
GRANT FORCE ANY TRANSACTION TO "LDBO" ;
GRANT FORCE TRANSACTION TO "LDBO" ;
GRANT GRANT ANY OBJECT PRIVILEGE TO "LDBO" ;
GRANT GRANT ANY PRIVILEGE TO "LDBO" ;
GRANT GRANT ANY ROLE TO "LDBO" ;
GRANT IMPORT FULL DATABASE TO "LDBO" ;
GRANT INSERT ANY TABLE TO "LDBO" ;
GRANT LOCK ANY TABLE TO "LDBO" ;
GRANT EXECUTE ON DBMS_LOCK TO "LDBO" ;
GRANT MANAGE ANY FILE GROUP TO "LDBO" ;
BEGIN
dbms_aqadm.grant_system_privilege(privilege=>'MANAGE_ANY', grantee=>'LDBO', admin_option=>FALSE);
COMMIT;
END;
/
GRANT MANAGE FILE GROUP TO "LDBO" ;
GRANT MANAGE SCHEDULER TO "LDBO" ;
GRANT MANAGE TABLESPACE TO "LDBO" ;
GRANT MERGE ANY VIEW TO "LDBO" ;
BEGIN
DBMS_WM.GrantSystemPriv('MERGE_ANY_WORKSPACE', 'LDBO', 'NO');
END;
/
GRANT ON COMMIT REFRESH TO "LDBO" ;
GRANT QUERY REWRITE TO "LDBO" ;
GRANT READ ANY FILE GROUP TO "LDBO" ;
BEGIN
DBMS_WM.GrantSystemPriv('REMOVE_ANY_WORKSPACE', 'LDBO', 'NO');
END;
/
GRANT RESTRICTED SESSION TO "LDBO" ;
GRANT RESUMABLE TO "LDBO" ;
BEGIN
DBMS_WM.GrantSystemPriv('ROLLBACK_ANY_WORKSPACE', 'LDBO', 'NO');
END;
/
GRANT SELECT ANY DICTIONARY TO "LDBO" ;
GRANT SELECT ANY SEQUENCE TO "LDBO" ;
GRANT SELECT ANY TABLE TO "LDBO" ;
GRANT SELECT ANY TRANSACTION TO "LDBO" ;
GRANT SYSDBA TO "LDBO" ;
GRANT SYSOPER TO "LDBO" ;
GRANT UNDER ANY TABLE TO "LDBO" ;
GRANT UNDER ANY TYPE TO "LDBO" ;
GRANT UNDER ANY VIEW TO "LDBO" ;
GRANT UNLIMITED TABLESPACE TO "LDBO" ;
GRANT UPDATE ANY TABLE TO "LDBO" ;

GRANT "AQ_ADMINISTRATOR_ROLE" TO "LDBO" ;
GRANT "AQ_USER_ROLE" TO "LDBO" ;
GRANT EXECUTE ON DBMS_AQ TO "LDBO";
GRANT EXECUTE ON DBMS_AQADM TO "LDBO";


GRANT EXECUTE ON SYS.DBMS_DEFER_IMPORT_INTERNAL TO "LDBO";
GRANT EXECUTE ON SYS.DBMS_EXPORT_EXTENSION TO "LDBO";


BEGIN
DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(privilege => 'ENQUEUE_ANY',grantee => 'LDBO',admin_option => FALSE);
END;
/

BEGIN
DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(privilege => 'DEQUEUE_ANY',grantee => 'LDBO',admin_option => FALSE);
END;
/

GRANT "CONNECT" TO "LDBO" ;
GRANT "DBA" TO "LDBO" ;
GRANT "EXP_FULL_DATABASE" TO "LDBO" ;
GRANT "IMP_FULL_DATABASE" TO "LDBO" ;

ALTER USER LDBO QUOTA UNLIMITED on USR;
ALTER USER LDBO QUOTA UNLIMITED on INDX;



GRANT execute ON utl_recomp TO "LDBO";

alter system set job_queue_processes=1000;
alter system set sec_case_sensitive_logon=False;
alter system set open_cursors=1000 scope=spfile;
alter system set session_cached_cursors=500 scope=spfile;


CREATE USER CMUSER PROFILE "DEFAULT" IDENTIFIED BY cmuser DEFAULT TABLESPACE "USR" TEMPORARY TABLESPACE "TEMPORARY" ACCOUNT UNLOCK ;
GRANT CREATE SESSION TO cmuser;
GRANT CONNECT TO cmuser;

create or replace directory LDDIGITAL as 'd:\ldoutput\lddigital';

grant all on directory LDDIGITAL to public;



-----------------Auditing

audit user, system grant, role, alter system, profile whenever successful;
audit create session by access;
audit audit system by access;
audit grant any privilege by access;
audit grant any object privilege by access;
audit grant any role by access;
audit system grant by access;
audit create user by access;
audit create any table by access;
audit create public database link by access;
audit create any procedure by access;
audit alter user by access;
audit alter any table by access;
audit alter any procedure by access;
audit alter database by access;
audit alter system by access;
audit alter profile by access;
audit drop user by access;
audit drop any procedure by access;
audit drop any table by access;
audit drop profile by access;
audit drop any index by access;

-----------------Profile Setting

ALTER PROFILE DEFAULT LIMIT
FAILED_LOGIN_ATTEMPTS UNLIMITED
PASSWORD_LIFE_TIME UNLIMITED
PASSWORD_LOCK_TIME UNLIMITED
PASSWORD_GRACE_TIME UNLIMITED;

-----------------Network Access

Exec dbms_network_acl_admin.create_acl ('utl_http_access.xml','Normal Access','LDBO',TRUE,'connect',NULL,NULL);
Exec dbms_network_acl_admin.add_privilege (acl => 'utl_http_access.xml', principal =>  'LDBO',is_grant => TRUE, privilege => 'resolve');
Exec dbms_network_acl_admin.assign_acl ('utl_http_access.xml', '*',NULL,NULL);
commit ;
               
Exec dbms_network_acl_admin.create_acl ('utl_inaddr_access.xml','Normal Access','LDBO',TRUE,'resolve',NULL, NULL);
Exec dbms_network_acl_admin.add_privilege (acl => 'utl_inaddr_access.xml', principal =>  'LDBO',is_grant => TRUE, privilege => 'resolve');
Exec dbms_network_acl_admin.assign_acl ('utl_inaddr_access.xml', '*',NULL,NULL);
commit;

Exec dbms_network_acl_admin.create_acl ('utl_mail.xml','Allow mail to be sent','LDBO',TRUE,'connect' );
Exec dbms_network_acl_admin.add_privilege ('utl_mail.xml','LDBO',TRUE,'resolve');
Exec dbms_network_acl_admin.assign_acl('utl_mail.xml','*',NULL,NULL);
commit ;


Exec dbms_network_acl_admin.create_acl ('utl_http.xml','HTTP Access','LDBO',TRUE,'connect',null,null);
Exec dbms_network_acl_admin.add_privilege ('utl_http.xml','LDBO',TRUE,'resolve',null,null);
Exec dbms_network_acl_admin.assign_acl ('utl_http.xml','*',NULL,NULL);
commit;

Grant Execute on utl_inaddr to ldbo ;
Grant Execute on utl_http to ldbo ;


--------------Jobs


BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'compile',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN UTL_RECOMP.recomp_serial; END;',
    start_date      => '01-APR-12 06.31.00.00 AM ASIA/CALCUTTA',
    repeat_interval => 'freq=DAILY',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'JOB to compile invalid objects');
END;
/


BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'analyze',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_STATS.gather_schema_stats(''LDBO'',CASCADE=>TRUE); END;',
    start_date      => '01-APR-12 04.00.00 AM ASIA/CALCUTTA',
    repeat_interval=> 'FREQ=DAILY',
    enabled         => TRUE,
    comments        => 'JOB to gather LDBO statistics every DAY');
END;
/


CREATE OR REPLACE PROCEDURE Analyzetemp wrapped
a000000
b2
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
7
2a2 168
oUpO82c0m3kRqNYu+ieu9+lzKuMwg+1pLdwdf3QBvGQwDJTIaaoOXtFSqGZSURSwYVZJQEip
BFRn/ggU7vLVLWjhHH7ZwpoSP6gj6SPjNjqS0uQIQs8kvDQ16OKw2mqkYSLmslJfGnAEsfRW
JnR5Cd9xq50LGMSTM6dgp0p75Bh50uKOVdktzWyKuSYvQdBw1x012GW+S9N4SxkHOSFtETUS
CYQbBMFa/ZN7qJODaoZfNvtZ66GKRb0KHS1vnE6ZPN7TaRkCGOzCR4FJjTbfVXhPUcjwJAgf
s3RPbdJG8TrBXhzYkK5UseiJMxXJJhBAKs0ftLEz+d2dQJkt27wDd6pLZMvivKZk5hqg

/

show errors

BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'ANALYZE_TEMP',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN Analyzetemp; END;',
    start_date      => '01-APR-12 08.01.00 AM ASIA/CALCUTTA',
    repeat_interval=> 'FREQ=DAILY',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'Analyze TEMP tables indexes');
END;
/

CREATE OR REPLACE PROCEDURE AnalyzeFull wrapped
a000000
b2
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
7
265 154
+WnW35aMpBSqlXDoysgRZr/L8jowg+1pLdwdf3QBvGQwumfmczfcvtuA72wzPvomAkPb3Z/g
Hml30Em9YungrX+PqGGHfp3BeGB/kbBaULtk2SmGeVfRmfX8IyPAaDjI54KVrfE3C2hJm5Fl
2AwxUj6Rco4sA7kA19gdMs6UhIlp8c2BaJIlHTSFtHuWVmJ9Z8mXNh/hk7AMYyH2teTmEcdp
3TrPXAPeR5Rp/YG2nd4+Taz3fDw7Ph5x6RMaS828znTwMfqNqp3vQF2klUabm/4+ekKqiDi6
bfGDvQUN+YmWDAeUOQ28nVd3OWs8BPXrmyg5XI+5tpcsjUQ=

/

show errors

BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'ANALYZE_FULL',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN analyzefull; END;',
   start_date      => '01-APR-12 01.00.00.00 PM ASIA/CALCUTTA',
  repeat_interval => 'FREQ=WEEKLY; BYDAY=SUN; BYHOUR=01; BYMINUTE=01;',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'Analyze all tables indexes');
END;
/


BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'purge_rkqueue',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'DECLARE  po_t dbms_aqadm.aq$_purge_options_t;
    BEGIN dbms_aqadm.purge_queue_table(''LDBO.TBLRKQUEUE'', NULL, po_t); END;',
        start_date      => '28-APR-13 04.10.00 AM ASIA/CALCUTTA',
    repeat_interval => 'freq=DAILY',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'purge queue table');
END;
/


BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'purge_boqueue',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'DECLARE  po_t dbms_aqadm.aq$_purge_options_t;
    BEGIN dbms_aqadm.purge_queue_table(''LDBO.TBLDIGITALBOQUEUE'', NULL, po_t); END;',
        start_date      => '28-APR-13 04.20.00 AM ASIA/CALCUTTA',
    repeat_interval => 'freq=DAILY',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'purge queue table');
END;
/


BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'purge_mainboqueue',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'DECLARE  po_t dbms_aqadm.aq$_purge_options_t;
    BEGIN dbms_aqadm.purge_queue_table(''LDBO.TBLMAINBOQUEUE'', NULL, po_t); END;',
        start_date      => '28-APR-13 04.30.00 AM ASIA/CALCUTTA',
    repeat_interval => 'freq=DAILY',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'purge queue table');
END;
/


BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'analyze_rkqueue',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN dbms_stats.gather_table_stats(''LDBO'',''TBLRKQUEUE'' ,force=>TRUE); END;',
    start_date      => '28-APR-13 07.00.00 AM ASIA/CALCUTTA',
    repeat_interval => 'freq=DAILY',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'JOB to gather Queue Table statistics');
END;
/


BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'analyze_boqueue',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN dbms_stats.gather_table_stats(''LDBO'',''TBLDIGITALBOQUEUE'' ,force=>TRUE); END;',
    start_date      => '28-APR-13 07.11.00 AM ASIA/CALCUTTA',
    repeat_interval => 'freq=DAILY',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'JOB to gather Queue Table statistics');
END;
/


BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'analyze_mainboqueue',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN dbms_stats.gather_table_stats(''LDBO'',''TBLMAINBOQUEUE'' ,force=>TRUE); END;',
    start_date      => '28-APR-13 07.20.00 AM ASIA/CALCUTTA',
    repeat_interval => 'freq=DAILY',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'JOB to gather Queue Table statistics');
END;
/


---------------Backup job

host mkdir d:\expdp1314

create directory export_auto as 'd:\expdp1314';


CREATE USER dba_export_user PROFILE "DEFAULT" IDENTIFIED BY dba_export_user DEFAULT TABLESPACE "USR" TEMPORARY TABLESPACE "TEMPORARY" ACCOUNT UNLOCK ;

grant connect, create database link, resource, create view to dba_export_user;
grant unlimited tablespace to dba_export_user;
grant exp_full_database to dba_export_user;
grant read,write on directory export_auto to dba_export_user;
grant execute on dbms_flashback to dba_export_user;
grant create table to dba_export_user;
grant FLASHBACK ANY TABLE to dba_export_user;

ALTER USER dba_export_user QUOTA UNLIMITED on USR;
ALTER USER dba_export_user QUOTA UNLIMITED on INDX;


CREATE OR REPLACE PROCEDURE dba_export_user.start_export
IS
   hdl_job        NUMBER;
   l_cur_scn      NUMBER;
   l_job_state    VARCHAR2 (20);
   l_status       SYS.ku$_status1010;
   l_job_status   SYS.ku$_jobstatus1010;
BEGIN
 begin
    execute immediate 'drop table  dba_export_user.AUTO_EXPORT';
    exception when others then null;
   end;
   hdl_job := DBMS_DATAPUMP.OPEN ( operation => 'EXPORT', job_mode => 'FULL', job_name => 'AUTO_EXPORT' );
   DBMS_DATAPUMP.add_file (handle => hdl_job,filename => 'EXPDP1314.dmp',directory => 'EXPORT_AUTO',filetype => DBMS_DATAPUMP.ku$_file_type_dump_file,reusefile => 1);
   DBMS_DATAPUMP.add_file (handle => hdl_job,filename => 'EXPDP1314.log',DIRECTORY => 'EXPORT_AUTO',filetype => DBMS_DATAPUMP.ku$_file_type_log_file,reusefile => 1);
   DBMS_DATAPUMP.start_job (handle => hdl_job);
   DBMS_DATAPUMP.wait_for_job (handle => hdl_job, job_state => l_job_state);
   DBMS_OUTPUT.put_line ('Job exited with status:' || l_job_state);
    DBMS_DATAPUMP.detach(handle => hdl_job);
END;
/

show errors


begin
 dbms_scheduler.create_job(
      job_name => 'EXPORT_JOB'
     ,job_type => 'STORED_PROCEDURE'
     ,job_action => 'dba_export_user.start_export'
     ,start_date => '01-MAR-13 10.00.00.00 PM ASIA/CALCUTTA'
       ,repeat_interval => 'FREQ=DAILY; BYDAY=MON,TUE,WED,THU,FRI,SAT,SUN;'
     ,enabled => TRUE
     ,comments => 'EXPORT_DATABASE_JOB');
end;
/


spool off

Rebuild Index Job 11g


create or replace procedure sp_rebuildindex as
  vOwner   dba_indexes.owner%TYPE;            /* Index Owner            */
  vIdxName dba_indexes.index_name%TYPE;       /* Index Name             */
  vTbName dba_indexes.table_name%TYPE;        /* Table Name             */
  vAnalyze VARCHAR2(100);                     /* String of Analyze Stmt */
  vCursor  NUMBER;                            /* DBMS_SQL cursor        */
  vNumRows INTEGER;                           /* DBMS_SQL return rows   */
  vHeight  index_stats.height%TYPE;           /* Height of index tree   */
  vLfRows  index_stats.lf_rows%TYPE;          /* Index Leaf Rows        */
  vDLfRows index_stats.del_lf_rows%TYPE;      /* Deleted Leaf Rows      */
  vDLfPerc   NUMBER;                          /* Del lf Percentage      */
  vMaxHeight NUMBER;                          /* Max tree height        */
  vMaxDel    NUMBER;                          /* Max del lf percentage  */
  CURSOR cGetIdx IS SELECT owner,index_name,table_name
     FROM dba_indexes WHERE OWNER NOT IN ('DBSNMP','ORACLE_OCM','OUTLN','SYS','SYSMAN','SYSTEM','TSMSYS','WMSYS','XDB');
BEGIN
  /* Define maximums. This section can be customized. */
  vMaxHeight := 3;
  vMaxDel    := 20;

  /* For every index, validate structure */
  OPEN cGetIdx;
  LOOP
     FETCH cGetIdx INTO vOwner,vIdxName,VTbName;
     EXIT WHEN cGetIdx%NOTFOUND;
     /* Open DBMS_SQL cursor */
     vCursor := DBMS_SQL.OPEN_CURSOR;
     /* Set up dynamic string to validate structure */
     vAnalyze := 'ANALYZE INDEX ' || vOwner || '.' || vIdxName || ' VALIDATE STRUCTURE';
     DBMS_SQL.PARSE(vCursor,vAnalyze,DBMS_SQL.V7);
     vNumRows := DBMS_SQL.EXECUTE(vCursor);
     /* Close DBMS_SQL cursor */
     DBMS_SQL.CLOSE_CURSOR(vCursor);
     /* Does index need rebuilding?  */
     /* If so, then generate command */
     SELECT height,lf_rows,del_lf_rows INTO vHeight,vLfRows,vDLfRows
        FROM INDEX_STATS;
     IF vLfRows = 0 THEN          /* guard: lf_rows is the divisor and can be zero */
        vDLfPerc := 0;
     ELSE
        vDLfPerc := (vDLfRows / vLfRows) * 100;
     END IF;
     IF (vHeight > vMaxHeight) OR (vDLfPerc > vMaxDel) THEN
        EXECUTE IMMEDIATE 'ALTER INDEX ' || vOwner || '.' || vIdxName || ' REBUILD ONLINE';
        EXECUTE IMMEDIATE 'ANALYZE TABLE ' || vOwner || '.' || vTbName || ' COMPUTE STATISTICS';
        EXECUTE IMMEDIATE 'ANALYZE INDEX ' || vOwner || '.' || vIdxName || ' COMPUTE STATISTICS';
     END IF;
  END LOOP;
  CLOSE cGetIdx;
END;
/


BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'REBUILDINDEX',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN sp_rebuildindex; END;',
   start_date      => '01-APR-12 09.00.00.00 AM ASIA/CALCUTTA',
  repeat_interval => 'FREQ=WEEKLY; BYDAY=SUN; BYHOUR=09; BYMINUTE=01;',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'rebuild index');
END;
/

EXEC DBMS_SCHEDULER.RUN_JOB('REBUILDINDEX');


Temporary Tablespace Cleanup 11g


alter tablespace temporary shrink space keep 1G;

exec dbms_scheduler.drop_job('tempshrink');

 begin
dbms_scheduler.create_job
(job_name => 'tempshrink',
job_type => 'PLSQL_BLOCK',
job_action => 'begin execute immediate ''alter tablespace temporary shrink space keep 1G''; end;',
start_date => '01-APR-12 01.00.00.00 AM ASIA/CALCUTTA',
  repeat_interval => 'FREQ=WEEKLY; BYDAY=SUN;',
comments=>'Shrink Temporary Tablespace');
end;
/

exec dbms_scheduler.run_job('tempshrink');

========================

select distinct t1.tablespace_name tb,
       t2.file_name tempfile_name,
       t1.tablespace_size/1024/1024 used_allocated_mb,
       t1.allocated_space/1024/1024 allocated_mb,
       t1.free_space/1024/1024 free_mb,
       t2.autoextensible,
       t2.bytes/1024/1024 used_mb,
       t2.maxbytes/1024/1024 max_tb_mb
from dba_temp_free_space t1, dba_temp_files t2
where t1.tablespace_name = t2.tablespace_name;


select tablespace_size/1024/1024,allocated_space/1024/1024,free_space/1024/1024 from dba_temp_free_space;

select tablespace_name,Autoextensible,bytes/1024/1024,maxbytes/1024/1024 from dba_temp_files;


=================================

create or replace trigger tg_clear_tempTB
    after startup
     on database
  declare
     j   integer;
    pragma autonomous_transaction;
  begin
    dbms_job.submit (j,  'begin execute immediate ''alter tablespace temporary shrink space keep 5G''; end;');
   commit;
 end tg_clear_tempTB;
 /


Job: OS audit files maintenance


For Oracle on Unix, .aud files are created whether you have auditing enabled or not; they record SYS operations. If you've got a lot of shell scripts that connect / as sysdba, you are going to get a lot of .aud files. (On Windows they are written to the Event Viewer.)

They go to wherever you set audit_file_dest (aka adump). If you don't set adump, the first default is ORACLE_BASE/admin/ORACLE_SID/adump; if that doesn't exist, they go in ORACLE_HOME/rdbms/audit.

If you didn't know about this, these files tend to raise their heads when $ORACLE_HOME fills up and you wonder why.
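A quick way to confirm where the files are landing on your own system is to query the parameter directly (a simple sketch; v$parameter is a standard dynamic view):

```sql
-- Show where OS audit files are currently being written
SELECT value FROM v$parameter WHERE name = 'audit_file_dest';
```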

To be clear, I am talking about the OS .aud files. This process will not touch the AUD$ table in your database.


Oracle 11.2 provides a way to manage these OS .aud files using the audit management package DBMS_AUDIT_MGMT.

Here's how:

Initialize DBMS_AUDIT_MGMT

Call the initialization procedure INIT_CLEANUP just once to set up the audit management infrastructure.

BEGIN
 DBMS_AUDIT_MGMT.INIT_CLEANUP(
  AUDIT_TRAIL_TYPE            => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
  DEFAULT_CLEANUP_INTERVAL    => 24 );
END;
/
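If you are not sure whether initialization has already been done, DBMS_AUDIT_MGMT exposes a check function; this sketch assumes 11.2 and SERVEROUTPUT enabled:

```sql
-- Optional sanity check before scheduling the cleanup
SET SERVEROUTPUT ON
BEGIN
  IF DBMS_AUDIT_MGMT.IS_CLEANUP_INITIALIZED(DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS) THEN
    DBMS_OUTPUT.PUT_LINE('OS audit trail cleanup: initialized');
  ELSE
    DBMS_OUTPUT.PUT_LINE('OS audit trail cleanup: not initialized');
  END IF;
END;
/
```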



Create the Procedure to delete files (over a year old) for a single instance

CREATE OR REPLACE procedure SYS.delete_OSaud_files
is
ThisProc             VARCHAR2(30) := 'delete_OSaud_files';
ThisAppErr            NUMBER := -20000;
BEGIN
  DBMS_AUDIT_MGMT.set_last_archive_timestamp(
    audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
    last_archive_time => SYSTIMESTAMP-366);
 dbms_audit_mgmt.clean_audit_trail(
   audit_trail_type => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
   use_last_arch_timestamp => TRUE
  );
EXCEPTION
WHEN OTHERS THEN
   dbms_output.put_line(ThisProc||' - '||SQLERRM);
   ROLLBACK;
   RAISE_APPLICATION_ERROR(ThisAppErr, ThisProc);
END;
/

Create the Procedure to delete files (over a year old) for RAC

CREATE OR REPLACE procedure SYS.delete_OSaud_files
is
ThisProc             VARCHAR2(30) := 'delete_OSaud_files';
ThisAppErr            NUMBER := -20000;
BEGIN
  DBMS_AUDIT_MGMT.set_last_archive_timestamp(
    audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
    last_archive_time => SYSTIMESTAMP-366,
    rac_instance_number => 1);
   DBMS_AUDIT_MGMT.set_last_archive_timestamp(
    audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
    last_archive_time => SYSTIMESTAMP-366,
    rac_instance_number => 2);
 dbms_audit_mgmt.clean_audit_trail(
   audit_trail_type => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
   use_last_arch_timestamp => TRUE);
EXCEPTION
WHEN OTHERS THEN
   dbms_output.put_line(ThisProc||' - '||SQLERRM);
   ROLLBACK;
   RAISE_APPLICATION_ERROR(ThisAppErr, ThisProc);
END;
/


If you don't need to keep a year's worth, just change SYSTIMESTAMP-366.
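For example, a 90-day retention would look like this (same call as in the procedures above, only the day offset changes; 90 is an illustrative value):

```sql
-- Keep only 90 days of OS audit files instead of a year
BEGIN
  DBMS_AUDIT_MGMT.set_last_archive_timestamp(
    audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
    last_archive_time => SYSTIMESTAMP - 90);
END;
/
```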


Feel free to moan at me about 'when others then' and post a better procedure in the comments - I'll readily admit my PL/SQL is not what it should be and I'm happy to be corrected.


Create a Schedule
(I like this type of thing to run when I'm actually working so I don't get called out of hours if something goes wrong)

Begin
DBMS_SCHEDULER.CREATE_SCHEDULE (
   schedule_name   => 'DELETE_OSAUD_FILES_SCHED',
      repeat_interval =>'FREQ=DAILY; BYDAY=MON,TUE,WED,THU,FRI; BYHOUR=10;',
      comments        => 'Delete adump files');
END;
/

Schedule the Job

BEGIN
dbms_scheduler.create_job (
job_name =>'DELETE_OSAUD_FILES_JOB',
job_type =>'STORED_PROCEDURE',
job_action => 'SYS.delete_OSaud_files',
enabled    =>  TRUE,
auto_drop => false,
schedule_name =>  'DELETE_OSAUD_FILES_SCHED',
comments => 'Remove aud files from adump');
END;
/


Set up mail notifications
(I like to know when my jobs error)

BEGIN
DBMS_SCHEDULER.set_scheduler_attribute('email_server', 'smtp.mycompany.com:25');
DBMS_SCHEDULER.set_scheduler_attribute('email_sender', 'do_not_reply@mydatabase');
END;
/

Create the events that I want mailing about

To test that notifications work, first set up events for everything:

BEGIN
 DBMS_SCHEDULER.add_job_email_notification (
  job_name   =>  'DELETE_OSAUD_FILES_JOB',
  recipients =>  'me@mycompany.com',
  events     =>  'job_all_events');
  END;
  /

Run your job:

BEGIN
dbms_scheduler.run_job (
job_name =>'DELETE_OSAUD_FILES_JOB');
END;
/

You should get an email saying it ran. But you don't want an 'I ran' mail every day; I only want to know if it failed, so:

BEGIN
 DBMS_SCHEDULER.remove_job_email_notification (
  job_name   =>  'DELETE_OSAUD_FILES_JOB',
  recipients =>  'me@mycompany.com',
  events     =>  'job_succeeded');
END;
/
BEGIN
 DBMS_SCHEDULER.remove_job_email_notification (
  job_name   =>  'DELETE_OSAUD_FILES_JOB',
  recipients =>  'me@mycompany.com',
  events     =>  'job_started');
END;
/
 BEGIN
 DBMS_SCHEDULER.remove_job_email_notification (
  job_name   =>  'DELETE_OSAUD_FILES_JOB',
  recipients =>  'me@mycompany.com',
  events     =>  'job_completed');
END;
/


Which is the equivalent of:

BEGIN
 DBMS_SCHEDULER.add_job_email_notification (
  job_name   =>  'DELETE_OSAUD_FILES_JOB',
  recipients =>  'me@mycompany.com',
  events     =>  'job_broken,job_chain_stalled,job_completed,job_disabled,job_failed,
job_over_max_dur,job_sch_lim_reached,job_stopped');
  END;
  /



So now your .aud files will be kept in check for you.

Create Database Structure / Metadata / Scripts using EXPDP


Don't use PL/SQL Developer for this; it will corrupt code bodies.


1) Take a full export dump of the apx1213 DB using Data Pump

conn sys@apx1213srv as sysdba

CREATE DIRECTORY DPUMP_DIR1 AS 'D:\expdp1213';

GRANT read, write ON DIRECTORY DPUMP_DIR1 TO ldbo;

cmd
expdp ldbo/ldbo@apx1213srv FULL=y directory=dpump_dir1 dumpfile=apx1213.DMP LOGFILE=apx1213.LOG EXCLUDE=STATISTICS

2) Install Oracle 11g

3) Create the apx1314 database (Note: don't change Oracle default parameters)

4) Create the LDBO user using the attached script (Note: use only the attached script; don't use the old createuserldbo.sql)

5) Log in as the LDBO user and create synonyms

Generate the synonym script from the last-year database (apx1213) and run it in the current-year database (apx1314):

conn sys@apx1213srv as sysdba
set pagesize 1000
set linesize 150
set long 1000
col metadata for a145
SPOOL C:\YEP1314\SYNONYMNS1213.SQL
select dbms_metadata.get_ddl('SYNONYM',synonym_name, 'PUBLIC') || '/' metadata
from dba_synonyms
where owner = 'PUBLIC'
  and table_owner = 'LDBO'
order by synonym_name;
SPOOL OFF

conn sys@apx1314srv as sysdba
@C:\YEP1314\SYNONYMNS1213.SQL

6) Log in as the LDBO user and create database links

Note: Please change the password and connect string as per the client software

CREATE PUBLIC DATABASE LINK "LNK_RAKSHAK"
  CONNECT TO "LDBO" IDENTIFIED BY LDBO USING 'APX1314SRV';

 CREATE PUBLIC DATABASE LINK "LNK_CCM"
  CONNECT TO "LDBO" IDENTIFIED BY LDBO USING 'APX1314SRV';

CREATE DATABASE LINK "LNK_DIGITAL"
  CONNECT TO "LDBO" IDENTIFIED BY ldbo USING 'APX1314SRV';

CREATE DATABASE LINK "LNK_PREVIOUSYEARBALANCE"
  CONNECT TO "LDBO" IDENTIFIED BY LDBO USING 'APX1213SRV';

 CREATE PUBLIC DATABASE LINK "CMLDLINK"
  CONNECT TO cmuser IDENTIFIED BY cmuser USING 'CMMASTERSRV';

7) Log in as the LDBO user and create sequences (already present in createuserldbo.sql)

Please drop all sequences and recreate them:

select 'DROP SEQUENCE ' || Sequence_Name || ';' from dba_Sequences where Sequence_Owner='LDBO';
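If you prefer not to spool and re-run the generated statements, the same drop can be done in one anonymous block (a sketch, following the loop pattern used elsewhere in this script; it assumes you really do want every LDBO sequence gone before the recreate below):

```sql
-- Drop all LDBO sequences in a loop instead of spooling DROP statements
BEGIN
  FOR r IN (SELECT sequence_name
              FROM dba_sequences
             WHERE sequence_owner = 'LDBO') LOOP
    EXECUTE IMMEDIATE 'DROP SEQUENCE LDBO.' || r.sequence_name;
  END LOOP;
END;
/
```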


Create Sequence LDBO.Dematuniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Saudauniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Billuniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Jobbrokuniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Delbrokuniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Turnbrokuniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Fwdbrokuniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Extbrokuniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Dpuniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Bankuniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Payrequniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Riskuniquenumber minvalue 1 maxvalue 99999999999 ;
Create Sequence LDBO.Portuniquenumber minvalue 1 maxvalue 99999999999 ;
Create Sequence LDBO.Sq_Axiscms minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Sq_Pmsuniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Sq_Debarreduniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Sq_Bkrecouniquenumber minvalue 50000000 maxvalue 9999999999 start with 50000000 increment by 1 nocache;
Create Sequence LDBO.Sq_Smsuniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Sq_Offlinenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Sq_Phytodemnumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Sq_Ipouniquenumber minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Sq_Trademodification minvalue 1 maxvalue 9999999 ;
Create Sequence LDBO.Sq_SpicePoa minvalue 1 maxvalue 9999999 ;
Create Sequence LDBO.Sq_Idbicms minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Sq_Scripuniquenumber minvalue 1 maxvalue 9999999 ;
Create Sequence LDBO.Sq_Clientuniquenumber minvalue 1 maxvalue 9999999 ;
Create Sequence LDBO.Sq_Brokeruniquenumber minvalue 1 maxvalue 9999999 ;
Create Sequence LDBO.Sq_Nsdlcdsldpuniquenumber minvalue 1 maxvalue 9999999 ;
Create Sequence LDBO.Sq_Orarupee minvalue 1 maxvalue 9999999 ;
Create Sequence LDBO.Sq_Rtgssequence minvalue 1 maxvalue 9999999 ;
Create Sequence LDBO.Sq_Bobbentopool minvalue 1 maxvalue 99999999999 ;
Create Sequence LDBO.Sq_Clientcode minvalue 1 maxvalue 99999999999 ;
Create Sequence LDBO.Sq_Familycode minvalue 1 maxvalue 99999999999 ;
Create Sequence LDBO.Sq_NigPortclose Minvalue 1 maxvalue 9999999999 Start With 1 Increment by 1 noCache;
Create Sequence LDBO.Seqclientsequence minvalue 1 maxvalue 99 start with 1 increment by 1;
Create Sequence LDBO.Sq_collateral minvalue 1 maxvalue 9999999999 ;
Create Sequence LDBO.Sq_Nsdlkra minvalue 1 maxvalue 99999999 ;
Create Sequence LDBO.LdOraclebatchid minvalue 1 maxvalue 9999999999 nocache;
Create Sequence LDBO.Sq_Limittokenno minvalue 1 maxvalue 99999999 ;
Create Sequence LDBO.SQ_CDSLXMLUNIQUENUMBER minvalue 10000 maxvalue 99999;


8) Create the Data Pump directory expdp1314, copy the apx1213 dump (last year's dump file), and create optable

Note: run from the database server only

conn sys@apx1314srv as sysdba

host mkdir D:\expdp1314

CREATE DIRECTORY DPUMP_DIR1 AS 'D:\expdp1314';

GRANT read, write ON DIRECTORY DPUMP_DIR1 TO ldbo;

cmd
impdp ldbo/ldbo@apx1314SRV directory=DPUMP_DIR1 dumpfile=apx1213.DMP LOGFILE=apx1314metadata.LOG schemas=LDBO content=metadata_only EXCLUDE=TABLESPACE,DB_LINK,SEQUENCE,STATISTICS,USER,GRANT sqlfile=optable.sql

9) Create splogin and tglogin manually (already present in createuserldbo.sql)

SELECT owner,object_name FROM Dba_Objects WHERE OBJECT_NAME IN ('SPLOGININFORMATION','TGLOGININFORMATION');

10)  Compile Invalid Objects

conn sys@apx1314srv as sysdba

EXEC sys.UTL_RECOMP.recomp_serial;

select object_name  from dba_objects where owner='LDBO' and status='INVALID';
select * from SYS.USER_ERRORS ;

11) Match last- and current-year database objects (run the following queries in both the last-year and current-year databases and compare the results)

Select Object_Type,Count(*) From User_Objects Group By Object_Type Order By 1;
select COUNT(*)  from user_objects where status='INVALID';
Select * From Dba_Synonyms Where Table_Owner='LDBO';
select * from dba_Constraints where owner='LDBO';
SELECT * FROM DBA_DB_LINKS;
SELECT * FROM DBA_DIRECTORIES;
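Instead of eyeballing counts, a MINUS across the database link (assuming the lnk_previousyearbalance link from step 13 is in place) lists the exact objects that differ:

```sql
-- objects present last year but missing in the current schema;
-- swap the two sides to see objects that only exist this year
SELECT object_type, object_name FROM user_objects@lnk_previousyearbalance
MINUS
SELECT object_type, object_name FROM user_objects;
```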

12) Create Users

conn sys@apx1213srv as sysdba

set head off
set pages 0
set long 9999999
spool c:\yep1314\user_script.sql
SELECT DBMS_METADATA.GET_DDL('PROFILE', profile) || '/' DDL
FROM DBA_PROFILES WHERE profile != 'DEFAULT' GROUP BY profile
UNION ALL
SELECT DBMS_METADATA.GET_DDL('USER', username) || '/' DDL
FROM DBA_USERS
UNION ALL
SELECT DBMS_METADATA.GET_DDL('ROLE', role) || '/' DDL
FROM DBA_ROLES
UNION ALL
SELECT DBMS_METADATA.GET_GRANTED_DDL('TABLESPACE_QUOTA', username) || '/' DDL
FROM DBA_USERS WHERE username IN (SELECT username FROM dba_ts_quotas)
UNION ALL
SELECT DBMS_METADATA.GET_GRANTED_DDL('ROLE_GRANT', username) || '/' DDL
FROM DBA_USERS WHERE username IN (SELECT grantee FROM dba_role_privs)
UNION ALL
SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT', username) || '/' DDL
FROM DBA_USERS WHERE username IN (SELECT grantee FROM dba_sys_privs)
UNION ALL
SELECT DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT', username) || '/' DDL
FROM DBA_USERS WHERE username IN (SELECT grantee FROM dba_tab_privs)
UNION ALL
SELECT DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT', grantee) || '/' DDL
FROM DBA_TAB_PRIVS GROUP BY grantee
UNION ALL
SELECT DBMS_METADATA.GET_GRANTED_DDL('ROLE_GRANT', grantee) || '/' DDL
FROM DBA_ROLE_PRIVS GROUP BY grantee;
spool off;

conn sys@apx1314srv as sysdba
@c:\yep1314\user_script.sql

13) Create roles using a database link
Note: verify that the database link (lnk_previousyearbalance) is working before running the following scripts

conn sys@apx1314srv as sysdba
BEGIN
 FOR cur_rec IN (SELECT role
                  FROM   dba_roles@lnk_previousyearbalance) LOOP
    BEGIN
      EXECUTE IMMEDIATE 'CREATE ROLE ' || cur_rec.role ;
    EXCEPTION
      WHEN OTHERS THEN
        NULL;
    END;
  END LOOP;
END;
/
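Because the loop swallows errors with WHEN OTHERS THEN NULL, it is worth confirming afterwards that nothing was silently skipped; any role the loop failed to create shows up here:

```sql
-- roles that exist in last year's database but are still missing here
SELECT role FROM dba_roles@lnk_previousyearbalance
MINUS
SELECT role FROM dba_roles;
```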


BEGIN
 FOR cur_rec IN (SELECT grantee,privilege
                  FROM   DBA_SYS_PRIVS@lnk_previousyearbalance where grantee not in ('SYS','SYSTEM','SYSMAN','TSMSYS','WMSYS','RECOVERY_CATALOG_OWNER','RESOURCE','OUTLN','ORACLE_OCM','OEM_MONITOR','OEM_ADVISOR','MGMT_USER','IMP_FULL_DATABASE','EXP_FULL_DATABASE','DBA','CONNECT','AQ_ADMINISTRATOR_ROLE','DBSNMP','SCHEDULER_ADMIN')) LOOP
    BEGIN
      EXECUTE IMMEDIATE ('Grant ' || cur_rec.privilege || ' to ' || cur_rec.grantee );
    EXCEPTION
      WHEN OTHERS THEN
        NULL;
    END;
  END LOOP;
END;
/

BEGIN
 FOR cur_rec IN (SELECT grantee,privilege,table_name
                  FROM   dba_tab_privs@lnk_previousyearbalance  Where Grantor='LDBO') LOOP
    BEGIN
      EXECUTE IMMEDIATE 'Grant ' || cur_rec.privilege || ' on ' || cur_rec.table_name || ' to ' || cur_rec.grantee ;
    EXCEPTION
      WHEN OTHERS THEN
        NULL;
    END;
  END LOOP;
END;
/

BEGIN
 FOR cur_rec IN (SELECT grantee,privilege,table_name,column_name
                  FROM   dba_col_privs@lnk_previousyearbalance  Where Grantor='LDBO') LOOP
    BEGIN
      EXECUTE IMMEDIATE 'Grant '|| cur_rec.PRIVILEGE || '('|| cur_rec.COLUMN_NAME ||') on '|| cur_rec.TABLE_NAME || ' to ' || cur_rec.GRANTEE ;
    EXCEPTION
      WHEN OTHERS THEN
        NULL;
    END;
  END LOOP;
END;
/


BEGIN
 FOR cur_rec IN (SELECT grantee,granted_role
                  FROM   dba_role_privs@lnk_previousyearbalance WHERE grantee NOT IN ('SYS','SYSTEM','DBSNMP','REPADMIN')) LOOP
    BEGIN
      EXECUTE IMMEDIATE 'Grant '|| cur_rec.granted_role || ' to ' || cur_rec.GRANTEE ;
    EXCEPTION
      WHEN OTHERS THEN
        NULL;
    END;
  END LOOP;
END;
/
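The same caveat applies to all the grant-copying blocks above: WHEN OTHERS THEN NULL hides failures. A verification sketch for the system privileges (the same MINUS pattern works against dba_tab_privs and dba_role_privs):

```sql
-- system privileges granted last year but not yet in the current database
SELECT grantee, privilege FROM dba_sys_privs@lnk_previousyearbalance
MINUS
SELECT grantee, privilege FROM dba_sys_privs;
```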


impdp ldbo/ldbo@apx1314SRV directory=DATA_PUMP_DIR dumpfile=apx1213.DMP LOGFILE=apx1314metadata.LOG schemas=LDBO content=metadata_only EXCLUDE=TABLESPACE,DB_LINK,SEQUENCE,STATISTICS,USER,GRANT sqlfile=optable.sql

D:\app\Administrator\product\11.2.0\dbhome_1\BIN\impdp.exe LDBO/LDBO@ALP1314SRV DIRECTORY=DPUMP_DIR1 DUMPFILE=ALP1213.DMP LOGFILE=ALP1314METADATA1.LOG CONTENT=METADATA_ONLY SCHEMAS=LDBO EXCLUDE=TABLESPACE,DB_LINK,SEQUENCE,STATISTICS,USER,GRANT,ROLE,ROLE_GRANT,DIRECTORY,PROCACT_SCHEMA,AUDIT_OBJ sqlfile=optable1.sql


How to create Database structure same as another Database using Export Dump


If you want to create a database skeleton (all tables without data, plus indexes and other objects) identical to another database, all you need is a full export dump (apx1213.dmp) of the source database.

Source database A: apx1213
Target Database B: apx1314


1)     Create database using Database Configuration Assistant (DBCA)
2)     Import the dump (without data) using the following command

imp ldbo@apx1314srv FILE=D:\apx1213.dmp ROWS=N FULL=Y CONSTRAINTS=Y GRANTS=Y INDEXES=Y STATISTICS=NONE
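One caveat: the legacy imp tool can only read dumps taken with the classic exp utility. If apx1213.dmp was produced by expdp (Data Pump), the equivalent metadata-only import would look something like this sketch (the DATA_PUMP_DIR directory name is an assumption):

```
impdp ldbo@apx1314srv DIRECTORY=DATA_PUMP_DIR DUMPFILE=apx1213.dmp FULL=Y CONTENT=METADATA_ONLY EXCLUDE=STATISTICS
```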

Tuesday, December 18, 2012

sqlserver to oracle10g / 11g connection


Accessing SQL Server from Oracle with Database Gateway for ODBC (DG4ODBC)

To connect Oracle to a non-Oracle system through DG4ODBC:

Install and configure the ODBC driver on the machine where DG4ODBC is installed.

11g:
D:\app\Administrator\product\11.2.0\dbhome_1\BIN\dg4odbc.exe

10g:
D:\oracle\product\10.2.0\db_1\bin\hsodbc.exe
D:\oracle\product\10.2.0\db_1\bin\dg4odbc.exe

for 10g 32 bit listener
    (SID_DESC=
         (SID_NAME=dg4odbc)
         (ORACLE_HOME=D:\oracle\product\10.2.0\db_1)
         (PROGRAM=hsodbc)
      )

for 11g OR 10G 64 BIT listener

    (SID_DESC=
         (SID_NAME=dg4odbc)
         (ORACLE_HOME=D:\oracle\product\10.2.0\db_1)
         (PROGRAM=dg4odbc)
      )



Configure Oracle:
Database gateway (init*.ora).  initdg4odbc.ora
Database listener (listener.ora).
Network client (tnsnames.ora).
Create a database link with SQL*Plus.



----------odbcad32---------------------
datasource name (DSN) : dg4odbc
description: dg4odbc
sqlserver : 172.168.0.1

database name: mysqlserverdatabase
username: sa (the SQL Server login that Oracle will connect as)
password: sa



Test the connection; it should report success.


--------------------oracle------
Select * from v$parameter where name like 'global_names%';

alter system set global_names=false scope = both;

create public database link lnk_sqlserver connect to "sa" identified by "sa" using 'dg4odbc';

The username and password must be enclosed in double quotes.




The database link name is lnk_sqlserver. The username and password must be in double quotes, because the username and password are case sensitive in SQL Server. 'dg4odbc' points to the alias in the tnsnames.ora file that calls the HS subsystem.




rename init<SID>.ora to initdg4odbc.ora

D:\app\Administrator\product\11.2.0\dbhome_1\hs\admin\initdg4odbc.ora
D:\oracle\product\10.2.0\db_1\hs\admin\initdg4odbc.ora

HS_FDS_CONNECT_INFO = dg4odbc
HS_FDS_TRACE_LEVEL = off
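When the link fails with vague HS errors, raising the gateway trace level is usually the quickest diagnostic; a sketch of the same init file with tracing on (revert once the link works):

```
# initdg4odbc.ora - temporary troubleshooting settings
HS_FDS_CONNECT_INFO = dg4odbc    # must match the ODBC DSN name exactly
HS_FDS_TRACE_LEVEL = debug       # set back to off in production
```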


-----------listener-----D:\oracle\product\10.2.0\db_1------10g

    (SID_DESC=
         (SID_NAME=dg4odbc)
         (ORACLE_HOME=D:\oracle\product\10.2.0\db_1)
         (PROGRAM=hsodbc)
      )

Please check hsodbc, dg4odbc at $oracle_home/bin
-----------listener-----D:\oracle\product\10.2.0\db_1------11g

    (SID_DESC=
         (SID_NAME=dg4odbc)
         (ORACLE_HOME=D:\oracle\product\10.2.0\db_1)
         (PROGRAM=dg4odbc)
      )

-----------------tnsnames.ora--------D:\oracle\product\10.2.0\db_1-----
The HOST and SID must match the listener entry and must point to the Oracle server (not the SQL Server machine)

dg4odbc  =
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.1)(PORT=1521))
    (CONNECT_DATA=(SID=dg4odbc))
    (HS=OK)
  ) 

Now Check the connection

select * from all_catalog@lnk_sqlserver

select * from "systables"@lnk_sqlserver; 




Enclose the column names in double quotes because the columns are case sensitive on the SQL Server side; without the quotes the insert fails with ORA-00904, and a value of the wrong datatype fails with ORA-28534, as the sessions below show.

INSERT INTO DBO.AMR@LNKASIANTRANSMIT("Header","ClientCode","BaseDeposit","SegmentAccount")
values('03','ksh-001',1,'a');

SQL> alter package pk_asianauto compile body;

Warning: Package Body altered with compilation errors.

SQL> sho err
Errors for PACKAGE BODY PK_ASIANAUTO:

LINE/COL ERROR
-------- -----------------------------------------------------------------
28/13    PL/SQL: SQL Statement ignored
28/80    PL/SQL: ORA-00904: "SEGMENTACCOUNT": invalid identifier
60/13    PL/SQL: SQL Statement ignored
60/114   PL/SQL: ORA-00904: "ACTION": invalid identifier
92/13    PL/SQL: SQL Statement ignored
92/85    PL/SQL: ORA-00904: "ISPRIMARYDEALER": invalid identifier
126/13   PL/SQL: SQL Statement ignored
127/13   PL/SQL: ORA-00904: "ADHOCCAP": invalid identifier
169/13   PL/SQL: SQL Statement ignored
171/70   PL/SQL: ORA-00904: "FLAG": invalid identifier
204/13   PL/SQL: SQL Statement ignored

LINE/COL ERROR
-------- -----------------------------------------------------------------
204/57   PL/SQL: ORA-00904: "CLIENTCODE": invalid identifier
261/17   PL/SQL: SQL Statement ignored
262/48   PL/SQL: ORA-00904: "DAYSELLAMT": invalid identifier
266/17   PL/SQL: SQL Statement ignored
267/48   PL/SQL: ORA-00904: "DAYSELLAMT": invalid identifier
278/14   PL/SQL: SQL Statement ignored
278/90   PL/SQL: ORA-00904: "SEGMENTACCOUNT": invalid identifier
288/14   PL/SQL: SQL Statement ignored
288/90   PL/SQL: ORA-00904: "SEGMENTACCOUNT": invalid identifier


SQL> INSERT INTO DBO.AMR@LNKASIANTRANSMIT(Header,ClientCode,BaseDeposit,SegmentAccount)
  2  values('03','ksh-001','a','a');
INSERT INTO DBO.AMR@LNKASIANTRANSMIT(Header,ClientCode,BaseDeposit,SegmentAccount)
                                                                   *
ERROR at line 1:
ORA-00904: "SEGMENTACCOUNT": invalid identifier


SQL> INSERT INTO DBO.AMR@LNKASIANTRANSMIT(Header,ClientCode,BaseDeposit)
  2  values('03','ksh-001','a');
INSERT INTO DBO.AMR@LNKASIANTRANSMIT(Header,ClientCode,BaseDeposit)
                                                       *
ERROR at line 1:
ORA-00904: "BASEDEPOSIT": invalid identifier


SQL> INSERT INTO DBO.AMR@LNKASIANTRANSMIT("Header","ClientCode","BaseDeposit","SegmentAccount")
  2  values('03','ksh-001','a','a');
values('03','ksh-001','a','a')
                      *
ERROR at line 2:
ORA-28534: Heterogeneous Services preprocessing error

-------------
The ORA-28534 above is a datatype mismatch: we are trying to insert a character value ('a') into the NUMBER column BaseDeposit.
--------------------

SQL> desc DBO.AMR@LNKASIANTRANSMIT
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 Header                                             VARCHAR2(2)
 ClientCode                                         VARCHAR2(25)
 BaseDeposit                                        NUMBER(24,6)
 SegmentAccount                                     VARCHAR2(15)

SQL> INSERT INTO DBO.AMR@LNKASIANTRANSMIT("Header","ClientCode","BaseDeposit","SegmentAccount")
  2  values('03','ksh-001',1,'a');

1 row created.

SQL> rollback;


