Friday, August 6, 2010

Oracle 10g Standby Database

Oracle 10g Standby Database
--------------------------------------


PRODUCTION DATABASE: 10.100.0.65
STANDBY DATABASE: 10.100.0.32


-----------------I. Before you get started:-------------------

1. Make sure the operating system and platform architecture on the primary and standby systems are the same;

2. Install Oracle database software without the starter database on the standby server and patch it if necessary. Make sure the same Oracle software release is used on the Primary and Standby databases, and Oracle home paths are identical.

---------II. On the Primary Database Side:---------------------

Enable forced logging on your primary database (check the current setting first):
SQL> SELECT force_logging FROM V$DATABASE;
SQL> ALTER DATABASE FORCE LOGGING;


1) The size of the standby redo log files should match the size of the current Primary database online redo log files. To find out the size of your online redo log files:

SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:

SQL> select group#, member from v$logfile;

3) Create standby Redo log groups.
My primary database originally had 5 online log file groups, so I created 5 standby redo log groups (6 through 10) using the following commands. (Oracle recommends at least one more standby redo log group than the number of online log groups.)
ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 7 SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 8 SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 9 SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 10 SIZE 50M;


4) To verify the results of the standby redo log groups creation, run the following query:
SQL>select * from v$standby_log;

5) Enable Archiving on Primary (NO NEED if the database is already in archive log mode, as it was here).

If your primary database is not already in Archive Log mode, enable the archive log mode:
SQL>shutdown immediate;
SQL>startup mount;
SQL>alter database archivelog;
SQL>alter database open;
SQL>archive log list;

6) Set Primary Database Initialization Parameters
Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE), to add the new primary role parameters.

A) Create pfile from spfile for the primary database:

SQL>create pfile='d:\oracle\product\10.2.0\db_1\database\INITSNS1011.ORA' from spfile;

B) Edit INITSNS1011.ORA to add the new primary and standby role parameters:


select * from v$parameter where name like '%log_archive_format%';
select * from v$parameter where name like '%standby%';
select * from v$parameter where name like '%remote_archive_enable%';
select * from v$parameter where name like '%log_archive_dest_state_%';
select * from v$parameter where name like '%convert%';

----------------------INITSNS1011.ORA------------------

sns6.__db_cache_size=1006632960
sns6.__java_pool_size=8388608
sns6.__large_pool_size=8388608
sns6.__shared_pool_size=645922816
sns6.__streams_pool_size=0
*.audit_file_dest='d:\oracle\product\10.2.0\admin\sns1011\adump'
*.audit_trail='DB'
*.background_dump_dest='d:\oracle\product\10.2.0\admin\sns1011\bdump'
*.compatible='10.2.0.3.0'
*.control_files='e:\snsd1011\control01.ora','e:\snsd1011\control02.ora','e:\snsd1011\control03.ora'
*.core_dump_dest='d:\oracle\product\10.2.0\admin\sns1011\cdump'
*.db_block_size=16384
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='sns1011'
*.job_queue_processes=35
*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(sns1011,sns1011sby)'
*.LOG_ARCHIVE_DEST_1='LOCATION=D:\archive0910\sns1011\arch VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=sns1011'
*.LOG_ARCHIVE_DEST_2='SERVICE=sns1011sby LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=sns1011sby'
*.LOG_ARCHIVE_DEST_STATE_1=ENABLE
*.LOG_ARCHIVE_DEST_STATE_2=ENABLE
*.LOG_ARCHIVE_MAX_PROCESSES=30
*.FAL_SERVER=sns1011sby
*.FAL_CLIENT=sns1011
*.STANDBY_FILE_MANAGEMENT=AUTO
*.DB_FILE_NAME_CONVERT='c:\oracle\product\10.2.0\admin\sns1011sby','d:\oracle\product\10.2.0\admin\sns1011'
*.LOG_FILE_NAME_CONVERT='c:\oracle\product\10.2.0\admin\sns1011sby','d:\oracle\product\10.2.0\admin\sns1011'
*.log_archive_format='ARC%S_%R.%T'
*.open_cursors=300
*.pga_aggregate_target=337641472
*.processes=500
*.remote_login_passwordfile='EXCLUSIVE'
*.sessions=555
*.sga_max_size=1677721600
*.sga_target=1677721600
*.smtp_out_server='mail.uniconindia.in'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='d:\oracle\product\10.2.0\admin\sns1011\udump'
*.utl_file_dir='d:\ldoutput'
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=sns1011XDB)'
-------------------


C. Create spfile from pfile, and restart primary database using the new spfile.

SQL> shutdown immediate;
SQL> startup nomount pfile='d:\oracle\product\10.2.0\db_1\database\INITSNS1011.ORA';
SQL>create spfile from pfile='d:\oracle\product\10.2.0\db_1\database\INITSNS1011.ORA';
SQL>shutdown immediate;
SQL>Startup;


7) CREATE STANDBY CONTROLFILE
SQL>shutdown immediate;
SQL>startup mount;
SQL>alter database create standby controlfile as 'C:\SBY.ORA';
SQL>ALTER DATABASE OPEN;


8) Take an RMAN backup of the primary (target) database, to be restored on the standby.

Run your RMAN backup script.
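The backup script itself is not included in these notes; a minimal hedged sketch of what it might contain is below (destination paths and format masks are hypothetical — substitute your own):

```sql
run {
  allocate channel d1 type disk;
  -- standby controlfile, datafiles, and archived logs in one pass
  backup current controlfile for standby format 'c:\backup\ctl_%U';
  backup database format 'c:\backup\db_%U';
  backup archivelog all format 'c:\backup\arc_%U';
  release channel d1;
}
```

The resulting backup pieces are what get cataloged and restored on the standby in steps III.11/III.12.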


----------------III. On the Standby Database Site:---------------

1. Install the Oracle software only; do not create a starter database.

2. Create the same directory structure as the primary database for data files, and create the directories for multiplexed online logs. Also create all required dump and archived log destinations: adump, bdump, cdump, udump, and the archived log destination for the standby database.

3. Copy the Primary DB pfile to Standby server and rename/edit the file.

1) Copy INITSNS1011.ora from Primary server to Standby server, to database folder C:\oracle\product\10.2.0\db_1\database.

2) Rename it to INITSNS1011SBY.ORA, and modify the file as follows

NOTE: The db_name in the standby's init file must be the same as on the primary database; the standby is distinguished by its db_unique_name.
--------------------------INITSNS1011SBY.ORA----------------------
sns6.__db_cache_size=1207959552
sns6.__java_pool_size=8388608
sns6.__large_pool_size=8388608
sns6.__shared_pool_size=343932928
sns6.__streams_pool_size=0
*.audit_file_dest='c:\oracle\product\10.2.0\admin\sns1011sby\adump'
*.background_dump_dest='c:\oracle\product\10.2.0\admin\sns1011sby\bdump'
*.compatible='10.2.0.3.0'
*.control_files='e:\snsd1011\control01.ora','e:\snsd1011\control02.ora','e:\snsd1011\control03.ora'
*.core_dump_dest='c:\oracle\product\10.2.0\admin\sns1011sby\cdump'
*.db_block_size=16384
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='sns1011'
*.db_unique_name='sns1011sby'
*.job_queue_processes=35
*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(sns1011sby,sns1011)'
*.LOG_ARCHIVE_DEST_1='LOCATION=D:\archive0910\sns1011sby\arch VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=sns1011sby'
*.LOG_ARCHIVE_DEST_2='SERVICE=sns1011 LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=sns1011'
*.LOG_ARCHIVE_DEST_STATE_1=ENABLE
*.LOG_ARCHIVE_DEST_STATE_2=ENABLE
*.LOG_ARCHIVE_MAX_PROCESSES=30
*.FAL_SERVER=sns1011
*.FAL_CLIENT=sns1011sby
*.STANDBY_FILE_MANAGEMENT=AUTO
*.DB_FILE_NAME_CONVERT='d:\oracle\product\10.2.0\admin\sns1011','c:\oracle\product\10.2.0\admin\sns1011sby'
*.LOG_FILE_NAME_CONVERT='d:\oracle\product\10.2.0\admin\sns1011','c:\oracle\product\10.2.0\admin\sns1011sby'
*.log_archive_format='ARC%S_%R.%T'
*.open_cursors=300
*.pga_aggregate_target=337641472
*.processes=500
*.remote_login_passwordfile='EXCLUSIVE'
*.sessions=555
*.sga_max_size=1572864000
*.sga_target=1572864000
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
# _ALLOW_RESETLOGS_CORRUPTION is a dangerous hidden parameter; it is not needed
# for a standby build and should only be set under Oracle Support direction.
#*._ALLOW_RESETLOGS_CORRUPTION=TRUE
*.user_dump_dest='c:\oracle\product\10.2.0\admin\sns1011sby\udump'
*.utl_file_dir='e:\ldoutput'
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=sns1011SBYXDB)'

--------------------------------------------------------
4. Copy the Primary password file to the standby server and rename it to PWDSNS1011SBY.ora, in C:\oracle\product\10.2.0\db_1\database.

5. Copy the standby control file 'SBY.ORA' from the primary server to the control file locations on the standby (as listed in control_files);

6. For Windows, create a Windows service for the standby instance (optional):
C:\> oradim -NEW -SID SNS1011SBY -STARTMODE manual

7. Configure listeners for the primary and standby databases.

--------------TNSNAMES.ORA--PRIMARY---------
SNS1011 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.100.0.65)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SID = SNS1011)
)
)
SNS1011SBY =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.100.0.32)(PORT = 1522))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SID = SNS1011sby)
)
)

-----------------LISTENER.ORA----PRIMARY--------
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(PROGRAM = EXTPROC)
(SID_NAME = PLSExtProc)
(ORACLE_HOME = d:\oracle\product\10.2.0\db_1)
)
(SID_DESC =
(GLOBAL_DBNAME = SNS1011)
(ORACLE_HOME = d:\oracle\product\10.2.0\db_1)
(SID_NAME = SNS1011)
)

)
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
)
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.100.0.65)(PORT = 1521))
)
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.100.0.32)(PORT = 1522))
)
)



--------------TNSNAMES.ORA--STANDBY---------


SNS1011SBY =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.100.0.32)(PORT = 1522))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = SNS1011sby)
)
)
SNS1011 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.100.0.65)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = SNS1011)
)
)


-----------------LISTENER.ORA----STANDBY--------

SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(PROGRAM = EXTPROC1)
(SID_NAME = PLSExtProc1)
(ORACLE_HOME = C:\oracle\product\10.2.0\db_1)
)
(SID_DESC =
(GLOBAL_DBNAME = SNS1011sby)
(ORACLE_HOME = C:\oracle\product\10.2.0\db_1)
(SID_NAME = SNS1011sby)
)
)
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
)
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.100.0.32)(PORT = 1522))
)
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.100.0.65)(PORT = 1521))
)
)



------------------------
8. Restart the listeners on the primary and standby servers:
LSNRCTL> RELOAD


Check TNS connectivity from both the primary and standby servers:
$tnsping SNS1011
$tnsping SNS1011SBY



9. On the Standby server, set up the environment variables to point to the Standby database and create the Windows service (skip the oradim step if the service was already created in step 6).

Set ORACLE_HOME and ORACLE_SID.

set ORACLE_SID=sns1011sby

oradim -new -sid sns1011sby -SRVC OracleServicesns1011sby -intpwd oracle -MAXUSERS 5 -STARTMODE auto -PFILE c:\oracle\product\10.2.0\db_1\database\initsns1011sby.ora


10. Start up nomount the standby database and generate a spfile.

SQL>startup nomount pfile='C:\oracle\product\10.2.0\db_1\database\INITSNS1011sby.ORA';
SQL>create spfile from pfile='C:\oracle\product\10.2.0\db_1\database\INITSNS1011sby.ORA';
SQL>shutdown immediate;
SQL>startup mount;

11. Restore the backup on the standby (a manual alternative to the RMAN duplicate in step 12):

SET ORACLE_SID=sns1011sby
RMAN TARGET SYS/ORACLE@SNS1011SBY
RESTORE CONTROLFILE FROM 'C:\SBY.ORA';
catalog backuppiece 'c:\05LICVI0';
restore database;

12. DUPLICATE DATABASE
NOTE: The target (primary) database should be mounted or open, and the auxiliary (standby) instance must be in NOMOUNT state.

rman target sys/oracle@SNS1011 auxiliary sys/oracle@SNS1011sby

RMAN>
run{
allocate channel c1 type disk;
allocate channel c2 type disk;
allocate channel c3 type disk;
allocate channel c4 type disk;
allocate auxiliary channel stby type disk;
duplicate target database for standby;
release channel c1;
release channel c2;
release channel c3;
release channel c4;
}


13. Start Redo apply
1) On the standby database, to start redo apply:
SQL>alter database recover managed standby database disconnect from session;
-----
If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;
-------
14. Verify the standby database is performing properly:
A) On Standby perform a query:
SQL>select sequence#, first_time, next_time from v$archived_log;

B) On Primary, force a logfile switch:
SQL>alter system switch logfile;

C) On Standby, verify the archived redo log files were received and applied:
SQL>select sequence#, applied from v$archived_log order by sequence#;

15. If you want redo data to be applied as it is received, without waiting for the current standby redo log file to be archived, enable real-time apply.


On the standby database (in 10g the standby must be mounted, not open, while managed recovery runs):

shutdown immediate;
startup mount;
alter database recover managed standby database using current logfile disconnect;

On the primary, force a couple of log switches one time to verify:

alter system switch logfile;
alter system switch logfile;

16. To create multiple standby databases, repeat this procedure.

17) Failover

SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;
ALTER DATABASE ACTIVATE STANDBY DATABASE;

For a graceful switchover (rather than a failover), use instead:

ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;


--------------------

Thursday, August 5, 2010

SQL Performance Tuning Tips

SQL Performance Tuning

1. Use EXPLAIN to profile the query execution plan
2. Use Slow Query Log (always have it on!)
3. Don't use DISTINCT when you have or could use GROUP BY
4. Insert performance
1. Batch INSERT and REPLACE
2. Use LOAD DATA instead of INSERT
5. LIMIT m,n may not be as fast as it sounds.
6. Don't use ORDER BY RAND() if you have > ~2K records
7. Use SQL_NO_CACHE when you are SELECTing frequently updated data or large sets of data
8. Avoid wildcards at the start of LIKE queries
9. Avoid correlated subqueries in the SELECT and WHERE clauses (and try to avoid IN)
10. No calculated comparisons -- isolate indexed columns
11. ORDER BY and LIMIT work best with equalities and covered indexes
12. Separate text/blobs from metadata, don't put text/blobs in results if you don't need them
13. Derived tables (subqueries in the FROM clause) can be useful for retrieving BLOBs without sorting them. (A self-join can speed up a query if the first part finds the IDs and then uses them to fetch the rest)
14. ALTER TABLE...ORDER BY can take data sorted chronologically and re-order it by a different field -- this can make queries on that field run faster (maybe this goes in indexing?)
15. Know when to split a complex query and join smaller ones
16. Delete small amounts at a time if you can
17. Make similar queries consistent so cache is used
18. Have good SQL query standards
19. Don't use deprecated features
20. Turning OR on multiple index fields (<5.0) into UNION may speed things up (with LIMIT), after 5.0 the index_merge should pick stuff up.
21. Don't use COUNT(*) on InnoDB tables for every search; do it a few times and/or use summary tables, or if you need it for the total number of rows, use SQL_CALC_FOUND_ROWS and SELECT FOUND_ROWS()
22. Use INSERT ... ON DUPLICATE KEY UPDATE (or INSERT IGNORE) to avoid having to SELECT first
23. use groupwise maximum instead of subqueries
24. Be careful with large IN(...) lists when selecting on indexed fields; they can hurt SELECT performance.
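Tip 23 above (groupwise maximum instead of subqueries) can be sketched as follows. This is a hedged example — the `orders` table and its columns are hypothetical — showing a self-join that keeps only the latest row per group, avoiding a correlated subquery per customer:

```sql
-- Hypothetical table: orders(customer_id, order_date, amount)
-- Goal: latest order per customer, without a correlated subquery.
-- The anti-join keeps rows for which no newer row exists in the same group
-- (ties on order_date would return multiple rows per customer).
SELECT o.customer_id, o.order_date, o.amount
FROM orders o
LEFT JOIN orders newer
       ON newer.customer_id = o.customer_id
      AND newer.order_date  > o.order_date
WHERE newer.customer_id IS NULL;
```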


Scaling Performance Tips:

1. Use benchmarking
2. Isolate workloads; don't let administrative work (e.g., backups) interfere with customer performance.
3. Debugging sucks, testing rocks!
4. As your data grows, indexing may change (cardinality and selectivity change). Structuring may want to change. Make your schema as modular as your code. Make your code able to scale. Plan and embrace change, and get developers to do the same.

Network Performance Tips:

1. Minimize traffic by fetching only what you need.
1. Paging/chunked data retrieval to limit
2. Don't use SELECT *
3. Be wary of lots of small quick queries if a longer query can be more efficient
2. Use multi_query if appropriate to reduce round-trips
3. Use stored procedures to avoid bandwidth wastage

OS Performance Tips:

1. Use proper data partitions
1. For MySQL Cluster: start thinking about Cluster *before* you need it
2. Keep the database host as clean as possible. Do you really need a windowing system on that server?
3. Utilize the strengths of the OS
4. pare down cron scripts
5. create a test environment
6. source control schema and config files
7. for LVM innodb backups, restore to a different instance of MySQL so Innodb can roll forward
8. partition appropriately
9. partition your database when you have real data -- do not assume you know your dataset until you have real data

MySQL Server Overall Tips:

1. innodb_flush_log_at_trx_commit=0 can help slave lag
2. Optimize for data types, use consistent data types. Use PROCEDURE ANALYSE() to help determine the smallest data type for your needs.
3. use optimistic locking, not pessimistic locking. try to use shared lock, not exclusive lock. share mode vs. FOR UPDATE
4. if you can, compress text/blobs
5. compress static data
6. don't back up static data as often
7. enable and increase the query and buffer caches if appropriate
8. config params -- http://docs.cellblue.nl/2007/03/17/easy-mysql-performance-tweaks/ is a good reference
9. Config variables & tips:
1. use one of the supplied config files
2. key_buffer, unix cache (leave some RAM free), per-connection variables, innodb memory variables
3. be aware of global vs. per-connection variables
4. check SHOW STATUS and SHOW VARIABLES (GLOBAL|SESSION in 5.0 and up)
5. be aware of swapping esp. with Linux, "swappiness" (bypass OS filecache for innodb data files, innodb_flush_method=O_DIRECT if possible (this is also OS specific))
6. defragment tables, rebuild indexes, do table maintenance
7. If you use innodb_flush_log_at_trx_commit=1, use a battery-backed hardware write cache controller
8. More RAM and faster disks are both good
9. use 64-bit architectures
10. --skip-name-resolve
11. increase myisam_sort_buffer_size to optimize large inserts (this is a per-connection variable)
12. look up memory tuning parameter for on-insert caching
13. increase temp table size in a data warehousing environment (default is 32MB) so it doesn't write to disk (also constrained by max_heap_table_size, default 16MB)
14. Run in SQL_MODE=STRICT to help identify warnings
15. /tmp dir on battery-backed write cache
16. consider battery-backed RAM for innodb logfiles
17. use --safe-updates for client
18. Redundant data is redundant

Storage Engine Performance Tips:

1. InnoDB ALWAYS keeps the primary key as part of each index, so do not make the primary key very large
2. Utilize different storage engines on master/slave ie, if you need fulltext indexing on a table.
3. BLACKHOLE engine and replication is much faster than FEDERATED tables for things like logs.
4. Know your storage engines and what performs best for your needs, know that different ones exist.
1. i.e., use MERGE tables or ARCHIVE tables for logs
2. Archive old data -- don't be a pack-rat! 2 common engines for this are ARCHIVE tables and MERGE tables
5. use row-level instead of table-level locking for OLTP workloads
6. try out a few schemas and storage engines in your test environment before picking one.

Database Design Performance Tips:

1. Design sane query schemas. don't be afraid of table joins, often they are faster than denormalization
2. Don't use boolean flags
3. Use Indexes
4. Don't Index Everything
5. Do not duplicate indexes
6. Do not use large columns in indexes if the ratio of SELECTs:INSERTs is low.
7. be careful of redundant columns in an index or across indexes
8. Use a clever key and ORDER BY instead of MAX
9. Normalize first, and denormalize where appropriate.
10. Databases are not spreadsheets, even though Access really really looks like one. Then again, Access isn't a real database
11. use INET_ATON and INET_NTOA for IP addresses, not char or varchar
12. make it a habit to REVERSE() email addresses, so you can easily search domains (this will help avoid wildcards at the start of LIKE queries if you want to find everyone whose e-mail is in a certain domain)
13. A nullable column can take more room to store than a NOT NULL column
14. Choose appropriate character sets & collations -- UTF16 will store each character in 2 bytes, whether it needs it or not, latin1 is faster than UTF8.
15. Use Triggers wisely
16. use min_rows and max_rows to specify approximate data size so space can be pre-allocated and reference points can be calculated.
17. Use HASH indexing for indexing across columns with similar data prefixes
18. Use the PACK_KEYS table option for integer data
19. be able to change your schema without ruining functionality of your code
20. segregate tables/databases that benefit from different configuration variables
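Tip 12 above (REVERSE() on email addresses) can be sketched as follows. The table and column names are hypothetical; the point is that storing the reversed address moves the wildcard to the end of the LIKE pattern, so a domain search can use an index:

```sql
-- Hypothetical schema: store the address and its reverse, index the reverse.
CREATE TABLE users (
  email     VARCHAR(255) NOT NULL,
  email_rev VARCHAR(255) NOT NULL,
  KEY idx_email_rev (email_rev)
);

INSERT INTO users (email, email_rev)
VALUES ('alice@example.com', REVERSE('alice@example.com'));

-- Find everyone in example.com: the leading wildcard becomes a trailing
-- one against email_rev, so idx_email_rev can be used.
SELECT email
FROM users
WHERE email_rev LIKE CONCAT(REVERSE('@example.com'), '%');
```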

Other:

1. Hire a MySQL (tm) Certified DBA
2. Know that there are many consulting companies out there that can help, as well as MySQL's Professional Services.
3. Read and post to MySQL Planet at http://www.planetmysql.org
4. Attend the yearly MySQL Conference and Expo or other conferences with MySQL tracks
5. Support your local MySQL User Group

Stored Outlines

Stored Outlines

Oracle preserves the execution plans in objects called “Stored Outlines.” You can create a Stored Outline for one or more SQL statements and group Stored Outlines into categories. Grouping Stored Outlines allows you to control which category of outlines Oracle uses.


select * from v$parameter where name like '%create_stored_outlines%';
select * from dictionary where table_name like '%OUTLINE%';


ALTER SYSTEM SET create_stored_outlines=TRUE;
ALTER SESSION SET create_stored_outlines=TRUE;



GRANT CREATE ANY OUTLINE TO LDBO;
GRANT EXECUTE_CATALOG_ROLE TO LDBO;


-- Create an outline for a specific SQL statement.

CREATE OUTLINE client_email FOR CATEGORY ldbo_outlines
ON select distinct accounts.fibsacct,accountemaildetail.email from accounts,accountemaildetail where accounts.oowncode=accountemaildetail.oowncode;

-- Check the outline has been created correctly.

SELECT name, category, sql_text FROM user_outlines WHERE category = 'LDBO_OUTLINES';

-- List the hints associated with the outline.

SELECT node, stage, join_pos, hint FROM user_outline_hints WHERE name = 'CLIENT_EMAIL';


SELECT hash_value, child_number, sql_text FROM v$sql WHERE sql_text LIKE 'select distinct accounts.fibsacct,accountemaildetail.email from accounts,account%';


-- Create an outline for the statement.
BEGIN
DBMS_OUTLN.create_outline(
hash_value => 3174963110,
child_number => 0,
category => 'LDBO_OUTLINES');
END;
/

-- Check the outline has been created correctly.

SELECT name, category, sql_text FROM user_outlines WHERE category = 'LDBO_OUTLINES';

SELECT node, stage, join_pos, hint FROM user_outline_hints WHERE name = 'SYS_OUTLINE_10080512161704577';


-- Check if the outlines have been used.
SELECT name, category, used FROM user_outlines;


--------In the following example we will enable stored outlines for the current session.

ALTER SESSION SET query_rewrite_enabled=TRUE;
ALTER SESSION SET use_stored_outlines=LDBO_OUTLINES;



--DROPPING OUTLINES
BEGIN
DBMS_OUTLN.drop_by_cat (cat => 'LDBO_OUTLINES');
END;
/



---------------------

Wednesday, August 4, 2010

AutoTrace

Prerequisites

SQL> @?/rdbms/admin/utlxplan.sql

This creates the PLAN_TABLE for the user executing the script.

----------
Setting AUTOTRACE On

There is also an easier method with SQL*Plus for generating an EXPLAIN PLAN and statistics about the performance of a query.

SET AUTOTRACE ON
select * from accounts;


----------------------

set autotrace off
set autotrace on
set autotrace traceonly

set autotrace on explain
set autotrace on statistics
set autotrace on explain statistics

set autotrace traceonly explain
set autotrace traceonly statistics
set autotrace traceonly explain statistics


set autotrace off explain
set autotrace off statistics
set autotrace off explain statistics

----------


set autotrace on: Shows the execution plan as well as statistics of the statement.
set autotrace on explain: Displays the execution plan only.
set autotrace on statistics: Displays the statistics only.
set autotrace traceonly: Displays the execution plan and the statistics (as set autotrace on does), but doesn't print a query's result.
set autotrace off: Disables all autotrace
If autotrace is enabled with statistics, then the following statistics are displayed:

* recursive calls
* db block gets
* consistent gets
* physical reads
* redo size
* bytes sent via SQL*Net to client
* bytes received via SQL*Net from client
* SQL*Net roundtrips to/from client
* sorts (memory)
* sorts (disk)

------------

Cost Based Optimizer (CBO) and Database Statistics

Cost Based Optimizer (CBO) and Database Statistics


The mechanisms and issues relating to maintenance of internal statistics are explained below:

* Analyze Statement
* DBMS_UTILITY
* DBMS_STATS
* Scheduling Stats
* Transferring Stats
* Issues


1) Analyze Statement
select 'ANALYZE TABLE '||Owner||'.'||table_name||' compute statistics;'
from sys.all_tables where table_name!='_default_auditing_options_'
/

select 'ANALYZE INDEX '||Owner||'.'||index_name||' compute statistics;'
from sys.all_indexes
/

2) DBMS_UTILITY

EXEC DBMS_UTILITY.ANALYZE_SCHEMA('LDBO','COMPUTE');

EXEC DBMS_UTILITY.ANALYZE_SCHEMA('LDBO', 'ESTIMATE')

3) DBMS_STATS
EXEC DBMS_STATS.gather_schema_stats('LDBO');

4) Scheduling Stats

SET SERVEROUTPUT ON
DECLARE
l_job NUMBER;
BEGIN

DBMS_JOB.submit(l_job,
'BEGIN DBMS_STATS.gather_schema_stats(''LDBO''); END;',
SYSDATE,
'SYSDATE + 1');
COMMIT;
DBMS_OUTPUT.put_line('Job: ' || l_job);
END;
/


-----EXEC DBMS_JOB.remove(X);
-----COMMIT;


5) Transferring Stats
It is possible to transfer statistics between servers allowing consistent execution plans between servers with varying amounts of data. First the statistics must be collected into a statistics table. In the following examples the statistics for the APPSCHEMA user are collected into a new table, STATS_TABLE, which is owned by DBASCHEMA:

EXEC DBMS_STATS.create_stat_table('DBASCHEMA','STATS_TABLE');
EXEC DBMS_STATS.export_schema_stats('APPSCHEMA','STATS_TABLE',NULL,'DBASCHEMA');
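The export above only populates STATS_TABLE on the source server; to complete the transfer, move that table to the target server (e.g., via exp/imp) and import the statistics there. A hedged sketch, assuming the same schema names exist on the target:

```sql
-- On the target server, after moving DBASCHEMA.STATS_TABLE across:
EXEC DBMS_STATS.import_schema_stats('APPSCHEMA','STATS_TABLE',NULL,'DBASCHEMA');

-- Optionally drop the staging table once the import is verified:
EXEC DBMS_STATS.drop_stat_table('DBASCHEMA','STATS_TABLE');
```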


6) Issues

* Exclude dataload tables from your regular stats gathering, unless you know they will be full at the time that stats are gathered.
* I've found gathering stats for the SYS schema can make the system run slower, not faster.
* Gathering statistics can be very resource intensive for the server so avoid peak workload times or gather stale stats only.
* Even if scheduled, it may be necessary to gather fresh statistics after database maintenance or large data loads.

7)

select table_name, avg_row_len, chain_cnt, num_rows,last_analyzed from dba_tables where owner ='LDBO' order by last_analyzed desc;

Explain Plan

SQL> @?/rdbms/admin/utlxplan.sql

This creates the PLAN_TABLE for the user executing the script.

Run EXPLAIN PLAN for the query to be optimized:

1)explain plan for
select * from accounts;

2)select * from PLAN_TABLE; ---to check output


3)explain plan
SET STATEMENT_ID = 'ACC' FOR
select * from accounts;



4)

SELECT cardinality "Rows",
lpad(' ',level-1)||operation||' '||
options||' '||object_name "Plan",cost "Cost by CBO",bytes/1024/1024 "MB",time "Time By CBO"
FROM PLAN_TABLE

CONNECT BY prior id = parent_id
AND prior statement_id = statement_id
START WITH id = 0
AND statement_id = 'ACC'
ORDER BY id;



5)
SELECT LPAD(' ', 2 * (level - 1)) ||
DECODE (level,1,NULL,level-1 || '.' || pt.position || ' ') ||
INITCAP(pt.operation) ||
DECODE(pt.options,NULL,'',' (' || INITCAP(pt.options) || ')') plan,
pt.object_name,
pt.object_type,
pt.bytes,
pt.cost,
pt.partition_start,
pt.partition_stop
FROM plan_table pt
START WITH pt.id = 0
AND pt.statement_id = '&1'
CONNECT BY PRIOR pt.id = pt.parent_id
AND pt.statement_id = '&1';


The procedures for running TRACE versus EXPLAIN PLAN are compared below:

TRACE

It takes four hours to TRACE a query that takes four hours to run.

  • Set up Init.ora Parameters
  • Create PLAN_TABLE table
  • Run Query
  • Statement is executed PLAN_TABLE is populated
  • Run TKPROF
  • Output shows disk and memory reads in addition to EXPLAIN PLAN output
EXPLAIN PLAN

It takes less than a minute to EXPLAIN PLAN a query that takes four hours to run.

  • Create PLAN_TABLE table
  • Explain Query
  • PLAN_TABLE is populated
  • Query PLAN_TABLE
  • Output shows EXPLAIN PLAN

Tuesday, August 3, 2010

ODBC Trace for ODBC Applications

Check in Control Panel -> Administrative Tools -> ODBC Data Source Administrator -> Tracing (tab) -> click Start Tracing Now (the same button becomes Stop Tracing Now).

Always remember to stop it.

Note: ODBC tracing to SQL.LOG can slow SQL down or consume all disk space.

SQL Trace / TKPROF

To start a SQL trace for the current session, execute:
ALTER SESSION SET sql_trace = true;

To trace another session, use:
DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(sid, serial#, true);


ALTER SESSION SET tracefile_identifier = mysqltrace;

DBAs can use DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION to trace problematic database sessions. Steps:
select sid, serial# from sys.v_$session where .....


SID SERIAL#
---------- ----------
8 13607

--------------------

Enable Timed Statistics – This parameter enables the collection of certain vital statistics such as CPU execution time, wait events, and elapsed times. The resulting trace output is more meaningful with these statistics. The command to enable timed statistics is:

SQL> ALTER SYSTEM SET timed_statistics = true;

--------------------

DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(sid,serial#,true);
SQL> execute dbms_system.set_sql_trace_in_session(8, 13607, true);


Ask the user to run just enough of the workload to demonstrate the problem.
Then disable tracing for the selected session:

execute dbms_system.set_sql_trace_in_session(8,13607, false);


Look for trace file in USER_DUMP_DEST


ALTER SYSTEM SET sql_trace = false SCOPE=MEMORY;


Identifying trace files

Trace output is written to the database's UDUMP directory.

The default name for a trace file is INSTANCE_ora_PID_TRACEID.trc where:

* INSTANCE is the name of the Oracle instance,
* PID is the operating system process ID (V$PROCESS.OSPID); and
* TRACEID is a character string of your choosing.


---http://www.ordba.net/Tutorials/OracleUtilities~TKPROF.htm---

Trace output is quite unreadable. However, Oracle provides a utility, called TKProf, that can be used to format trace output.


SET ORACLE_SID=SNS6


tkprof filename1 filename2 [waits=yes|no] [sort=option] [print=n]
[aggregate=yes|no] [insert=filename3] [sys=yes|no] [table=schema.table]
[explain=user/password] [record=filename4] [width=n]

C:\> tkprof C:\oracle\product\10.2.0\admin\sns1011\udump\sns6_ora_976.trc C:\oracle\product\10.2.0\admin\sns1011\udump\sns6_ora_976.prf explain = ldbo/ldbo sys=no sort = (PRSDSK,EXEDSK,FCHDSK,EXECPU,FCHCPU)




Some of the things to look for in the TKPROF output are listed below:

Problem: High numbers for parsing.
Solution: The SHARED_POOL_SIZE may need to be increased.

Problem: The disk reads are very high.
Solution: Indexes are not used or may not exist.

Problem: The "query" and/or "current" (memory reads) are very high.
Solution: Indexes may be on columns with high cardinality (columns where an individual value generally makes up a large percentage of the table). Removing or suppressing the index may increase performance.

Problem: The parse elapsed time is high.
Solution: There may be a problem with the number of open cursors.

Problem: The number of rows processed by a row in the EXPLAIN PLAN is high compared to the other rows.
Solution: This could be a sign of an index with a poor distribution of distinct keys (unique values for a column), or of a poorly written statement.

Problem: The number of misses in the library cache during parse is greater than 1.
Solution: This indicates the statement had to be reloaded. You may need to increase the SHARED_POOL_SIZE in the init.ora.


----------------

Monday, August 2, 2010

Date Format

REM NOTE: the %date% substring offsets depend on the Windows regional date
REM format; the offsets below assume dd-mm-yyyy.
mkdir c:\backupfolders\%date:~0,2%
mkdir c:\backupfolders\%date:~3,2%
mkdir c:\backupfolders\%date:~6,4%
mkdir c:\backupfolders\%date:~8,2%

mkdir c:\backupfolders\%date:~0,2%.%date:~3,2%.%date:~6,4%

--------------

Allow IP to sending mail (Relay on)

cat /etc/tcp.smtp



[root@mail ~]# cat /etc/tcp.smtp
127.:allow,RELAYCLIENT=""
10.100.0.91:allow,RELAYCLIENT=""
#Rachit#
172.16.203.15:allow,RELAYCLIENT=""
#Endosor Server - DailyReports#
10.100.0.26:allow,RELAYCLIENT=""
[root@mail ~]#
------------------------

Content Duplication for Multiple Domains with the Same Website

When multiple domains point to one URL, search engines may treat the content as duplicated.

The simplest way to redirect all your secondary domain names to your primary domain name is to do it at the domain registrar level. Instead of pointing the DNS of all your domains at your web hosting account, set up only your primary domain name there. Log into your domain registrar's website and look for "URL Forwarding", "Forwarding", "Redirection" or something to that effect.


I suggest three solutions that have different effects but fit the scope:

* Insert this rule in the robots.txt file of the secondary domains only:

User-agent: *
Disallow: /

* Apply a permanent redirect from the secondary domains to the main site, so that a request for www.youbusiness.fr (or any other) is redirected to the .COM domain.
* Create a landing page to be published on the secondary domains.

The difference between a 301 and a 302 is that a 301 status code means a page has permanently moved to a new location, while a 302 status code means a page has temporarily moved to a new location.
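The choice between the two codes can be captured in a tiny helper. This is a sketch under the assumption that www.example.com stands in for your primary domain (a hypothetical name):

```python
PRIMARY = "https://www.example.com"  # hypothetical primary domain

def redirect_for(path, permanent=True):
    """Status code and target URL a secondary domain should answer with.

    301 transfers search ranking to the primary domain; 302 signals a
    temporary move, so engines keep indexing the old URL.
    """
    return (301 if permanent else 302), PRIMARY + path

print(redirect_for("/about"))  # (301, 'https://www.example.com/about')
```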

----------------

Move website to new Domain

Today the Google Webmaster team has rolled out a new Webmaster Tools interface for everyone. From today onwards, when you log in to Google Webmasters you will see the new interface, along with a new Google Webmaster Tools feature named Change of Address.

This feature will be extremely useful for those who are planning to move a site to a new domain. Until today there was no way to notify Google of a changed domain name, but with this new feature you can ask Google to update its index to reflect your new URL.

When you log in to your Google Webmaster account you will find the Change of Address option under the Site Configuration navigation links. Here are the instructions displayed under Change of Address.

1. Set up the new website.

2. Redirect all traffic from the old site with a 301 Permanent Redirect.

3. Add your new site to Google Webmaster Tools under the same account as your old domain.

4. Update the new URL for your old domain.

Dynamic Title

----------------------

Header Tag Optimization

Heading Tags in HTML:

Headings are defined with the <h1> to <h6> tags. <h1> defines the largest heading and is used for the main heading; <h2> to <h6> define progressively smaller headings and are used for sub-headings.

Heading tags <h1> to <h6> are one of the important factors for on-page optimization. Search engines give them extra weight when indexing and when ranking pages in search results.

The header tag is important for visitors too, since it tells both search engines and visitors what the content is all about.

<h1> is considered the most important tag by search engines, and <h6> the smallest and least important.

Example of header tags:

<h1>Head Tag</h1>
<h2>Head Tag</h2>
<h3>Head Tag</h3>
<h4>Head Tag</h4>
<h5>Head Tag</h5>
<h6>Head Tag</h6>

Tips for optimizing the header tags <h1> to <h6>:

* Header tags should contain your target keywords along with any other descriptive text relevant to the content of the page.
* Search engines give more weight to header tags when determining what a web page is about.
* The Google ranking algorithm assumes that if you're using an <h1> tag, the text inside it must be more important than the content on the rest of the page.
* Use your most targeted keyword phrases in the heading tags on your webpage.
* By default, heading tags carry only the browser's plain formatting, so use a CSS style to override the default appearance.
* Use the keywords that also appear in your title and meta tags (the description and keywords tags); search engines prefer keywords that are repeated in heading tags.
* Use at least 2-4 heading tags, such as <h1>, <h2>, <h3>, <h4>, on each page of your website.
* Use target keywords in the header tags that describe the content of the webpage.
* Add highly relevant keywords to the <h1> tag, as it is weighted more than the other heading tags.
* Analyze the relevancy of your keywords and place the most important keywords in <h1>, less important ones in <h2>, still less important ones in <h3>, and the least important keywords in <h4> to <h6>.

Import job monitoring (how fast an import is running)

SELECT SUBSTR(sql_text, INSTR(sql_text,'INTO "'),30) table_name
, rows_processed
, ROUND( (sysdate-TO_DATE(first_load_time,'yyyy-mm-dd hh24:mi:ss'))*24*60,1) minutes
, TRUNC(rows_processed/((sysdate-to_date(first_load_time,'yyyy-mm-dd hh24:mi:ss'))*24*60)) rows_per_minute
FROM sys.v_$sqlarea WHERE sql_text like 'INSERT %INTO "%'
AND command_type = 2
AND open_versions > 0;
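The rows_per_minute column in the query is just rows divided by the minutes elapsed since first_load_time. The same arithmetic in Python:

```python
from datetime import datetime

def rows_per_minute(rows_processed, first_load_time, now):
    """Same arithmetic as the v$sqlarea query: rows / elapsed minutes,
    truncated to a whole number as TRUNC does."""
    started = datetime.strptime(first_load_time, "%Y-%m-%d %H:%M:%S")
    minutes = (now - started).total_seconds() / 60
    return int(rows_processed // minutes)

now = datetime(2010, 8, 6, 12, 30, 0)
print(rows_per_minute(120000, "2010-08-06 12:00:00", now))  # 4000
```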

-----------------

Oracle Table Size Growth Monitoring

select
segment_name table_name,
sum(bytes)/(1024*1024) table_size_meg
from
user_extents
where
segment_type='TABLE'
and
segment_name = 'TBLDIGITALSIGNEDREPORTS'
group by segment_name;


-------------------TABLE SIZE GROWTH WITH TIME-------

select
to_char(begin_interval_time,'DD-MM-YYYY hh24:mm') TIME,
object_name TABLE_NAME,
space_used_total/1024/1024 SPACE_USED_MB
from
dba_hist_seg_stat s,
dba_hist_seg_stat_obj o,
dba_hist_snapshot sn
where
o.owner = 'LDBO'
and
s.obj# = o.obj#
and
sn.snap_id = s.snap_id
and
TRIM(object_name) LIKE 'TBLDIGITALSIGNEDREPORTS'
order by
begin_interval_time DESC;



-------------------------------



select sum(space_used_delta) / 1024 / 1024 "Space used (M)", sum(c.bytes) / 1024 / 1024 "Total Schema Size (M)",
round(sum(space_used_delta) / sum(c.bytes) * 100, 2) || '%' "Percent of Total Disk Usage"
from
dba_hist_snapshot sn,
dba_hist_seg_stat a,
dba_objects b,
dba_segments c
where end_interval_time > trunc(sysdate) - &days_back
and sn.snap_id = a.snap_id
and b.object_id = a.obj#
and b.owner = c.owner
and b.object_name = c.segment_name
and b.object_name ='TBLDIGITALSIGNEDREPORTS'
and c.owner = 'LDBO'
and space_used_delta > 0;

--------------------

Tuesday, July 20, 2010

Bulk Updation in Address: Move one column data to another

Declare
cursor CUR_TAB is
SELECT rowid,a.* FROM accountaddressdetail11 a WHERE CODE LIKE 'In%';
l_b number:=0;
Begin
for CUR_REC in CUR_TAB
loop
Begin
if CUR_REC.TEL2 is not null then
if CUR_REC.TEL1 is null or LENGTH(TRIM(CUR_REC.TEL1)) is null then
update accountaddressdetail11 set TEL1=CUR_REC.TEL2,TEL2=' ' where rowid=CUR_REC.rowid and TEL2=CUR_REC.TEL2;
elsif CUR_REC.TEL3 is null or LENGTH(TRIM(CUR_REC.TEL3)) is null then
update accountaddressdetail11 set TEL3=CUR_REC.TEL2,TEL2=' ' where rowid=CUR_REC.rowid and TEL2=CUR_REC.TEL2;
elsif CUR_REC.MOBILE is null or LENGTH(TRIM(CUR_REC.MOBILE)) is null then
update accountaddressdetail11 set MOBILE=SUBSTR(CUR_REC.TEL2,1,12),TEL2=' ' where rowid=CUR_REC.rowid and TEL2=CUR_REC.TEL2;
elsif CUR_REC.PAGER is null or LENGTH(TRIM(CUR_REC.PAGER)) is null then
update accountaddressdetail11 set PAGER=SUBSTR(CUR_REC.TEL2,1,12),TEL2=' ' where rowid=CUR_REC.rowid and TEL2=CUR_REC.TEL2;
end if;
end if;
Exception
when others then
null;
End;
end loop;
End;
/
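The cursor loop implements a simple "shift TEL2 into the first empty slot" rule. A sketch of the same decision logic (column names taken from the block above; a record is shown as a dict):

```python
def shift_tel2(rec):
    """Move TEL2 into the first empty slot, mirroring the cursor loop
    above. MOBILE and PAGER are truncated to 12 characters, as
    SUBSTR(...,1,12) does in the corresponding updates."""
    def empty(value):
        return value is None or value.strip() == ""

    if empty(rec["TEL2"]):
        return rec
    for slot, width in (("TEL1", None), ("TEL3", None),
                        ("MOBILE", 12), ("PAGER", 12)):
        if empty(rec[slot]):
            rec[slot] = rec["TEL2"] if width is None else rec["TEL2"][:width]
            rec["TEL2"] = " "  # the updates blank TEL2 with a single space
            break
    return rec

r = shift_tel2({"TEL1": None, "TEL2": "0111234567", "TEL3": None,
                "MOBILE": None, "PAGER": None})
print(r["TEL1"], repr(r["TEL2"]))  # 0111234567 ' '
```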

-----------------------------------

Bulk Updation: update EmailCC column

Declare
cursor CUR_BULK_UPDATE is
SELECT rowid,a.* FROM ACCOUNTEMAILDETAIL a WHERE CODE NOT LIKE '%M';
l_b number:=0;
Begin
for CURSOR_RECURSION in CUR_BULK_UPDATE
loop
Begin
if CURSOR_RECURSION.EMAIL is not null then
update ACCOUNTEMAILDETAIL set CEMAILCC= (trim(LOWER(CURSOR_RECURSION.OOWNCODE))) ||'@uniconglobal.com' where rowid=CURSOR_RECURSION.rowid and
CURSOR_RECURSION.EMAIL != ' ' and CURSOR_RECURSION.EMAIL not like '%@uniconglobal.com%';
end if;
Exception
when others then
null;
End;
end loop;
End;
/
COMMIT;
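The derivation rule for CEMAILCC is small enough to sketch on its own (column names from the block above; the domain is the one hard-coded there):

```python
def email_cc(oowncode, email, domain="uniconglobal.com"):
    """Derive CEMAILCC as the PL/SQL block does: trimmed, lower-cased
    client code @ the house domain. Rows with a blank EMAIL or one
    already on that domain are skipped (returns None)."""
    if email is None or email.strip() == "" or domain in email:
        return None
    return oowncode.strip().lower() + "@" + domain

print(email_cc(" DP122 ", "someone@gmail.com"))  # dp122@uniconglobal.com
```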

-----------------------------------

Wednesday, July 14, 2010

Oracle 11g Standby Database

Oracle Standby database on same machine (for testing)

-----------------I. Before you get started:-------------------

1. Make sure the operating system and platform architecture on the primary and standby systems are the same;

2. Install Oracle database software without the starter database on the standby server and patch it if necessary. Make sure the same Oracle software release is used on the Primary and Standby databases, and Oracle home paths are identical.

---------II. On the Primary Database Side:---------------------

Enable forced logging on your primary database:
Select FORCE_LOGGING from V$DATABASE;
ALTER DATABASE FORCE LOGGING;


1) The size of the standby redo log files should match the size of the current Primary database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create standby Redo log groups.
My primary database had 3 log file groups originally and I created 3 standby redo log groups using the following commands:
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the standby redo log groups creation, run the following query:
SQL>select * from v$standby_log;
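Oracle's documented sizing rule for standby redo logs is (online log groups per thread + 1) * number of threads; the example above creates an equal number (three), which works, but one extra group per thread is the recommendation. As a one-line check:

```python
def recommended_standby_groups(online_groups_per_thread, threads=1):
    """Oracle's documented sizing rule for standby redo logs:
    (online log groups per thread + 1) * number of threads."""
    return (online_groups_per_thread + 1) * threads

print(recommended_standby_groups(3))  # 4
```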

5) Enable Archiving on Primary.
If your primary database is not already in Archive Log mode, enable the archive log mode:
SQL>shutdown immediate;
SQL>startup mount;
SQL>alter database archivelog;
SQL>alter database open;
SQL>archive log list;

6) Set Primary Database Initialization Parameters
Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE), to add the new primary role parameters.

A) Create pfile from spfile for the primary database:

SQL>create pfile='C:\app\kshitij\product\11.1.0\db_1\database\INITORCL.ORA' from spfile;

B) Edit INITORCL.ora to add the new primary and standby role parameters:


select * from v$parameter where name like '%log_archive_format%';
select * from v$parameter where name like '%standby%';
select * from v$parameter where name like '%remote_archive_enable%';
select * from v$parameter where name like '%log_archive_dest_state_%';
select * from v$parameter where name like '%convert%';

----------------------INITORCL.ORA------------------

orcl.__db_cache_size=536870912
orcl.__java_pool_size=16777216
orcl.__large_pool_size=16777216
orcl.__oracle_base='c:\app\kshitij'
orcl.__pga_aggregate_target=301989888
orcl.__sga_target=939524096
orcl.__shared_pool_size=352321536
*.audit_file_dest='c:\app\kshitij\admin\orcl\adump'
*.audit_trail='db'
*.compatible='11.1.0.0.0'
*.control_files='c:\app\kshitij\oradata\orcl\control01.ctl','c:\app\kshitij\oradata\orcl\control02.ctl','c:\app\kshitij\oradata\orcl\control03.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_name='orcl'
*.db_unique_name='orcl'
*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(orcl,orclsby)'
*.db_recovery_file_dest='c:\app\kshitij\flash_recovery_area'
*.LOG_ARCHIVE_DEST_1='LOCATION=c:\app\kshitij\flash_recovery_area\orcl\onlineLOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=orcl'
*.LOG_ARCHIVE_DEST_2='SERVICE=orclsby LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=orclsby'
*.LOG_ARCHIVE_DEST_STATE_1=ENABLE
*.LOG_ARCHIVE_DEST_STATE_2=ENABLE
*.LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
*.LOG_ARCHIVE_MAX_PROCESSES=30
*.FAL_SERVER=orclsby
*.FAL_CLIENT=orcl
*.STANDBY_FILE_MANAGEMENT=AUTO
*.DB_FILE_NAME_CONVERT='C:\app\kshitij\oradata\orclsby','C:\app\kshitij\oradata\orcl'
*.LOG_FILE_NAME_CONVERT='e:\app\kshitij\oradata\orclsby','C:\app\kshitij\oradata\orcl','e:\app\kshitij\flash_recovery_area\orclsby\onlineLOG','c:\app\kshitij\flash_recovery_area\orcl\onlineLOG'
*.db_recovery_file_dest_size=2147483648
*.diagnostic_dest='c:\app\kshitij'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=orclXDB)'
*.memory_target=1229979648
*.open_cursors=300
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.undo_tablespace='UNDOTBS1'
-------------------


C. Create spfile from pfile, and restart primary database using the new spfile.

SQL> shutdown immediate;
SQL> startup nomount pfile='C:\app\kshitij\product\11.1.0\db_1\database\INITORCL.ORA';
SQL>create spfile from pfile='C:\app\kshitij\product\11.1.0\db_1\database\INITORCL.ORA';
SQL>shutdown immediate;
SQL>Startup;


7) CREATE STANDBY CONTROLFILE
SQL>shutdown immediate;
SQL>startup mount;
SQL>alter database create standby controlfile as 'C:\SBY.ORA';
SQL>ALTER DATABASE OPEN;


8) take target db backup using rman and restore to standby
RMAN> CONFIGURE RETENTION POLICY TO REDUNDANCY 1;
RMAN> CONFIGURE BACKUP OPTIMIZATION ON;
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO 'C:\%F';

RMAN TARGET SYS/ORACLE@ORCL
BACKUP AS COMPRESSED BACKUPSET DATABASE FORMAT 'C:\%U';


----------------III. On the Standby Database Site:---------------

1. CREATE STANDBY DATABASE WITHOUT STARTUP DATABASE

2. Create directory STRUCTURE SAME AS PRIMARY DATABASE for data files. ALSO Create directory (multiplexing) for online logs.
create all required directories for dump and archived log destination:
Create directories adump, bdump, cdump, udump, and archived log destinations for the standby database.

3. Copy the Primary DB pfile to Standby server and rename/edit the file.

1) Copy INITORCL.ora from Primary server to Standby server, to database folder E:\app\kshitij\product\11.1.0\db_1\database.

2) Rename it to INITORCLSBY.ORA, and modify the file as follows

NOTE: The db_name in the standby's init file should be the same as the primary database.
--------------------------INITORCLSBY.ORA----------------------
orcl.__db_cache_size=536870912
orcl.__java_pool_size=16777216
orcl.__large_pool_size=16777216
orcl.__oracle_base='e:\app\kshitij'
orcl.__pga_aggregate_target=301989888
orcl.__sga_target=939524096
orcl.__shared_pool_size=352321536
*.audit_file_dest='e:\app\kshitij\admin\orclsby\adump'
*.audit_trail='db'
*.compatible='11.1.0.0.0'
*.control_files='e:\app\kshitij\oradata\orclsby\control01.ctl','e:\app\kshitij\oradata\orclsby\control02.ctl','e:\app\kshitij\oradata\orclsby\control03.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_name='orcl'
*.db_unique_name='orclsby'
*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(orclsby,orcl)'
*.db_recovery_file_dest='e:\app\kshitij\flash_recovery_area'
*.LOG_ARCHIVE_DEST_1='LOCATION=e:\app\kshitij\flash_recovery_area\orclsby\onlineLOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=orclsby'
*.LOG_ARCHIVE_DEST_2='SERVICE=orcl LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=orcl'
*.LOG_ARCHIVE_DEST_STATE_1=ENABLE
*.LOG_ARCHIVE_DEST_STATE_2=ENABLE
*.LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
*.LOG_ARCHIVE_MAX_PROCESSES=30
*.FAL_SERVER=orcl
*.FAL_CLIENT=orclsby
*.STANDBY_FILE_MANAGEMENT=AUTO
*.DB_FILE_NAME_CONVERT='c:\app\kshitij\oradata\orcl','e:\app\kshitij\oradata\orclsby'
*.LOG_FILE_NAME_CONVERT='c:\app\kshitij\oradata\orcl','e:\app\kshitij\oradata\orclsby','c:\app\kshitij\flash_recovery_area\orcl\onlineLOG','e:\app\kshitij\flash_recovery_area\orclsby\onlineLOG'
*.db_recovery_file_dest_size=2147483648
*.diagnostic_dest='e:\app\kshitij'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=orclsbyXDB)'
*.memory_target=1229979648
*.open_cursors=300
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.undo_tablespace='UNDOTBS1'

--------------------------------------------------------
4. Copy the Primary password file to standby and rename it to pwdSBY.ora.
TO E:\app\kshitij\product\11.1.0\db_1\database.

5. Copy the standby control file 'SBY.ORA' from primary to standby destinations ;

6. For Windows, create a Windows-based services (optional):
$oradim -NEW -SID ORCLSBY -STARTMODE manual

7. Configure listeners for the primary and standby databases.

--------------TNSNAMES.ORA--PRIMARY---------
ORCL =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = Kshitij-PC)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SID = orcl)
)
)
ORCLSBY =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = Kshitij-PC)(PORT = 1522))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SID = orclsby)
)
)

-----------------LISTENER.ORA----PRIMARY--------
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(PROGRAM = EXTPROC)
(SID_NAME = PLSExtProc)
(ORACLE_HOME = C:\app\kshitij\product\11.1.0\db_1)
)
(SID_DESC =
(GLOBAL_DBNAME = orcl)
(ORACLE_HOME = C:\app\kshitij\product\11.1.0\db_1)
(SID_NAME = orcl)
)

)
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
)
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = Kshitij-PC)(PORT = 1521))
)
)



--------------TNSNAMES.ORA--STANDBY---------


ORCLSBY =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = kshitij-PC)(PORT = 1522))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = orclsby)
)
)
ORCL =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = kshitij-PC)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = orcl)
)
)


-----------------LISTENER.ORA----STANDBY--------

SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(PROGRAM = EXTPROC1)
(SID_NAME = PLSExtProc1)
(ORACLE_HOME = e:\app\kshitij\product\11.1.0\db_1)
)
(SID_DESC =
(GLOBAL_DBNAME = orclsby)
(ORACLE_HOME = e:\app\kshitij\product\11.1.0\db_1)
(SID_NAME = orclsby)
)
)
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
)
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = Kshitij-PC)(PORT = 1522))
)
)



------------------------
8.
RESTART LISTENER ON PRIMARY AND STANDBY DATABASE
LSNRCTL>RELOAD


CHECK TNSPING ON PRIMARY AND STANDBY DATABASE
$tnsping ORCL
$tnsping ORCLSBY



9. On Standby server, setup the environment variables to point to the Standby database.

Set up ORACLE_HOME and ORACLE_SID.

10. Start up nomount the standby database and generate a spfile.

SQL>startup nomount pfile='E:\app\kshitij\product\11.1.0\db_1\database\INITORCLsby.ORA';
SQL>create spfile from pfile='E:\app\kshitij\product\11.1.0\db_1\database\INITORCLsby.ORA';
SQL>shutdown immediate;
SQL>startup mount;

11.
RMAN TARGET SYS/ORACLE@ORCLSBY
RESTORE CONTROLFILE FROM 'C:\SBY.ORA';
catalog backuppiece 'c:\05LICVI0';
restore database;

12. DUPLICATE DATABASE
NOTE: TARGET DB SHOULD BE MOUNT AND STANDBY SHOULD BE NOMOUNT STATE

rman target sys/oracle@orcl auxiliary sys/oracle@orclsby

RMAN>
run{
allocate channel c1 type disk;
allocate channel c2 type disk;
allocate channel c3 type disk;
allocate channel c4 type disk;
allocate auxiliary channel stby type disk;
duplicate target database for standby;
release channel c1;
release channel c2;
release channel c3;
release channel c4;
}


13. Start Redo apply
1) On the standby database, to start redo apply:
SQL>alter database recover managed standby database disconnect from session;
-----
If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;
-------
14. Verify the standby database is performing properly:
A) On Standby perform a query:
SQL>select sequence#, first_time, next_time from v$archived_log;

B) On Primary, force a logfile switch:
SQL>alter system switch logfile;

C) On Standby, verify the archived redo log files were applied:
SQL>select sequence#, applied from v$archived_log order by sequence#;

15. If you want the redo data to be applied as it is received without waiting for the current standby redo log file to be archived, enable real-time apply.

To start real-time apply:

on standby database

shut immediate;
startup mount
alter database recover managed standby database disconnect;
alter database recover managed standby database cancel;
alter database open;
alter database recover managed standby database using current logfile disconnect;

on primary only one time

alter system switch logfile;
alter system switch logfile;

16. To create multiple standby databases, repeat this procedure.

17) Failover

SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;
ALTER DATABASE ACTIVATE STANDBY DATABASE;


For a graceful switchover (rather than failover), use instead:
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

---------IV. Maintenance:-------

1. Check the alert log files of Primary and Standby databases frequently to monitor the database operations in a Data Guard environment.

2. Cleanup the archive logs on Primary and Standby servers.

I scheduled a weekly hot whole-database backup against my primary database that also backs up and deletes the archived logs on the Primary.

For the standby database, I run RMAN to backup and delete the archive logs once per week.
$rman target /@ORCLSBY;
RMAN>backup archivelog all delete input;

To delete the archivelog backup files on the standby server, I run the following once a month:
RMAN>delete backupset;

3. Password management
The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on Primary database, you will have to update the password file for Standby database accordingly, otherwise the logs won't be shipped to the standby server.

-----------------------

Friday, June 25, 2010

Oracle Database Growth Report

select sum(space_used_delta) / 1024 / 1024 "Space used (M)", sum(c.bytes) / 1024 / 1024 "Total Schema Size (M)",
round(sum(space_used_delta) / sum(c.bytes) * 100, 2) || '%' "Percent of Total Disk Usage"
from
dba_hist_snapshot sn,
dba_hist_seg_stat a,
dba_objects b,
dba_segments c
where end_interval_time > trunc(sysdate) - &days_back
and sn.snap_id = a.snap_id
and b.object_id = a.obj#
and b.owner = c.owner
and b.object_name = c.segment_name
and c.owner = 'SYS'
and space_used_delta > 0;



----------------------


SELECT TO_CHAR (sp.begin_interval_time,'DD-MM-YYYY') days
, ts.tsname
, max(round((tsu.tablespace_size* dt.block_size )/(1024*1024),2) ) cur_size_MB
, max(round((tsu.tablespace_usedsize* dt.block_size )/(1024*1024),2)) usedsize_MB
FROM DBA_HIST_TBSPC_SPACE_USAGE tsu
, DBA_HIST_TABLESPACE_STAT ts
, DBA_HIST_SNAPSHOT sp
, DBA_TABLESPACES dt
WHERE tsu.tablespace_id= ts.ts#
AND tsu.snap_id = sp.snap_id
AND ts.tsname = dt.tablespace_name
AND ts.tsname NOT IN ('SYSAUX','SYSTEM')
GROUP BY TO_CHAR (sp.begin_interval_time,'DD-MM-YYYY'), ts.tsname
ORDER BY tsname desc,days desc;




-------------------Increase Snap Shot time


BEGIN
DBMS_WORKLOAD_REPOSITORY.modify_snapshot_settings(
retention => 43200, -- Minutes (= 30 Days). Current value retained if NULL.
interval => 30); -- Minutes. Current value retained if NULL.
END;
/
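Both parameters of modify_snapshot_settings are given in minutes, so a 30-day retention is 30 * 24 * 60 = 43200. A one-line check of the conversion:

```python
def snapshot_settings(retention_days, interval_minutes):
    """AWR settings are given in minutes; convert a retention in days
    to the value modify_snapshot_settings expects."""
    return {"retention": retention_days * 24 * 60,
            "interval": interval_minutes}

print(snapshot_settings(30, 30))  # {'retention': 43200, 'interval': 30}
```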

----------------------

Database Growth Monitoring Script

Step : 1 Calculate total Size of tablespace
select sum(bytes)/1024/1024 "TOTAL SIZE (MB)" from dba_Data_files
Step : 2 Calculate Free Space in Tablespace
select sum(bytes)/1024/1024 "FREE SPACE (MB)" from dba_free_space
Step : 3 Calculate total size , free space and used space in tablespace
select t2.total "TOTAL SIZE",t1.free "FREE SPACE",(t1.free/t2.total)*100 "FREE (%)" ,(1-t1.free/t2.total)*100 "USED (%)"
from (select sum(bytes)/1024/1024 free from dba_free_space) t1 , (select sum(bytes)/1024/1024 total from dba_Data_files) t2
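The FREE% and USED% columns in step 3 are plain arithmetic on the two sums. In Python:

```python
def space_report(total_mb, free_mb):
    """Same arithmetic as the step-3 query:
    FREE% = free/total*100, USED% = (1 - free/total)*100."""
    free_pct = round(free_mb / total_mb * 100, 2)
    used_pct = round((1 - free_mb / total_mb) * 100, 2)
    return free_pct, used_pct

print(space_report(1000, 250))  # (25.0, 75.0)
```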
Step : 4 Create table which is store all free/use space related information of tablespace
create table db_growth
as select *
from (
select sysdate,t2.total "TOTAL_SIZE",t1.free "FREE_SPACE",(t1.free/t2.total)*100 "FREE% "
from
(select sum(bytes)/1024/1024 free
from dba_free_space) t1 ,
(select sum(bytes)/1024/1024 total
from dba_Data_files) t2
)
Step : 5 Insert free space information in DB_GROWTH table (if you want to populate data Manually)
insert into db_growth
select *
from (
select sysdate,t2.total "TOTAL_SIZE",t1.free "FREE_SPACE",(t1.free/t2.total)*100 "FREE%"
from
(select sum(bytes)/1024/1024 free
from dba_free_space) t1 ,
(select sum(bytes)/1024/1024 total
from dba_Data_files) t2
)
Step : 6 Create View on DB_GROWTH based table ( This Steps is Required if you want to populate data automatically)
create view v_db_growth
as select *
from
(
select sysdate,t2.total "TOTAL_SIZE",t1.free "FREE_SPACE",(t1.free/t2.total)*100 "FREE%"
from
(select sum(bytes)/1024/1024 free
from dba_free_space) t1 ,
(select sum(bytes)/1024/1024 total
from dba_Data_files) t2
)

Step : 7 Insert data into DB_GROWTH table from V_DB_GROWTH view
insert into db_growth select *
from v_db_growth
Step : 8 Check that everything went fine.
select * from db_growth;
Check Result
Step : 9 Execute following SQL for more time stamp information
alter session set nls_date_format ='dd-mon-yyyy hh24:mi:ss' ;
Session altered.
Step : 10 Create a DBMS jobs which execute after 24 hours
declare
jobno number;
begin
dbms_job.submit(
jobno, 'begin insert into db_growth select * from v_db_growth;commit;end;', sysdate, 'SYSDATE+ 24', TRUE);
commit;
end;

PL/SQL procedure successfully completed.
Step: 11 View your DBMS job and its other information
select * from user_jobs;
TIPS: If you want to execute the DBMS job manually, run the following command; otherwise the job executes automatically
exec dbms_job.run(ENTER_JOB_NUMBER)
PL/SQL procedure successfully completed.
Step: 12 Finally, all data is populated in the db_growth table
select * from db_growth;



-------------------------

Thursday, June 24, 2010

Oracle User Login Lock Wrong Attempt details

select username,osuser,machine,blocking_session from v$session where username='LDBO';

select username,os_username,to_char(timestamp, 'Dy DD-Mon-YYYY HH24:MI:SS') "Time",terminal,
---UTL_INADDR.get_host_address(terminal) "IP",
decode(returncode,0,'Successful',1017,'WrongAttempt',28000,'Locked',28009,'SYS Login',1005,'Fail_NULL',28001,'EXPIRED',28031,'Roles_Exceeded',returncode) Login_Status
from dba_audit_session where trim(Timestamp) > trunc(sysdate-1)
and username='AHE'
-----and os_username like '%UNICON\%'
order by timestamp desc;
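The DECODE translates dba_audit_session.returncode values (ORA error numbers) into labels; the same mapping as a Python dict, with unknown codes passed through as the query does:

```python
RETURN_CODES = {
    0: "Successful",
    1005: "Fail_NULL",        # null password given
    1017: "WrongAttempt",     # ORA-01017 invalid username/password
    28000: "Locked",          # ORA-28000 account locked
    28001: "EXPIRED",         # ORA-28001 password expired
    28009: "SYS Login",
    28031: "Roles_Exceeded",
}

def login_status(returncode):
    """Same mapping as the DECODE in the dba_audit_session queries."""
    return RETURN_CODES.get(returncode, str(returncode))

print(login_status(1017))  # WrongAttempt
```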

select username,password,account_status,to_char(lock_date, 'Dy DD-Mon-YYYY HH24:MI:SS') lock_date,expiry_date
from dba_users where account_status like '%LOCKED%' order by 3 desc,lock_date desc;

select distinct username "USER ID",osuser,machine,UTL_INADDR.get_host_address(terminal)"System IP", decode(username,'USSB3409','ANILKUMAR','USSB0065','GAURAVSINGH','USSB0737','KAVITA', 'USSB0580','MAHINDERSINGH', 'USSB0624','RAJENDERSINGH','USSB0625','RAKESHKUMAR','UIPL6716','RAKESHLAL','USSB0500','RUPINDERKAUR','UIPL6713','SHASHINATH','USSB0502','SURENDERKUMAR','UIPL6715','VIRENDERSINGH','UIPL6714','SANJAYKUMARSISODIA','UIPL7249','BHAVYASINGH','UFIPL00152','GOVINDGUPTA','UIPL7244','INDIRARAWAT','USSB2693','JYOTI','UIPL7258','MANTUKUMARSINGH','USSB0080','PRIYADAS','USSB2720','RAHUL','UIPL7210','ROBINTYAGI','UFIPL00154','ROHITGUPTA','USSB1740','SHAKSHI','UIPL7259','VINODKUMAR','USSB0991','ARJUNSINGH','UFPL00058','DHEERAJTANEJA','UFIPL00151','PAWANKUMARJINDAL','UIPL7759','PRABHAKARSINGH','USSB0948','PRIYANKARANA','USSB6549','RAJESHSHARMA.','USSB6788','RAMKRISHANKUMAR','USSB6790','ROSHANKUMAR','USSB0550','SACHINSUGANDHA','UIPL7261','SHOBHITAGARWAL','USSB1470','SIDDHARTHBHATIA','USSB6996','KANIKAKHURANA','UFIPL00153','DEVESH','UIPL10283','SWATISALUJA','UIPL10287','NITINGOYAL','UIPL9767','GAURAV','UFIPL00156','AMITKUMARSINGHAL','UIPL10305','PRANJEETKAUSHIK','UFPL00173','PUNEETSHARMA','UIPL10672','SUMITRAJORA','UFPL00188','DEEPAKGUPTA','UFPL00195','VINEETSABHLOK','USSB0019','PRASHANTUPADHYAY','USSB0022','NEERAJGROVER','USSB0109','SAURABHAATRE','USSB0626','SHAHABUDDIN','USSB2701','NITESH','UCPL0491','POONAMSINGH','UCPL0116','DURGESHKESHRI','USSB0211','SAURABHCHAUDHARY','USSB0431','VIRENDERSINGHCHANDEL','USSB5498','AKANSHARAWAL','USSB7030','SATYANARAYANA','UIPL6793','JAGDEEPSINGH','USSB3700','JAYAACHAR','UIPL6778','NEERAJSHARMA','USSB6047','NEHAGOEL','UIPL6798','ROHANSHARMA','UIPL6828','SACHINTYAGI','USSB4173','SEEMAGAMBHIR','USSB4766','SHAKSHIARORA','UCPL0117','SHAMPAMUKHERJEE','UIPL6736','SONIABENWAL','USSB6621','TANUSHREESOMANI','USSB1236','ASHIMAARORA','UIPL6771','SHELLY','USSB5708','KULDEEPKAUR','UIPL6801','SHWETA','UIPL6800','ROHITLUTHRA','USSB7428','AMRESHKUMAR','UIPL10160','SONIASABHARWAL','USSB7444','ANILSHARMA','UIPL10515','SAPNAGUPTA','UIPL10402','KHUSHALRAZDAN','UIPL6742','GITTUKATARIA','UIPL6746','MUNNASINGH','USSB0013','AMITTYAGI','UIPL6722','DEEPANSHUMALHOTRA','USSB5695','HARENDRASHARMA','UIPL6727','MANISHVERMA','USSB1112','PRATIMASINGH','USSB5374','PULKIT','USSB1253','RAHULTYAGI','USSB5274','VARUNKANT','UFPL00178','SUBHASHCHANDRAGEHLOT','UFPL00028','SUBHASHSHARMA','USSB6901','KSHITIJR.GUPTA','UCPL0154','RACHITJAIN','USSB6097','SHEKHARSAXENA','UIPL6712','YESHPALTHAKUR','USSB6002','SANDHYASHARMA','UFIPL00130','ANJALIMUKHIJA','UCPL0111','AMITUPADHYAY','USSB6045','RAJAN','UFIPL00136','SANDEEPSIR','USSB0064','AMITJAIN','UIPL6831','ANKITASRIVASTAVA','USSB6805','SHASHIBHUSHAN','USSB6950','SUNIIGABA','USSB7222','NILESHTRIPATHI','USSB0849','OMPRAKASHBANDERWAL','USSB0458','ROHTASAGARWAL','RMS','RMS','USSB1366','NEETARASTOGI','USSB0256','ANANDJOSHI','USSB1032','SATISHSHARMA','USSB1113','SUNILKUMARSINGHAL','UIPL6721','VISHALGUPTA','UIPL6724','YATENDRASINGHBISHT','USSB6241','KULDEEPSHARMA','USSB5495','RAGHIBHUSAIN','USSB1031','TARUNCHHOKRA','UIPL6717','NEERAJKAUSHIK','USSB6508','ACHINANAND','USSB7530','KHYATIADLAKHA','USSB7565','BHUWANSHARMA','USSB7550','NEHASINGH','LDBO','DBA') Name
from v$session where program ='ld.exe' order by 5;

select username,password,account_status,lock_date, to_char(expiry_date, 'Dy DD-Mon-YYYY HH24:MI:SS') expir_date
from dba_users where account_status like '%EXPIRED%' order by expiry_date desc;

select username,os_username,terminal "System IP",to_char(timestamp, 'Dy DD-Mon-YYYY HH24:MI:SS') "Time",obj_name, decode(ses_actions,'---------S------','SELECT','----------S-----','UPDATE',ses_actions) action,action_name
from dba_audit_trail
where trim(Timestamp)=trunc(sysdate)
order by 4 desc;


select username,os_username,terminal,to_char(timestamp, 'Dy DD-Mon-YYYY HH24:MI:SS') "Time",obj_name,action_name
,decode(returncode,1917,'Grant Revoke',1918,'ALter',1920,'Creation',returncode) Status
from dba_audit_exists
order by timestamp desc;

select firmnumber,oowncode as username,cclientlevelpassword as password from clemaildetail where oowncode='DP122';
SELECT username, terminal, to_char (timestamp, 'Dy DD-Mon-YYYY HH24:MI:SS'),
decode(returncode,0,'Successful',1017,'WrongAttempt',28000,'Locked',28009,'SYS Login',1005,'Fail_NULL',28001,'EXPIRED',28031,'Roles_Exceeded',returncode) Login_Status
FROM dba_audit_session
WHERE returncode <> 0 AND not exists (SELECT 'x' FROM dba_users WHERE dba_users.username=dba_audit_session.username)
and trim(Timestamp)=trunc(sysdate)
order by timestamp desc;


select username,os_username,terminal "System IP",to_char(timestamp, 'Dy DD-Mon-YYYY HH24:MI:SS') "Time",obj_name,action_name from dba_audit_object where trim(Timestamp)=trunc(sysdate) order by timestamp desc;

select username,os_username,
UTL_INADDR.get_host_address(terminal)"System IP",to_char(timestamp, 'Dy DD-Mon-YYYY HH24:MI:SS') "Time",
obj_name,action_name from dba_audit_statement
--where
----trim(Timestamp)=trunc(sysdate)
--obj_name='UIPL6713'
order by timestamp desc;

select * from dba_stmt_audit_opts;

select * from dba_priv_audit_opts;
select * from dba_audit_object where obj_name='LDBO' order by timestamp desc;


select distinct * from TBLAUDITUSERLOGONDETAILS
where cusername not in ('SYS','CLLVL')
and cusername='LDBO'
and trim(dlogonday)=trunc(SYSDATE)
order by dlogonday desc,clogontime desc;

Select Substr(Upper(User_audit_trail.Os_Username),1,30) as cOS_Username, Rpad(' ',10) as Oowncode, Substr(User_audit_trail.Username,1,30) as cUsername,User_audit_trail.Userhost as cUserhost, Upper(Substr(User_audit_trail.Terminal,1,30)) as cTerminal,User_audit_trail.Timestamp as dTimeStamp, nvl(User_audit_trail.Obj_name,' ') as cObjectName,User_audit_trail.Action_name as cActionname, nvl(User_audit_trail.Sys_Privilege,' ') as cSysPrivilege,nvl(User_audit_trail.Grantee,' ') as cGrantee, nvl(User_audit_trail.Priv_Used,' ') as cPrivUsed From User_audit_trail User_audit_trail
Where User_audit_trail.Os_Username not in ('SYSTEM','SYSMAN','DBSNMP')
and User_audit_trail.Username='LDBO'
and trim(User_audit_trail.Timestamp)=trunc(sysdate)
order by timestamp;


Select Dba_Users.Username as cUsername,Dba_Users.Profile as cProfile, Dba_Users.Account_status as cAccountstatus,Dba_Users.Lock_date as dLockDate,Dba_Users.Expiry_Date as dExpiryDate, Dba_users.Created as dCreationDate
From Dba_Users Dba_Users
Where Dba_users.Username not in ('SYS', 'SYSTEM', 'DBSNMP', 'TSMSYS', 'OUTLN', 'ORACLE_OCM','MGMT_VIEW','SYSMAN', 'WMSYS', 'DIP')
and Dba_Users.Created>='13-MAR-09'
--and trim(dlockdate)!=' '
---and trim(dexpirydate)!=' '
and Dba_Users.Created<=trunc(sysdate) order by 5 desc, 4 desc;

-----------------------------

Wednesday, June 9, 2010

Oracle Password Policy

-- This script sets the default password resource parameters
-- This script needs to be run to enable the password features.
-- However the default resource parameters can be changed based
-- on the need.
-- A default password complexity function is also provided.
-- This function makes the minimum complexity checks like
-- the minimum length of the password, password not same as the
-- username, etc. The user may enhance this function according to
-- the need.
-- This function must be created in SYS schema.
-- connect sys/ as sysdba before running the script

CREATE OR REPLACE FUNCTION verify_function
(username varchar2,
password varchar2,
old_password varchar2)
RETURN boolean IS
n boolean;
m integer;
differ integer;
isdigit boolean;
ischar boolean;
ispunct boolean;
digitarray varchar2(20);
punctarray varchar2(25);
chararray varchar2(52);

BEGIN
digitarray:= '0123456789';
chararray:= 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ';
punctarray:='!#$%^&*()_+|~-=\`{}[]:"''<>?,./';

-- Check if the password is same as the username
IF NLS_LOWER(password) = NLS_LOWER(username) THEN
raise_application_error(-20001, 'Password same as or similar to user');
END IF;

-- Check for the minimum length of the password
IF length(password) < 4 THEN
raise_application_error(-20002, 'Password length less than 4');
END IF;

-- Check if the password contains at least one digit, one character
-- and one punctuation mark
-- 1. Check for the digit
isdigit:=FALSE;
m := length(password);
FOR i IN 1..10 LOOP
FOR j IN 1..m LOOP
IF substr(password,j,1) = substr(digitarray,i,1) THEN
isdigit:=TRUE;
GOTO findchar;
END IF;
END LOOP;
END LOOP;
IF isdigit = FALSE THEN
raise_application_error(-20003, 'Password should contain at least one digit, one character and one punctuation except @');
END IF;
-- 2. Check for the character
<<findchar>>
ischar:=FALSE;
FOR i IN 1..length(chararray) LOOP
FOR j IN 1..m LOOP
IF substr(password,j,1) = substr(chararray,i,1) THEN
ischar:=TRUE;
GOTO findpunct;
END IF;
END LOOP;
END LOOP;
IF ischar = FALSE THEN
raise_application_error(-20003, 'Password should contain at least one digit, one character and one punctuation except @');
END IF;
-- 3. Check for the punctuation
<<findpunct>>
ispunct:=FALSE;
FOR i IN 1..length(punctarray) LOOP
FOR j IN 1..m LOOP
IF substr(password,j,1) = substr(punctarray,i,1) THEN
ispunct:=TRUE;
GOTO endsearch;
END IF;
END LOOP;
END LOOP;
IF ispunct = FALSE THEN
raise_application_error(-20003, 'Password should contain at least one digit, one character and one punctuation except @');
END IF;

<<endsearch>>
-- Check if the password differs from the previous password by at least 3 letters

IF old_password IS NOT NULL THEN
differ := length(old_password) - length(password);

IF abs(differ) < 3 THEN
IF length(password) < length(old_password) THEN
m := length(password);
ELSE
m := length(old_password);
END IF;

differ := abs(differ);
FOR i IN 1..m LOOP
IF substr(password,i,1) != substr(old_password,i,1) THEN
differ := differ + 1;
END IF;
END LOOP;

IF differ < 3 THEN
raise_application_error(-20004, 'Password should differ by at least 3 characters');
END IF;
END IF;
END IF;
-- Everything is fine; return TRUE ;
RETURN(TRUE);
END;
/

-- This script alters the default parameters for Password Management
-- This means that all the users on the system have Password Management
-- enabled and set to the following values unless another profile is
-- created with parameter values set to different value or UNLIMITED
-- is created and assigned to the user.

ALTER PROFILE LD_BACKOFFICE LIMIT
PASSWORD_LIFE_TIME 30
PASSWORD_GRACE_TIME 5
PASSWORD_REUSE_TIME 30
PASSWORD_REUSE_MAX 3
FAILED_LOGIN_ATTEMPTS 3
PASSWORD_LOCK_TIME Unlimited
PASSWORD_VERIFY_FUNCTION verify_function;
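
To confirm the limits took effect, the profile can be queried from DBA_PROFILES (a quick check, assuming the LD_BACKOFFICE profile above was altered successfully):

```sql
SELECT resource_name, limit
FROM dba_profiles
WHERE profile = 'LD_BACKOFFICE'
AND resource_type = 'PASSWORD';
```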




-----------------------------------------------------------------------------------------------------


Tuesday, June 8, 2010

Restore a database when the source (E:) and destination (D:) database locations are different

CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO 'D:\archive1011\sns1011\%F';
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT 'D:\archive1011\sns1011\rmanbackup_db_%d_S_%s_P_%p_T_%t' MAXPIECESIZE 900M; -- if there is more than one backuppiece

RMAN> list backup of database;
RMAN> CROSSCHECK backup of database;
RMAN> delete expired backup;

RMAN> catalog backuppiece 'D:\archive1011\sns1011\RMANBACKUP_DB_SNS1011_S_149_P_1_T_720717287','D:\archive1011\sns1011\RMANBACKUP_DB_SNS1011_S_149_P_2_T_720717287'; -- if there is more than one backuppiece

SQL> alter database rename file 'E:\SNSD1011\REDO01.ORA' TO 'D:\SNSD1011\REDO01.ORA' ;
SQL> alter database rename file 'E:\SNSD1011\REDO02.ORA' TO 'D:\SNSD1011\REDO02.ORA' ;
SQL> alter database rename file 'E:\SNSD1011\REDO03.ORA' TO 'D:\SNSD1011\REDO03.ORA' ;
SQL> alter database rename file 'E:\SNSD1011\REDO04.ORA' TO 'D:\SNSD1011\REDO04.ORA' ;
SQL> alter database rename file 'E:\SNSD1011\REDO05.ORA' TO 'D:\SNSD1011\REDO05.ORA' ;



run {
set until sequence <n> thread 1;   -- supply the last archived log sequence number to recover through
set newname for datafile 1 to 'D:\SNSD1011\SYSTEM01.ORA' ;
set newname for datafile 2 to 'D:\SNSD1011\UNDOTBS01.ORA' ;
set newname for datafile 3 to 'D:\SNSD1011\SYSAUX01.ORA' ;
set newname for datafile 4 to 'D:\SNSD1011\INDX01.ORA' ;
set newname for datafile 5 to 'D:\SNSD1011\USERS01.ORA' ;
-- datafile numbers 2-5 assume the order shown; verify with: select file#, name from v$datafile;

restore database;
switch datafile all;
recover database;
alter database open resetlogs;
}

------------------------------------------------------------------------

Oracle 10g 32 bit at Window 7 (64 bit)

go to this folder "E:\Installers\Oracle_10g_1023\Disk1\stage\prereq\db"

locate the file "refhost.xml", open it with notepad

now add the following entry inside the <CERTIFIED_SYSTEMS> section (the original XML was swallowed by the blog's HTML rendering; the entry commonly used to certify Windows 7, kernel version 6.1, is shown below):

<OPERATING_SYSTEM>
<VERSION VALUE="6.1"/>
</OPERATING_SYSTEM>
meta tags format
===============================

I normally recommend that you cut and paste your META Description into the Abstract and Subject META Tags. This ensures that the search engine spiders fully understand what the page is about. One is not enough!
----------------
You should cut and paste your page <title> element directly into the Title META Tag like this:
<meta name="title" content="Insert page title here.">
----------------------






Oracle Standby Database / Disaster Recovery Solution

-----------------I. Before you get started:-------------------

1. Make sure the operating system and platform architecture on the primary and standby systems are the same;

2. Install Oracle database software without the starter database on the standby server and patch it if necessary. Make sure the same Oracle software release is used on the Primary and Standby databases, and Oracle home paths are identical.

3. Test the Standby Database creation on a test environment first before working on the Production database.

---------II. On the Primary Database Side:---------------------

1. Enable forced logging on your primary database:
SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn't exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;

2) If it doesn't exist, use the following command to create one:

$cd %ORACLE_HOME%\database
$orapwd file=pwdPRIM.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxxx with the password for the SYS user.)

3. Configure a Standby Redo log.
1) The size of the standby redo log files should match the size of the current Primary database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;

3) Create standby Redo log groups.
My primary database had 3 log file groups originally and I created 3 standby redo log groups using the following commands:
SQL>ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL>ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL>ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the standby redo log groups creation, run the following query:
SQL>select * from v$standby_log;

4. Enable Archiving on Primary.
If your primary database is not already in Archive Log mode, enable the archive log mode:
SQL>shutdown immediate;
SQL>startup mount;
SQL>alter database archivelog;
SQL>alter database open;
SQL>archive log list;
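
Once the database is back up, archivelog mode can also be confirmed directly (a quick sanity check in addition to `archive log list`):

```sql
SELECT log_mode FROM v$database;   -- should now report ARCHIVELOG
```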

5. Set Primary Database Initialization Parameters
Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE), to add the new primary role parameters.

1) Create pfile from spfile for the primary database:

SQL>create pfile='d:\oracle\product\10.2.0\db_1\database\pfilePRIM.ora' from spfile;


2) Edit pfilePRIM.ora to add the new primary and standby role parameters:


select * from v$parameter where name like '%log_archive_format%';
select * from v$parameter where name like '%standby%';
select * from v$parameter where name like '%remote_archive_enable%';
select * from v$parameter where name like '%log_archive_dest_state_%';
select * from v$parameter where name like '%convert%';




db_name=PRIM
db_unique_name=PRIM
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIM,STAN)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIM\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIM'
LOG_ARCHIVE_DEST_2='SERVICE=STAN LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STAN'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STAN
FAL_CLIENT=PRIM
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the standby DB datafiles followed by the primary location;
DB_FILE_NAME_CONVERT='E:\STAN\DATAFILE','E:\PRIM\DATAFILE'
# Specify the location of the standby DB online redo log files followed by the primary location
LOG_FILE_NAME_CONVERT='E:\STAN\ONLINELOG','E:\PRIM\ONLINELOG','F:\Oracle\flash_recovery_area\STAN\ONLINELOG','F:\Oracle\flash_recovery_area\PRIM\ONLINELOG'

6. Create spfile from pfile, and restart primary database using the new spfile.

SQL> shutdown immediate;
SQL> startup nomount pfile='\database\pfilePRIM.ora';
SQL>create spfile from pfile='d:\oracle\product\10.2.0\db_1\database\pfilePRIM.ora';
-- Restart the Primary database using the newly created SPFILE.
SQL>shutdown immediate;
SQL>Startup;


----------------III. On the Standby Database Site:---------------

1. Create a copy of Primary database data files on the Standby Server:
On Primary DB:
SQL>shutdown immediate;

On Standby Server (While the Primary database is shut down):
1) Create directory for data files, E:\STAN\DATAFILE.


2) Copy the data files and temp files over.

3) Create directory (multiplexing) for online logs, for example, on Windows, E:\STAN\ONLINELOG and F:\Oracle\flash_recovery_area\STAN\ONLINELOG.


4) Copy the online logs over.
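
On Windows, steps 1) through 4) can be sketched with xcopy (the paths below are the example locations used in this article; adjust them to your layout):

```
xcopy E:\PRIM\DATAFILE\*.* E:\STAN\DATAFILE\ /E
xcopy E:\PRIM\ONLINELOG\*.* E:\STAN\ONLINELOG\ /E
xcopy F:\Oracle\flash_recovery_area\PRIM\ONLINELOG\*.* F:\Oracle\flash_recovery_area\STAN\ONLINELOG\ /E
```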

2. Create a Control File for the standby database:
On Primary DB, create a control file for the standby to use:
SQL>startup mount;
SQL>alter database create standby controlfile as 'STAN.ctl';
SQL>ALTER DATABASE OPEN;

3. Copy the Primary DB pfile to Standby server and rename/edit the file.

1) Copy pfilePRIM.ora from Primary server to Standby server, to database folder on Windows or dbs folder.

2) Rename it to pfileSTAN.ora, and modify the file as follows


*.audit_file_dest='E:\oracle\product\10.2.0\admin\STAN\adump'
*.background_dump_dest='E:\oracle\product\10.2.0\admin\STAN\bdump'
*.core_dump_dest='E:\oracle\product\10.2.0\admin\STAN\cdump'
*.user_dump_dest='E:\oracle\product\10.2.0\admin\STAN\udump'
*.compatible='10.2.0.3.0'
control_files='E:\STAN\CONTROLFILE\STAN.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STAN\CONTROLFILE\STAN.CTL'
db_name='PRIM'
db_unique_name=STAN
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIM,STAN)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STAN\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STAN'
LOG_ARCHIVE_DEST_2= 'SERVICE=PRIM LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIM'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIM
FAL_CLIENT=STAN
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the primary DB datafiles followed by the standby location
DB_FILE_NAME_CONVERT='E:\PRIM\DATAFILE','E:\STAN\DATAFILE'
# Specify the location of the primary DB online redo log files followed by the standby location
LOG_FILE_NAME_CONVERT='E:\PRIM\ONLINELOG','E:\STAN\ONLINELOG','F:\Oracle\flash_recovery_area\PRIM\ONLINELOG','F:\Oracle\flash_recovery_area\STAN\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO



4. On Standby server, create all required directories for dump and archived log destination:
Create directories adump, bdump, cdump, udump, and archived log destinations for the standby database.
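
For the example paths used in this article, the directories on the standby would be created roughly like this (Windows; adjust drive letters and paths to your own layout):

```
md E:\oracle\product\10.2.0\admin\STAN\adump
md E:\oracle\product\10.2.0\admin\STAN\bdump
md E:\oracle\product\10.2.0\admin\STAN\cdump
md E:\oracle\product\10.2.0\admin\STAN\udump
md F:\Oracle\flash_recovery_area\STAN\ARCHIVELOG
```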

5. Copy the standby control file 'STAN.ctl' from the primary server to the control file destinations on the standby;

6. Copy the Primary password file to the standby server and rename it to pwdSTAN.ora.
On Windows, copy it into the \database folder and then rename it.

7. For Windows, create a Windows-based services (optional):
$oradim -NEW -SID STAN -STARTMODE manual

8. Configure listeners for the primary and standby databases.

1) On Primary system: use Oracle Net Manager to configure a listener for PRIM and STAN. Then restart the listener.
$lsnrctl stop
$lsnrctl start

2) On Standby server: use Net Manager to configure a listener for PRIM and STAN. Then restart the listener.
$lsnrctl stop
$lsnrctl start

9. Create Oracle Net service names.
1) On Primary system: use Oracle Net Manager to create network service names for PRIM and STAN. Check tnsping to both services:
$tnsping PRIM
$tnsping STAN

2) On Standby system: use Oracle Net Manager to create network service names for PRIM and STAN. Check tnsping to both services:
$tnsping PRIM
$tnsping STAN

10. On Standby server, setup the environment variables to point to the Standby database.

Set up ORACLE_HOME and ORACLE_SID.
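
On Linux/Unix the equivalent would be the following sketch (the ORACLE_HOME path is an assumed example; on Windows use `set` instead of `export`):

```shell
# Point the current session at the standby instance before running sqlplus.
export ORACLE_SID=STAN
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
echo "$ORACLE_SID"
```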

11. Start up nomount the standby database and generate a spfile.

SQL>startup nomount pfile='\database\pfileSTAN.ora';
SQL>create spfile from pfile='\database\pfileSTAN.ora';
-- Restart the Standby database using the newly created SPFILE.
SQL>shutdown immediate;
SQL>startup mount;

12. Start Redo apply
1) On the standby database, to start redo apply:
SQL>alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;

13. Verify the standby database is performing properly:
1) On Standby perform a query:
SQL>select sequence#, first_time, next_time from v$archived_log;

2) On Primary, force a logfile switch:
SQL>alter system switch logfile;

3) On Standby, verify the archived redo log files were applied:
SQL>select sequence#, applied from v$archived_log order by sequence#;
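
A convenient way to watch the gap between what the standby has received and what it has applied (a monitoring sketch built on the same v$archived_log view):

```sql
SELECT max(sequence#) AS last_received FROM v$archived_log;
SELECT max(sequence#) AS last_applied  FROM v$archived_log WHERE applied = 'YES';
```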

14. If you want the redo data to be applied as it is received without waiting for the current standby redo log file to be archived, enable the real-time apply.

To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple standby databases, repeat this procedure.

---------IV. Maintenance:-------

1. Check the alert log files of Primary and Standby databases frequently to monitor the database operations in a Data Guard environment.

2. Cleanup the archive logs on Primary and Standby servers.

I scheduled a weekly hot whole-database backup against my primary database that also backs up and deletes the archived logs on the Primary.

For the standby database, I run RMAN to backup and delete the archive logs once per week.
$rman target /@STAN;
RMAN>backup archivelog all delete input;

To delete the archivelog backup files on the standby server, I run the following once a month:
RMAN>delete backupset;

3. Password management
The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on Primary database, you will have to update the password file for Standby database accordingly, otherwise the logs won't be shipped to the standby server.

Refer to section II.2, step 2 to update/recreate password file for the Standby database.

Oracle RMAN Duplicate Database / Clone Database

Configure The Network
We need to configure the network so that there is connectivity to the target (source) database and to the database we want to create (the clone).
Listener file:
# listener.ora Network Configuration File:
# c:\oracle\product\10.2.0\client_1\network\admin\listener.ora
# Generated by Oracle configuration tools.
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = snsdup)
(ORACLE_HOME = c:\oracle\product\10.2.0\client_1)
(SID_NAME = snsdup)
)
(SID_DESC =
(GLOBAL_DBNAME = sns6)
(ORACLE_HOME = c:\oracle\product\10.2.0\client_1)
(SID_NAME = sns6)
)
)
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = snsdup))
)
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = sns6))
)
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.100.0.32)(PORT = 1521))
)
)

-----------------

Tnsnames file:


snsdup =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.100.0.32)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = snsdup)
)
)
sns6 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.100.0.32)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = sns6)
)
)

-----------------

Create A Password File For The New Database

[oracle@10.100.0.32 dbs]$ orapwd file=orapwdsnsdup password=oracle

-----------------
Create An Init.Ora For The New Database
Create the pfile using create pfile from spfile from the source database, then
edit it changing all occurrences of old database name to new name

*.audit_file_dest='/oracle/app/admin/snsdup/adump'
*.audit_trail='db'
*.compatible='10.2.0.3.0'
#*.control_files=''
*.db_block_size=8192
*.db_create_file_dest='/oracle/app/oradata'
*.db_domain=''
*.db_name='snsdup'
*.db_recovery_file_dest='/oracle/app/flash_recovery_area'
*.db_recovery_file_dest_size=4039114752
*.diagnostic_dest='/oracle/app'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=snsdupXDB)'
*.memory_target=262144000
*.open_cursors=300
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.undo_tablespace='UNDOTBS1'
-----------------

Create The Admin Directory For The New Database
mkdir -p ../snsdup/adump

-----------------

Shutdown And Startup Mount The Source Database
This is required if the source database is in no archivelog mode.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

------------------------
Startup Nomount The New Database

SQL> startup nomount
ORACLE instance started.

--------------
Connect To The Target (Source) And Auxiliary (New Clone) Databases Using Rman


[oracle@10.100.0.32 dbs]$ rman target sys/oracle@sns6 auxiliary sys/oracle@snsdup

--------------
Execute The Duplicate Command

RMAN> DUPLICATE TARGET DATABASE TO snsdup FROM ACTIVE DATABASE;

--------

Remove The Old Pfile
The duplicate operation created an spfile containing the new controlfile names, so the old pfile, which has no controlfile entries, can be removed.

[oracle@10.100.0.32 dbs]$ rm initsnsdup.ora


--------

Check The New Database
ORACLE_SID=snsdup
ORACLE_HOME=c:\oracle\product\10.2.0\client_1
[oracle@10.100.0.32 dbs]$ sqlplus / as sysdba


SQL> archive log list

SQL> select name from v$datafile;

SQL> show sga



-----------------------------------
