Friday, August 27, 2010

Delete files by date

forfiles only works on Windows 2000/2003/Vista.

Folder location: C:\Test\arch
files older than 5 days
*.* matches all file extensions (*.txt, ...)

----------------

rem preview the files that would be removed:
forfiles /p C:\Test\arch /s /m *.* /d -5 /c "cmd /c echo @path"

rem then delete them:
forfiles /p C:\Test\arch /s /m *.* /d -5 /c "cmd /c del @path"


FORFILES [/P pathname] [/M searchmask] [/S] [/C command] [/D [+ | -] {MM/dd/yyyy | dd}]
----------

/P pathname Indicates the path to start searching. The default folder is the current working directory (.).

/M searchmask Searches files according to a searchmask. The default searchmask is '*'.

/S Instructs forfiles to recurse into subdirectories, like "DIR /S".

/C command Indicates the command to execute for each file. Command strings should be wrapped in double quotes. The default command is "cmd /c echo @file".


The following variables can be used in the command string:
@file - returns the name of the file.
@fname - returns the file name without extension.
@ext - returns only the extension of the file.
@path - returns the full path of the file.
@relpath - returns the relative path of the file.
@isdir - returns "TRUE" if a file type is a directory, and "FALSE" for files.
@fsize - returns the size of the file in bytes.
@fdate - returns the last modified date of the file.
@ftime - returns the last modified time of the file.


/D date Selects files with a last modified date greater than or equal to (+), or less than or equal to (-), the specified date using the "MM/dd/yyyy" format; or selects files with a last modified date greater than or equal to (+) the current date plus "dd" days, or less than or equal to (-) the current date minus "dd" days. A valid "dd" number of days can be any number in the range of 0 - 32768. "+" is taken as default sign if not specified.



Examples: FORFILES /?
FORFILES
FORFILES /P C:\WINDOWS /S /M DNS*.*
FORFILES /S /M *.txt /C "cmd /c type @file | more"
FORFILES /P C:\ /S /M *.bat
FORFILES /D -30 /M *.exe /C "cmd /c echo @path 0x09 was changed 30 days ago"
FORFILES /D 01/01/2001 /C "cmd /c echo @fname is new since Jan 1st 2001"
FORFILES /D +8/19/2005 /C "cmd /c echo @fname is new today"
FORFILES /M *.exe /D +1
FORFILES /S /M *.doc /C "cmd /c echo @fsize"
FORFILES /M *.txt /C "cmd /c if @isdir==FALSE notepad.exe @file"

Thursday, August 12, 2010

Oracle Streams Replication

Set up the parameters below on both databases (DB1, DB2)

1. Enable ARCHIVELOG mode on both databases

2. Create Stream administrator User
Source Database: DB1
SQL> conn sys@db1 as sysdba
Enter password:
Connected.
SQL> create user strmadmin identified by strmadmin;

User created.

SQL> grant connect, resource, dba to strmadmin;

Grant succeeded.

SQL> begin dbms_streams_auth.grant_admin_privilege
2 (grantee => 'strmadmin',
3 grant_privileges => true);
4 end;
5 /

PL/SQL procedure successfully completed.

SQL> grant select_catalog_role, select any dictionary to strmadmin;

Grant succeeded.

Target Database: DB2
SQL> conn sys@db2 as sysdba
Enter password:
Connected.
SQL> create user strmadmin identified by strmadmin;

User created.

SQL> grant connect, resource, dba to strmadmin;

Grant succeeded.

SQL> begin dbms_streams_auth.grant_admin_privilege
2 (grantee => 'strmadmin',
3 grant_privileges => true);
4 end;
5 /

PL/SQL procedure successfully completed.

SQL> grant select_catalog_role, select any dictionary to strmadmin;

Grant succeeded.

3. Setup INIT parameters
Source Database: DB1
SQL> conn sys@db1 as sysdba
Enter password:
Connected.
SQL> alter system set global_names=true;

System altered.

SQL> alter system set streams_pool_size = 100 m;

System altered.

Target Database: DB2
SQL> conn sys@db2 as sysdba
Enter password:
Connected.
SQL> alter system set global_names=true;

System altered.

SQL> alter system set streams_pool_size = 100 m;

System altered.

4. Create Database Link
Source Database: DB1
SQL> conn strmadmin/strmadmin@db1
Connected.
SQL> create database link db2
2 connect to strmadmin
3 identified by strmadmin
4 using 'DB2';

Database link created.

Target Database: DB2
SQL> conn strmadmin/strmadmin@db2
Connected.
SQL> create database link db1
2 connect to strmadmin
3 identified by strmadmin
4 using 'DB1';

Database link created.

5. Setup Source and Destination queues
Source Database: DB1
SQL> conn strmadmin/strmadmin@db1
Connected.
SQL> EXEC DBMS_STREAMS_ADM.SET_UP_QUEUE();

PL/SQL procedure successfully completed.

Target Database: DB2
SQL> conn strmadmin/strmadmin@db2
Connected.
SQL> EXEC DBMS_STREAMS_ADM.SET_UP_QUEUE();

PL/SQL procedure successfully completed.
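
A quick optional check (not in the original post) that the queue and queue table were created on each side:

SQL> select name, queue_table from user_queues;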

6. Setup Schema for streams
Schema: ldbo
Table: ksh
NOTE: Unlock the ldbo schema, because in 10g it is locked by default
Source Database: DB1
SQL> conn sys@db1 as sysdba
Enter password:
Connected.
SQL> alter user ldbo account unlock identified by ldbo;

User altered.

SQL> conn ldbo/ldbo@db1
Connected.
SQL> create table ksh ( no number primary key,name varchar2(20),ddate date);

Table created.

Target Database: DB2
SQL> conn sys@db2 as sysdba
Enter password:
Connected.
SQL> alter user ldbo account unlock identified by ldbo;

User altered.

SQL> conn ldbo/ldbo@db2
Connected.
SQL> create table ksh ( no number primary key,name varchar2(20),ddate date);

Table created.

7. Setup Supplemental logging at the source database
Source Database: DB1
SQL> conn ldbo/ldbo@db1
Connected.
SQL> alter table ksh
2 add supplemental log data (primary key,unique) columns;

Table altered.

8. Configure capture process at the source database
Source Database: DB1
SQL> conn strmadmin/strmadmin@db1
Connected.
SQL> begin dbms_streams_adm.add_table_rules
2 ( table_name => 'ldbo.ksh',
3 streams_type => 'capture',
4 streams_name => 'capture_stream',
5 queue_name=> 'strmadmin.streams_queue',
6 include_dml => true,
7 include_ddl => true,
8 inclusion_rule => true);
9 end;
10 /

PL/SQL procedure successfully completed.

9. Configure the propagation process
Source Database: DB1
SQL> conn strmadmin/strmadmin@db1
Connected.
SQL> begin dbms_streams_adm.add_table_propagation_rules
2 ( table_name => 'ldbo.ksh',
3 streams_name => 'DB1_TO_DB2',
4 source_queue_name => 'strmadmin.streams_queue',
5 destination_queue_name => 'strmadmin.streams_queue@DB2',
6 include_dml => true,
7 include_ddl => true,
8 source_database => 'DB1',
9 inclusion_rule => true);
10 end;
11 /

PL/SQL procedure successfully completed.
10. Set the instantiation system change number (SCN)
Source Database: DB1
SQL> CONN STRMADMIN/STRMADMIN@DB1
Connected.
SQL> declare
2 source_scn number;
3 begin
4 source_scn := dbms_flashback.get_system_change_number();
5 dbms_apply_adm.set_table_instantiation_scn@DB2
6 ( source_object_name => 'ldbo.ksh',
7 source_database_name => 'DB1',
8 instantiation_scn => source_scn);
9 end;
10 /

PL/SQL procedure successfully completed.

11. Configure the apply process at the destination database
Target Database: DB2
SQL> conn strmadmin/strmadmin@db2
Connected.
SQL> begin dbms_streams_adm.add_table_rules
2 ( table_name => 'ldbo.ksh',
3 streams_type => 'apply',
4 streams_name => 'apply_stream',
5 queue_name => 'strmadmin.streams_queue',
6 include_dml => true,
7 include_ddl => true,
8 source_database => 'DB1',
9 inclusion_rule => true);
10 end;
11 /

PL/SQL procedure successfully completed.
12. Start the capture and apply processes
Source Database: DB1
SQL> conn strmadmin/strmadmin@db1
Connected.
SQL> begin dbms_capture_adm.start_capture
2 ( capture_name => 'capture_stream');
3 end;
4 /

PL/SQL procedure successfully completed.
Target Database: DB2
SQL> conn strmadmin/strmadmin@db2
Connected.
SQL> begin dbms_apply_adm.set_parameter
2 ( apply_name => 'apply_stream',
3 parameter => 'disable_on_error',
4 value => 'n');
5 end;
6 /

PL/SQL procedure successfully completed.

SQL> begin
2 dbms_apply_adm.start_apply
3 ( apply_name => 'apply_stream');
4 end;
5 /

PL/SQL procedure successfully completed.
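
Before testing, it can be worth confirming that both processes are ENABLED; this optional check (not in the original post) uses the standard Streams views DBA_CAPTURE and DBA_APPLY:

SQL> select capture_name, status from dba_capture;   -- on DB1
SQL> select apply_name, status from dba_apply;       -- on DB2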
NOTE: The Streams replication environment is now ready; all that remains is to test it.
SQL> conn ldbo/ldbo@db1
Connected.
SQL> --DDL operation
SQL> alter table ksh add (flag char(1));

Table altered.

SQL> --DML operation
SQL> begin
2 insert into ksh values (1,'first_entry',sysdate,1);
3 commit;
4 end;
5 /

PL/SQL procedure successfully completed.

SQL> conn ldbo/ldbo@db2
Connected.
SQL> --TEST DDL operation
SQL> desc ksh
Name Null? Type
----------------------------------------- -------- ----------------------------

NO NOT NULL NUMBER
NAME VARCHAR2(20)
DDATE DATE
FLAG CHAR(1)

SQL> --TEST DML operation
SQL> select * from ksh;

NO NAME DDATE F
---------- -------------------- --------- -
1 first_entry 10-AUG-10 1

Wednesday, August 11, 2010

User Creation Script for the Previous Year

set heading off verify off feedback off echo off term off linesize 200 wrap on

spool c:\temp\Recreate_Users.sql

SELECT distinct 'create profile '|| profile ||' Limit Sessions_per_user Unlimited;' from dba_profiles where profile!='DEFAULT' ;
Select 'Alter profile '|| profile ||' Limit '|| Resource_name ||' '|| Limit||';' from dba_profiles where profile!='DEFAULT' and Limit!='DEFAULT' ;

SELECT 'create user ' || username ||
' identified ' ||
DECODE(password, NULL, 'EXTERNALLY', ' by values ' || '''' || password || '''') ||
' default tablespace ' || default_tablespace ||
' temporary tablespace ' || temporary_tablespace ||
' profile ' || profile || ';'
FROM dba_users
where username!='SYSTEM' and Username!='SYS' and Username!='DBSNMP' and Username!='REPADMIN' ORDER BY username ;

SELECT 'Grant '|| Granted_role ||' to '|| Grantee||';' from dba_role_privs Where Grantee!='SYSTEM' and
Grantee!='SYS' and Grantee!='DBSNMP' and Grantee!='REPADMIN' ;

spool off
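
The spooled script can then be run against the new-year database to recreate the profiles, users and role grants (path as spooled above):

@c:\temp\Recreate_Users.sql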

Create Like User Script

set pages 0 feed off veri off lines 500

accept oldname prompt "Enter user to model new user to: "
accept newname prompt "Enter new user name: "
-- accept psw prompt "Enter new user's password: "

spool c:\usercreation.sql

-- Create user...
select 'create user &&newname identified by values '''||password||''''||
-- select 'create user &&newname identified by &psw'||
' default tablespace '||default_tablespace||
' temporary tablespace '||temporary_tablespace||' profile '||
profile||';'
from sys.dba_users
where username = upper('&&oldname');

-- Grant Roles...
select 'grant '||granted_role||' to &&newname'||
decode(ADMIN_OPTION, 'YES', ' WITH ADMIN OPTION')||';'
from sys.dba_role_privs
where grantee = upper('&&oldname');

-- Grant System Privs...
select 'grant '||privilege||' to &&newname'||
decode(ADMIN_OPTION, 'YES', ' WITH ADMIN OPTION')||';'
from sys.dba_sys_privs
where grantee = upper('&&oldname');

-- Grant Table Privs...
select 'grant '||privilege||' on '||owner||'.'||table_name||' to &&newname;'
from sys.dba_tab_privs
where grantee = upper('&&oldname');

-- Grant Column Privs...
select 'grant '||privilege||' on '||owner||'.'||table_name||
'('||column_name||') to &&newname;'
from sys.dba_col_privs
where grantee = upper('&&oldname');

-- Tablespace Quotas...
select 'alter user '||username||' quota '||
decode(max_bytes, -1, 'UNLIMITED', max_bytes)||
' on '||tablespace_name||';'
from sys.dba_ts_quotas
where username = upper('&&oldname');

-- Set Default Role...
set serveroutput on
declare
defroles varchar2(4000);
begin
for c1 in (select * from sys.dba_role_privs
where grantee = upper('&&oldname')
and default_role = 'YES'
) loop
if length(defroles) > 0 then
defroles := defroles||','||c1.granted_role;
else
defroles := defroles||c1.granted_role;
end if;
end loop;
dbms_output.put_line('alter user &&newname default role '||defroles||';');
end;
/

spool off
@c:\usercreation.sql

Drive Share Script

------Drive Share Script------

net share E=e: /unlimited /GRANT:everyone,FULL
exit

------remove Drive Share Script----------
net share D /delete
exit
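
To verify either way, net share with no arguments lists the current shares:

net share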

--------------

Password reset (same as before) for all users

set heading off verify off feedback off echo off term off linesize 200 wrap on

spool c:\password_users.sql
SELECT 'alter user ' || username ||
' identified ' ||
DECODE(password, NULL, 'EXTERNALLY', ' by values ' || '''' || password || '''') ||' account unlock;'
FROM dba_users
where username NOT IN ('SYSTEM','SYS','DBSNMP','REPADMIN','WMSYS','TSMSYS','ACCOUNTOP','OUTLN','ORACLE_OCM',
'BRANCH','TRADE','LEGAL','ACCOUNTS','QUALITYC','FINANCE','FUNDS','STOCKS','CRDESK','IT')
ORDER BY username ;
spool off
@c:\password_users.sql

Friday, August 6, 2010

Oracle 10g Standby Database

Oracle 10g Standby Database
--------------------------------------


PRODUCTION DATABASE: 10.100.0.65
STANDBY DATABASE: 10.100.0.32


-----------------I. Before you get started:-------------------

1. Make sure the operating system and platform architecture on the primary and standby systems are the same;

2. Install Oracle database software without the starter database on the standby server and patch it if necessary. Make sure the same Oracle software release is used on the Primary and Standby databases, and Oracle home paths are identical.

---------II. On the Primary Database Side:---------------------

Enable forced logging on your primary database:
Select FORCE_LOGGING from V$DATABASE;
ALTER DATABASE FORCE LOGGING;


1) The size of the standby redo log files should match the size of the current Primary database online redo log files. To find out the size of your online redo log files:

SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:

SQL> select group#, member from v$logfile;

3) Create standby Redo log groups.
My primary database had 5 log file groups originally and I created 5 standby redo log groups using the following commands:
ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 7 SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 8 SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 9 SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 10 SIZE 50M;


4) To verify the results of the standby redo log groups creation, run the following query:
SQL>select * from v$standby_log;

5) Enable Archiving on Primary (NO NEED if the database is already in archive log mode).

If your primary database is not already in archive log mode, enable it:
SQL>shutdown immediate;
SQL>startup mount;
SQL>alter database archivelog;
SQL>alter database open;
SQL>archive log list;

6) Set Primary Database Initialization Parameters
Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE), to add the new primary role parameters.

A) Create pfile from spfile for the primary database:

SQL>create pfile='d:\oracle\product\10.2.0\db_1\database\INITSNS1011.ORA' from spfile;

B) Edit INITSNS1011.ORA to add the new primary and standby role parameters:


select * from v$parameter where name like '%log_archive_format%';
select * from v$parameter where name like '%standby%';
select * from v$parameter where name like '%remote_archive_enable%';
select * from v$parameter where name like '%log_archive_dest_state_%';
select * from v$parameter where name like '%convert%';

----------------------INITSNS1011.ORA------------------

sns6.__db_cache_size=1006632960
sns6.__java_pool_size=8388608
sns6.__large_pool_size=8388608
sns6.__shared_pool_size=645922816
sns6.__streams_pool_size=0
*.audit_file_dest='d:\oracle\product\10.2.0\admin\sns1011\adump'
*.audit_trail='DB'
*.background_dump_dest='d:\oracle\product\10.2.0\admin\sns1011\bdump'
*.compatible='10.2.0.3.0'
*.control_files='e:\snsd1011\control01.ora','e:\snsd1011\control02.ora','e:\snsd1011\control03.ora'
*.core_dump_dest='d:\oracle\product\10.2.0\admin\sns1011\cdump'
*.db_block_size=16384
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='sns1011'
*.job_queue_processes=35
*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(sns1011,sns1011sby)'
*.LOG_ARCHIVE_DEST_1='LOCATION=D:\archive0910\sns1011\arch VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=sns1011'
*.LOG_ARCHIVE_DEST_2='SERVICE=sns1011sby LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=sns1011sby'
*.LOG_ARCHIVE_DEST_STATE_1=ENABLE
*.LOG_ARCHIVE_DEST_STATE_2=ENABLE
*.LOG_ARCHIVE_MAX_PROCESSES=30
*.FAL_SERVER=sns1011sby
*.FAL_CLIENT=sns1011
*.STANDBY_FILE_MANAGEMENT=AUTO
*.DB_FILE_NAME_CONVERT='c:\oracle\product\10.2.0\admin\sns1011sby','d:\oracle\product\10.2.0\admin\sns1011'
*.LOG_FILE_NAME_CONVERT='c:\oracle\product\10.2.0\admin\sns1011sby','d:\oracle\product\10.2.0\admin\sns1011'
*.log_archive_format='ARC%S_%R.%T'
*.open_cursors=300
*.pga_aggregate_target=337641472
*.processes=500
*.remote_login_passwordfile='EXCLUSIVE'
*.sessions=555
*.sga_max_size=1677721600
*.sga_target=1677721600
*.smtp_out_server='mail.uniconindia.in'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='d:\oracle\product\10.2.0\admin\sns1011\udump'
*.utl_file_dir='d:\ldoutput'
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=sns1011XDB)'
-------------------


C) Create spfile from pfile, and restart primary database using the new spfile.

SQL> shutdown immediate;
SQL> startup nomount pfile='d:\oracle\product\10.2.0\db_1\database\INITSNS1011.ORA';
SQL>create spfile from pfile='d:\oracle\product\10.2.0\db_1\database\INITSNS1011.ORA';
SQL>shutdown immediate;
SQL>Startup;


7) CREATE STANDBY CONTROLFILE
SQL>shutdown immediate;
SQL>startup mount;
SQL>alter database create standby controlfile as 'C:\SBY.ORA';
SQL>ALTER DATABASE OPEN;


8) Take a backup of the primary database using RMAN and restore it to the standby:

run the RMAN backup script
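
The backup script itself is not shown in the post; a minimal RMAN sketch (connection and options are assumptions) would be:

rman target sys/oracle@SNS1011
RMAN>backup database plus archivelog;
RMAN>backup current controlfile for standby;

The resulting backup pieces are then copied to the standby server for the restore/duplicate in section III.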


----------------III. On the Standby Database Site:---------------

1. CREATE STANDBY DATABASE WITHOUT STARTUP DATABASE

2. Create directory STRUCTURE SAME AS PRIMARY DATABASE for data files. ALSO Create directory (multiplexing) for online logs.
create all required directories for dump and archived log destination:
Create directories adump, bdump, cdump, udump, and archived log destinations for the standby database.

3. Copy the Primary DB pfile to Standby server and rename/edit the file.

1) Copy INITSNS1011.ora from Primary server to Standby server, to database folder C:\oracle\product\10.2.0\db_1\database.

2) Rename it to INITSNS1011SBY.ORA, and modify the file as follows

NOTE: The db_name in the standby's init file should be the same as the primary database.
--------------------------INITSNS1011SBY.ORA----------------------
sns6.__db_cache_size=1207959552
sns6.__java_pool_size=8388608
sns6.__large_pool_size=8388608
sns6.__shared_pool_size=343932928
sns6.__streams_pool_size=0
*.audit_file_dest='c:\oracle\product\10.2.0\admin\sns1011sby\adump'
*.background_dump_dest='c:\oracle\product\10.2.0\admin\sns1011sby\bdump'
*.compatible='10.2.0.3.0'
*.control_files='e:\snsd1011\control01.ora','e:\snsd1011\control02.ora','e:\snsd1011\control03.ora'
*.core_dump_dest='c:\oracle\product\10.2.0\admin\sns1011sby\cdump'
*.db_block_size=16384
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='sns1011'
*.db_unique_name='sns1011sby'
*.job_queue_processes=35
*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(sns1011sby,sns1011)'
*.LOG_ARCHIVE_DEST_1='LOCATION=D:\archive0910\sns1011sby\arch VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=sns1011sby'
*.LOG_ARCHIVE_DEST_2='SERVICE=sns1011 LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=sns1011'
*.LOG_ARCHIVE_DEST_STATE_1=ENABLE
*.LOG_ARCHIVE_DEST_STATE_2=ENABLE
*.LOG_ARCHIVE_MAX_PROCESSES=30
*.FAL_SERVER=sns1011
*.FAL_CLIENT=sns1011sby
*.STANDBY_FILE_MANAGEMENT=AUTO
*.DB_FILE_NAME_CONVERT='d:\oracle\product\10.2.0\admin\sns1011','c:\oracle\product\10.2.0\admin\sns1011sby'
*.LOG_FILE_NAME_CONVERT='d:\oracle\product\10.2.0\admin\sns1011','c:\oracle\product\10.2.0\admin\sns1011sby'
*.log_archive_format='ARC%S_%R.%T'
*.open_cursors=300
*.pga_aggregate_target=337641472
*.processes=500
*.remote_login_passwordfile='EXCLUSIVE'
*.sessions=555
*.sga_max_size=1572864000
*.sga_target=1572864000
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*._ALLOW_RESETLOGS_CORRUPTION=TRUE
*.user_dump_dest='c:\oracle\product\10.2.0\admin\sns1011sby\udump'
*.utl_file_dir='e:\ldoutput'
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=sns1011SBYXDB)'

--------------------------------------------------------
4. Copy the Primary password file to standby and rename it to PWDsns1011sby.ora,
to C:\oracle\product\10.2.0\db_1\database.

5. Copy the standby control file 'SBY.ORA' from primary to standby destinations ;

6. For Windows, create a Windows-based service (optional):
$oradim -NEW -SID SNS1011SBY -STARTMODE manual

7. Configure listeners for the primary and standby databases.

--------------TNSNAMES.ORA--PRIMARY---------
SNS1011 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.100.0.65)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(Sid = SNS1011)
)
)
SNS1011SBY =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.100.0.32)(PORT = 1522))
(CONNECT_DATA =
(SERVER = DEDICATED)
(Sid = SNS1011sby)
)
)

-----------------LISTENER.ORA----PRIMARY--------
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(PROGRAM = EXTPROC)
(SID_NAME = PLSExtProc)
(ORACLE_HOME = d:\oracle\product\10.2.0\db_1)
)
(SID_DESC =
(GLOBAL_DBNAME = SNS1011)
(ORACLE_HOME = d:\oracle\product\10.2.0\db_1)
(SID_NAME = SNS1011)
)

)
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
)
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.100.0.65)(PORT = 1521))
)
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.100.0.32)(PORT = 1522))
)
)



--------------TNSNAMES.ORA--STANDBY---------


SNS1011SBY =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.100.0.32)(PORT = 1522))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = SNS1011sby)
)
)
SNS1011 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.100.0.65)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = SNS1011)
)
)


-----------------LISTENER.ORA----STANDBY--------

SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(PROGRAM = EXTPROC1)
(SID_NAME = PLSExtProc1)
(ORACLE_HOME = C:\oracle\product\10.2.0\db_1)
)
(SID_DESC =
(GLOBAL_DBNAME = SNS1011sby)
(ORACLE_HOME = C:\oracle\product\10.2.0\db_1)
(SID_NAME = SNS1011sby)
)
)
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
)
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.100.0.32)(PORT = 1522))
)
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.100.0.65)(PORT = 1521))
)
)



------------------------
8.
RESTART LISTENER ON PRIMARY AND STANDBY DATABASE
LSNRCTL>RELOAD


CHECK TNSPING ON PRIMARY AND STANDBY DATABASE
$tnsping SNS1011
$tnsping SNS1011SBY



9. On Standby server, setup the environment variables to point to the Standby database.

Set up ORACLE_HOME and ORACLE_SID.

set ORACLE_SID=sns1011sby

oradim -new -sid sns1011sby -SRVC OracleServicesns1011sby -intpwd oracle -MAXUSERS 5 -STARTMODE auto -PFILE c:\oracle\product\10.2.0\db_1\database\initsns1011sby.ora


10. Start up nomount the standby database and generate a spfile.

SQL>startup nomount pfile='C:\oracle\product\10.2.0\db_1\database\INITSNS1011sby.ORA';
SQL>create spfile from pfile='C:\oracle\product\10.2.0\db_1\database\INITSNS1011sby.ORA';
SQL>shutdown immediate;
SQL>startup mount;

11.

SET ORACLE_SID=sns1011sby
RMAN TARGET SYS/ORACLE@SNS1011SBY
RMAN>RESTORE CONTROLFILE FROM 'C:\SBY.ORA';
RMAN>CATALOG BACKUPPIECE 'c:\05LICVI0';
RMAN>RESTORE DATABASE;

12. DUPLICATE DATABASE
NOTE: the target DB should be in MOUNT state and the standby in NOMOUNT state

rman target sys/oracle@SNS1011 auxiliary sys/oracle@SNS1011sby

RMAN>
run{
allocate channel c1 type disk;
allocate channel c2 type disk;
allocate channel c3 type disk;
allocate channel c4 type disk;
allocate auxiliary channel stby type disk;
duplicate target database for standby;
release channel c1;
release channel c2;
release channel c3;
release channel c4;
}


13. Start Redo apply
1) On the standby database, to start redo apply:
SQL>alter database recover managed standby database disconnect from session;
-----
If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;
-------
14. Verify the standby database is performing properly:
A) On Standby perform a query:
SQL>select sequence#, first_time, next_time from v$archived_log;

B) On Primary, force a logfile switch:
SQL>alter system switch logfile;

C) On Standby, verify the archived redo log files were applied:
SQL>select sequence#, applied from v$archived_log order by sequence#;

15. If you want the redo data to be applied as it is received without waiting for the current standby redo log file to be archived, enable the real-time apply.


on standby database

shut immediate;
startup mount
alter database recover managed standby database disconnect;
alter database recover managed standby database cancel;
alter database open;
alter database recover managed standby database using current logfile disconnect;
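
To confirm real-time apply is active, an optional check (not in the original post) on the standby:

SQL>select dest_id, recovery_mode from v$archive_dest_status;

RECOVERY_MODE should report MANAGED REAL TIME APPLY.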

on primary, one time only

alter system switch logfile;
alter system switch logfile;

16. To create multiple standby databases, repeat this procedure.

17) Failover

SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;
ALTER DATABASE ACTIVATE STANDBY DATABASE;

For a graceful switchover (rather than a failover), the standby is instead converted with:

ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;


--------------------

Thursday, August 5, 2010

SQL Performance Tuning Tips

SQL Performance Tuning

1. Use EXPLAIN to profile the query execution plan
2. Use Slow Query Log (always have it on!)
3. Don't use DISTINCT when you have or could use GROUP BY
4. Insert performance
1. Batch INSERT and REPLACE
2. Use LOAD DATA instead of INSERT
5. LIMIT m,n may not be as fast as it sounds.
6. Don't use ORDER BY RAND() if you have > ~2K records
7. Use SQL_NO_CACHE when you are SELECTing frequently updated data or large sets of data
8. Avoid wildcards at the start of LIKE queries
9. Avoid correlated subqueries in SELECT and WHERE clauses (and try to avoid IN)
10. No calculated comparisons -- isolate indexed columns
11. ORDER BY and LIMIT work best with equalities and covered indexes
12. Separate text/blobs from metadata, don't put text/blobs in results if you don't need them
13. Derived tables (subqueries in the FROM clause) can be useful for retrieving BLOBs without sorting them. (A self-join can speed up a query if the 1st part finds the IDs and uses them to fetch the rest)
14. ALTER TABLE...ORDER BY can take data sorted chronologically and re-order it by a different field -- this can make queries on that field run faster (maybe this goes in indexing?)
15. Know when to split a complex query and join smaller ones
16. Delete small amounts at a time if you can
17. Make similar queries consistent so cache is used
18. Have good SQL query standards
19. Don't use deprecated features
20. Turning OR on multiple index fields (<5.0) into UNION may speed things up (with LIMIT), after 5.0 the index_merge should pick stuff up.
21. Don't use COUNT(*) on InnoDB tables for every search; run it occasionally and/or use summary tables, or if you need the total number of rows, use SQL_CALC_FOUND_ROWS and SELECT FOUND_ROWS() (see the example after this list)
22. Use INSERT ... ON DUPLICATE KEY UPDATE (or INSERT IGNORE) to avoid having to SELECT first
23. Use a groupwise maximum instead of subqueries
24. Avoid using IN(...) when selecting on indexed fields; it can kill the performance of the SELECT query.
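
As an example of tip 21, the total row count can be returned without a second COUNT(*) query (table and column names here are hypothetical):

SELECT SQL_CALC_FOUND_ROWS id, title FROM articles WHERE published = 1 LIMIT 10;
SELECT FOUND_ROWS();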


Scaling Performance Tips:

1. Use benchmarking
2. Isolate workloads; don't let administrative work (i.e. backups) interfere with customer performance.
3. Debugging sucks, testing rocks!
4. As your data grows, indexing may change (cardinality and selectivity change). Structure may need to change too. Make your schema as modular as your code. Make your code able to scale. Plan and embrace change, and get developers to do the same.

Network Performance Tips:

1. Minimize traffic by fetching only what you need.
1. Paging/chunked data retrieval to limit
2. Don't use SELECT *
3. Be wary of lots of small quick queries if a longer query can be more efficient
2. Use multi_query if appropriate to reduce round-trips
3. Use stored procedures to avoid bandwidth wastage

OS Performance Tips:

1. Use proper data partitions
1. For Cluster. Start thinking about Cluster *before* you need them
2. Keep the database host as clean as possible. Do you really need a windowing system on that server?
3. Utilize the strengths of the OS
4. pare down cron scripts
5. create a test environment
6. source control schema and config files
7. for LVM innodb backups, restore to a different instance of MySQL so Innodb can roll forward
8. partition appropriately
9. partition your database when you have real data -- do not assume you know your dataset until you have real data

MySQL Server Overall Tips:

1. innodb_flush_log_at_trx_commit=0 can help slave lag
2. Optimize for data types, use consistent data types. Use PROCEDURE ANALYSE() to help determine the smallest data type for your needs.
3. use optimistic locking, not pessimistic locking. try to use shared lock, not exclusive lock. share mode vs. FOR UPDATE
4. if you can, compress text/blobs
5. compress static data
6. don't back up static data as often
7. enable and increase the query and buffer caches if appropriate
8. config params -- http://docs.cellblue.nl/2007/03/17/easy-mysql-performance-tweaks/ is a good reference
9. Config variables & tips:
1. use one of the supplied config files
2. key_buffer, unix cache (leave some RAM free), per-connection variables, innodb memory variables
3. be aware of global vs. per-connection variables
4. check SHOW STATUS and SHOW VARIABLES (GLOBAL|SESSION in 5.0 and up)
5. be aware of swapping esp. with Linux, "swappiness" (bypass OS filecache for innodb data files, innodb_flush_method=O_DIRECT if possible (this is also OS specific))
6. defragment tables, rebuild indexes, do table maintenance
7. If you use innodb_flush_log_at_trx_commit=1, use a battery-backed hardware cache write controller
8. more RAM is good, as is faster disk speed
9. use 64-bit architectures
10. --skip-name-resolve
11. increase myisam_sort_buffer_size to optimize large inserts (this is a per-connection variable)
12. look up memory tuning parameter for on-insert caching
13. increase temp table size in a data warehousing environment (default is 32Mb) so it doesn't write to disk (also constrained by max_heap_table_size, default 16Mb)
14. Run in SQL_MODE=STRICT to help identify warnings
15. /tmp dir on battery-backed write cache
16. consider battery-backed RAM for innodb logfiles
17. use --safe-updates for client
18. Redundant data is redundant

Storage Engine Performance Tips:

1. InnoDB ALWAYS keeps the primary key as part of each index, so do not make the primary key very large
2. Utilize different storage engines on master/slave ie, if you need fulltext indexing on a table.
3. BLACKHOLE engine and replication is much faster than FEDERATED tables for things like logs.
4. Know your storage engines and what performs best for your needs, know that different ones exist.
1. i.e., use MERGE tables or ARCHIVE tables for logs
2. Archive old data -- don't be a pack-rat! 2 common engines for this are ARCHIVE tables and MERGE tables
5. use row-level instead of table-level locking for OLTP workloads
6. try out a few schemas and storage engines in your test environment before picking one.

Database Design Performance Tips:

1. Design sane query schemas. Don't be afraid of table joins; often they are faster than denormalization
2. Don't use boolean flags
3. Use Indexes
4. Don't Index Everything
5. Do not duplicate indexes
6. Do not use large columns in indexes if the ratio of SELECTs:INSERTs is low.
7. be careful of redundant columns in an index or across indexes
8. Use a clever key and ORDER BY instead of MAX
9. Normalize first, and denormalize where appropriate.
10. Databases are not spreadsheets, even though Access really really looks like one. Then again, Access isn't a real database
11. use INET_ATON and INET_NTOA for IP addresses, not char or varchar
12. make it a habit to REVERSE() email addresses, so you can easily search domains (this will help avoid wildcards at the start of LIKE queries if you want to find everyone whose e-mail is in a certain domain; see the sketch after this list)
13. A NULL data type can take more room to store than NOT NULL
14. Choose appropriate character sets & collations -- UTF16 will store each character in 2 bytes, whether it needs it or not, latin1 is faster than UTF8.
15. Use Triggers wisely
16. use min_rows and max_rows to specify approximate data size so space can be pre-allocated and reference points can be calculated.
17. Use HASH indexing for indexing across columns with similar data prefixes
18. Use myisam_pack_keys for int data
19. be able to change your schema without ruining functionality of your code
20. segregate tables/databases that benefit from different configuration variables
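
A sketch of tip 12 above (table and column names are hypothetical; rev_email is assumed to be maintained as REVERSE(email) on insert/update and indexed):

SELECT email FROM users
WHERE rev_email LIKE CONCAT(REVERSE('@example.com'), '%');

Because the wildcard now falls at the end of the pattern, the index on rev_email can be used.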

Other:

1. Hire a MySQL (tm) Certified DBA
2. Know that there are many consulting companies out there that can help, as well as MySQL's Professional Services.
3. Read and post to MySQL Planet at http://www.planetmysql.org
4. Attend the yearly MySQL Conference and Expo or other conferences with MySQL tracks
5. Support your local User Group (link to forge page w/user groups here)

Stored Outlines

Stored Outlines

Oracle preserves the execution plans in objects called “Stored Outlines.” You can create a Stored Outline for one or more SQL statements and group Stored Outlines into categories. Grouping Stored Outlines allows you to control which category of outlines Oracle uses.


select * from v$parameter where name like '%create_stored_outlines%';
select * from dictionary where table_name like '%OUTLINE%';


ALTER SYSTEM SET create_stored_outlines=TRUE;
ALTER SESSION SET create_stored_outlines=TRUE;



GRANT CREATE ANY OUTLINE TO LDBO;
GRANT EXECUTE_CATALOG_ROLE TO LDBO;


-- Create an outline for a specific SQL statement.

CREATE OUTLINE client_email FOR CATEGORY ldbo_outlines
ON select distinct accounts.fibsacct,accountemaildetail.email from accounts,accountemaildetail where accounts.oowncode=accountemaildetail.oowncode;

-- Check the outline has been created correctly.

SELECT name, category, sql_text FROM user_outlines WHERE category = 'LDBO_OUTLINES';

-- List the hints associated with the outline.

SELECT node, stage, join_pos, hint FROM user_outline_hints WHERE name = 'CLIENT_EMAIL';


SELECT hash_value, child_number, sql_text FROM v$sql WHERE sql_text LIKE 'select distinct accounts.fibsacct,accountemaildetail.email from accounts,account%';


-- Create an outline for the statement.
BEGIN
DBMS_OUTLN.create_outline(
hash_value => 3174963110,
child_number => 0,
category => 'LDBO_OUTLINES');
END;
/

-- Check the outline has been created correctly.

SELECT name, category, sql_text FROM user_outlines WHERE category = 'LDBO_OUTLINES';

SELECT node, stage, join_pos, hint FROM user_outline_hints WHERE name = 'SYS_OUTLINE_10080512161704577';


-- Check if the outlines have been used.
SELECT name, category, used FROM user_outlines;


--------In the following example we will enable stored outlines for the current session.

ALTER SESSION SET query_rewrite_enabled=TRUE;
ALTER SESSION SET use_stored_outlines=LDBO_OUTLINES;



--DROPPING OUTLINES
BEGIN
DBMS_OUTLN.drop_by_cat (cat => 'LDBO_OUTLINES');
END;
/



---------------------

Wednesday, August 4, 2010

AutoTrace

Prerequisites

SQL> @?\rdbms\admin\utlxplan.sql

This creates the PLAN_TABLE for the user executing the script.

----------
Setting AUTOTRACE On

There is also an easier method with SQL*Plus for generating an EXPLAIN PLAN and statistics about the performance of a query.

SET AUTOTRACE ON
select * from accounts;
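
If the statistics portion fails with a message about the PLUSTRACE role, that role has to be created and granted first; this SQL*Plus prerequisite is not covered in the original post (run plustrce.sql as SYS):

SQL> @?\sqlplus\admin\plustrce.sql
SQL> GRANT plustrace TO ldbo;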


----------------------

set autotrace off
set autotrace on
set autotrace traceonly

set autotrace on explain
set autotrace on statistics
set autotrace on explain statistics

set autotrace traceonly explain
set autotrace traceonly statistics
set autotrace traceonly explain statistics



----------


set autotrace on: Shows the execution plan as well as statistics of the statement.
set autotrace on explain: Displays the execution plan only.
set autotrace on statistics: Displays the statistics only.
set autotrace traceonly: Displays the execution plan and the statistics (as set autotrace on does), but doesn't print a query's result.
set autotrace off: Disables all autotrace
If autotrace is enabled with statistics, then the following statistics are displayed:

* recursive calls
* db block gets
* consistent gets
* physical reads
* redo size
* bytes sent via SQL*Net to client
* bytes received via SQL*Net from client
* SQL*Net roundtrips to/from client
* sorts (memory)
* sorts (disk)

------------

Cost Based Optimizer (CBO) and Database Statistics

Cost Based Optimizer (CBO) and Database Statistics


The mechanisms and issues relating to maintenance of internal statistics are explained below:

* Analyze Statement
* DBMS_UTILITY
* DBMS_STATS
* Scheduling Stats
* Transferring Stats
* Issues


1) Analyze Statement
select 'ANALYZE TABLE '||Owner||'.'||table_name||' compute statistics;'
from sys.all_tables where table_name!='_default_auditing_options_'
/

select 'ANALYZE INDEX '||Owner||'.'||index_name||' compute statistics;'
from sys.all_indexes
/

2) DBMS_UTILITY

EXEC DBMS_UTILITY.ANALYZE_SCHEMA('LDBO','COMPUTE');

EXEC DBMS_UTILITY.ANALYZE_SCHEMA('LDBO', 'ESTIMATE')

3) DBMS_STATS
EXEC DBMS_STATS.gather_schema_stats('LDBO');

4) Scheduling Stats

SET SERVEROUTPUT ON
DECLARE
l_job NUMBER;
BEGIN

DBMS_JOB.submit(l_job,
'BEGIN DBMS_STATS.gather_schema_stats(''LDBO''); END;',
SYSDATE,
'SYSDATE + 1');
COMMIT;
DBMS_OUTPUT.put_line('Job: ' || l_job);
END;
/


-----EXEC DBMS_JOB.remove(X);
-----COMMIT;


5) Transferring Stats
It is possible to transfer statistics between servers allowing consistent execution plans between servers with varying amounts of data. First the statistics must be collected into a statistics table. In the following examples the statistics for the APPSCHEMA user are collected into a new table, STATS_TABLE, which is owned by DBASCHEMA:

EXEC DBMS_STATS.create_stat_table('DBASCHEMA','STATS_TABLE');
EXEC DBMS_STATS.export_schema_stats('APPSCHEMA','STATS_TABLE',NULL,'DBASCHEMA');
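
On the destination server, after moving the table across (e.g. with export/import), the statistics are loaded back with the matching import call; a minimal sketch assuming the same names as above:

EXEC DBMS_STATS.import_schema_stats('APPSCHEMA','STATS_TABLE',NULL,'DBASCHEMA');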


6) Issues

* Exclude dataload tables from your regular stats gathering, unless you know they will be full at the time that stats are gathered.
* I've found gathering stats for the SYS schema can make the system run slower, not faster.
* Gathering statistics can be very resource intensive for the server so avoid peak workload times or gather stale stats only.
* Even if scheduled, it may be necessary to gather fresh statistics after database maintenance or large data loads.

7)

select table_name, avg_row_len, chain_cnt, num_rows,last_analyzed from dba_tables where owner ='LDBO' order by last_analyzed desc;

Explain Plan

SQL> @?\rdbms\admin\utlxplan.sql

This creates the PLAN_TABLE for the user executing the script.

Run EXPLAIN PLAN for the query to be optimized:

1)explain plan for
select * from accounts;

2)select * from PLAN_TABLE; ---to check output
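
On 10g the plan can also be formatted directly with DBMS_XPLAN instead of querying PLAN_TABLE by hand (an alternative not mentioned above):

select * from table(dbms_xplan.display);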


3)explain plan
SET STATEMENT_ID = 'ACC' FOR
select * from accounts;



4)

SELECT cardinality "Rows",
lpad(' ',level-1)||operation||' '||
options||' '||object_name "Plan",cost "Cost by CBO",bytes/1024/1024 "MB",time "Time By CBO"
FROM PLAN_TABLE

CONNECT BY prior id = parent_id
AND prior statement_id = statement_id
START WITH id = 0
AND statement_id = 'ACC'
ORDER BY id;



5)
SELECT LPAD(' ', 2 * (level - 1)) ||
DECODE (level,1,NULL,level-1 || '.' || pt.position || ' ') ||
INITCAP(pt.operation) ||
DECODE(pt.options,NULL,'',' (' || INITCAP(pt.options) || ')') plan,
pt.object_name,
pt.object_type,
pt.bytes,
pt.cost,
pt.partition_start,
pt.partition_stop
FROM plan_table pt
START WITH pt.id = 0
AND pt.statement_id = '&1'
CONNECT BY PRIOR pt.id = pt.parent_id
AND pt.statement_id = '&1';


The following outlines the procedure for running TRACE versus EXPLAIN PLAN:



TRACE

It takes four hours to TRACE a query that takes four hours to run.

  • Set up Init.ora Parameters
  • Create PLAN_TABLE table
  • Run Query
  • Statement is executed and PLAN_TABLE is populated
  • Run TKPROF
  • Output shows disk and memory reads in addition to EXPLAIN PLAN output
EXPLAIN PLAN

It takes less than a minute to EXPLAIN PLAN a query that takes four hours to run.

  • Create PLAN_TABLE table
  • Explain Query
  • PLAN_TABLE is populated
  • Query PLAN_TABLE
  • Output shows EXPLAIN PLAN

Tuesday, August 3, 2010

ODBC Trace for ODBC Applications

check in Control Panel -> Administrative Tools -> ODBC Data Source Administrator -> Tracing (tab) -> click Stop Tracing Now

Always remember to stop it.

Note: ODBC tracing to SQL.LOG can slow SQL or consume all disk space.

SQL Trace / TKPROF

To start a SQL trace for the current session, execute:

ALTER SESSION SET sql_trace = true;

or, for another session:

EXEC DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(sid, serial#, true);

To tag the trace file name:

ALTER SESSION SET tracefile_identifier = mysqltrace;

DBA's can use DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION to trace problematic database sessions. Steps:
select sid, serial# from sys.v_$session where .....


SID SERIAL#
---------- ----------
8 13607

--------------------

Enable Timed Statistics – This parameter enables the collection of certain vital statistics such as CPU execution time, wait events, and elapsed times. The resulting trace output is more meaningful with these statistics. The command to enable timed statistics is:

SQL> ALTER SYSTEM SET timed_statistics = true;

--------------------

DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(sid,serial#,true);
SQL> execute dbms_system.set_sql_trace_in_session(8, 13607, true);


Ask the user to run just what is necessary to demonstrate the problem.
Disable tracing for your selected process:

execute dbms_system.set_sql_trace_in_session(8,13607, false);


Look for trace file in USER_DUMP_DEST


ALTER SYSTEM SET sql_trace = false SCOPE=MEMORY;


Identifying trace files

Trace output is written to the database's UDUMP directory.

The default name for a trace file is INSTANCE_PID_ora_TRACEID.trc where:

* INSTANCE is the name of the Oracle instance,
* PID is the operating system process ID (V$PROCESS.OSPID); and
* TRACEID is a character string of your choosing.


---http://www.ordba.net/Tutorials/OracleUtilities~TKPROF.htm---

Trace output is quite unreadable. However, Oracle provides a utility, called TKProf, that can be used to format trace output.


SET ORACLE_SID=SNS6


tkprof filename1 filename2 [waits=yes|no] [sort=option] [print=n]
[aggregate=yes|no] [insert=filename3] [sys=yes|no] [table=schema.table]
[explain=user/password] [record=filename4] [width=n]

C:\> tkprof C:\oracle\product\10.2.0\admin\sns1011\udump\sns6_ora_976.trc C:\oracle\product\10.2.0\admin\sns1011\udump\sns6_ora_976.prf explain = ldbo/ldbo sys=no sort = (PRSDSK,EXEDSK,FCHDSK,EXECPU,FCHCPU)




Some of the things to look for in the TKPROF output are listed in this table:

Problem: High numbers for the parsing
Solution: The SHARED_POOL_SIZE may need to be increased.

Problem: The disk reads are very high
Solution: Indexes are not used or may not exist.

Problem: The "query" and/or "current" (memory reads) are very high
Solution: Indexes may be on columns with high cardinality (columns where an individual value generally makes up a large percentage of the table). Removing or suppressing the index may increase performance.

Problem: The parse elapse time is high
Solution: There may be a problem with the number of open cursors.

Problem: The number of rows processed by a row in the EXPLAIN PLAN is high compared to the other rows
Solution: This could be a sign of an index with a poor distribution of distinct keys (unique values for a column). Or this could also be a sign of a poorly written statement.

Problem: The number of misses in the library cache during parse is greater than 1
Solution: This is an indication that the statement had to be reloaded. You may need to increase the SHARED_POOL_SIZE in the init.ora.


----------------

Monday, August 2, 2010

Date Format

rem %date% layout depends on the Windows regional short-date setting;
rem the offsets below assume dd-mm-yyyy.

rem day (dd)
mkdir c:\backupfolders\%date:~0,2%
rem month (mm)
mkdir c:\backupfolders\%date:~3,2%
rem year (yyyy)
mkdir c:\backupfolders\%date:~6,4%
rem last two digits of the year
mkdir c:\backupfolders\%date:~8,2%

rem dd.mm.yyyy folder
mkdir c:\backupfolders\%date:~0,2%.%date:~3,2%.%date:~6,4%

--------------

Allow an IP to send mail (relay on)

cat /etc/tcp.smtp



login as: root
root@10.100.0.77's password:
Last login: Wed Jul 14 16:28:50 2010 from 172.16.203.28
[root@mail ~]# cd /etc/tcp.smtp
-bash: cd: /etc/tcp.smtp: Not a directory
[root@mail ~]# cat /etc/tc
tcp.smtp tcp.smtp.cdb tcsd.conf
[root@mail ~]# cat /etc/tcp.smtp
127.:allow,RELAYCLIENT=""
10.100.0.91:allow,RELAYCLIENT=""
#Rachit#
172.16.203.15:allow,RELAYCLIENT=""
#Endosor Server - DailyReports#
10.100.0.26:allow,RELAYCLIENT=""
[root@mail ~]#
------------------------

Content Duplicacy for Multiple Domains with the Same Website

multiple domains point to one URL

The way to redirect all your secondary domain names to your primary domain name is to do it at the domain registrar level. Instead of setting the DNS of all your domains to your web hosting account, set only your primary domain name. Log into your domain registrar's website and look for either "URL Forwarding", "Forwarding", "Redirection" or something to that effect.

I suggest three solutions that have different effects but are good for the scope:

* Insert this rule in the robots.txt file of the secondary domains only:

User-agent: *
Disallow: /

* Apply a permanent redirect from the secondary domains to the main site, so a request to www.youbusiness.fr (or other) is redirected to the .COM domain.

* Create a landing page to be published on the secondary domains.

The difference between a 301 and a 302 is that a 301 status code means that a page has permanently moved to a new location, while a 302 status code means that a page has temporarily moved to a new location.

----------------

Move website to new Domain

Today the Google Webmaster team has rolled out a new Webmaster Tools interface for all. From today onwards, when you log in to Google Webmasters you will see the new interface, and along with it a new feature in Google Webmaster Tools named "Change of Address".

This feature will be extremely useful for those who are planning to move a site to a new domain. Until today there was no way to notify Google of a change of domain name, but with this new feature you can notify Google to update the index to reflect your new URL.

When you log in to your Google Webmaster account you will find the Change of Address option under the Site Configuration navigation links. Here are the instructions displayed under Change of Address:

1. Setup a New website

2. Redirect all traffic from the old site with the help of 301 Permanent Redirect

3. Add your new site to Google Webmasters tools on same account of your Old domain

4. Update New URL for your old domain

Dynamic Title

-------------

Header Tag Optimization

Headings Tag in HTML:

Headings are defined with the <h1> to <h6> tags. <h1> defines the largest heading and is used for the main heading; <h2> to <h5> define progressively smaller headings, and <h6> defines the smallest heading, used for sub-headings.

Heading tags <h1> to <h6> are one of the important factors for on-page optimization. Search engines give them more importance for indexing and for ranking well in search results pages.

The header tag is important for visitors also, since the heading tag tells both search engines and visitors what the content is all about.

Heading tags are represented as <h1> to <h6>. <h1> is considered the most important tag by search engines and <h6> the smallest and least important.

Example of header tags:

<h1>Head Tag</h1>
<h2>Head Tag</h2>
<h3>Head Tag</h3>
<h4>Head Tag</h4>
<h5>Head Tag</h5>
<h6>Head Tag</h6>

Tips for Optimizing the Header Tags <h1> to <h6>:

* Header tags should contain your target keywords along with any other descriptive text relevant to the content of the page.
* Search engines give more importance to header tags to determine what a web page is all about.
* The Google ranking algorithm assumes that if you're using an <h1> tag, then the text inside this header tag must be more important than the content on the rest of the page.
* Use your most targeted keyword phrases in heading tags on your webpage.
* By default, <h1> tags aren't formatted attractively, so use a CSS style to override the default.
* Use the keywords which are used in the title and meta tags (description tag and keyword tag). Search engines prefer keywords which are also used in heading tags.
* Use at least 2-4 heading tags, such as <h1>, <h2>, <h3>, <h4>, on each page of your website.
* Use target keywords in the header tags which describe the content of the webpage.
* Add highly relevant keywords in the <h1> tag, as it is weighted more than other heading tags.
* Analyze the relevancy of your keywords and place the most important keyword in the <h1> tag, less important in <h2>, still less important in <h3>, and ultimately the least important keywords in <h6>.

Import job monitoring (how fast is the import running?)

SELECT SUBSTR(sql_text, INSTR(sql_text,'INTO "'),30) table_name
, rows_processed
, ROUND( (sysdate-TO_DATE(first_load_time,'yyyy-mm-dd hh24:mi:ss'))*24*60,1) minutes
, TRUNC(rows_processed/((sysdate-to_date(first_load_time,'yyyy-mm-dd hh24:mi:ss'))*24*60)) rows_per_minute
FROM sys.v_$sqlarea WHERE sql_text like 'INSERT %INTO "%'
AND command_type = 2
AND open_versions > 0;

-----------------

Oracle Table Size Growth Monitoring

select
segment_name table_name,
sum(bytes)/(1024*1024) table_size_meg
from
user_extents
where
segment_type='TABLE'
and
segment_name = 'TBLDIGITALSIGNEDREPORTS'
group by segment_name;


-------------------TABLE SIZE GROWTH WITH TIME-------

select
to_char(begin_interval_time,'DD-MM-YYYY hh24:mm') TIME,
object_name TABLE_NAME,
space_used_total/1024/1024 SPACE_USED_MB
from
dba_hist_seg_stat s,
dba_hist_seg_stat_obj o,
dba_hist_snapshot sn
where
o.owner = 'LDBO'
and
s.obj# = o.obj#
and
sn.snap_id = s.snap_id
and
TRIM(object_name) LIKE 'TBLDIGITALSIGNEDREPORTS'
order by
begin_interval_time DESC;



-------------------------------



select sum(space_used_delta) / 1024 / 1024 "Space used (M)", sum(c.bytes) / 1024 / 1024 "Total Schema Size (M)",
round(sum(space_used_delta) / sum(c.bytes) * 100, 2) || '%' "Percent of Total Disk Usage"
from
dba_hist_snapshot sn,
dba_hist_seg_stat a,
dba_objects b,
dba_segments c
where end_interval_time > trunc(sysdate) - &days_back
and sn.snap_id = a.snap_id
and b.object_id = a.obj#
and b.owner = c.owner
and b.object_name = c.segment_name
and b.object_name ='TBLDIGITALSIGNEDREPORTS'
and c.owner = 'LDBO'
and space_used_delta > 0;

--------------------
