Step 1: Calculate the total size of all datafiles
select sum(bytes)/1024/1024 "TOTAL SIZE (MB)" from dba_Data_files;
Step 2: Calculate the free space
select sum(bytes)/1024/1024 "FREE SPACE (MB)" from dba_free_space;
Step 3: Calculate total size, free space, and used space
select t2.total "TOTAL DISK USAGE",t1.free "FREE SPACE",(t1.free/t2.total)*100 "FREE (%)",(t2.total-t1.free) "USED SPACE", (1-t1.free/t2.total)*100 "USED (%)"
from (select sum(bytes)/1024/1024 free from dba_free_space) t1 , (select sum(bytes)/1024/1024 total from dba_Data_files) t2 ;
Step 4: Create a table to store all the free/used space information
create table db_growth
as select *
from (
select sysdate capture_date,t2.total "TOTAL_DISK_USAGE",t1.free "FREE_SPACE",(t2.total-t1.free) "USED_SPACE",(t1.free/t2.total)*100 "FREE%"
from
(select sum(bytes)/1024/1024 free
from dba_free_space) t1 ,
(select sum(bytes)/1024/1024 total
from dba_Data_files) t2
);
Step 5: Insert free space information into the DB_GROWTH table (if you want to populate the data manually)
insert into db_growth
select *
from (
select sysdate,t2.total "TOTAL_SIZE",t1.free "FREE_SPACE",(t2.total-t1.free) "USED_SPACE",(t1.free/t2.total)*100 "FREE%"
from
(select sum(bytes)/1024/1024 free
from dba_free_space) t1 ,
(select sum(bytes)/1024/1024 total
from dba_Data_files) t2
);
COMMIT;
Step 6: Create a view on the DB_GROWTH base query (this step is required if you want to populate the data automatically)
create view v_db_growth
as select *
from
(
select sysdate capture_date,t2.total "TOTAL_SIZE",t1.free "FREE_SPACE",(t2.total-t1.free) "USED_SPACE",(t1.free/t2.total)*100 "FREE%"
from
(select sum(bytes)/1024/1024 free
from dba_free_space) t1 ,
(select sum(bytes)/1024/1024 total
from dba_Data_files) t2
)
;
Step 7: Insert data into the DB_GROWTH table from the V_DB_GROWTH view
insert into db_growth select *
from v_db_growth;
COMMIT;
Step 8: Check that everything went fine.
select * from db_growth;
Step 9: Set the session date format to see full timestamp information
alter session set nls_date_format ='dd-mon-yyyy hh24:mi:ss' ;
Session altered.
Step 10: Create a DBMS job that executes every 24 hours
declare
jobno number;
begin
dbms_job.submit(
jobno, 'begin insert into db_growth select * from v_db_growth;commit;end;', sysdate, 'SYSDATE+ 1', TRUE);
commit;
end;
/
PL/SQL procedure successfully completed.
Step 11: View your DBMS jobs and related information
select * from user_jobs;
----- To execute a DBMS job manually, run the following command; otherwise the job executes automatically on its schedule
exec dbms_job.run(ENTER_JOB_NUMBER)
exec dbms_job.run(23);
PL/SQL procedure successfully completed.
exec dbms_job.remove(21); ------to remove a job
Step 12: Finally, verify that data is being populated in the db_growth table
select * from db_growth;
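Once a few snapshots have accumulated, a report like the following shows day-over-day growth in MB (a sketch, assuming the CAPTURE_DATE alias used in Steps 4 and 6):
select trunc(capture_date) snap_date,
max(used_space) used_mb,
max(used_space) - lag(max(used_space)) over (order by trunc(capture_date)) growth_mb
from db_growth
group by trunc(capture_date)
order by 1;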
Thursday, March 15, 2012
Index Clustering Factor
The clustering_factor measures how synchronized an index is with the data in a table. A table with a high clustering factor is out-of-sequence with the rows, and large index range scans will consume lots of I/O. Conversely, an index with a low clustering_factor is closely aligned with the table, and related rows reside together in each data block, making the index very desirable for optimal access.
Oracle provides a column called clustering_factor in the dba_indexes view that indicates how synchronized the table rows are with the index. The rows are synchronized with the index when the clustering factor is close to the number of data blocks; when the clustering_factor approaches the number of rows in the table, the column values are not row-ordered.
select a.index_name, a.num_rows, a.clustering_factor, b.blocks,b.avg_row_len from user_indexes a, user_tables b
where a.num_rows !=0 and a.table_name = b.table_name order by 2 desc,1 desc;
Un-Clustered Table Rows
clustering_factor ~= num_rows
Clustered Table Rows
clustering_factor ~= blocks
------------------------------------------------------------------------------------------------------------------------------------------------
- A good CF is equal (or near) to the values of number of blocks of table.
- A bad CF is equal (or near) to the number of rows of table.
- Rebuilding of index can improve the CF.
Then how to improve the CF?
- To improve the CF, it’s the table that must be rebuilt (and reordered).
- If the table has multiple indexes, careful consideration needs to be given to which index to order the table by.
------------------------------------------------------------------------------------------------------------------------------------------------
Four factors work together to help the CBO decide whether to use an index or a full-table scan: the selectivity of a column value, the db_block_size, the avg_row_len, and the cardinality. An index scan is usually faster if a data column has high selectivity and a low clustering_factor.
When a column has high selectivity but a high clustering_factor and small avg_row_len, this indicates that the column values are randomly distributed in the table, and additional I/O will be required to obtain the rows. An index range scan would cause a huge amount of unnecessary I/O, making a full-table scan more efficient.
---------------------------------------Calculating the Clustering Factor
To calculate the clustering factor of an index during the gathering of index statistics, Oracle does the following.
For each entry in the index Oracle compares the entry's table rowid block with the block of the previous index entry.
If the block is different, Oracle increments the clustering factor by 1.
If the clustering factor is close to the number of entries in the index, then an index range scan of 1000 index entries may require nearly 1000 blocks to be read from the table.
If the clustering factor is close to the number of blocks in the table, then an index range scan of 1000 index entries may require only 50 blocks to be read from the table.
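The same walk can be approximated in plain SQL for a single-column index; the sketch below assumes a table T indexed on column COL, and it ignores the datafile number (which Oracle also compares), so treat the result as an estimate:
select count(*) approx_clustering_factor
from (
select dbms_rowid.rowid_block_number(rowid) blk,
lag(dbms_rowid.rowid_block_number(rowid)) over (order by col, rowid) prev_blk
from t
where col is not null -- NULL keys are not stored in a single-column index
)
where prev_blk is null or blk <> prev_blk;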
Labels:
performance tuning
CTAS create table as select
An index hint turned out to be the best solution
----------------------------------CTAS with ORDER BY
create table transactions14 as select * from transactions;
50 SEC
create table transactions15 as select * from transactions ORDER BY FIRMNUMBER,TRANSACTION,SUBTRANS;
90 SEC
--------------------------------------------Parallel CTAS
create table transactions16 parallel (degree 2) as select * from transactions ORDER BY FIRMNUMBER,TRANSACTION,SUBTRANS;
120 SEC
create table transactions17 parallel (degree 2) as select * from transactions;
40 SEC
create table transactions18 parallel (degree 4) as select * from transactions;
50 SEC
create table transactions20 parallel (degree 8) as select * from transactions;
55 SEC
------------------------------------CTAS using INDEX hint---
SELECT * FROM dba_ind_columns WHERE table_name='TRANSACTIONS';
create table transactions22 as select /*+ index(FIRMNUMBER,TRANSACTION,SUBTRANS) */ * from transactions;
8 sec
create table transactions23 as select /*+ index(FIRMNUMBER) */ * from transactions;
8 sec
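As an aside, the hint text used in these tests does not match the documented INDEX hint form, which takes a table (or alias) followed by an index name; Oracle silently ignores hints it cannot parse. A sketch of the documented form, using the PK_SAUDAPRIMARY index named later in this post (the target table name just follows the numbering above):
create table transactions25 as
select /*+ index(t PK_SAUDAPRIMARY) */ *
from transactions t;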
----------------------CTAS WITH PRIMARY KEY
create table transactions24 as select /*+ index(FIRMNUMBER,TRANSACTION,SUBTRANS) */ * from transactions;
ALTER TABLE transactions24 ADD constraint pk_SAUDA23 PRIMARY KEY(FIRMNUMBER,TRANSACTION,SUBTRANS);
-----------------------------------------------------------------------
create table transactions22 as select /*+ index(FIRMNUMBER,TRANSACTION,SUBTRANS) */ * from transactions where 1=2;
insert into transactions22 (select * from transactions);
30 sec
insert into transactions22 (select /*+ index(FIRMNUMBER,TRANSACTION,SUBTRANS) */ * from transactions);
30sec
insert /*+ parallel(transactions22,2) */ into transactions22 (select /*+ index(FIRMNUMBER,TRANSACTION,SUBTRANS) */ * from transactions);
60sec
-----------------------------------------------------------------------
create table transactions22 as select /*+ index(FIRMNUMBER,TRANSACTION,SUBTRANS) */ * from transactions where 1=2;
CREATE UNIQUE INDEX "LDBO"."PK_SAUDA1" ON "LDBO"."TRANSACTIONS22" ("FIRMNUMBER", "TRANSACTION", "SUBTRANS")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 1610612736 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "INDX";
analyze table transactions22 compute STATISTICS;
analyze index PK_SAUDA1 compute STATISTICS;
---------------------------------------------no append
insert into dest select * from source1;
189SEC
---------------------------------------------append
insert /*+ append */ into dest select * from source1;
----------------------------------------CTAS, no parallel--------------
insert /*+ append */ into dest select * from source1;
create table dest as select * from source1;
----------------------------------------CTAS, parallel--------------
alter session force parallel ddl parallel 3;
alter session force parallel query parallel 3;
create table transactions22 as select * from transactions;
40SEC
----------------------------------------CTAS, parallel WITH INDEX--------------
alter session force parallel ddl parallel 3;
alter session force parallel query parallel 3;
create table transactions22 as select /*+ index(FIRMNUMBER,TRANSACTION,SUBTRANS) */ * from transactions;
----------------------------------GOOD
CTAS INDEX > CTAS PARALLEL DDL > APPEND
---------------------------------------------------------------------------------------------------
create table transactions22 as select /*+ index(FIRMNUMBER,TRANSACTION,SUBTRANS) */ * from ldbo.transactions@cmldlink where 1=2;
insert into transactions22 (select * from ldbo.transactions@cmldlink);
20 min
---------------------------------------------------
create table transactions22 as select /*+ index(FIRMNUMBER,TRANSACTION,SUBTRANS) */ * from ldbo.transactions@cmldlink where 1=2;
insert into transactions22 (select /*+ index(FIRMNUMBER,TRANSACTION,SUBTRANS) */ * from ldbo.transactions@cmldlink);
2 min 10 sec
-----------------------------------
create table transactions22 as select /*+ index(FIRMNUMBER,TRANSACTION,SUBTRANS) */ * from ldbo.transactions@cmldlink where 1=2;
insert /*+ parallel(transactions22) */ into transactions22 (select /*+ index(FIRMNUMBER,TRANSACTION,SUBTRANS) */ * from ldbo.transactions@cmldlink);
2 min 10 sec
-------------------------------------
create table transactions23 as select /*+ index(FIRMNUMBER,TRANSACTION,SUBTRANS) */ * from ldbo.transactions@cmldlink;
60 sec
-------------------------------------------------------
create table transactions23 as select /*+ index(TRANSACTIONS PK_SAUDAPRIMARY) */ * from ldbo.transactions@cmldlink;
60 SEC
----------------------
create table transactions23 as select /*+ index(TRANSACTIONS PK_SAUDAPRIMARY,IDXCLIENTSAUDA,IDXCLIENTBRSAUDA) */ * from ldbo.transactions@cmldlink;
10 MIN
--------------------------------------------------------
create table transactions24 as select /*+ index(FIRMNUMBER,TRANSACTION,SUBTRANS) */ * from ldbo.transactions@cmldlink where 1=2;
insert /*+ append */ into transactions24 select * from transactions23;
40 sec
--------------------------------------------------
alter session force parallel ddl parallel 4;
alter session force parallel query parallel 4;
create table transactions22 as select /*+ index(FIRMNUMBER,TRANSACTION,SUBTRANS) */ * from ldbo.transactions@cmldlink where 1=2;
insert into transactions22 (select /*+ index(FIRMNUMBER,TRANSACTION,SUBTRANS) */ * from ldbo.transactions@cmldlink);
2min
---------------------------------
alter session force parallel ddl parallel 2;
alter session force parallel query parallel 2;
create table transactions22 as select /*+ index(FIRMNUMBER,TRANSACTION,SUBTRANS) */ * from ldbo.transactions@cmldlink where 1=2;
insert into transactions22 (select /*+ index(FIRMNUMBER,TRANSACTION,SUBTRANS) */ * from ldbo.transactions@cmldlink);
2.5 min
Labels:
performance tuning
Get DDL
GET_DEPENDENT_DDL(object_type, base_object_name, base_object_schema, version, model, transform, object_count)
GET_GRANTED_DDL(object_type, grantee, version, model, transform, object_count)
----------------------------------------------------------------------------------------------------
select DBMS_METADATA.GET_DDL('TABLE','ACCOUNTS')||'/' from dual;
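The generated DDL is easier to replay if storage clauses are stripped and a SQL terminator is appended; a sketch using session-level transform parameters:
begin
dbms_metadata.set_transform_param(dbms_metadata.session_transform,'STORAGE',false);
dbms_metadata.set_transform_param(dbms_metadata.session_transform,'SQLTERMINATOR',true);
end;
/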
----------GET_DEPENDENT_DDL
select DBMS_METADATA.GET_DEPENDENT_DDL('INDEX','ACCOUNTS') aa from dual;
select DBMS_METADATA.GET_DEPENDENT_DDL('TRIGGER','ACCOUNTS') aa from dual;
select DBMS_METADATA.GET_DEPENDENT_DDL('OBJECT_GRANT','ACCOUNTS') aa from dual;
SELECT DBMS_METADATA.GET_DEPENDENT_DDL('CONSTRAINT','ACCOUNTS') from dual;
SELECT DBMS_METADATA.GET_DEPENDENT_DDL('REF_CONSTRAINT','ACCOUNTS') from dual;
--------------------------------
select DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT','<schema>') from dual;
select DBMS_METADATA.GET_GRANTED_DDL('ROLE_GRANT','<schema>') from dual;
select DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT','<schema>') from dual;
select DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT','KSH') aa from dual;
-----------------------------------------------------------------------------------------------------------------------------
SET LONG 1000000
select dbms_metadata.get_ddl( 'USER', 'LDBO' ) from dual
UNION ALL
select dbms_metadata.get_granted_ddl( 'SYSTEM_GRANT', 'LDBO' ) from dual
UNION ALL
select dbms_metadata.get_granted_ddl( 'OBJECT_GRANT', 'LDBO' ) from dual
UNION ALL
select dbms_metadata.get_granted_ddl( 'ROLE_GRANT', 'LDBO' ) from dual
UNION ALL
select dbms_metadata.get_granted_ddl( 'TABLESPACE_QUOTA', 'LDBO' ) from dual;
-----------------------------------------------------------------------------------------------------------------------------
CREATE TABLE my_ddl (owner VARCHAR2(30),
table_name VARCHAR2(30),
ddl CLOB);
INSERT INTO my_ddl (owner, table_name, ddl)
SELECT owner, table_name,
DBMS_METADATA.GET_DDL('TABLE', table_name, owner) ddl
FROM DBA_TABLES WHERE OWNER = 'LDBO';
Table Actual Size
SELECT
owner, table_name, TRUNC(sum(bytes)/1024/1024) Meg
FROM
(SELECT segment_name table_name, owner, bytes
FROM dba_segments
WHERE segment_type = 'TABLE'
UNION ALL
SELECT i.table_name, i.owner, s.bytes
FROM dba_indexes i, dba_segments s
WHERE s.segment_name = i.index_name
AND s.owner = i.owner
AND s.segment_type = 'INDEX'
UNION ALL
SELECT l.table_name, l.owner, s.bytes
FROM dba_lobs l, dba_segments s
WHERE s.segment_name = l.segment_name
AND s.owner = l.owner
AND s.segment_type = 'LOBSEGMENT'
UNION ALL
SELECT l.table_name, l.owner, s.bytes
FROM dba_lobs l, dba_segments s
WHERE s.segment_name = l.index_name
AND s.owner = l.owner
AND s.segment_type = 'LOBINDEX')
WHERE owner ='LDBO'
GROUP BY table_name, owner
HAVING SUM(bytes)/1024/1024 > 10 /* Ignore really small tables */
ORDER BY SUM(bytes) desc
;
Labels:
capacity planning,
oracle scripts
Schedule Job for Exe file
BEGIN
dbms_scheduler.create_job(
job_name => 'del_archive',
job_type => 'EXECUTABLE',
job_action => 'd:\ld\oracle\del.bat',
start_date => '14-MAR-12 4:52.00.00 PM ASIA/CALCUTTA',
repeat_interval => 'freq=DAILY',
enabled => TRUE,
comments => 'delete old archivelogs');
END;
/
exec DBMS_SCHEDULER.run_job ('del_archive');
BEGIN
DBMS_SCHEDULER.drop_JOB (job_name => 'del_archive');
END;
/
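The outcome of each run can be checked from the scheduler's run history:
select job_name, status, actual_start_date, additional_info
from dba_scheduler_job_run_details
where job_name = 'DEL_ARCHIVE'
order by actual_start_date desc;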
Role Recreation
set heading off verify off feedback off echo off term off linesize 200 wrap on
spool c:\temp\roles_creation.sql
SELECT 'Create Role '|| ROLE ||' ;' from dba_roles;
SELECT 'Grant '|| PRIVILEGE || ' to ' || GRANTEE || ';' FROM DBA_SYS_PRIVS where grantee not in ('SYS','SYSTEM','SYSMAN','TSMSYS','WMSYS','RECOVERY_CATALOG_OWNER','RESOURCE','OUTLN','ORACLE_OCM','OEM_MONITOR','OEM_ADVISOR','MGMT_USER','IMP_FULL_DATABASE','EXP_FULL_DATABASE','DBA','CONNECT','AQ_ADMINISTRATOR_ROLE','DBSNMP','SCHEDULER_ADMIN');
SELECT 'Grant '|| PRIVILEGE ||' on '|| TABLE_NAME || ' to ' || GRANTEE || ';' from dba_tab_privs Where Grantor='LDBO';
SELECT 'Grant update('|| COLUMN_NAME ||') on '|| TABLE_NAME || ' to ' || GRANTEE || ';' from dba_col_privs Where Grantor='LDBO';
spool off
Labels:
oracle scripts,
user management
Shrink Datafile Suggestion
select bytes/1024/1024 real_size,ceil( (nvl(hwm,1)*16384)/1024/1024 ) shrinked_size,
bytes/1024/1024-ceil( (nvl(hwm,1)*16384)/1024/1024 ) released_size
,'alter database datafile '|| ''''||file_name||'''' || ' resize ' || ceil( (nvl(hwm,1)*16384)/1024/1024 ) || ' m;' cmd
from
dba_data_files a,
( select file_id, max(block_id+blocks-1) hwm from dba_extents group by file_id ) b
where
tablespace_name='INDX'
and
a.file_id = b.file_id(+)
and ceil(blocks*16384/1024/1024)- ceil((nvl(hwm,1)* 16384)/1024/1024 ) > 0;
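The script above hard-codes a 16K block size (the 16384 literal). A variant that reads the block size from the dictionary instead (a sketch, same INDX tablespace):
select 'alter database datafile '|| ''''||a.file_name||'''' || ' resize ' || ceil( (nvl(b.hwm,1)*c.block_size)/1024/1024 ) || ' m;' cmd
from
dba_data_files a,
dba_tablespaces c,
( select file_id, max(block_id+blocks-1) hwm from dba_extents group by file_id ) b
where
a.tablespace_name = c.tablespace_name
and a.tablespace_name = 'INDX'
and a.file_id = b.file_id(+);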
Wednesday, March 14, 2012
ORA-03297 file contains used data beyond requested RESIZE value
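The query below prompts for the &_BLOCK_SIZE substitution variable; in SQL*Plus it can be pre-populated from the dictionary first (a sketch for the INDX tablespace):
column bs new_value _BLOCK_SIZE noprint
select block_size bs from dba_tablespaces where tablespace_name = 'INDX';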
select
a.file_name,
a.bytes file_size_in_bytes,
(c.block_id+(c.blocks-1)) * &_BLOCK_SIZE HWM_BYTES,
a.bytes - ((c.block_id+(c.blocks-1)) * &_BLOCK_SIZE) SAVING
from dba_data_files a,
(select file_id,max(block_id) maximum
from dba_extents
group by file_id) b,
dba_extents c
where a.file_id = b.file_id
and c.file_id = b.file_id
and c.block_id = b.maximum
and c.tablespace_name = 'INDX';
ALTER DATABASE DATAFILE 'D:\lard1213\INDEX01.ORA' RESIZE 20000M;
Friday, February 17, 2012
ORA-00997: illegal use of LONG datatype (Migration Data LOB column)
SQL> CREATE GLOBAL TEMPORARY TABLE TMP_scan ON COMMIT PRESERVE ROWS as select FIRMNUMBER,CODE,PSCAN
NEDIMAGE,NFINANCIALYEAR from ldbo.CLIENTSCANNEDIMAGE@cmldlink;
CREATE GLOBAL TEMPORARY TABLE TMP_scan ON COMMIT PRESERVE ROWS as select FIRMNUMBER,CODE,PSCANNEDIMA
*
ERROR at line 1:
ORA-00997: illegal use of LONG datatype
SQL> create table CLIENTSCANNEDIMAGE as SELECT /*+ index(FIRMNUMBER,NFINANCIALYEAR,CODE) */ * from
ldbo.CLIENTSCANNEDIMAGE@cmldlink where 1=0;
create table CLIENTSCANNEDIMAGE as SELECT /*+ index(FIRMNUMBER,NFINANCIALYEAR,CODE) */ * from ldbo.
*
ERROR at line 1:
ORA-00997: illegal use of LONG datatype
SQL>
SQL> INSERT INTO CLIENTSCANNEDIMAGE SELECT /*+ index(FIRMNUMBER,NFINANCIALYEAR,CODE) */ FIRMNUMBER,
CODE,PSCANNEDIMAGE,NFINANCIALYEAR from ldbo.CLIENTSCANNEDIMAGE@cmldlink;
INSERT INTO CLIENTSCANNEDIMAGE SELECT /*+ index(FIRMNUMBER,NFINANCIALYEAR,CODE) */ FIRMNUMBER,CODE,
ERROR at line 1:
ORA-00997: illegal use of LONG datatype
SQL> DECLARE
2 CURSOR c IS
3 select FIRMNUMBER, CODE, PSCANNEDIMAGE, NFINANCIALYEAR from ldbo.CLIENTSCANNEDIMAGE@cmldlink;
4 rc c%ROWTYPE;
5 BEGIN
6 OPEN c;
7 LOOP
8 FETCH c INTO rc;
9 EXIT WHEN c%NOTFOUND;
10 INSERT INTO CLIENTSCANNEDIMAGE
11 ( FIRMNUMBER, CODE, PSCANNEDIMAGE, NFINANCIALYEAR )
12 VALUES ( rc.FIRMNUMBER, rc.CODE, rc.PSCANNEDIMAGE, rc.NFINANCIALYEAR );
13 END LOOP;
14 COMMIT;
15 END;
16 /
DECLARE
*
ERROR at line 1:
ORA-01406: fetched column value was truncated
---------------------Solution----------------------
At the SQL*Plus prompt, use the COPY command; the hyphen (-) is the SQL*Plus line-continuation character, so press Enter after it:
copy from ldbo/ldbo@nbs1112srv -
create CLIENTSCANNEDIMAGE2 using select * from CLIENTSCANNEDIMAGE;
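Note that COPY truncates LONG data at the current SET LONG value, so raise it before copying; ARRAYSIZE and COPYCOMMIT control the fetch/commit batching (the values below are only illustrative):
set long 2000000000
set arraysize 100
set copycommit 10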
--------------------or------------------
CREATE TABLE "DPCDSL"."CLIENTSCANNEDIMAGE1"
(
"FIRMNUMBER" CHAR(10 BYTE) NOT NULL ENABLE,
"CODE" CHAR(10 BYTE) NOT NULL ENABLE,
"PSCANNEDIMAGE" LONG RAW,
"NFINANCIALYEAR" NUMBER(4,0) NOT NULL ENABLE
)
PCTFREE 15 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE
(
INITIAL 10485760 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT
)
TABLESPACE "USR" ;
copy from ldbo/ldbo@nbs1112srv -
insert CLIENTSCANNEDIMAGE1 using select * from CLIENTSCANNEDIMAGE;
----------
Thursday, February 16, 2012
Job schedule compile invalid objects
REQUIRES SYS PRIVILEGES TO EXECUTE UTL_RECOMP
BEGIN
DBMS_SCHEDULER.create_job (
job_name => 'compile_invalid',
job_type => 'PLSQL_BLOCK',
job_action => 'BEGIN UTL_RECOMP.recomp_serial(''LDBO''); END;',
start_date => '08-FEB-12 11:35.00.00 PM ASIA/CALCUTTA',
repeat_interval => 'freq=DAILY',
end_date => NULL,
enabled => TRUE,
comments => 'JOB to compile invalid objects');
END;
/
BEGIN
DBMS_SCHEDULER.drop_JOB (job_name => 'compile_invalid');
END;
/
exec DBMS_SCHEDULER.run_job ('compile_invalid');
select * from dba_scheduler_jobs;
select job_name,job_action,start_date,repeat_interval,end_date,run_count,failure_count from dba_scheduler_jobs where job_name='ANALYZE';
SELECT * FROM dba_scheduler_running_jobs;
Tuesday, February 14, 2012
ORA-00600: internal error code, arguments: [4193], [5396], [2242]
ORA-00607: Internal error occurred while making a change to a data block
ORA-00600: internal error code, arguments: [4193], [5396], [2242], [], [], [],[], []
Rename the spfile, then start up from the pfile:
shut immediate
startup
CREATE UNDO TABLESPACE UNDOTBS2 DATAFILE 'F:\NBSD1112\UNDOTBS03.ORA' SIZE 500M REUSE AUTOEXTEND ON;
ALTER SYSTEM SET UNDO_TABLESPACE=UNDOTBS2;
shut immediate
Set undo_tablespace=UNDOTBS2 in the parameter file
startup
DROP TABLESPACE UNDOTBS1 INCLUDING CONTENTS AND DATAFILES;
CREATE UNDO TABLESPACE UNDOTBS1 DATAFILE 'F:\NBSD1112\UNDOTBS01.ORA' SIZE 500M REUSE AUTOEXTEND ON;
ALTER SYSTEM SET UNDO_TABLESPACE=UNDOTBS1;
shut immediate
Set undo_tablespace=UNDOTBS1 in the parameter file
startup
DROP TABLESPACE UNDOTBS2 INCLUDING CONTENTS AND DATAFILES;
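For reference, where the steps above say to change the parameter file, the pfile entry is a single line (value per the step in progress):
# after creating UNDOTBS2
*.undo_tablespace='UNDOTBS2'
# after recreating UNDOTBS1
*.undo_tablespace='UNDOTBS1'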
Monday, February 13, 2012
Server Capacity Planning
1) Existing server configuration (processor, number of CPUs, RAM, disk capacity, ...)
2) Number of databases running on the server
3) Database folder sizes for all years
4) Number of users (concurrent connections) in the database
5) Weekly or monthly growth of the databases
6) Oracle core license??
----------------------------Memory----------------
select * from dba_high_water_mark_statistics where name in ('SESSIONS','DB_SIZE');
select * from v$resource_limit;
--------maximum amount of memory allocated by the currently connected sessions
SELECT SUM (value) "max memory allocation" FROM v$sesstat ss, v$statname st WHERE st.name = 'session uga memory max' AND ss.statistic# = st.statistic#;
------------------pga requirement------------
select
(select highwater from dba_high_water_mark_statistics where name = 'SESSIONS')*(2048576+a.value+b.value) pga_size
from
v$parameter a,
v$parameter b
where
a.name = 'sort_area_size'
and
b.name = 'hash_area_size'
;
-----------------------CPU Benchmark------------------------------
http://www.cpubenchmark.net/multi_cpu.html
-----------------------Space Management---------------
As per database growth weekly /monthly and planning for how many year
SGA PGA measuring
select * from v$sgastat order by 1;
select * from v$pgastat order by 1;
I noticed that you have some pools in your SGA which are not used:
large pool free memory 209715200
But your PGA could reach about 340 MB.
So you may decrease the large_pool_size parameter by about 160 MB (you have 200 MB free).
That will decrease the SGA by about 160 MB.
Then you may increase PGA_AGGREGATE_TARGET to 512 MB.
The most important thing is that SGA + PGA remains below 2 GB (unless you use the /3GB switch, which may get you 1 GB more).
------------------------Used SGA-----------------
select name, round(sum(mb),1) mb, round(sum(inuse),1) inuse
from (select case when name = 'buffer_cache'
then 'db_cache_size'
when name = 'log_buffer'
then 'log_buffer'
else pool
end name,
bytes/1024/1024 mb,
case when name <> 'free memory'
then bytes/1024/1024
end inuse
from v$sgastat
)group by name;
------------------------Free SGA-----------------
select name, round(sum(mb),1) mb, round(sum(inuse),1) free
from (select case when name = 'buffer_cache'
then 'db_cache_size'
when name = 'log_buffer'
then 'log_buffer'
else pool
end name,
bytes/1024/1024 mb,
case when name = 'free memory'
then bytes/1024/1024
end inuse
from v$sgastat
)group by name;
--------------------
select name,value from v$parameter where name ='sort_area_size';
---------------------------------- maximum PGA usage per process:--
select
max(pga_used_mem) max_pga_used_mem
, max(pga_alloc_mem) max_pga_alloc_mem
, max(pga_max_mem) max_pga_max_mem
from v$process
/
-----------sum of all current PGA usage per process---------
select
sum(pga_used_mem) sum_pga_used_mem
, sum(pga_alloc_mem) sum_pga_alloc_mem
, sum(pga_max_mem) sum_pga_max_mem
from v$process
/
-----------pga requirement as per high water mark
select
(select highwater from dba_high_water_mark_statistics where name = ('SESSIONS'))*(2048576+a.value+b.value)/1024/1024 pga_size_MB
from
v$parameter a,
v$parameter b
where
a.name = 'sort_area_size'
and
b.name = 'hash_area_size'
;
Labels:
capacity planning,
memory management
Thursday, February 9, 2012
Invalid Object: Why?
SELECT owner || '.' || object_name invalid_object,'--- ' || object_type || ' ---' likely_reason
FROM dba_objects WHERE status = 'INVALID' AND owner = 'LDBO'
UNION
SELECT d.owner || '.' || d.name,'Non-existent referenced db link ' || d.referenced_link_name
FROM dba_dependencies d WHERE NOT EXISTS
(
SELECT 'x'
FROM dba_db_links WHERE owner IN ('PUBLIC', d.owner)
AND db_link = d.referenced_link_name
)
AND d.referenced_link_name IS NOT NULL
AND (d.owner, d.name, d.type) IN
(
SELECT owner, object_name, object_type
FROM dba_objects WHERE status = 'INVALID'
)
AND d.owner = 'LDBO'
UNION
SELECT d.owner || '.' || d.name,'Depends on invalid ' || d.referenced_type || ' '|| d.referenced_owner || '.' || d.referenced_name
FROM dba_objects ro,dba_dependencies d
WHERE ro.status = 'INVALID' AND ro.owner = d.referenced_owner AND ro.object_name = d.referenced_name
AND ro.object_type = d.referenced_type AND d.referenced_link_name IS NULL
AND (d.owner, d.name, d.type) in
(
SELECT owner, object_name, object_type
FROM dba_objects
WHERE status = 'INVALID'
)
AND d.owner = 'LDBO'
UNION
SELECT d.owner || '.' || d.name,'Depends on newer ' || d.referenced_type || ' '|| d.referenced_owner || '.' || d.referenced_name
FROM dba_objects ro,dba_dependencies d,dba_objects o
WHERE NVL(ro.last_ddl_time, ro.created) > NVL(o.last_ddl_time, o.created)
AND ro.owner = d.referenced_owner AND ro.object_name = d.referenced_name
AND ro.object_type = d.referenced_type AND d.referenced_link_name IS NULL
AND d.owner = o.owner AND d.name = o.object_name AND d.type = o.object_type
AND o.status = 'INVALID' AND d.owner = 'LDBO'
UNION
SELECT d.owner || '.' || d.name,'Depends on ' || d.referenced_type || ' '|| d.referenced_owner || '.' || d.referenced_name
|| DECODE(d.referenced_link_name,NULL, '','@' || d.referenced_link_name)
FROM dba_dependencies d WHERE d.referenced_owner != 'PUBLIC' -- Public synonyms generate noise
AND d.referenced_type = 'NON-EXISTENT'
AND (d.owner, d.name, d.type) IN
(
SELECT owner, object_name, object_type
FROM dba_objects WHERE status = 'INVALID'
)
AND owner = 'LDBO'
UNION
SELECT d.owner || '.' || d.name invalid_object,'No privilege on referenced ' || d.referenced_type || ' '
|| d.referenced_owner || '.' || d.referenced_name
FROM dba_objects ro,dba_dependencies d
WHERE NOT EXISTS
(
SELECT 'x' FROM dba_tab_privs p WHERE p.owner = d.referenced_owner
AND p.table_name = d.referenced_name AND p.grantee IN ('PUBLIC', d.owner)
)
AND ro.status = 'VALID'
AND ro.owner = d.referenced_owner
AND ro.object_name = d.referenced_name
AND d.referenced_link_name IS NOT NULL
AND (d.owner, d.name, d.type) IN
(
SELECT owner, object_name, object_type
FROM dba_objects WHERE status = 'INVALID'
)
AND d.owner = 'LDBO'
UNION
SELECT o.owner || '.' || o.object_name, e.text
FROM dba_errors e, dba_objects o
WHERE e.text LIKE 'PLS-%' AND e.owner = o.owner AND e.name = o.object_name
AND e.type = o.object_type AND o.status = 'INVALID' AND o.owner = 'LDBO'
/
Wednesday, February 8, 2012
Analyze scheduling using Oracle
---------------------------------------------------------------------frequency 1 day-----------------------------------------
BEGIN
DBMS_SCHEDULER.create_job (
job_name => 'analyze',
job_type => 'PLSQL_BLOCK',
job_action => 'BEGIN DBMS_STATS.gather_schema_stats(''LDBO'',CASCADE=>TRUE); END;',
start_date => '01-APR-12 11.00.00 PM ASIA/CALCUTTA',
repeat_interval => 'freq=DAILY',
end_date => '02-APR-13 11.00.00 PM ASIA/CALCUTTA',
enabled => TRUE,
comments => 'JOB to gather LDBO statistics');
END;
/
----------------- frequency 2 hours---------------------------------------
BEGIN
DBMS_SCHEDULER.create_job (
job_name => 'analyze1',
job_type => 'PLSQL_BLOCK',
job_action => 'BEGIN DBMS_STATS.gather_schema_stats(''LDBO'',CASCADE=>TRUE); END;',
start_date => '16-FEB-12 06.00.00 PM ASIA/CALCUTTA',
repeat_interval=> 'FREQ=HOURLY;INTERVAL=2',
end_date => '02-APR-13 11.00.00 PM ASIA/CALCUTTA',
enabled => TRUE,
comments => 'JOB to gather LDBO statistics every 2 hours');
END;
/
------------------------------------------frequency syntax
FREQ=[YEARLY | MONTHLY | WEEKLY | DAILY | HOURLY | MINUTELY | SECONDLY] ;
-------------------To run a job every Tuesday at 11:25
FREQ=DAILY; BYDAY=TUE; BYHOUR=11; BYMINUTE=25;
FREQ=WEEKLY; BYDAY=TUE; BYHOUR=11; BYMINUTE=25;
FREQ=YEARLY; BYDAY=TUE; BYHOUR=11; BYMINUTE=25;
------------------ To run a job Tuesday and Thursday at 11, 14 and 22 o'clock
FREQ=WEEKLY; BYDAY=TUE,THU; BYHOUR=11,14,22;
Labels:
job schedule,
performance tuning
EXPDP Data Pump Job Scheduling with dump rename and removal of old files
1) create directory export_auto as 'd:\expdp1213';
create user dba_export_user identified by test123;
grant connect, create database link, resource, create view to dba_export_user;
grant unlimited tablespace to dba_export_user;
grant exp_full_database to dba_export_user;
grant read,write on directory export_auto to dba_export_user;
grant execute on dbms_flashback to dba_export_user;
grant create table to dba_export_user;
grant FLASHBACK ANY TABLE to dba_export_user;
2)
CREATE OR REPLACE PROCEDURE dba_export_user.start_export
IS
hdl_job NUMBER;
l_cur_scn NUMBER;
l_job_state VARCHAR2 (20);
l_status SYS.ku$_status1010;
l_job_status SYS.ku$_jobstatus1010;
BEGIN
begin
execute immediate 'drop table dba_export_user.AUTO_EXPORT';
exception when others then null;
end;
hdl_job := DBMS_DATAPUMP.OPEN ( operation => 'EXPORT', job_mode => 'FULL', job_name => 'AUTO_EXPORT' );
DBMS_DATAPUMP.add_file (handle => hdl_job,filename => 'exp1213.dmp',directory => 'EXPORT_AUTO',filetype => DBMS_DATAPUMP.ku$_file_type_dump_file);
DBMS_DATAPUMP.add_file (handle => hdl_job,filename => 'export.log',DIRECTORY => 'EXPORT_AUTO',filetype => DBMS_DATAPUMP.ku$_file_type_log_file);
DBMS_DATAPUMP.start_job (handle => hdl_job);
DBMS_DATAPUMP.wait_for_job (handle => hdl_job, job_state => l_job_state);
DBMS_OUTPUT.put_line ('Job exited with status:' || l_job_state);
DBMS_DATAPUMP.detach(handle => hdl_job);
----------------------RENAME BACKUP WITH DATE
begin
UTL_FILE.FRENAME ('EXPORT_AUTO','exp1213.DMP','EXPORT_AUTO','exp1213'||'_'||TO_CHAR(SYSDATE,'DDMMYYYY')||'.DMP');
end;
begin
UTL_FILE.FRENAME ('EXPORT_AUTO','export.log','EXPORT_AUTO','export'||'_'||TO_CHAR(SYSDATE,'DDMMYYYY')||'.LOG');
end;
------------DELETE TWO DAYS BEFORE BACKUP
begin
UTL_FILE.FREMOVE ('EXPORT_AUTO','exp1213'||'_'||TO_CHAR(SYSDATE-2,'DDMMYYYY')||'.DMP');
end;
begin
UTL_FILE.FREMOVE ('EXPORT_AUTO','export'||'_'||TO_CHAR(SYSDATE-2,'DDMMYYYY')||'.log');
end;
END;
/
3) Change the time and date as required
begin
dbms_scheduler.create_job(
job_name => 'EXPORT_JOB'
,job_type => 'STORED_PROCEDURE'
,job_action => 'dba_export_user.start_export'
,start_date => '08-FEB-12 06.02.00.00 PM ASIA/CALCUTTA'
,repeat_interval => 'FREQ=DAILY; BYDAY=MON,TUE,WED,THU,FRI,SAT,SUN;'
,enabled => TRUE
,comments => 'EXPORT_DATABASE_JOB');
end;
/
Note: the dump file is renamed with the date on a daily basis before the next scheduled run.
Manually execute the backup job:
EXEC dba_export_user.start_export;
Check running job status:
select * from dba_datapump_jobs;
Drop the scheduler job (by its job name, not the procedure name):
EXEC dbms_scheduler.drop_job('EXPORT_JOB');
Labels:
data pump,
expdp,
job schedule
Monday, February 6, 2012
ORACLE AUDIT FOR ALTER COMMAND
CREATE TABLE DBA_AUDIT_TAB_KSH (USERNAME VARCHAR2(10), SQL_TEXT VARCHAR2(2000),TIMESTAMP DATE);
CREATE OR REPLACE TRIGGER DBA_AUDIT_KSH
BEFORE ALTER ON SCHEMA
DECLARE
sql_text ora_name_list_t;
stmt VARCHAR2(2000);
n integer;
dt date;
BEGIN
IF (ora_dict_obj_type IN ( 'TABLE') )
then
n:= ora_sql_txt(sql_text);
FOR i IN 1..n LOOP
stmt := stmt || sql_text(i);
END LOOP;
dt := SYSDATE;
INSERT INTO DBA_AUDIT_TAB_KSH (username,sql_text,timestamp) VALUES (user,stmt,dt);
END IF;
END DBA_AUDIT_KSH;
/
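A quick test of the trigger (TESTTAB is a hypothetical table in the schema):
alter table testtab add (dummycol number);
select username, sql_text, timestamp from dba_audit_tab_ksh order by timestamp desc;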
Saturday, February 4, 2012
Performance Tuning Basic Guidelines
** Redo Log files – ensure that redo log are allocated on the fast disk, with minimum activities.
** Temporary tablespaces – ensure that temporary tablespaces are allocated on the fast disk, with minimum activities.
** Fragmentation of tablespaces – defragment tablespaces; use equal sizes for INITIAL and NEXT extents.
** Shared Pool Sizing – 1/3 or more of total physical memory, and check for thrashing/paging/swapping activity.
** DB_BLOCK_BUFFERS – to enable buffering of data from datafiles during query and update/insert operations.
** Use BIND variables – to minimize parsing of SQL and enable SQL area reuse, and standardize bind-variable naming conventions (see the sketch after this list).
** Identical SQL statements – literally identical – to enable SQL area reuse.
** Initial/Next extent sizing – ensure INITIAL and NEXT are the same. They should be as small as possible to avoid wasted space, but at the same time large enough to minimize time spent in frequent allocation.
** PCTINCREASE – zero to ensure minimum fragmentation.
** Small PCTUSED and large PCTFREE – to ensure sufficient spaces for INSERT intensive operation.
** Freelist groups – large values to ensure parallelization of INSERT-intensive operation.
** INITRANS and MAXTRANS – large values to enable large number of concurrent transactions to access tables.
** Readonly tablespaces – to minimize latches/enqueues resources, as well as PINGING in OPS.
** Create indexes for frequently accessed columns – especially for range scanning and equality conditions in “where” clause.
** Use hash indexes if equality conditions is used, and no range scanning involved.
** If joining of tables is used frequently, consider Composite Indexes.
** Use Clustered tables – columns allocated together.
** Create Index-Organized Tables when data is mostly readonly – to localize both the data and indexes together.
** Use PARALLEL hints to make sure Oracle parallel query is used.
** IO slaves – to enable multiple DB writers to write to disks.
** Minextents and Maxextents sizing – ensure as large as possible to enable preallocation.
** Avoid RAID5 – IO intensive (redo log, archivelog, temporary tablespace, RBS etc)
** MTS mode – to optimize OLTP transaction, but not BATCH environment.
** Partition Elimination – to enable unused tablespaces partition to be archived.
** Performance suffers seriously when bitmap indexes are used on tables with heavy DML. You might have to drop and recreate the bitmap indexes.
** Increase LOG_SIMULTANEOUS_COPIES – minimize contention for redo copy latches.
** In SQL*Loader – use direct path over conventional path loading.
** Using parallel INSERT... SELECT when inserting data that already exists in another table in the database – faster than parallel direct loader using SQLLoader.
** Create table/index using UNRECOVERABLE option to minimize REDO LOG updating. SQLloading can use unrecoverable features, or ARCHIVELOG disabled.
** Alter index REBUILD parallel 2 – to enable 2 parallel processes to index concurrently.
** Use large redo log files to minimize log switching frequency.
** Loading is faster when using SQLLOADING if data source is presorted in a file.
** Drop the indexes, and disable all the constraints, when using SQLloader. Recreate the indexes after SQLloader has completed.
** Use Star Query for Data Warehousing-like application: /*+ ORDERED USE_NL(facts) INDEX(facts fact_concat) */ or /*+ STAR */.
** Using Parallel DDL statements in:
** CREATE INDEX
** CREATE TABLE ... AS SELECT
** ALTER INDEX ... REBUILD
** The parallel DDL statements for partitioned tables and indexes are:
** CREATE TABLE ... AS SELECT
** CREATE INDEX
** ALTER TABLE ... MOVE PARTITION
** ALTER TABLE ... SPLIT PARTITION
** ALTER INDEX ... REBUILD PARTITION
** ALTER INDEX ... SPLIT PARTITION
** Parallel analyze on partitioned table - ANALYZE {TABLE,INDEX} PARTITION.
** Using Asynchronous Replication instead of Synchrnous replication.
** Create snapshot log to enable fast-refreshing.
** In Replication, use parallel propagation to setup multiple data streams.
** Using ALTER SESSION ... SET HASH_JOIN_ENABLED = TRUE.
** Using ALTER SESSION ENABLE PARALLEL DML.
** Use ANALYZE TABLE ... ESTIMATE STATISTICS for large tables, and COMPUTE STATISTICS for small tables, to create statistics that enable the Cost-Based Optimizer to make more accurate decisions on the optimization technique for the query.
** To reduce contention on the rollback segments, at most 2 parallel process transactions should reside in the same rollback segment.
** To speed up transaction recovery, the initialization parameter CLEANUP_ROLLBACK_ENTRIES should be set to a high value, approximately equal to the number of rollback entries generated for the forward-going operation.
** Using raw devices/partition instead of file system partition.
** Consider increasing the various sort related parameters:
** sort_area_size
** sort_area_retained_size
** sort_direct_writes
** sort_write_buffers
** sort_write_buffer_size
** sort_spacemap_size
** sort_read_fac
** Tune the database buffer cache parameter BUFFER_POOL_KEEP and BUFFER_POOL_RECYCLE to keep the buffer cache after use, or age out the data blocks to recycle the buffer cache for other data.
** Larger values of LOG_BUFFER reduce log file I/O, particularly if transactions are long or numerous. The default setting is four times the maximum data block size for the host operating system.
** DB_BLOCK_SIZE should be multiple of OS block size.
** SHARED_POOL_SIZE –The size in bytes of the area devoted to shared SQL and PL/SQL statements.
** The LOCK_SGA and LOCK_SGA_AREAS parameters lock the entire SGA or particular SGA areas into physical memory.
** You can force Oracle to load the entire SGA into main memory by setting PRE_PAGE_SGA=TRUE in the init.ora file. This slows your startup process slightly, but eliminates cache misses on the library and data dictionary during normal runs.
** Enable DB_BLOCK_CHECKSUM if automatic checksum on datablocks is needed, performance will be degraded slightly.
** Use EXPLAIN PLAN to understand how Oracle process the query – utlxplan.sql.
** Choose between the FIRST_ROWS or ALL_ROWS hint in an individual SQL statement to determine the best response time required for returning data.
** Use bitmap indexes for low cardinality data.
** Use full-table scan when the data selected ranged over a large percentage of the tables.
** Use DB_FILE_MULTIBLOCK_READ_COUNT – to enable full table scans by a single multiblock read. Increase this value if full table scan is desired.
** Check if row migration or row chaining has occurred - running utlchain.sql.
** Choose between offline backup or online backup plan.
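To illustrate the bind-variable guideline above, a minimal dynamic-SQL sketch (the table and value echo earlier posts and are only illustrative):
declare
v_cnt number;
begin
-- the value is passed as a bind, so the parsed cursor is shared across values
execute immediate 'select count(*) from transactions where firmnumber = :1'
into v_cnt using 'ABC01';
dbms_output.put_line('rows: '||v_cnt);
end;
/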
Labels:
performance tuning