Friday, February 17, 2012

ORA-00997: illegal use of LONG datatype (Migration Data LOB column)

SQL> CREATE GLOBAL TEMPORARY TABLE TMP_scan ON COMMIT PRESERVE ROWS as select FIRMNUMBER,CODE,PSCANNEDIMAGE,NFINANCIALYEAR from ldbo.CLIENTSCANNEDIMAGE@cmldlink;
CREATE GLOBAL TEMPORARY TABLE TMP_scan ON COMMIT PRESERVE ROWS as select FIRMNUMBER,CODE,PSCANNEDIMA
*
ERROR at line 1:
ORA-00997: illegal use of LONG datatype


SQL> create table CLIENTSCANNEDIMAGE as SELECT /*+ index(FIRMNUMBER,NFINANCIALYEAR,CODE) */ * from ldbo.CLIENTSCANNEDIMAGE@cmldlink where 1=0;
create table CLIENTSCANNEDIMAGE as SELECT /*+ index(FIRMNUMBER,NFINANCIALYEAR,CODE) */ * from ldbo.
*
ERROR at line 1:
ORA-00997: illegal use of LONG datatype


SQL>
SQL> INSERT INTO CLIENTSCANNEDIMAGE SELECT /*+ index(FIRMNUMBER,NFINANCIALYEAR,CODE) */ FIRMNUMBER,CODE,PSCANNEDIMAGE,NFINANCIALYEAR from ldbo.CLIENTSCANNEDIMAGE@cmldlink;
INSERT INTO CLIENTSCANNEDIMAGE SELECT /*+ index(FIRMNUMBER,NFINANCIALYEAR,CODE) */ FIRMNUMBER,CODE,
ERROR at line 1:
ORA-00997: illegal use of LONG datatype

SQL> DECLARE
2 CURSOR c IS
3 select FIRMNUMBER, CODE, PSCANNEDIMAGE, NFINANCIALYEAR from ldbo.CLIENTSCANNEDIMAGE@cmldlink;
4 rc c%ROWTYPE;
5 BEGIN
6 OPEN c;
7 LOOP
8 FETCH c INTO rc;
9 EXIT WHEN c%NOTFOUND;
10 INSERT INTO CLIENTSCANNEDIMAGE
11 ( FIRMNUMBER, CODE, PSCANNEDIMAGE, NFINANCIALYEAR )
12 VALUES ( rc.FIRMNUMBER, rc.CODE, rc.PSCANNEDIMAGE, rc.NFINANCIALYEAR );
13 END LOOP;
14 COMMIT;
15 END;
16 /
DECLARE
*
ERROR at line 1:
ORA-01406: fetched column value was truncated


---------------------Solution----------------------
At the SQL*Plus prompt, use the COPY command. Remember that the hyphen (-) is the SQL*Plus line-continuation character, so press Enter after it:

copy from ldbo/ldbo@nbs1112srv -
create CLIENTSCANNEDIMAGE2 using select * from CLIENTSCANNEDIMAGE;
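One caveat worth noting: COPY silently truncates LONG values at the current SET LONG limit, so raise it before running the copy. A minimal sketch (the limit and array size here are assumptions; size SET LONG to at least your largest scanned image):

```sql
-- Raise the LONG fetch limit so COPY does not truncate PSCANNEDIMAGE,
-- commit per batch, and fetch in batches of 100 rows (assumed values)
SET LONG 2000000000
SET COPYCOMMIT 1
SET ARRAYSIZE 100

copy from ldbo/ldbo@nbs1112srv -
create CLIENTSCANNEDIMAGE2 using select * from CLIENTSCANNEDIMAGE;
```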

--------------------or------------------
CREATE TABLE "DPCDSL"."CLIENTSCANNEDIMAGE1"
(
"FIRMNUMBER" CHAR(10 BYTE) NOT NULL ENABLE,
"CODE" CHAR(10 BYTE) NOT NULL ENABLE,
"PSCANNEDIMAGE" LONG RAW,
"NFINANCIALYEAR" NUMBER(4,0) NOT NULL ENABLE
)
PCTFREE 15 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE
(
INITIAL 10485760 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT
)
TABLESPACE "USR" ;


copy from ldbo/ldbo@nbs1112srv -
insert CLIENTSCANNEDIMAGE1 using select * from CLIENTSCANNEDIMAGE;


----------



Thursday, February 16, 2012

Job schedule compile invalid objects

Note: executing UTL_RECOMP requires SYS privileges.

BEGIN
DBMS_SCHEDULER.create_job (
job_name => 'compile_invalid',
job_type => 'PLSQL_BLOCK',
job_action => 'BEGIN UTL_RECOMP.recomp_serial(''LDBO''); END;',
start_date => '08-FEB-12 11.35.00 PM ASIA/CALCUTTA',
repeat_interval => 'freq=DAILY',
end_date => NULL,
enabled => TRUE,
comments => 'JOB to compile invalid objects');
END;
/



BEGIN
DBMS_SCHEDULER.drop_JOB (job_name => 'compile_invalid');
END;
/

exec DBMS_SCHEDULER.run_job ('compile_invalid');


select * from dba_scheduler_jobs;
select job_name,job_action,start_date,repeat_interval,end_date,run_count,failure_count from dba_scheduler_jobs where job_name='ANALYZE';

SELECT * FROM dba_scheduler_running_jobs;
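To see whether past runs of the job succeeded or failed, the run-detail view is more useful than the job list; a small sketch (assumes the job has executed at least once):

```sql
-- Run history for the recompile job, most recent first
select job_name, status, actual_start_date, run_duration, error#
from dba_scheduler_job_run_details
where job_name = 'COMPILE_INVALID'
order by actual_start_date desc;
```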

Tuesday, February 14, 2012

ORA-00600: internal error code, arguments: [4193], [5396], [2242]

ORA-00607: Internal error occurred while making a change to a data block
ORA-00600: internal error code, arguments: [4193], [5396], [2242], [], [], [],[], []

Rename the spfile and start the instance from a pfile, then recreate the undo tablespace:

shut immediate
startup

CREATE UNDO TABLESPACE UNDOTBS2 DATAFILE 'F:\NBSD1112\UNDOTBS03.ORA' SIZE 500M REUSE AUTOEXTEND ON;

ALTER SYSTEM SET UNDO_TABLESPACE=UNDOTBS2;

shut immediate

Set undo_tablespace=UNDOTBS2 in the parameter file

startup

DROP TABLESPACE UNDOTBS1 INCLUDING CONTENTS AND DATAFILES;

CREATE UNDO TABLESPACE UNDOTBS1 DATAFILE 'F:\NBSD1112\UNDOTBS01.ORA' SIZE 500M REUSE AUTOEXTEND ON;

ALTER SYSTEM SET UNDO_TABLESPACE=UNDOTBS1;

shut immediate

Set undo_tablespace=UNDOTBS1 in the parameter file

startup

DROP TABLESPACE UNDOTBS2 INCLUDING CONTENTS AND DATAFILES;
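After the swap it is worth confirming that only the rebuilt undo tablespace remains and that the instance is using it (a simple check, not from the original steps):

```sql
-- Verify the undo tablespace swap took effect
select tablespace_name, status, contents
from dba_tablespaces
where contents = 'UNDO';

show parameter undo_tablespace
```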

Monday, February 13, 2012

Server Capacity Planning

1) Existing server configuration (Processor, No of CPU, RAM, Disk Capacity, … , …)
2) No. of running databases on server
3) Databases folder size of all years
4) No of Users ( concurrent connections) in the database
5) Weekly or monthly growth of databases.
6) Oracle core license??
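Most of the checklist above can be gathered with a few dictionary queries; a hedged sketch (the views and columns are standard, but verify against your version):

```sql
-- 3) Total database size across permanent datafiles
select round(sum(bytes)/1024/1024/1024, 2) size_gb from dba_data_files;

-- 4) Current concurrent user connections
select count(*) sessions from v$session where type = 'USER';

-- 5) Growth proxy: allocated space per tablespace (snapshot weekly/monthly)
select tablespace_name, round(sum(bytes)/1024/1024) mb
from dba_segments
group by tablespace_name
order by mb desc;
```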

----------------------------Memory----------------
select * from dba_high_water_mark_statistics where name in ('SESSIONS','DB_SIZE');
select * from v$resource_limit;
--------maximum amount of memory allocated by the currently connected sessions
SELECT SUM (value) "max memory allocation" FROM v$sesstat ss, v$statname st WHERE st.name = 'session uga memory max' AND ss.statistic# = st.statistic#;

------------------pga requirement------------

select
(select highwater from dba_high_water_mark_statistics where name = 'SESSIONS')*(2048576+a.value+b.value)/1024/1024 pga_size_mb
from
v$parameter a,
v$parameter b
where
a.name = 'sort_area_size'
and
b.name = 'hash_area_size'
;


-----------------------CPU Benchmark------------------------------
http://www.cpubenchmark.net/multi_cpu.html

-----------------------Space Management---------------

As per database growth weekly /monthly and planning for how many year

SGA PGA measuring

select * from v$sgastat order by 1;
select * from v$pgastat order by 1;


I noticed that you have some pools in your SGA which are not used:

large pool free memory 209715200

But your PGA could reach about 340 MB.

So you may decrease large_pool_size by about 160 MB (you have 200 MB free), which will shrink the SGA by the same amount.

Then you may increase PGA_AGGREGATE_TARGET to 512 MB.

The most important point is that SGA + PGA remains below 2 GB (unless you use the /3GB switch, which may give you about 1 GB more on 32-bit Windows).
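Assuming the instance runs from an spfile, the resize described above might be sketched as follows (the target values come from the note, 200 MB free minus the 160 MB reduction leaves a 40 MB large pool; verify against your own v$sgastat figures first):

```sql
-- Shrink the underused large pool and grow the PGA target
alter system set large_pool_size = 40M scope = both;
alter system set pga_aggregate_target = 512M scope = both;
```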

------------------------Used SGA-----------------

select name, round(sum(mb),1) mb, round(sum(inuse),1) inuse
from (select case when name = 'buffer_cache'
then 'db_cache_size'
when name = 'log_buffer'
then 'log_buffer'
else pool
end name,
bytes/1024/1024 mb,
case when name <> 'free memory'
then bytes/1024/1024
end inuse
from v$sgastat
) group by name;


------------------------Free SGA-----------------

select name, round(sum(mb),1) mb, round(sum(inuse),1) free
from (select case when name = 'buffer_cache'
then 'db_cache_size'
when name = 'log_buffer'
then 'log_buffer'
else pool
end name,
bytes/1024/1024 mb,
case when name = 'free memory'
then bytes/1024/1024
end inuse
from v$sgastat
) group by name;

--------------------

select name,value from v$parameter where name ='sort_area_size';
---------------------------------- maximum PGA usage per process:--
select
max(pga_used_mem) max_pga_used_mem
, max(pga_alloc_mem) max_pga_alloc_mem
, max(pga_max_mem) max_pga_max_mem
from v$process
/

-----------sum of all current PGA usage per process---------
select
sum(pga_used_mem) sum_pga_used_mem
, sum(pga_alloc_mem) sum_pga_alloc_mem
, sum(pga_max_mem) sum_pga_max_mem
from v$process
/

-----------pga requirement as per high water mark

select
(select highwater from dba_high_water_mark_statistics where name = ('SESSIONS'))*(2048576+a.value+b.value)/1024/1024 pga_size_MB
from
v$parameter a,
v$parameter b
where
a.name = 'sort_area_size'
and
b.name = 'hash_area_size'
;


Thursday, February 9, 2012

Why Are Objects Invalid?

SELECT owner || '.' || object_name invalid_object,'--- ' || object_type || ' ---' likely_reason
FROM dba_objects WHERE status = 'INVALID' AND owner = 'LDBO'
UNION
SELECT d.owner || '.' || d.name,'Non-existent referenced db link ' || d.referenced_link_name
FROM dba_dependencies d WHERE NOT EXISTS
(
SELECT 'x'
FROM dba_db_links WHERE owner IN ('PUBLIC', d.owner)
AND db_link = d.referenced_link_name
)
AND d.referenced_link_name IS NOT NULL
AND (d.owner, d.name, d.type) IN
(
SELECT owner, object_name, object_type
FROM dba_objects WHERE status = 'INVALID'
)
AND d.owner = 'LDBO'
UNION
SELECT d.owner || '.' || d.name,'Depends on invalid ' || d.referenced_type || ' '|| d.referenced_owner || '.' || d.referenced_name
FROM dba_objects ro,dba_dependencies d
WHERE ro.status = 'INVALID' AND ro.owner = d.referenced_owner AND ro.object_name = d.referenced_name
AND ro.object_type = d.referenced_type AND d.referenced_link_name IS NULL
AND (d.owner, d.name, d.type) in
(
SELECT owner, object_name, object_type
FROM dba_objects
WHERE status = 'INVALID'
)
AND d.owner = 'LDBO'
UNION
SELECT d.owner || '.' || d.name,'Depends on newer ' || d.referenced_type || ' '|| d.referenced_owner || '.' || d.referenced_name
FROM dba_objects ro,dba_dependencies d,dba_objects o
WHERE NVL(ro.last_ddl_time, ro.created) > NVL(o.last_ddl_time, o.created)
AND ro.owner = d.referenced_owner AND ro.object_name = d.referenced_name
AND ro.object_type = d.referenced_type AND d.referenced_link_name IS NULL
AND d.owner = o.owner AND d.name = o.object_name AND d.type = o.object_type
AND o.status = 'INVALID' AND d.owner = 'LDBO'
UNION
SELECT d.owner || '.' || d.name,'Depends on ' || d.referenced_type || ' '|| d.referenced_owner || '.' || d.referenced_name
|| DECODE(d.referenced_link_name,NULL, '','@' || d.referenced_link_name)
FROM dba_dependencies d WHERE d.referenced_owner != 'PUBLIC' -- Public synonyms generate noise
AND d.referenced_type = 'NON-EXISTENT'
AND (d.owner, d.name, d.type) IN
(
SELECT owner, object_name, object_type
FROM dba_objects WHERE status = 'INVALID'
)
AND owner = 'LDBO'
UNION
SELECT d.owner || '.' || d.name invalid_object,'No privilege on referenced ' || d.referenced_type || ' '
|| d.referenced_owner || '.' || d.referenced_name
FROM dba_objects ro,dba_dependencies d
WHERE NOT EXISTS
(
SELECT 'x' FROM dba_tab_privs p WHERE p.owner = d.referenced_owner
AND p.table_name = d.referenced_name AND p.grantee IN ('PUBLIC', d.owner)
)
AND ro.status = 'VALID'
AND ro.owner = d.referenced_owner
AND ro.object_name = d.referenced_name
AND d.referenced_link_name IS NOT NULL
AND (d.owner, d.name, d.type) IN
(
SELECT owner, object_name, object_type
FROM dba_objects WHERE status = 'INVALID'
)
AND d.owner = 'LDBO'
UNION
SELECT o.owner || '.' || o.object_name, e.text
FROM dba_errors e, dba_objects o
WHERE e.text LIKE 'PLS-%' AND e.owner = o.owner AND e.name = o.object_name
AND e.type = o.object_type AND o.status = 'INVALID' AND o.owner = 'LDBO'
/

Wednesday, February 8, 2012

Analyze Scheduling using oracle




---------------------------------------------------------------------frequency 1 day-----------------------------------------

BEGIN
DBMS_SCHEDULER.create_job (
job_name => 'analyze',
job_type => 'PLSQL_BLOCK',
job_action => 'BEGIN DBMS_STATS.gather_schema_stats(''LDBO'',CASCADE=>TRUE); END;',
start_date => '01-APR-12 11.00.00 PM ASIA/CALCUTTA',
repeat_interval => 'freq=DAILY',
end_date => '02-APR-13 11.00.00 PM ASIA/CALCUTTA',
enabled => TRUE,
comments => 'JOB to gather LDBO statistics');
END;
/


----------------- frequency 2 hours---------------------------------------

BEGIN
DBMS_SCHEDULER.create_job (
job_name => 'analyze1',
job_type => 'PLSQL_BLOCK',
job_action => 'BEGIN DBMS_STATS.gather_schema_stats(''LDBO'',CASCADE=>TRUE); END;',
start_date => '16-FEB-12 06.00.00 PM ASIA/CALCUTTA',
repeat_interval=> 'FREQ=HOURLY;INTERVAL=2',
end_date => '02-APR-13 11.00.00 PM ASIA/CALCUTTA',
enabled => TRUE,
comments => 'JOB to gather LDBO statistics every 2 hours');
END;
/

------------------------------------------frequency syntax

FREQ=[YEARLY | MONTHLY | WEEKLY | DAILY | HOURLY | MINUTELY | SECONDLY] ;


-------------------To run a job every Tuesday at 11:25

FREQ=DAILY; BYDAY=TUE; BYHOUR=11; BYMINUTE=25;

FREQ=WEEKLY; BYDAY=TUE; BYHOUR=11; BYMINUTE=25;

FREQ=YEARLY; BYDAY=TUE; BYHOUR=11; BYMINUTE=25;



------------------ To run a job Tuesday and Thursday at 11, 14 and 22 o'clock

FREQ=WEEKLY; BYDAY=TUE,THU; BYHOUR=11,14,22;

EXPDP Data Pump Job Scheduling (rename the dump and remove old files)

1) create directory export_auto as 'd:\expdp1213';

create user dba_export_user identified by test123;

grant connect, create database link, resource, create view to dba_export_user;
grant unlimited tablespace to dba_export_user;
grant exp_full_database to dba_export_user;
grant read,write on directory export_auto to dba_export_user;
grant execute on dbms_flashback to dba_export_user;
grant create table to dba_export_user;
grant FLASHBACK ANY TABLE to dba_export_user;


2)

CREATE OR REPLACE PROCEDURE dba_export_user.start_export
IS
hdl_job NUMBER;
l_job_state VARCHAR2 (20);
BEGIN

begin
execute immediate 'drop table dba_export_user.AUTO_EXPORT';
exception when others then null;
end;

hdl_job := DBMS_DATAPUMP.OPEN ( operation => 'EXPORT', job_mode => 'FULL', job_name => 'AUTO_EXPORT' );
DBMS_DATAPUMP.add_file (handle => hdl_job,filename => 'exp1213.dmp',directory => 'EXPORT_AUTO',filetype => DBMS_DATAPUMP.ku$_file_type_dump_file);
DBMS_DATAPUMP.add_file (handle => hdl_job,filename => 'export.log',DIRECTORY => 'EXPORT_AUTO',filetype => DBMS_DATAPUMP.ku$_file_type_log_file);
DBMS_DATAPUMP.start_job (handle => hdl_job);
DBMS_DATAPUMP.wait_for_job (handle => hdl_job, job_state => l_job_state);
DBMS_OUTPUT.put_line ('Job exited with status:' || l_job_state);

DBMS_DATAPUMP.detach(handle => hdl_job);

----------------------RENAME BACKUP WITH DATE
begin
UTL_FILE.FRENAME ('EXPORT_AUTO','exp1213.DMP','EXPORT_AUTO','exp1213'||'_'||TO_CHAR(SYSDATE,'DDMMYYYY')||'.DMP');
end;

begin
UTL_FILE.FRENAME ('EXPORT_AUTO','export.log','EXPORT_AUTO','export'||'_'||TO_CHAR(SYSDATE,'DDMMYYYY')||'.LOG');
end;

------------DELETE TWO DAYS BEFORE BACKUP
begin
UTL_FILE.FREMOVE ('EXPORT_AUTO','exp1213'||'_'||TO_CHAR(SYSDATE-2,'DDMMYYYY')||'.DMP');
end;

begin
UTL_FILE.FREMOVE ('EXPORT_AUTO','export'||'_'||TO_CHAR(SYSDATE-2,'DDMMYYYY')||'.log');
end;

END;
/


3) Change the time, Date

begin
dbms_scheduler.create_job(
job_name => 'EXPORT_JOB'
,job_type => 'STORED_PROCEDURE'
,job_action => 'dba_export_user.start_export'
,start_date => '08-FEB-12 06.02.00.00 PM ASIA/CALCUTTA'
,repeat_interval => 'FREQ=DAILY; BYDAY=MON,TUE,WED,THU,FRI,SAT,SUN;'
,enabled => TRUE
,comments => 'EXPORT_DATABASE_JOB');
end;
/


Note: Rename the dmp file with sysdate on daily basis before next schedule time

manually execute backup job
EXEC dba_export_user.start_export;

check running job status
select * from DBA_datapump_jobs;

drop job (drop_job takes the scheduler job name, not the procedure name)
EXEC dbms_scheduler.drop_job('EXPORT_JOB');

Monday, February 6, 2012

ORACLE AUDIT FOR ALTER COMMAND



CREATE TABLE DBA_AUDIT_TAB_KSH (USERNAME VARCHAR2(10), SQL_TEXT VARCHAR2(2000),TIMESTAMP DATE);

CREATE OR REPLACE TRIGGER DBA_AUDIT_KSH
BEFORE ALTER ON SCHEMA
DECLARE
sql_text ora_name_list_t;
stmt VARCHAR2(2000);
n integer;
dt date;
BEGIN
IF (ora_dict_obj_type = 'TABLE')
THEN
n := ora_sql_txt(sql_text);
FOR i IN 1..n LOOP
stmt := stmt || sql_text(i);
END LOOP;
-- TO_DATE(SYSDATE, ...) would silently drop the time portion; use SYSDATE directly
dt := SYSDATE;
INSERT INTO DBA_AUDIT_TAB_KSH (username,sql_text,timestamp) VALUES (user,stmt,dt);

END IF;
END DBA_AUDIT_KSH;
/
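To verify the trigger fires, an ALTER on any table in the audited schema should leave a row behind (TEST_TBL and the column are illustrative, not from the original post):

```sql
-- Hypothetical DDL to trip the trigger
ALTER TABLE test_tbl ADD (dummy_col NUMBER);

-- The captured statement, user, and time should appear here
SELECT username, sql_text, TO_CHAR(timestamp, 'DD-MM-YYYY HH24:MI:SS') at_time
FROM dba_audit_tab_ksh
ORDER BY timestamp DESC;
```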


Saturday, February 4, 2012

Performance Tuning Basic Guidelines

** Redo log files – ensure that redo logs are allocated on fast disks, with minimum activity.
** Temporary tablespaces – ensure that temporary tablespaces are allocated on fast disks, with minimum activity.
** Fragmentation of tablespaces – defragment tablespaces; use equal block sizes for INITIAL and NEXT extents.
** Shared Pool Sizing – 1/3 or more of total physical memory, and check for thrashing/paging/swapping activity.
** DB_BLOCK_BUFFER – to enable buffering of data from datafiles during query and updates/inserts operation.
** Use BIND variables – to minimize parsing of SQL and enable SQL area reuse, and standardize bind-variable naming conventions.
** Identical SQL statements – literally identical – to enable SQL area reuse.
** Initial/Next extent sizing – ensure INITIAL and NEXT are the same. They should be as small as possible to avoid wasted space, but at the same time large enough to minimize time spent in frequent allocation.
** PCTINCREASE – zero to ensure minimum fragmentization.
** Small PCTUSED and large PCTFREE – to ensure sufficient spaces for INSERT intensive operation.
** Freelist groups – large values to ensure parallelization of INSERT-intensive operation.
** INITRANS and MAXTRANS – large values to enable large number of concurrent transactions to access tables.
** Readonly tablespaces – to minimize latches/enqueues resources, as well as PINGING in OPS.
** Create indexes for frequently accessed columns – especially for range scanning and equality conditions in “where” clause.
** Use hash indexes if equality conditions is used, and no range scanning involved.
** If joining of tables is used frequently, consider Composite Indexes.
** Use Clustered tables – columns allocated together.
** Create Index-Organized Tables when data is mostly readonly – to localize both the data and indexes together.
** Use PARALLEL hints to make sure Oracle parallel query is used.
** IO slaves – to enable multiple DB writers to write to disks.
** Minextents and Maxextents sizing – ensure as large as possible to enable preallocation.
** Avoid RAID5 – IO intensive (redo log, archivelog, temporary tablespace, RBS etc)
** MTS mode – to optimize OLTP transaction, but not BATCH environment.
** Partition Elimination – to enable unused tablespaces partition to be archived.
** Performance hit seriously when bitmap indexes used in table with heavy DML. Might have to drop and recreate the bitmap indexes.
** Increase LOG_SIMULTANEOUS_COPIES – minimize contention for redo copy latches.
** In SQLLoader - using direct path over conventional path loading.
** Using parallel INSERT... SELECT when inserting data that already exists in another table in the database – faster than parallel direct loader using SQLLoader.
** Create table/index using UNRECOVERABLE option to minimize REDO LOG updating. SQLloading can use unrecoverable features, or ARCHIVELOG disabled.
** Alter index REBUILD parallel 2 – to enable 2 parallel processes to index concurrently.
** Use large redo log files to minimize log switching frequency.
** Loading is faster when using SQLLOADING if data source is presorted in a file.
** Drop the indexes, and disable all the constraints, when using SQLloader. Recreate the indexes after SQLloader has completed.
** Use Star Query for Data Warehousing-like application: /*+ ORDERED USE_NL(facts) INDEX(facts fact_concat) */ or /*+ STAR */.
** Using Parallel DDL statements in:
** CREATE INDEX
** CREATE TABLE ... AS SELECT
** ALTER INDEX ... REBUILD
** The parallel DDL statements for partitioned tables and indexes are:
** CREATE TABLE ... AS SELECT
** CREATE INDEX
** ALTER TABLE ... MOVE PARTITION
** ALTER TABLE ... SPLIT PARTITION
** ALTER INDEX ... REBUILD PARTITION
** ALTER INDEX ... SPLIT PARTITION
** Parallel analyze on partitioned table - ANALYZE {TABLE,INDEX} PARTITION.
** Use asynchronous replication instead of synchronous replication.
** Create snapshot log to enable fast-refreshing.
** In Replication, use parallel propagation to setup multiple data streams.
** Using ALTER SESSION ... SET HASH_JOIN_ENABLED=TRUE.
** Using ALTER SESSION ... ENABLE PARALLEL DML.
** Use ANALYZE TABLE...ESTIMATE STATISTICS for large tables, and COMPUTE STATISTICS for small tables, to create statistics that let the Cost-Based Optimizer make more accurate decisions on the optimization technique for the query.
** To reduce contention on the rollback segments, at most 2 parallel process transactions should reside in the same rollback segment.
** To speed up transaction recovery, the initialization parameter CLEANUP_ROLLBACK_ENTRIES should be set to a high value, approximately equal to the number of rollback entries generated for the forward-going operation.
** Using raw devices/partition instead of file system partition.
** Consider increasing the various sort related parameters:
** sort_area_size
** sort_area_retained_size
** sort_direct_writes
** sort_write_buffers
** sort_write_buffer_size
** sort_spacemap_size
** sort_read_fac
** Tune the database buffer cache parameter BUFFER_POOL_KEEP and BUFFER_POOL_RECYCLE to keep the buffer cache after use, or age out the data blocks to recycle the buffer cache for other data.
** Larger values of LOG_BUFFER reduce log file I/O, particularly if transactions are long or numerous. The default setting is four times the maximum data block size for the host operating system.
** DB_BLOCK_SIZE should be multiple of OS block size.
** SHARED_POOL_SIZE –The size in bytes of the area devoted to shared SQL and PL/SQL statements.
** The LOCK_SGA and LOCK_SGA_AREAS parameters lock the entire SGA or particular SGA areas into physical memory.
** You can force Oracle to load the entire SGA into main memory by setting PRE_PAGE_SGA=TRUE in the init.ora file. This slows your startup process slightly, but eliminates cache misses on the library and data dictionary caches during normal runs.
** Enable DB_BLOCK_CHECKSUM if automatic checksum on datablocks is needed, performance will be degraded slightly.
** Use EXPLAIN PLAN to understand how Oracle process the query – utlxplan.sql.
** Choose between the FIRST_ROWS and ALL_ROWS hints in an individual SQL statement to determine the best response time required for returning data.
** Use bitmap indexes for low cardinality data.
** Use full-table scan when the data selected ranged over a large percentage of the tables.
** Use DB_FILE_MULTIBLOCK_READ_COUNT – to enable full table scans by a single multiblock read. Increase this value if full table scan is desired.
** Check if row migration or row chaining has occurred - running utlchain.sql.
** Choose between offline backup or online backup plan.
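As an illustration of the EXPLAIN PLAN point above, a minimal session sketch (the table and predicate are hypothetical; DBMS_XPLAN.DISPLAY is the usual way to read the captured plan):

```sql
-- PLAN_TABLE is created by utlxplan.sql if it does not already exist
EXPLAIN PLAN FOR
SELECT * FROM clientscannedimage WHERE firmnumber = 'F001';

-- Show the plan just captured
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```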
