Monday, July 30, 2012

Alter database link Password 11g


Our organization requires regular password changes on some database accounts for security compliance. If such an account is used by a database link in another database, that database link traditionally had to be dropped and recreated with the updated password.

This changes in 11gR2, which introduces ALTER DATABASE LINK to change the password. No more dropping and recreating the database link!

The sample below is run on the database where the database link is defined. The password of the database link's account has just been changed:

db11gr2 SQL> select count(*) from tb_test@DL_TEST;
select count(*) from tb_test@DL_TEST
*
ERROR at line 1:
ORA-01017: invalid username/password; logon denied
ORA-02063: preceding line from DL_TEST

db11gr2 SQL> alter database link DL_TEST connect to dblink_test identified by dblink_test;

Database link altered.

db11gr2 SQL> select count(1) from tb_test@DL_TEST;

COUNT(1)
----------
6304
This option is not available prior to 11gR2.

db11gr1 SQL> alter database link DL_TEST connect to dblink_test identified by dblink_test;

alter database link DL_TEST connect to dblink_test identified by dblink_test
*
ERROR at line 1:
ORA-02231: missing or invalid option to ALTER DATABASE
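Prior to 11gR2 the only option remains dropping and recreating the link; a sketch of the old approach, with a hypothetical TNS alias ORCL11G1:

drop database link DL_TEST;
create database link DL_TEST connect to dblink_test identified by dblink_test using 'ORCL11G1';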

Tuesday, July 24, 2012

Finding NULLs in Columns


select column_name, num_nulls from all_tab_columns where table_name = 'SOME_TABLENAME' and owner = 'SOME_OWNER';
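Note that NUM_NULLS comes from optimizer statistics, so it is only as accurate as the last statistics gathering; refresh them first if in doubt (the owner and table names here are placeholders):

exec dbms_stats.gather_table_stats('SOME_OWNER','SOME_TABLENAME');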

--------- Columns containing NULL ------------------
set serveroutput on
declare
  l_count integer;
begin
  for col in (select table_name, column_name
              from user_tab_columns where table_name='&Enter_Table_Name')
  loop
    execute immediate 'select count(*) from '||col.table_name
                      ||' where '||col.column_name
                      ||' is null and rownum=1'
      into l_count;
    if l_count != 0 then
      dbms_output.put_line ('Column '||col.column_name||' contains nulls');
    end if;
  end loop;
end;
/


---- Columns containing only NULL ----------------------------------
set serveroutput on
declare
  l_count integer;
begin
  for col in (select table_name, column_name
              from user_tab_columns where table_name='&Enter_Table_Name')
  loop
    execute immediate 'select count(*) from '||col.table_name
                      ||' where '||col.column_name
                      ||' is not null and rownum=1'
      into l_count;
    if l_count = 0 then
      dbms_output.put_line ('Column '||col.column_name||' contains only nulls');
    end if;
  end loop;
end;
/

----------------------------------------------------------

Finding Any Character / NULL in All Tables and Columns in a Schema

create or replace function find_in_schema(val varchar2)
return varchar2 is
  v_old_table user_tab_columns.table_name%type;
  v_where     Varchar2(32766);
  v_first_col boolean := true;
  type rc     is ref cursor;
  c           rc;
  v_rowid     varchar2(20);

begin
  for r in (
    select
      t.*
    from
      user_tab_cols t, user_all_tables a
    where t.table_name = a.table_name
      and t.data_type like '%CHAR%'
    order by t.table_name) loop

    if v_old_table is null then
      v_old_table := r.table_name;
    end if;

    if v_old_table <> r.table_name then
      v_first_col := true;

      -- dbms_output.put_line('searching ' || v_old_table);

      open c for 'select rowid from "' || v_old_table || '" ' || v_where;

      fetch c into v_rowid;
      loop
        exit when c%notfound;
        dbms_output.put_line('  rowid: ' || v_rowid || ' in ' || v_old_table);
        fetch c into v_rowid;
      end loop;

      v_old_table := r.table_name;
    end if;

    if v_first_col then
      v_where := ' where ' || r.column_name || ' like ''%' || val || '%''';
      v_first_col := false;
    else
      v_where := v_where || ' or ' || r.column_name || ' like ''%' || val || '%''';
    end if;

  end loop;

  -- The loop above only searches a table once the next table name appears,
  -- so the final table must be searched after the loop ends.
  if v_where is not null then
    open c for 'select rowid from "' || v_old_table || '" ' || v_where;
    fetch c into v_rowid;
    loop
      exit when c%notfound;
      dbms_output.put_line('  rowid: ' || v_rowid || ' in ' || v_old_table);
      fetch c into v_rowid;
    end loop;
    close c;
  end if;

  return 'Success';
end;
/


set serveroutput on size 1000000 format wrapped

select find_in_schema('KSHITIJ') from dual;

It reports the ROWID along with the table name for each match.

select find_in_schema(NULL) from dual;

Note that passing NULL turns each predicate into LIKE '%%', which matches any non-NULL value, so this call reports rows whose character columns are populated rather than rows containing NULLs.


---------------------------------------------------
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at "LDBO.FIND_IN_SCHEMA", line 45
06502. 00000 -  "PL/SQL: numeric or value error%s"
--------------------------------------------------------------------
Fix: enlarge the declaration from v_where Varchar2(4000); to v_where Varchar2(32766);
ORA-06502 is raised when a value is loaded into a variable (or record field) declared with a smaller size than the data returned by the SELECT; here the concatenated WHERE clause outgrew the 4000-byte buffer.


Monday, July 23, 2012

Standby Database: Force a log switch every 30 minutes automatically


Controlling Archive Lag

You can force all enabled redo log threads to switch their current logs at regular time intervals. In a primary/standby database configuration, changes are made available to the standby database by archiving redo logs at the primary site and then shipping them to the standby database. The changes that are being applied by the standby database can lag behind the changes that are occurring on the primary database, because the standby database must wait for the changes in the primary database redo log to be archived (into the archived redo log) and then shipped to it. To limit this lag, you can set the ARCHIVE_LAG_TARGET initialization parameter. Setting this parameter lets you specify in seconds how long that lag can be.

Setting the ARCHIVE_LAG_TARGET Initialization Parameter

When you set the ARCHIVE_LAG_TARGET initialization parameter, you cause the database to examine the current redo log of the instance periodically. If the following conditions are met, then the instance will switch the log:
  • The current log was created prior to n seconds ago, and the estimated archival time for the current log is m seconds (proportional to the number of redo blocks used in the current log), where n + m exceeds the value of the ARCHIVE_LAG_TARGET initialization parameter.
  • The current log contains redo records.
In an Oracle Real Application Clusters environment, the instance also causes other threads to switch and archive their logs if they are falling behind. This can be particularly useful when one instance in the cluster is more idle than the other instances (as when you are running a 2-node primary/secondary configuration of Oracle Real Application Clusters).
The ARCHIVE_LAG_TARGET initialization parameter specifies the target of how many seconds of redo the standby could lose in the event of a primary shutdown or failure if the Oracle Data Guard environment is not configured in a no-data-loss mode. It also provides an upper limit of how long (in seconds) the current log of the primary database can span. Because the estimated archival time is also considered, this is not the exact log switch time.
The following initialization parameter setting sets the log switch interval to 30 minutes (a typical value).
ARCHIVE_LAG_TARGET = 1800

A value of 0 disables this time-based log switching functionality. This is the default setting.
You can set the ARCHIVE_LAG_TARGET initialization parameter even if there is no standby database. For example, the ARCHIVE_LAG_TARGET parameter can be set specifically to force logs to be switched and archived.
ARCHIVE_LAG_TARGET is a dynamic parameter and can be set with the ALTER SYSTEM SET statement.
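For example, a minimal sketch (SCOPE=BOTH changes both the running instance and the spfile):

alter system set archive_lag_target = 1800 scope=both;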
Caution:
The ARCHIVE_LAG_TARGET parameter must be set to the same value in all instances of an Oracle Real Application Clusters environment. Failing to do so results in unpredictable behavior.

Factors Affecting the Setting of ARCHIVE_LAG_TARGET

Consider the following factors when determining if you want to set the ARCHIVE_LAG_TARGET parameter and in determining the value for this parameter.
  • Overhead of switching (as well as archiving) logs
  • How frequently normal log switches occur as a result of log full conditions
  • How much redo loss is tolerated in the standby database
Setting ARCHIVE_LAG_TARGET may not be very useful if natural log switches already occur more frequently than the interval specified. However, in the case of irregularities of redo generation speed, the interval does provide an upper limit for the time range each current log covers.
If the ARCHIVE_LAG_TARGET initialization parameter is set to a very low value, there can be a negative impact on performance. This can force frequent log switches. Set the parameter to a reasonable value so as not to degrade the performance of the primary database.

Tuesday, July 17, 2012

Change Notification not received from Oracle database 11.2.0.1 to client 11.2.0.3


                                      
Server version    | Client version
                  | 11.2.0.1.0 | 11.2.0.2.0 | 11.2.0.2.0 + fix | 11.2.0.3.0
------------------+------------+------------+------------------+-----------
11.2.0.1.0        | works      | fails      | works            | fails
11.2.0.2.0        | fails      | works      | fails            | fails
11.2.0.2.0 + fix  | works      | fails      | works            | works
11.2.0.3.0        | fails      | works      | works            |

Solution

Apply patches or upgrade versions as applicable to obtain a working combination.

A workaround can be used to bypass the authentication phase, which is where the problem behavior occurs.  To do that, set the following event in the database:
  
alter system set events '10867 trace name context forever, level 1';
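Once server and client are at a working combination, the event can be disabled again with the standard event-off syntax:

alter system set events '10867 trace name context off';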


Tuesday, July 10, 2012

RAID recommendation for Oracle Database


Redo logs are extremely write intensive, and high-bandwidth storage such as SSD is ideal for relieving I/O bottlenecks. Also, online redo logs should always be multiplexed at the Oracle level, so RAID1 provides few additional benefits for them.
RAID | Type of RAID                   | Control File | Database File                  | Redo Log File | Archive Log File
-----+--------------------------------+--------------+--------------------------------+---------------+-----------------
0    | Striping                       | Avoid        | OK                             | Avoid         | Avoid
1    | Shadowing                      | Best         | OK                             | Best          | Best
1+0  | Striping and Shadowing         | OK           | Best                           | Avoid         | Avoid
3    | Striping with static parity    | OK           | OK                             | Avoid         | Avoid
5    | Striping with rotating parity  | OK           | Best if RAID0-1 not available  | Avoid         | Avoid
RAID Recommendations (From MOSC NOTE: 45635.1)
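Whatever the RAID level, Oracle-level multiplexing of the online redo logs is still recommended. A minimal sketch with hypothetical file paths, placing the two members of a new group on separate devices:

alter database add logfile group 4
  ('/u01/oradata/db/redo04a.log', '/u02/oradata/db/redo04b.log') size 512m;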

Tuesday, July 3, 2012

Explain Plan Definitions



Operation Option Description
AND-EQUAL This step has two or more child steps, each of which returns a set of ROWIDs. The AND-EQUAL operation selects only those ROWIDs that are returned by all the child operations.
BITMAP CONVERSION TO ROWIDS Converts a bitmap from a bitmap index to a set of ROWIDs that can be used to retrieve the actual data.
BITMAP CONVERSION FROM ROWIDS Converts a set of ROWIDs into a bitmapped representation.
BITMAP CONVERSION COUNT Counts the number of rows represented by a bitmap.
BITMAP INDEX SINGLE VALUE Looks up the bitmap for a single key value in the index.
BITMAP RANGE SCAN Retrieves bitmaps for a key value range.
BITMAP FULL SCAN Performs a full scan of a bitmap index if there is no start or stop key.
BITMAP MERGE Merges several bitmaps resulting from a range scan into one bitmap.
BITMAP MINUS Subtracts bits of one bitmap from another. Row source is used for negated predicates. Can be used only if there are nonnegated predicates yielding a bitmap from which the subtraction can take place. An example appears in "Viewing Bitmap Indexes with EXPLAIN PLAN".
BITMAP OR Computes the bitwise OR of two bitmaps.
BITMAP AND Computes the bitwise AND of two bitmaps.
BITMAP KEY ITERATION Takes each row from a table row source and finds the corresponding bitmap from a bitmap index. These bitmaps are then merged into one bitmap by a following BITMAP MERGE operation.
CONNECT BY Retrieves rows in hierarchical order for a query containing a CONNECT BY clause.
CONCATENATION Operation accepting multiple sets of rows returning the union-all of the sets.
COUNT Operation counting the number of rows selected from a table.
COUNT STOPKEY Count operation where the number of rows returned is limited by the ROWNUM expression in the WHERE clause.
DOMAIN INDEX Retrieval of one or more rowids from a domain index. The OPTIONS column contains information supplied by a user-defined domain index cost function, if any.
FILTER Operation accepting a set of rows, eliminating some of them, and returning the rest.
FIRST ROW Retrieval of only the first row selected by a query.
FOR UPDATE Operation retrieving and locking the rows selected by a query containing a FOR UPDATE clause.
HASH JOIN Operation joining two sets of rows and returning the result. This join method is useful for joining large data sets (DSS, batch). The join condition is an efficient way of accessing the second table.
   (These are join operations.) CBO uses the smaller of the two tables/data sources to build a hash table on the join key in memory. Then it scans the larger table, probing the hash table to find the joined rows.
HASH JOIN ANTI Hash anti-join.
HASH JOIN SEMI Hash semi-join.
INDEX UNIQUE SCAN Retrieval of a single rowid from an index.
   (These are access methods.)
INDEX RANGE SCAN Retrieval of one or more rowids from an index. Indexed values are scanned in ascending order.
INDEX RANGE SCAN DESCENDING Retrieval of one or more rowids from an index. Indexed values are scanned in descending order.
INDEX FULL SCAN Retrieval of all rowids from an index when there is no start or stop key. Indexed values are scanned in ascending order.
INDEX FULL SCAN DESCENDING Retrieval of all rowids from an index when there is no start or stop key. Indexed values are scanned in descending order.
INDEX FAST FULL SCAN Retrieval of all rowids (and column values) using multiblock reads. No sorting order can be defined. Compares to a full table scan on only the indexed columns. Only available with the cost based optimizer.
INDEX SKIP SCAN Retrieval of rowids from a concatenated index without using the leading column(s) in the index. Introduced in Oracle9i. Only available with the cost based optimizer.
INLIST ITERATOR Iterates over the next operation in the plan for each value in the IN-list predicate.
INTERSECTION Operation accepting two sets of rows and returning the intersection of the sets, eliminating duplicates.
MERGE JOIN Operation accepting two sets of rows, each sorted by a specific value, combining each row from one set with the matching rows from the other, and returning the result.
   (These are join operations.)
MERGE JOIN OUTER Merge join operation to perform an outer join statement.
MERGE JOIN ANTI Merge anti-join.
MERGE JOIN SEMI Merge semi-join.
MERGE JOIN CARTESIAN Can result from 1 or more of the tables not having any join conditions to any other tables in the statement. Can occur even with a join and it may not be flagged as CARTESIAN in the plan.
MINUS Operation accepting two sets of rows and returning rows appearing in the first set but not in the second, eliminating duplicates.
NESTED LOOPS Operation accepting two sets of rows, an outer set and an inner set. Oracle compares each row of the outer set with each row of the inner set, returning rows that satisfy a condition. This join method is useful for joining small subsets of data (OLTP). The join condition is an efficient way of accessing the second table.
   (These are join operations.)
NESTED LOOPS OUTER Nested loops operation to perform an outer join statement.
PARTITION SINGLE Access one partition.
PARTITION ITERATOR Access many partitions (a subset).
PARTITION ALL Access all partitions.
PARTITION INLIST Similar to iterator, but based on an IN-list predicate.
PARTITION INVALID Indicates that the partition set to be accessed is empty.
Iterates over the next operation in the plan for each partition in the range given by the PARTITION_START and PARTITION_STOP columns. PARTITION describes partition boundaries applicable to a single partitioned object (table or index) or to a set of equi-partitioned objects (a partitioned table and its local indexes). The partition boundaries are provided by the values of PARTITION_START and PARTITION_STOP of the PARTITION.
REMOTE Retrieval of data from a remote database.
SEQUENCE Operation involving accessing values of a sequence.
SORT AGGREGATE Retrieval of a single row that is the result of applying a group function to a group of selected rows.
SORT UNIQUE Operation sorting a set of rows to eliminate duplicates.
SORT GROUP BY Operation sorting a set of rows into groups for a query with a GROUP BY clause.
SORT JOIN Operation sorting a set of rows before a merge-join.
SORT ORDER BY Operation sorting a set of rows for a query with an ORDER BY clause.
TABLE ACCESS FULL Retrieval of all rows from a table.
   (These are access methods.)
TABLE ACCESS SAMPLE Retrieval of sampled rows from a table.
TABLE ACCESS CLUSTER Retrieval of rows from a table based on a value of an indexed cluster key.
TABLE ACCESS HASH Retrieval of rows from table based on hash cluster key value.
TABLE ACCESS BY ROWID RANGE Retrieval of rows from a table based on a rowid range.
TABLE ACCESS SAMPLE BY ROWID RANGE Retrieval of sampled rows from a table based on a rowid range.
TABLE ACCESS BY USER ROWID If the table rows are located using user-supplied rowids.
TABLE ACCESS BY INDEX ROWID If the table is nonpartitioned and rows are located using index(es).
TABLE ACCESS BY GLOBAL INDEX ROWID If the table is partitioned and rows are located using only global indexes.
TABLE ACCESS BY LOCAL INDEX ROWID If the table is partitioned and rows are located using one or more local indexes and possibly some global indexes.
Partition Boundaries -- The partition boundaries might have been computed by:
A previous PARTITION step, in which case the PARTITION_START and PARTITION_STOP column values replicate the values present in the PARTITION step, and the PARTITION_ID contains the ID of the PARTITION step. Possible values for PARTITION_START and PARTITION_STOP are NUMBER(n), KEY, INVALID.

The TABLE ACCESS or INDEX step itself, in which case the PARTITION_ID contains the ID of the step. Possible values for PARTITION_START and PARTITION_STOP are NUMBER(n), KEY, ROW REMOVE_LOCATION (TABLE ACCESS only), and INVALID.
UNION Operation accepting two sets of rows and returning the union of the sets, eliminating duplicates.
VIEW Operation performing a view's query and then returning the resulting rows to another operation.


OTHER_TAG Column Text
Tag Text Meaning Interpretation
blank Serial execution.
SERIAL_FROM_REMOTE (S->R) Serial from remote Serial execution at a remote site.
SERIAL_TO_PARALLEL (S->P) Serial to parallel Serial execution; output of step is partitioned or broadcast to parallel execution servers.
PARALLEL_TO_PARALLEL (P->P) Parallel to parallel Parallel execution; output of step is repartitioned to second set of parallel execution servers.
PARALLEL_TO_SERIAL (P->S) Parallel to serial Parallel execution; output of step is returned to serial "query coordinator" process.
PARALLEL_COMBINED_WITH_PARENT (PWP) Parallel combined with parent Parallel execution; output of step goes to next step in same parallel process. No interprocess communication to parent.
PARALLEL_COMBINED_WITH_CHILD (PWC) Parallel combined with child Parallel execution; input of step comes from prior step in same parallel process. No interprocess communication from child.
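To see these operations for a specific statement, a minimal sketch (the well-known EMP/DEPT demo tables are assumed; DBMS_XPLAN.DISPLAY formats the most recently explained plan):

explain plan for
select e.ename, d.dname from emp e join dept d on d.deptno = e.deptno;

select * from table(dbms_xplan.display);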

Analyze Stale Job


BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'analyze_stale',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_STATS.gather_schema_stats(''LDBO'',CASCADE=>TRUE,options=>''GATHER STALE''); END;',
    start_date      => '01-APR-12 09.00.00 AM ASIA/CALCUTTA',
     repeat_interval=> 'freq=minutely;bysecond=0;byminute=00,20,40;byhour=9,10,11,12,13,14,15,16,17;byday=MON,TUE,WED,THU,FRI',
    enabled         => TRUE,
    comments        => 'JOB to gather LDBO stale statistics every 20 min');
END;
/
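To check which objects the GATHER STALE option will pick up, a sketch (flushing the monitoring info first makes DBA_TAB_STATISTICS current):

exec dbms_stats.flush_database_monitoring_info;

select table_name, stale_stats from dba_tab_statistics where owner = 'LDBO' and stale_stats = 'YES';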

Insert/update without undo and redo


Is it possible to issue an INSERT statement or an UPDATE statement without generating Redo Logs and Undo?

Even for global temporary tables, UNDO is generated, and this UNDO information itself generates REDO. There is nothing you can do about this situation: temporary tables need UNDO and that's the end of it.

Minimizing the amount of UNDO is quite simple: just insert records and select records. INSERT generates the smallest amount of UNDO, because rolling back an INSERT requires only the rowid. Conversely, DELETE statements generate the most UNDO, because the database has to store the entire record. Basically, to roll back an INSERT Oracle issues a DELETE, and to roll back a DELETE it issues an INSERT. An UPDATE generates a variable amount of UNDO, because the old versions of the changed columns must be stored; the more columns changed and the bigger they are, the larger the amount of UNDO generated.

UNDO is always protected by redo.

If you direct-path load the global temporary table (INSERT /*+ APPEND */), you can bypass undo ON THE TABLE, but not on the indexes. Hence you can reduce the amount of redo (marginally, typically, as it is usually the indexes that generate the most undo), but you cannot eliminate it.

The APPEND hint requests that Oracle use a direct-path operation, which can result in faster inserts. Direct-path insert locks the table exclusively for the duration of the transaction, and it is important to commit after the insert before the same session can select from the table again.


Direct loads on a table with indexes generate more archive logs and more undo.

Putting a primary key index in NOLOGGING mode did not help because NOLOGGING applies only to a limited number of operations.

http://www.dba-oracle.com/oracle_news/news_plsql_forall_performance_insert.htm


1. Undo would normally be used to remove inserted rows in the event of a failure or rollback. With a direct load, undo is not necessary, since the new rows are added entirely above the table's high-water mark.
2. When a direct load (/*+ APPEND */) is used, Oracle can skip undo generation for the TABLE data, but not for the indexes.
3. In archivelog mode, REDO is normally still generated with a direct load (/*+ APPEND */); it is UNDO that is skipped, and then only for the table itself.
4. If the table is placed into NOLOGGING mode, then redo for the table, as well as undo, can be skipped. (Note that NOLOGGING is an attribute set on the table, not a clause of the INSERT statement.) Again, this applies only to the table, not to any indexes on it.
5. The small amount of redo that is still generated during a NOLOGGING direct load protects the data dictionary.
6. To prevent archive log generation during a direct load, the database and tablespace must not be in force logging mode (check V$DATABASE and DBA_TABLESPACES, as shown below).
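A sketch of the checks mentioned in point 6:

select force_logging from v$database;

select tablespace_name, logging, force_logging from dba_tablespaces;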


http://oradbatips.blogspot.in/2007/11/tip-63-direct-load-and-redo-log.html

-- Note: "nologging" is not a valid clause inside an INSERT statement; Oracle
-- would parse it as a table alias with no effect. Set the attribute on the
-- table instead:
alter table tbl1 nologging;
insert /*+ append */ into tbl1 select * from dba_objects;


select a.value, b.name
from   v$sesstat a, v$statname b
where  a.statistic# = b.statistic#
and    a.sid in (select sid from v$mystat)
and    b.name like '%redo size%';

-- assumes an 8K undo block size
select sum(undoblks)*8192/1024/1024 undomb
from   v$undostat
where  trunc(begin_time,'DD') = trunc(sysdate,'DD');


Final Solution:
To use direct load and take advantage of less archive log generation and better performance, always consider the following:
1. Disable indexes during the direct load.
2. Use the /*+ APPEND */ hint together with the table's NOLOGGING attribute.
3. Make sure the database and tablespace are not in force logging mode.




Use a combination of the following two features to maximize the speed of insert statements:


Set the table's logging attribute to NOLOGGING; this minimizes the generation of redo
for direct path operations (this feature has no effect on regular DML operations).

Use a direct path loading feature, such as the following:

INSERT /*+ APPEND */ on queries that use a subquery for determining which
records are inserted

INSERT /*+ APPEND_VALUES */ on queries that use a VALUES clause





Analyze Fast



exec  DBMS_SCHEDULER.drop_job ('analyze');

BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'analyze',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_STATS.gather_schema_stats(''LDBO'',CASCADE=>TRUE,degree => DBMS_STATS.DEFAULT_DEGREE); END;',
    start_date      => '01-APR-12 06.00.00 AM ASIA/CALCUTTA',
    repeat_interval=> 'FREQ=HOURLY;INTERVAL=2',
    end_date        => '02-APR-16 11.00.00 PM ASIA/CALCUTTA',
    enabled         => TRUE,
    comments        => 'JOB to gather LDBO statistics every 2 hours');
END;
/

exec  DBMS_SCHEDULER.run_job ('analyze');
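A quick check that the job is actually refreshing statistics for the LDBO schema:

select table_name, last_analyzed from dba_tables where owner = 'LDBO' order by last_analyzed desc nulls last;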

SESSION_CACHED_CURSORS


alter system set SESSION_CACHED_CURSORS=500 scope=spfile;


If an application repeatedly issues parse calls on the same set of SQL statements, the reopening of the session cursors can affect system performance. Session cursors can be stored in a session cursor cache. This feature can be particularly useful for applications designed using Oracle Forms, because switching from one form to another closes all session cursors associated with the first form.

The Oracle server uses the shared SQL area to determine whether more than three parse requests have been issued on a given statement. If so, the Oracle server assumes the session cursor associated with the statement should be cached and moves the cursor into the session cursor cache. Subsequent requests to parse that SQL statement by the same session will then find the cursor in the session cursor cache.

To enable caching of session cursors, use SESSION_CACHED_CURSORS to specify the maximum number of session cursors kept in the cache. A least recently used (LRU) algorithm removes entries in the session cursor cache to make room for new entries when required.

To determine whether the session cursor cache is sufficiently large for your instance, examine the session statistic "session cursor cache hits" in the V$SESSTAT view. This statistic counts the number of times a parse call found a cursor in the session cursor cache. If this statistic is a relatively low percentage of the total parse call count for the session, consider setting SESSION_CACHED_CURSORS to a larger value.

If SESSION_CACHED_CURSORS is not set, it defaults to 0 and no cursors will be cached for your session. (Your cursors will still be cached in the shared pool, but your session will have to find them there.) If it is set, then when a parse request is issued, Oracle checks the library cache to see whether more than 3 parse requests have been issued for that statement. If so, Oracle moves the session cursor associated with that statement into the session cursor cache. Subsequent parse requests for that statement by the same session are then filled from the session cursor cache, thus avoiding even a soft parse. (Technically, a parse can't be completely avoided; a "softer" soft parse is done that's faster and requires less CPU.) In the session cursor cache, Oracle manages the cached cursors using an LRU list. Once more than SESSION_CACHED_CURSORS closed cursors are cached, Oracle starts dropping cached cursors off the LRU end of the list whenever it needs to make room to cache a new cursor.


If the session cursor cache count is maxed out, "session cursor cache hits" is low compared to all parses, and you suspect that the application is re-submitting the same queries for parsing repeatedly, then increasing SESSION_CACHED_CURSORS may help with latch contention and give a slight boost to performance. Note that if your application is not resubmitting the same queries for parsing repeatedly, then "session cursor cache hits" will be low and the session cursor cache count may be maxed out, but caching cursors by session won't help at all. For example, if your application is using a lot of unsharable SQL, raising this parameter won't get you anything.



If this parameter is set to a high value, the amount of fragmentation in the shared pool may be increased. The primary cause of library cache latch contention is fragmentation of the shared pool, a common symptom of which is the ORA-04031 error.


select 'session_cached_cursors' parameter, lpad(value, 5) value,
       decode(value, 0, ' n/a', to_char(100 * used / value, '9990') || '%') usage
from   ( select max(s.value) used
         from   v$statname n, v$sesstat s
         where  n.name = 'session cursor cache count'
         and    s.statistic# = n.statistic# ),
       ( select value from v$parameter where name = 'session_cached_cursors' )
union all
select 'open_cursors', lpad(value, 5), to_char(100 * used / value, '9990') || '%'
from   ( select max(sum(s.value)) used
         from   v$statname n, v$sesstat s
         where  n.name in ('opened cursors current', 'session cursor cache count')
         and    s.statistic# = n.statistic#
         group by s.sid ),
       ( select value from v$parameter where name = 'open_cursors' );




select a.value curr_cached, p.value max_cached,
       s.username, s.sid, s.serial#
from   v$sesstat a, v$statname b, v$session s, v$parameter2 p
where  a.statistic# = b.statistic#
and    s.sid = a.sid
---and a.sid = &sid
and    p.name = 'session_cached_cursors'
and    b.name = 'session cursor cache count';


The query above returned CURR_CACHED = MAX_CACHED. If the session cursor cache count is maxed out, "session cursor cache hits" is low compared to all parses, and you suspect that the application is re-submitting the same queries for parsing repeatedly, then increasing SESSION_CACHED_CURSORS will help with latch contention and can give a boost to performance.



Evaluate cached cursors for sessions as compared to parsing

select a.sid, a.value parse_cnt, 
       (select x.value
        from   v$sesstat x, v$statname y
        where  x.sid = a.sid
        and    y.statistic# = x.statistic#
        and    y.name = 'session cursor cache hits') cache_cnt
from   v$sesstat a, v$statname b
where  b.statistic# = a.statistic#
and    b.name = 'parse count (total)'
and    value > 0;

The CACHE_CNT ('session cursor cache hits') of a session should be compared to the PARSE_CNT ('parse count (total)'); if the difference is high, consider increasing the SESSION_CACHED_CURSORS parameter.


------------------------------------------

Run this query against your database to see what kind of cursor caching your system is doing. The sum of the three values must be 100%:

select to_char(100 * sess / calls, '999999999990.00') || '%' cursor_cache_hits,
       to_char(100 * (calls - sess - hard) / calls, '999990.00') || '%' soft_parses,
       to_char(100 * hard / calls, '999990.00') || '%' hard_parses
from   ( select value calls from v$sysstat where name = 'parse count (total)' ),
       ( select value hard  from v$sysstat where name = 'parse count (hard)' ),
       ( select value sess  from v$sysstat where name = 'session cursor cache hits' );


If the value of the first column is below 80%, you should raise the SESSION_CACHED_CURSORS parameter.

----------------------------------------

Steps for executing a SELECT query in Oracle:
1) Parse
  1.1) Syntactic check - Oracle checks whether this is valid SQL.
  1.2) Semantic check - Oracle checks the SQL in terms of data structures, privileges, etc.
  1.3) Shared pool check - Oracle checks whether the statement is already parsed and optimized. If yes, it goes directly to step (4) - Oracle does a "soft parse". If not, it goes through steps (2) and (3) - Oracle does a "hard parse".
2) Optimize - the CBO generates different execution plans and chooses the one with the lowest cost.
3) Generate source - the row-source generator prepares the code to be executed.
4) Execute.
5) Output the result.

Steps 1 through 3 constitute parsing of the query, while steps 4 and 5 are the execution itself. Whenever you run a query, step 1 is always performed: Oracle transforms the query text into a hash value and searches the Library Cache in the Shared Pool for a query with the same hash value (one that some session has used at some point). If one is found, Oracle compares the full query text with the cached one to validate that they are exactly the same (this extra step is needed because several different queries can share the same hash value); if they match, it proceeds to execute the query. This is what we call a soft parse. If the query is not found, Oracle also performs steps 2 and 3. This is known as a hard parse. A hard parse is very costly for the Oracle server, since it involves taking a number of latches (locks) in the SGA and is CPU-intensive.

As we know, every query execution involves a cursor (a cursor is a memory area allocated to the execution of the query). Ideally, an application opens a cursor, executes the statement many times, and only then closes it. Many applications (Oracle Forms, for example) do not work this way, so cursors cannot be reused and must be reopened on every execution. To reduce this problem, we can use the initialization parameter SESSION_CACHED_CURSORS, which gives us a "softer" soft parse. If we set the parameter to 100, Oracle keeps up to 100 cursors cached for us to reuse, saving us from having to reopen them each time. This memory area for cursor handling is maintained with an LRU list. Oracle recommends setting the parameter to 50 in the first instance and then monitoring it to see whether it should be increased. This parameter must be set with the value of OPEN_CURSORS in mind.




Soft parses reduce the scalability of a system; a SQL statement should ideally be parsed once and executed many times.

A soft parse always consumes fewer resources, less time, and fewer memory latches than a hard parse, which is a good thing, and many DBAs expect sessions connecting to the database to issue SQL statements that are already present in the shared pool. So why do many soft parses still reduce scalability? Because every parse call, even a soft one, must take shared pool and library cache latches, and those latches serialize concurrent sessions.

Followers