Tuesday, July 17, 2012

Change Notification not received from Oracle database 11.2.0.1 to client 11.2.0.3


                                      
Client Change Notification results by server version (rows) and client version (columns):

Server version     Client 11.2.0.1.0   Client 11.2.0.2.0   Client 11.2.0.2.0 + fix   Client 11.2.0.3.0
11.2.0.1.0         works               fails               works                     fails
11.2.0.2.0         fails               works               fails                     fails
11.2.0.2.0 + fix   works               fails               works                     works
11.2.0.3.0         fails               works               works                     -


Solution

Apply patches or upgrade versions as applicable to obtain a working combination.

A workaround can be used to bypass the authentication phase, which is where the problem behavior occurs.  To do that, set the following event in the database:
  
alter system set events '10867 trace name context forever, level 1';
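To identify which combination is in play, the server-side version can be checked directly from the database (the client-side version comes from the client installation itself, e.g. running sqlplus -V on the client machine):

```sql
-- Server version as seen by the instance
SELECT version FROM v$instance;

-- Full component banner
SELECT banner FROM v$version;
```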


Tuesday, July 10, 2012

RAID recommendation for Oracle Database


Redo logs are extremely write intensive, and high-bandwidth storage such as SSDs is ideal for relieving I/O bottlenecks.  Also, online redo logs should always be multiplexed at the Oracle level, so RAID 1 provides few additional benefits for them.
RAID   Type of RAID                    Control File   Database File                    Redo Log File   Archive Log File
0      Striping                        Avoid          OK                               Avoid           Avoid
1      Shadowing                       Best           OK                               Best            Best
1+0    Striping and Shadowing          OK             Best                             Avoid           Avoid
3      Striping with static parity     OK             OK                               Avoid           Avoid
5      Striping with rotating parity   OK             Best if RAID 0+1 not available   Avoid           Avoid

RAID Recommendations (From MOSC NOTE: 45635.1)
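Since the table rates RAID 1 as adding little for redo logs precisely because Oracle multiplexes them itself, it is worth verifying that multiplexing is actually in place (group numbers and paths will differ per database; each group should have members on at least two different disks):

```sql
-- List redo log members per group to confirm multiplexing
SELECT group#, member FROM v$logfile ORDER BY group#;
```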

Tuesday, July 3, 2012

Explain Plan Definitions



Operation Option Description
AND-EQUAL This step will have two or more child steps, each of which returns a set of ROWIDs. The AND-EQUAL operation selects only those ROWIDs that are returned by all the child operations.
BITMAP CONVERSION TO ROWIDS Converts a bitmap from a bitmap index to a set of ROWIDs that can be used to retrieve the actual data.
BITMAP CONVERSION FROM ROWIDS Converts a set of ROWIDs into a bitmapped representation.
BITMAP CONVERSION COUNT Counts the number of rows represented by a bitmap.
BITMAP INDEX SINGLE VALUE Looks up the bitmap for a single key value in the index.
BITMAP RANGE SCAN Retrieves bitmaps for a key value range.
BITMAP FULL SCAN Performs a full scan of a bitmap index if there is no start or stop key.
BITMAP MERGE Merges several bitmaps resulting from a range scan into one bitmap.
BITMAP MINUS Subtracts bits of one bitmap from another. Row source is used for negated predicates. Can be used only if there are nonnegated predicates yielding a bitmap from which the subtraction can take place. An example appears in "Viewing Bitmap Indexes with EXPLAIN PLAN".
BITMAP OR Computes the bitwise OR of two bitmaps.
BITMAP AND Computes the bitwise AND of two bitmaps.
BITMAP KEY ITERATION Takes each row from a table row source and finds the corresponding bitmap from a bitmap index. This set of bitmaps are then merged into one bitmap in a following BITMAP MERGE operation.
CONNECT BY Retrieves rows in hierarchical order for a query containing a CONNECT BY clause.
CONCATENATION Operation accepting multiple sets of rows returning the union-all of the sets.
COUNT Operation counting the number of rows selected from a table.
COUNT STOPKEY Count operation where the number of rows returned is limited by the ROWNUM expression in the WHERE clause.
DOMAIN INDEX Retrieval of one or more rowids from a domain index. The options column contains information supplied by a user-defined domain index cost function, if any.
FILTER Operation accepting a set of rows, eliminates some of them, and returns the rest.
FIRST ROW Retrieval of only the first row selected by a query.
FOR UPDATE Operation retrieving and locking the rows selected by a query containing a FOR UPDATE clause.
HASH JOIN Operation joining two sets of rows and returning the result. This join method is useful for joining large data sets (DSS, batch).
   (These are join operations.) The CBO uses the smaller of the two tables/data sources to build a hash table on the join key in memory. Then it scans the larger table, probing the hash table to find the joined rows.
HASH JOIN ANTI Hash anti-join.
HASH JOIN SEMI Hash semi-join.
INDEX UNIQUE SCAN Retrieval of a single rowid from an index.
   (These are access methods.)
INDEX RANGE SCAN Retrieval of one or more rowids from an index. Indexed values are scanned in ascending order.
INDEX RANGE SCAN DESCENDING Retrieval of one or more rowids from an index. Indexed values are scanned in descending order.
INDEX FULL SCAN Retrieval of all rowids from an index when there is no start or stop key. Indexed values are scanned in ascending order.
INDEX FULL SCAN DESCENDING Retrieval of all rowids from an index when there is no start or stop key. Indexed values are scanned in descending order.
INDEX FAST FULL SCAN Retrieval of all rowids (and column values) using multiblock reads. No sorting order can be defined. Compares to a full table scan on only the indexed columns. Only available with the cost based optimizer.
INDEX SKIP SCAN Retrieval of rowids from a concatenated index without using the leading column(s) in the index. Introduced in Oracle9i. Only available with the cost based optimizer.
INLIST ITERATOR Iterates over the next operation in the plan for each value in the IN-list predicate.
INTERSECTION Operation accepting two sets of rows and returning the intersection of the sets, eliminating duplicates.
MERGE JOIN Operation accepting two sets of rows, each sorted by a specific value, combining each row from one set with the matching rows from the other, and returning the result.
   (These are join operations.)
MERGE JOIN OUTER Merge join operation to perform an outer join statement.
MERGE JOIN ANTI Merge anti-join.
MERGE JOIN SEMI Merge semi-join.
MERGE JOIN CARTESIAN Can result from one or more of the tables not having any join conditions to any other tables in the statement. Can occur even with a join, and it may not be flagged as CARTESIAN in the plan.
MINUS Operation accepting two sets of rows and returning rows appearing in the first set but not in the second, eliminating duplicates.
NESTED LOOPS Operation accepting two sets of rows, an outer set and an inner set. Oracle compares each row of the outer set with each row of the inner set, returning rows that satisfy a condition. This join method is useful for joining small subsets of data (OLTP). The join condition is an efficient way of accessing the second table.
   (These are join operations.)
NESTED LOOPS OUTER Nested loops operation to perform an outer join statement.
PARTITION SINGLE Access one partition.
PARTITION ITERATOR Access many partitions (a subset).
PARTITION ALL Access all partitions.
PARTITION INLIST Similar to iterator, but based on an IN-list predicate.
PARTITION INVALID Indicates that the partition set to be accessed is empty.
PARTITION Iterates over the next operation in the plan for each partition in the range given by the PARTITION_START and PARTITION_STOP columns. PARTITION describes partition boundaries applicable to a single partitioned object (table or index) or to a set of equi-partitioned objects (a partitioned table and its local indexes). The partition boundaries are provided by the values of PARTITION_START and PARTITION_STOP of the PARTITION.
REMOTE Retrieval of data from a remote database.
SEQUENCE Operation involving accessing values of a sequence.
SORT AGGREGATE Retrieval of a single row that is the result of applying a group function to a group of selected rows.
SORT UNIQUE Operation sorting a set of rows to eliminate duplicates.
SORT GROUP BY Operation sorting a set of rows into groups for a query with a GROUP BY clause.
SORT JOIN Operation sorting a set of rows before a merge-join.
SORT ORDER BY Operation sorting a set of rows for a query with an ORDER BY clause.
TABLE ACCESS FULL Retrieval of all rows from a table.
   (These are access methods.)
TABLE ACCESS SAMPLE Retrieval of sampled rows from a table.
TABLE ACCESS CLUSTER Retrieval of rows from a table based on a value of an indexed cluster key.
TABLE ACCESS HASH Retrieval of rows from table based on hash cluster key value.
TABLE ACCESS BY ROWID RANGE Retrieval of rows from a table based on a rowid range.
TABLE ACCESS SAMPLE BY ROWID RANGE Retrieval of sampled rows from a table based on a rowid range.
TABLE ACCESS BY USER ROWID If the table rows are located using user-supplied rowids.
TABLE ACCESS BY INDEX ROWID If the table is nonpartitioned and rows are located using index(es).
TABLE ACCESS BY GLOBAL INDEX ROWID If the table is partitioned and rows are located using only global indexes.
TABLE ACCESS BY LOCAL INDEX ROWID If the table is partitioned and rows are located using one or more local indexes and possibly some global indexes.
Partition Boundaries -- The partition boundaries might have been computed by:
A previous PARTITION step, in which case the PARTITION_START and PARTITION_STOP column values replicate the values present in the PARTITION step, and the PARTITION_ID contains the ID of the PARTITION step. Possible values for PARTITION_START and PARTITION_STOP are NUMBER(n), KEY, INVALID.

The TABLE ACCESS or INDEX step itself, in which case the PARTITION_ID contains the ID of the step. Possible values for PARTITION_START and PARTITION_STOP are NUMBER(n), KEY, ROW REMOVE_LOCATION (TABLE ACCESS only), and INVALID.
UNION Operation accepting two sets of rows and returning the union of the sets, eliminating duplicates.
VIEW Operation performing a view's query and then returning the resulting rows to another operation.


OTHER_TAG Column Text
Tag Text Meaning Interpretation
blank Serial execution.
SERIAL_FROM_REMOTE (S->R) Serial from remote Serial execution at a remote site.
SERIAL_TO_PARALLEL (S->P) Serial to parallel Serial execution; output of step is partitioned or broadcast to parallel execution servers.
PARALLEL_TO_PARALLEL (P->P) Parallel to parallel Parallel execution; output of step is repartitioned to second set of parallel execution servers.
PARALLEL_TO_SERIAL (P->S) Parallel to serial Parallel execution; output of step is returned to serial "query coordinator" process.
PARALLEL_COMBINED_WITH_PARENT (PWP) Parallel combined with parent Parallel execution; output of step goes to next step in same parallel process. No interprocess communication to parent.
PARALLEL_COMBINED_WITH_CHILD (PWC) Parallel combined with child Parallel execution; input of step comes from prior step in same parallel process. No interprocess communication from child.

Analyze Stale Job


BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'analyze_stale',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_STATS.gather_schema_stats(''LDBO'',CASCADE=>TRUE,options=>''GATHER STALE''); END;',
    start_date      => '01-APR-12 09.00.00 AM ASIA/CALCUTTA',
    repeat_interval => 'freq=minutely;bysecond=0;byminute=00,20,40;byhour=9,10,11,12,13,14,15,16,17;byday=MON,TUE,WED,THU,FRI',
    enabled         => TRUE,
    comments        => 'JOB to gather LDBO stale statistics every 20 min');
END;
/
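Once created, the job's state and run schedule can be confirmed from the standard scheduler views:

```sql
-- Check that the stale-stats job exists, is enabled, and when it runs next
SELECT job_name, state, last_start_date, next_run_date
FROM   dba_scheduler_jobs
WHERE  job_name = 'ANALYZE_STALE';
```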

Insert/update without undo and redo


Is it possible to issue an INSERT statement or an UPDATE statement without generating Redo Logs and Undo?

Even global temporary tables generate UNDO, and this UNDO information itself generates REDO. There is nothing you can do about this situation: temporary tables need UNDO, and that's the end of it.

Minimizing the amount of UNDO is quite simple: just insert records and select records. INSERT generates the smallest amount of UNDO, because rolling back an INSERT requires only the rowid. Conversely, DELETE statements generate the most UNDO, because the database has to store the entire record. Basically, to roll back an INSERT the database issues a DELETE, and to roll back a DELETE it issues an INSERT. An UPDATE generates a variable amount of UNDO, because we need the old versions of the changed columns; the more columns changed and the bigger they are, the larger the amount of UNDO generated.

UNDO is always protected by redo.

If you direct-path load the global temporary table (INSERT /*+ APPEND */) you can bypass undo ON THE TABLE, but not on the indexes. Hence you can reduce the amount of redo (marginally, typically, since it is usually the indexes that generate the most undo), but you cannot eliminate it.

The APPEND hint suggests that Oracle should use direct-path operations, which can result in faster inserts. Direct-path inserts lock the table exclusively, and the session must commit after the insert before it can select from the table again.


Direct loads on tables with indexes generate more archive logs and more UNDO.

Putting a primary key index into NOLOGGING mode did not help, because NOLOGGING applies only to a limited number of operations.

http://www.dba-oracle.com/oracle_news/news_plsql_forall_performance_insert.htm


1. The undo would normally be used to un-insert the rows in the event of a failure or rollback; with direct load, undo is not necessary since the new rows are added entirely above the high-water mark of the table.
2. When direct load (/*+ APPEND */) is used, Oracle can skip undo generation for the TABLE data, but not for the indexes.
3. In archivelog mode, REDO is normally still generated with direct load (/*+ APPEND */); it is UNDO that is skipped, and then only for the table itself.
4. If the table is placed into NOLOGGING mode, then redo for the table, as well as undo, can be skipped. Again, only for the table, not for any indexes on the table.
5. The small amount of redo that is still generated in a NOLOGGING direct load is used to protect the data dictionary.
6. To prevent archivelog generation during a direct load, the database and tablespace should not be in force logging mode (check v$database and dba_tablespaces).


http://oradbatips.blogspot.in/2007/11/tip-63-direct-load-and-redo-log.html

-- Note: NOLOGGING is a table attribute, not something you can specify inside the INSERT.
-- In "insert /*+ append */ into tbl1 nologging select ...", the word "nologging" is
-- silently parsed as a table alias and has no effect.
alter table tbl1 nologging;
insert /*+ append */ into tbl1 select * from dba_objects;
commit;


select a.value,b.name from v$sesstat a,v$statname b where a.statistic#=b.statistic# and a.sid in (select sid from v$mystat) and name like '%redo size%';
select sum(undoblks)*8192/1024/1024 UNDOMB from v$undostat where trunc(begin_time,'DD')=trunc(sysdate,'DD');


Final Solution:
To use direct load, take advantage of reduced archivelog generation, and get better performance, always consider the following:
1. Disable indexes during the direct load and rebuild them afterwards.
2. Use /*+ APPEND */ together with the table's NOLOGGING attribute.
3. Make sure the database and tablespace are not in force logging mode.
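Putting the three points together, a typical direct-path load might look like the sketch below (tbl1 and the index name tbl1_idx1 are placeholders for your own objects):

```sql
-- 1. Take index maintenance out of the load path (placeholder index name)
alter index tbl1_idx1 unusable;
alter session set skip_unusable_indexes = true;

-- 2. NOLOGGING attribute on the table plus a direct-path insert
alter table tbl1 nologging;
insert /*+ append */ into tbl1 select * from dba_objects;
commit;

-- 3. Rebuild the index; NOLOGGING here also skips redo for the rebuild
alter index tbl1_idx1 rebuild nologging;

-- Restore the logging attributes if the objects should be fully recoverable
alter table tbl1 logging;
alter index tbl1_idx1 logging;
```

Remember that anything loaded or rebuilt NOLOGGING cannot be recovered from the archived redo, so take a backup after the load.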




Use a combination of the following two features to maximize the speed of insert statements:


Set the table's logging attribute to NOLOGGING; this minimizes the generation of redo
for direct-path operations (this feature has no effect on regular DML operations).

Use a direct path loading feature, such as the following:

INSERT /*+ APPEND */ on statements that use a subquery to determine which
records are inserted

INSERT /*+ APPEND_VALUES */ on statements that use a VALUES clause





Analyze Fast



exec  DBMS_SCHEDULER.drop_job ('analyze');

BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'analyze',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_STATS.gather_schema_stats(''LDBO'',CASCADE=>TRUE,degree => DBMS_STATS.DEFAULT_DEGREE); END;',
    start_date      => '01-APR-12 06.00.00 AM ASIA/CALCUTTA',
    repeat_interval=> 'FREQ=HOURLY;INTERVAL=2',
    end_date        => '02-APR-16 11.00.00 PM ASIA/CALCUTTA',
    enabled         => TRUE,
    comments        => 'JOB to gather LDBO statistics every 2 hours');
END;
/

exec  DBMS_SCHEDULER.run_job ('analyze');

SESSION_CACHED_CURSORS


alter system set SESSION_CACHED_CURSORS=500 scope=spfile;


If an application repeatedly issues parse calls on the same set of SQL statements, the reopening of the session cursors can affect system performance. Session cursors can be stored in a session cursor cache. This feature can be particularly useful for applications designed using Oracle Forms, because switching from one form to another closes all session cursors associated with the first form.

The Oracle Server uses the shared SQL area to determine whether more than three parse requests have been issued on a given statement. If so, the Oracle Server assumes the session cursor associated with the statement should be cached and moves the cursor into the session cursor cache. Subsequent requests to parse that SQL statement by the same session will then find the cursor in the session cursor cache.

To enable caching of session cursors, use SESSION_CACHED_CURSORS to specify the maximum number of session cursors kept in the cache. A Least Recently Used (LRU) algorithm removes entries in the session cursor cache to make room for new entries when required.

To determine whether the session cursor cache is sufficiently large for your instance, examine the session statistic "session cursor cache hits" in the V$SESSTAT view. This statistic counts the number of times a parse call found a cursor in the session cursor cache. If this statistic is a relatively low percentage of the total parse call count for the session, you should consider setting SESSION_CACHED_CURSORS to a larger value.

If SESSION_CACHED_CURSORS is not set, it defaults to 0 and no cursors will be cached for your session. (Your cursors will still be cached in the shared pool, but your session will have to find them there.) If it is set, then when a parse request is issued, Oracle checks the library cache to see whether more than 3 parse requests have been issued for that statement. If so, Oracle moves the session cursor associated with that statement into the session cursor cache. Subsequent parse requests for that statement by the same session are then filled from the session cursor cache, thus avoiding even a soft parse. (Technically, a parse can't be completely avoided; a "softer" soft parse is done that's faster and requires less CPU.)

In the session cursor cache, Oracle manages the cached cursors using an LRU list. Once more than SESSION_CACHED_CURSORS closed cursors are cached, Oracle starts dropping cached cursors off the LRU end of the list whenever it needs to make room to cache a new cursor.
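The "more than 3 parses" threshold can be observed from your own session: run the same statement a few times and watch the 'session cursor cache count' statistic grow (the exact count depends on what else the session has already run):

```sql
-- Cursors currently cached for this session
select s.value
from   v$mystat s, v$statname n
where  s.statistic# = n.statistic#
and    n.name = 'session cursor cache count';
```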


If the session cursor cache count is maxed out, 'session cursor cache hits' is low compared to all parses, and you suspect that the application is re-submitting the same queries for parsing repeatedly, then increasing SESSION_CACHED_CURSORS may help with latch contention and give a slight boost to performance. Note that if your application is not resubmitting the same queries for parsing repeatedly, then 'session cursor cache hits' will be low and the session cursor cache count may be maxed out, but caching cursors by session won't help at all. For example, if your application is using a lot of unsharable SQL, raising this parameter won't get you anything.



If this parameter is set to a high value, the amount of fragmentation in the shared pool may be increased. The primary cause of library cache latch contention is fragmentation of the shared pool, a common symptom of which is the ORA-04031 error.


select 'session_cached_cursors' parameter, lpad(value, 5) value,
       decode(value, 0, ' n/a', to_char(100 * used / value, '9990') || '%') usage
from   ( select max(s.value) used from v$statname n, v$sesstat s
         where n.name = 'session cursor cache count' and s.statistic# = n.statistic# ),
       ( select value from v$parameter where name = 'session_cached_cursors' )
union all
select 'open_cursors', lpad(value, 5), to_char(100 * used / value, '9990') || '%'
from   ( select max(sum(s.value)) used from v$statname n, v$sesstat s
         where n.name in ('opened cursors current', 'session cursor cache count')
         and s.statistic# = n.statistic# group by s.sid ),
       ( select value from v$parameter where name = 'open_cursors' );




SELECT a.value curr_cached, p.value max_cached,
       s.username, s.sid, s.serial#
FROM   v$sesstat a, v$statname b, v$session s, v$parameter2 p
WHERE  a.statistic# = b.statistic#
AND    s.sid = a.sid
--AND  a.sid = &sid
AND    p.name = 'session_cached_cursors'
AND    b.name = 'session cursor cache count';


The above query gave a result where CURR_CACHED = MAX_CACHED. If the session cursor cache count is maxed out, 'session cursor cache hits' is low compared to all parses, and you suspect that the application is re-submitting the same queries for parsing repeatedly, then increasing SESSION_CACHED_CURSORS will help with latch contention and can give a boost to performance.



Evaluate cached cursors for sessions as compared to parsing

select a.sid, a.value parse_cnt, 
       (select x.value
        from   v$sesstat x, v$statname y
        where  x.sid = a.sid
        and    y.statistic# = x.statistic#
        and    y.name = 'session cursor cache hits') cache_cnt
from   v$sesstat a, v$statname b
where  b.statistic# = a.statistic#
and    b.name = 'parse count (total)'
and    value > 0;

The CACHE_CNT ('session cursor cache hits') of a session should be compared to the PARSE_CNT ('parse count (total)'); if the difference is high, consider increasing the SESSION_CACHED_CURSORS parameter.


------------------------------------------

If you run this query against your DB, you'll see what kind of cursor caching your system is doing.

The sum of the three values should be 100%:

select
  to_char(100 * sess / calls, '999999999990.00') || '%' cursor_cache_hits,
  to_char(100 * (calls - sess - hard) / calls, '999990.00') || '%' soft_parses,
  to_char(100 * hard / calls, '999990.00') || '%' hard_parses
from
  ( select value calls from v$sysstat where name = 'parse count (total)' ),
  ( select value hard  from v$sysstat where name = 'parse count (hard)' ),
  ( select value sess  from v$sysstat where name = 'session cursor cache hits' );


If the value of the first column is below 80%, consider raising the SESSION_CACHED_CURSORS parameter.

----------------------------------------

Steps for executing a SELECT query in Oracle:
1) Parse
  1.1) Syntactic check - Oracle checks whether this is valid SQL.
  1.2) Semantic check - Oracle checks the SQL in terms of data structures, privileges, etc.
  1.3) Shared pool check - Oracle checks whether the statement is already parsed and optimized. If yes, it goes directly to step (4) - Oracle does a "soft parse". If not, it goes through steps (2) and (3) - Oracle does a "hard parse".
2) Optimize - the CBO generates different execution plans and chooses the one with the lowest cost.
3) Generate source - a row-source generator prepares the code to be executed.
4) Execute.
5) Output the result.

Steps 1 to 3 form the parsing of the query, while steps 4 and 5 are the execution itself. When you run a query, step 1 is always performed. After these steps, Oracle transforms the query text into a hash value and looks in the shared pool library cache for a query with the same hash value (i.e. one that some session has used at some point). If there is one, it compares the query text with the one found in the library cache to validate that they are exactly the same (this extra step is performed because several different queries can have the same hash value), and if they are, it proceeds to execute the query. This is what we call a soft parse. If the query does not exist, Oracle performs steps 2 and 3. This is known as a hard parse. A hard parse is very costly for the Oracle server, since it involves taking a number of latches in the SGA and is CPU-intensive.

As we know, every query run involves a cursor (a cursor is a memory area allocated to the execution of the query). Ideally, an application opens a cursor, runs the statement many times, and then closes it. Many applications, such as Forms, do not run cursors this way, which means the cursors cannot be reused and have to be reopened on every execution. To reduce this problem, we can use the initialization parameter SESSION_CACHED_CURSORS, which gives us a "softer" soft parse. If we set the parameter to 100, Oracle keeps up to 100 cursors open for us to reuse, so we avoid having to reopen them each time. This memory area for cursor handling is maintained with an LRU list. Oracle recommends setting the parameter to 50 in the first instance and then monitoring to see whether its value should be increased. This parameter must be set with the value of OPEN_CURSORS in mind.




Soft parses still reduce the scalability of a system; ideally a SQL statement should be parsed once and executed many times.

A soft parse will always consume fewer resources, less time, and fewer memory latches than a hard parse, which is a good thing, and moreover many DBAs expect sessions connecting to the database to issue SQL statements that are already present in the shared pool. So why would many soft parses still reduce scalability? Because every parse call, soft or hard, still requires serialization on shared library cache structures.

Thursday, June 7, 2012


Steps to reclaim space in the SYSAUX tablespace


SELECT *  FROM dba_jobs WHERE SCHEMA_USER='SYSMAN';

SELECT *  FROM SYSMAN.mgmt_metrics_1hour;

SELECT *  FROM SYSMAN.MGMT_SEVERITY WHERE LOAD_TIMESTAMP<TRUNC(ADD_MONTHS(SYSDATE,-2));

DELETE FROM SYSMAN.MGMT_SEVERITY WHERE LOAD_TIMESTAMP<TRUNC(ADD_MONTHS(SYSDATE,-2));
COMMIT;

SELECT trigger_name FROM dba_triggers WHERE trigger_name LIKE '%SEV%DEL%';

spool c:\del_severity.sql
SELECT 'exec sysman.em_severity.delete_current_severity('''|| target_guid || ''','''|| metric_guid ||''','''|| key_value ||''');' FROM sysman.MGMT_SEVERITY;
spool off
@c:\del_severity.sql
commit;

-------------
EXEC em_severity.delete_current_severity('142E273EE3BDA54ECF9C42EF7CCB7616','6E65075DA52ACA744B4B8C3FCB018289','/billing01');

EXEC em_severity.delete_current_severity('85DE1181E5B760BBB8F70F97FFBD7070','10633143D11A8FCF6CB1A732CEE5352A','Sat Dec 31 20:03:38 2011/115843');
COMMIT;

EXEC em_severity.delete_current_severity('85DE1181E5B760BBB8F70F97FFBD7070','10633143D11A8FCF6CB1A732CEE5352A','Sat Dec 31 20:47:33 2011/134506');

EXEC em_severity.delete_current_severity('85DE1181E5B760BBB8F70F97FFBD7070','3E6F70DB22758B7B9756EF342180E7BB','SYSAUX');

EXEC em_severity.delete_current_severity('85DE1181E5B760BBB8F70F97FFBD7070','F95BA0D95585002889E1ABF92B2DA7C3','ARBORAD');

EXEC em_severity.delete_current_severity('85DE1181E5B760BBB8F70F97FFBD7070','F95BA0D95585002889E1ABF92B2DA7C3','SYSMAN');

COMMIT;
-----------------------------------
SELECT s.target_guid,
           s.metric_guid,
           s.key_value
       FROM mgmt_targets t JOIN mgmt_current_severity s
           ON s.target_guid != t.target_guid
      WHERE LOWER(t.target_name) LIKE '%myhost%';
     

SELECT DBMS_STATS.get_stats_history_retention FROM dual;
EXEC DBMS_STATS.alter_stats_history_retention(15);
EXEC DBMS_STATS.PURGE_STATS(SYSDATE-15);
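Before purging, it is worth checking how far back the optimizer statistics history actually goes (both calls are standard DBMS_STATS functions):

```sql
-- Oldest timestamp for which statistics history is available
SELECT DBMS_STATS.get_stats_history_availability FROM dual;

-- Current retention setting, for comparison
SELECT DBMS_STATS.get_stats_history_retention FROM dual;
```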

col Mb FORM 9,999,999
col SEGMENT_NAME FORM a40
col SEGMENT_TYPE FORM a6
SET lines 120
SELECT SUM(bytes/1024/1024)Mb,segment_name,segment_type FROM dba_segments
 WHERE  tablespace_name='SYSAUX'
AND segment_name LIKE 'WRI$_OPTSTAT%'
AND segment_type='TABLE'
GROUP BY segment_name,segment_type ORDER BY 1 ASC;


spool c:\rebuild_index.sql
SELECT 'alter index '||segment_name||' rebuild online parallel (degree 14);' FROM dba_segments WHERE tablespace_name='SYSAUX'
AND segment_name LIKE '%OPT%' AND segment_type='INDEX';
spool off
@c:\rebuild_index.sql


SELECT *  FROM dba_indexes WHERE owner='SYS' AND status!='VALID';




Clear Table Records after Database Startup


If a user does not log out properly (killed task / restarted services), the record of that user will remain in the tblrklogindetails table.

We can create a trigger that will delete all records whenever the database service is restarted:

create or replace trigger tg_clear_logindetails
  after startup on database
declare
  j integer;
  pragma autonomous_transaction;
begin
  -- run the truncate via a job so it executes after startup completes
  dbms_job.submit(j, 'begin execute immediate ''truncate table ldbo.tblrklogindetails''; end;');
  commit;
end tg_clear_logindetails;
/


If there are no records in the table, the following error message will pop up.

Please change this message to a more user-friendly one, such as:

"Session is disconnected from the server. Please log in again."



In the web application, the client makes a connection with the server through IIS.

1)      A user logs into the rk application and fires a report.
2)      The Oracle service is restarted in between. An "Oracle not connected" error starts appearing because the connection between client and server is broken.
3)      The error keeps appearing until we log out and log back into the application.
4)      An iisreset helps reconnect client and server without logging out of the application.


Resolution

When the database is shut down, the application should automatically shut down when it shows "Oracle not connected".

The tblrklogindetails records should be cleared.

Wednesday, June 6, 2012

Schedule Job for specific time interval


BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'Test',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN sp_rkexecutejob(13,'' ''); END;',
    start_date      => '01-APR-12 09.00.00 AM ASIA/CALCUTTA',
    repeat_interval=> 'freq=minutely;bysecond=0;byminute=00,15,30,45;byhour=9,10,11,12,13,14,15,16;byday=MON,TUE,WED,THU,FRI',
    enabled         => TRUE,
    comments        => 'JOB for rk test');
END;
/


The repeat_interval parameter of the scheduler configuration should have a freq value of minutely, an interval value of 5, and a byhour value of 9-17.


BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'analyze_queue',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN dbms_stats.gather_table_stats(''LDBO'',''TBLRKQUEUE'' ,force=>TRUE); END;',
    start_date      => '01-APR-12 10.00.00 AM ASIA/CALCUTTA',
    repeat_interval=> 'freq=minutely;interval=5;byhour=9,10,11,12,13,14,15,16,17;byday=MON,TUE,WED,THU,FRI',
    enabled         => TRUE,
    comments        => 'JOB to gather Queue Table statistics');
END;
/
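Whether either job is actually firing on schedule can be checked from the scheduler's run history (standard 11g view; job names are stored in uppercase):

```sql
-- Recent runs of the scheduled jobs, newest first
SELECT job_name, status, actual_start_date, run_duration
FROM   dba_scheduler_job_run_details
WHERE  job_name IN ('TEST', 'ANALYZE_QUEUE')
ORDER  BY actual_start_date DESC;
```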

Friday, June 1, 2012

IIS Performance Tuning


1)     Recycling Application Pool

Daily they have to restart IIS to clear the application pool.

An application pool is a set of one or more applications assigned to a set of one or more worker processes.

It is useful to recycle periodically because web apps might have a memory leaks or other conditions that get worse over time.

Schedule an application pool recycle once a day during off-peak hours.




2)     Dedicated Application Pool for different applications


3)     Web Garden on IIS (Max Worker process) Allocate worker process to different CPU core
By default each application pool runs with a single worker process (w3wp.exe). We can assign multiple worker processes to a single application pool; an application pool with multiple worker processes is called a "web garden". Multiple worker processes for the same application pool can sometimes provide better throughput and application response time, and each worker process has its own threads and its own memory space.

Application Pool > Max Worker process


IIS has a Maximum Worker Processes setting per app pool. For my app pool, I increased this value to 16 on our 8-core (16 with hyperthreading) Win2008 Xeon server in hopes of getting more parallel processing of client requests for our IIS-hosted WCF service (net.tcp, .NET 4.0). I am not sure that this is having the desired effect, as I don't see 16 worker processes in Task Manager despite a heavy load from my multi-threaded test client.

Web gardens were designed for one single reason: offering applications that are not CPU-bound but execute long-running requests the ability to scale without using up all threads available in the worker process.
Examples might be things like:
Applications that make long-running database requests (e.g. high-computation database transactions)
Applications that have threads occupied by long-running synchronous and network-intensive transactions

Max Worker Processes = 10
Max Worker Processes = number of CPU cores


4)     Limit the queue length for application pools

When running an IIS server in worker process isolation mode, limit the queue length for application pools to prevent large numbers of requests from queuing and overloading the IIS web server.
Adding a new request to the queue can result in exceeding the maximum queue length. In turn, the IIS server may reject the requests and send a 503 error to the client. Keep the queue length shorter to boost IIS performance.


5)     IIS HTTP compression

Enable HTTP compression on the Microsoft IIS server to use bandwidth more effectively. This results in faster response times between the IIS web server and compression-enabled browsers.
IIS can provide three compression options, including static files only, dynamic application responses, and both static and dynamic application responses.
Enable dynamic compression for the server when bandwidth is an issue as indicated by IIS server monitoring, but remember that compression consumes CPU time and memory resources significantly.
Sites > Compression>

6)      Use the Expires header in IIS7

Configure the HTTP Expires header to improve IIS server performance. Generally, continuous HTTP requests sent from new website visitors to display pages create a massive load on the IIS server.
Configuring the Expires header allows caching pages (and page elements such as images, scripts, css, etc.) on the client side and helps avoid unnecessary HTTP requests.

In IIS 7, Expires headers can be enabled in the system.webServer section of the web.config file. Using the Expires header, you can determine how long files can be cached on the client side. This may be a certain number of hours or days in the future or a fixed date.
7)     Windows 2008 Network Settings

8)     Change max concurrent requests per CPU setting

The maximum concurrent requests value is calculated by the following formula:

(maxWorkerThreads * number of CPUs) - (minFreeThreads * number of CPUs)

This is 12 by default on a single-proc machine, so performance does improve by adding more cores/CPUs (hardware) to your system.

If you have two processors with the configuration below:

maxWorkerThreads   100
minFreeThreads      88

then the maximum concurrent requests would be (100 * 2) - (88 * 2) = 24 concurrent requests.
If you add two more cores, the maximum concurrent requests would be (100 * 4) - (88 * 4) = 48 concurrent requests.


HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\ASP.NET\2.0.50727\MaxConcurrentRequestsPerCPU

Set this DWORD value to something higher than 12 (the default).
Changing the maxConcurrentRequestsPerCPU setting from 12 to 5000 allowed IIS to process more requests concurrently.

