Tuesday, July 3, 2012
Explain Plan Definitions
Operation Option Description
AND-EQUAL This step will have two or more child steps, each of which returns a set of ROWIDs. The AND-EQUAL operation selects only those ROWIDs that are returned by all of the child operations.
BITMAP CONVERSION TO ROWIDS Converts a bitmap from a bitmap index to a set of ROWIDs that can be used to retrieve the actual data.
BITMAP CONVERSION FROM ROWIDS Converts a set of ROWIDs into a bitmapped representation.
BITMAP CONVERSION COUNT Counts the number of rows represented by a bitmap.
BITMAP INDEX SINGLE VALUE Looks up the bitmap for a single key value in the index.
BITMAP RANGE SCAN Retrieves bitmaps for a key value range.
BITMAP FULL SCAN Performs a full scan of a bitmap index if there is no start or stop key.
BITMAP MERGE Merges several bitmaps resulting from a range scan into one bitmap.
BITMAP MINUS Subtracts bits of one bitmap from another. Row source is used for negated predicates. Can be used only if there are nonnegated predicates yielding a bitmap from which the subtraction can take place. An example appears in "Viewing Bitmap Indexes with EXPLAIN PLAN".
BITMAP OR Computes the bitwise OR of two bitmaps.
BITMAP AND Computes the bitwise AND of two bitmaps.
BITMAP KEY ITERATION Takes each row from a table row source and finds the corresponding bitmap from a bitmap index. This set of bitmaps is then merged into one bitmap in a following BITMAP MERGE operation.
CONNECT BY Retrieves rows in hierarchical order for a query containing a CONNECT BY clause.
CONCATENATION Operation accepting multiple sets of rows and returning the union-all of the sets.
COUNT Operation counting the number of rows selected from a table.
COUNT STOPKEY Count operation where the number of rows returned is limited by the ROWNUM expression in the WHERE clause.
DOMAIN INDEX Retrieval of one or more rowids from a domain index. The OPTIONS column contains information supplied by a user-defined domain index cost function, if any.
FILTER Operation accepting a set of rows, eliminating some of them, and returning the rest.
FIRST ROW Retrieval of only the first row selected by a query.
FOR UPDATE Operation retrieving and locking the rows selected by a query containing a FOR UPDATE clause.
HASH JOIN Operation joining two sets of rows and returning the result. This join method is useful for joining large data sets (DSS, batch workloads), typically with equijoin conditions.
(These are join operations.) CBO uses the smaller of the two tables/data sources to build a hash table on the join key in memory. Then it scans the larger table, probing the hash table to find the joined rows.
HASH JOIN ANTI Hash anti-join.
HASH JOIN SEMI Hash semi-join.
INDEX UNIQUE SCAN Retrieval of a single rowid from an index.
(These are access methods.)
INDEX RANGE SCAN Retrieval of one or more rowids from an index. Indexed values are scanned in ascending order.
INDEX RANGE SCAN DESCENDING Retrieval of one or more rowids from an index. Indexed values are scanned in descending order.
INDEX FULL SCAN Retrieval of all rowids from an index when there is no start or stop key. Indexed values are scanned in ascending order.
INDEX FULL SCAN DESCENDING Retrieval of all rowids from an index when there is no start or stop key. Indexed values are scanned in descending order.
INDEX FAST FULL SCAN Retrieval of all rowids (and column values) using multiblock reads. No sort order can be defined. Comparable to a full table scan over only the indexed columns. Only available with the cost-based optimizer.
INDEX SKIP SCAN Retrieval of rowids from a concatenated index without using the leading column(s) of the index. Introduced in Oracle9i. Only available with the cost-based optimizer.
INLIST ITERATOR Iterates over the next operation in the plan for each value in the IN-list predicate.
INTERSECTION Operation accepting two sets of rows and returning the intersection of the sets, eliminating duplicates.
MERGE JOIN Operation accepting two sets of rows, each sorted by a specific value, combining each row from one set with the matching rows from the other, and returning the result.
(These are join operations.)
MERGE JOIN OUTER Merge join operation to perform an outer join statement.
MERGE JOIN ANTI Merge anti-join.
MERGE JOIN SEMI Merge semi-join.
MERGE JOIN CARTESIAN Can result from one or more of the tables not having any join conditions to any other tables in the statement. Can occur even with a join, and it may not be flagged as CARTESIAN in the plan.
MINUS Operation accepting two sets of rows and returning rows appearing in the first set but not in the second, eliminating duplicates.
NESTED LOOPS Operation accepting two sets of rows, an outer set and an inner set. Oracle compares each row of the outer set with each row of the inner set, returning rows that satisfy a condition. This join method is useful for joining small subsets of data (OLTP). The join condition is an efficient way of accessing the second table.
(These are join operations.)
NESTED LOOPS OUTER Nested loops operation to perform an outer join statement.
PARTITION SINGLE Access one partition.
PARTITION ITERATOR Access many partitions (a subset).
PARTITION ALL Access all partitions.
PARTITION INLIST Similar to iterator, but based on an IN-list predicate.
PARTITION INVALID Indicates that the partition set to be accessed is empty.
Iterates over the next operation in the plan for each partition in the range given by the PARTITION_START and PARTITION_STOP columns. PARTITION describes partition boundaries applicable to a single partitioned object (table or index) or to a set of equi-partitioned objects (a partitioned table and its local indexes). The partition boundaries are provided by the values of PARTITION_START and PARTITION_STOP of the PARTITION.
REMOTE Retrieval of data from a remote database.
SEQUENCE Operation involving accessing values of a sequence.
SORT AGGREGATE Retrieval of a single row that is the result of applying a group function to a group of selected rows.
SORT UNIQUE Operation sorting a set of rows to eliminate duplicates.
SORT GROUP BY Operation sorting a set of rows into groups for a query with a GROUP BY clause.
SORT JOIN Operation sorting a set of rows before a merge-join.
SORT ORDER BY Operation sorting a set of rows for a query with an ORDER BY clause.
TABLE ACCESS FULL Retrieval of all rows from a table.
(These are access methods.)
TABLE ACCESS SAMPLE Retrieval of sampled rows from a table.
TABLE ACCESS CLUSTER Retrieval of rows from a table based on a value of an indexed cluster key.
TABLE ACCESS HASH Retrieval of rows from table based on hash cluster key value.
TABLE ACCESS BY ROWID RANGE Retrieval of rows from a table based on a rowid range.
TABLE ACCESS SAMPLE BY ROWID RANGE Retrieval of sampled rows from a table based on a rowid range.
TABLE ACCESS BY USER ROWID If the table rows are located using user-supplied rowids.
TABLE ACCESS BY INDEX ROWID If the table is nonpartitioned and rows are located using index(es).
TABLE ACCESS BY GLOBAL INDEX ROWID If the table is partitioned and rows are located using only global indexes.
TABLE ACCESS BY LOCAL INDEX ROWID If the table is partitioned and rows are located using one or more local indexes and possibly some global indexes.
Partition Boundaries -- The partition boundaries might have been computed by:
A previous PARTITION step, in which case the PARTITION_START and PARTITION_STOP column values replicate the values present in the PARTITION step, and the PARTITION_ID contains the ID of the PARTITION step. Possible values for PARTITION_START and PARTITION_STOP are NUMBER(n), KEY, INVALID.
The TABLE ACCESS or INDEX step itself, in which case the PARTITION_ID contains the ID of the step. Possible values for PARTITION_START and PARTITION_STOP are NUMBER(n), KEY, ROW REMOVE_LOCATION (TABLE ACCESS only), and INVALID.
UNION Operation accepting two sets of rows and returning the union of the sets, eliminating duplicates.
VIEW Operation performing a view's query and then returning the resulting rows to another operation.
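To see these operations for your own statements, you can generate and display a plan with DBMS_XPLAN. A minimal sketch (the emp/dept tables and join are illustrative, not from this post):

```sql
EXPLAIN PLAN FOR
  SELECT e.ename, d.dname
  FROM   emp e JOIN dept d ON e.deptno = d.deptno;

-- Shows the plan table contents, including the Operation and Options
-- columns described above.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```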
OTHER_TAG Column Text
Tag Text Meaning Interpretation
blank Serial execution.
SERIAL_FROM_REMOTE (S->R) Serial from remote Serial execution at a remote site.
SERIAL_TO_PARALLEL (S->P) Serial to parallel Serial execution; output of step is partitioned or broadcast to parallel execution servers.
PARALLEL_TO_PARALLEL (P->P) Parallel to parallel Parallel execution; output of step is repartitioned to second set of parallel execution servers.
PARALLEL_TO_SERIAL (P->S) Parallel to serial Parallel execution; output of step is returned to serial "query coordinator" process.
PARALLEL_COMBINED_WITH_PARENT (PWP) Parallel combined with parent Parallel execution; output of step goes to next step in same parallel process. No interprocess communication to parent.
PARALLEL_COMBINED_WITH_CHILD (PWC) Parallel combined with child Parallel execution; input of step comes from prior step in same parallel process. No interprocess communication from child.
Analyze Stale Job
BEGIN
DBMS_SCHEDULER.create_job (
job_name => 'analyze_stale',
job_type => 'PLSQL_BLOCK',
job_action => 'BEGIN DBMS_STATS.gather_schema_stats(''LDBO'',CASCADE=>TRUE,options=>''GATHER STALE''); END;',
start_date => '01-APR-12 09.00.00 AM ASIA/CALCUTTA',
repeat_interval=> 'freq=minutely;bysecond=0;byminute=00,20,40;byhour=9,10,11,12,13,14,15,16,17;byday=MON,TUE,WED,THU,FRI',
enabled => TRUE,
comments => 'JOB to gather LDBO stale statistics every 20 min');
END;
/
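After creating the job, you can verify it was registered and see when it will run next; a sketch against the scheduler views (job names are stored in uppercase):

```sql
SELECT job_name, enabled, state, last_start_date, next_run_date
FROM   dba_scheduler_jobs
WHERE  job_name = 'ANALYZE_STALE';
```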
Insert/Update Without Undo and Redo
Is it possible to issue an INSERT statement or an UPDATE statement without generating Redo Logs and Undo?
Even global temporary tables generate UNDO, and this UNDO information itself generates REDO. There is nothing you can do about this situation: temporary tables need UNDO and that's the end of it.
To minimize the amount of UNDO is quite simple: just insert records and select records. INSERT generates the smallest amount of UNDO, because rolling back an INSERT requires simply the rowid. Conversely DELETE statements generate the most UNDO, because the database has to store the entire record. Basically, to rollback an INSERT issue a DELETE, to rollback a DELETE issue an INSERT. An UPDATE generates a variable amount of UNDO, because we need the old versions of the changed columns; the more columns changed and the bigger they are, the larger the amount of UNDO generated.
UNDO is always protected by redo.
If you direct path the global temporary table ( insert /*+ APPEND */) you can bypass undo ON THE TABLE - but not on the indexes. Hence you can reduce (marginally typically as it is usually indexes that generate the most undo) the amount of redo, but you cannot eliminate it.
The APPEND hint suggests that Oracle use direct-path operations, which can result in faster inserts. If I recall correctly, you should have exclusive access to the table. It is important to commit after the insert; until you do, the same session cannot select from the table.
A direct load into a table with indexes still generates more archive logs and more UNDO, because of the index maintenance.
Putting a primary key index in NOLOGGING mode did not help because NOLOGGING applies only to a limited number of operations.
http://www.dba-oracle.com/oracle_news/news_plsql_forall_performance_insert.htm
1. The undo would normally be used to un-insert the rows in the event of a failure or rollback; with a direct load, undo is not necessary since the new rows are added entirely above the high-water mark of the table.
2. When a direct load (/*+ APPEND */) is used, Oracle can skip undo generation for the TABLE data, but not for the indexes.
3. In archivelog mode, REDO is normally still generated with a direct load (/*+ APPEND */); it is UNDO that is skipped, and then only for the table itself.
4. If the table is placed into NOLOGGING mode (ALTER TABLE ... NOLOGGING), then redo for the table as well as undo can be skipped. Again, only for the table, not for any indexes on the table.
5. The small amount of redo that is still generated during a NOLOGGING direct load protects the data dictionary.
6. To prevent archivelog generation during a direct load, the database and tablespace should not be in FORCE LOGGING mode (check v$database and dba_tablespaces).
http://oradbatips.blogspot.in/2007/11/tip-63-direct-load-and-redo-log.html
-- Note: writing "nologging" inside the INSERT is parsed as a table alias and has
-- no effect; set it as a table attribute instead: ALTER TABLE tbl1 NOLOGGING;
insert /*+ append */ into tbl1 select * from dba_objects;
select a.value,b.name from v$sesstat a,v$statname b where a.statistic#=b.statistic# and a.sid in (select sid from v$mystat) and name like '%redo size%';
select sum(undoblks)*8192/1024/1024 UNDOMB from v$undostat where trunc(begin_time,'DD')=trunc(sysdate,'DD');
Final Solution:
To use direct load and take advantage of less archivelog generation and better performance, always consider the following:
1. Disable indexes during direct load.
2. Make sure to use both /*+ append */ with nologging at the same time.
3. Make sure the database and tablespace are not in FORCE LOGGING mode.
Use a combination of the following two features to maximize the speed of insert statements:
• Set the table's logging attribute to NOLOGGING; this minimizes the generation of redo for direct path operations (this feature has no effect on regular DML operations).
• Use a direct path loading feature, such as the following:
- INSERT /*+ APPEND */ on inserts that use a subquery for determining which records are inserted
- INSERT /*+ APPEND_VALUES */ on inserts that use a VALUES clause
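Putting these points together, a direct-path load might be sketched as follows (tbl1 and tbl1_idx are illustrative names; this assumes the database and tablespace are not in FORCE LOGGING mode):

```sql
ALTER TABLE tbl1 NOLOGGING;                  -- table-level NOLOGGING attribute
ALTER INDEX tbl1_idx UNUSABLE;               -- disable index maintenance during the load
ALTER SESSION SET skip_unusable_indexes = TRUE;

INSERT /*+ APPEND */ INTO tbl1               -- direct-path insert, above the high-water mark
SELECT * FROM dba_objects;
COMMIT;                                      -- must commit before selecting from tbl1

ALTER INDEX tbl1_idx REBUILD NOLOGGING;      -- rebuild the index after the load
ALTER TABLE tbl1 LOGGING;                    -- restore logging (and take a backup afterwards)
```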
Labels:
performance tuning
Analyze Fast
exec DBMS_SCHEDULER.drop_job ('analyze');
BEGIN
DBMS_SCHEDULER.create_job (
job_name => 'analyze',
job_type => 'PLSQL_BLOCK',
job_action => 'BEGIN DBMS_STATS.gather_schema_stats(''LDBO'',CASCADE=>TRUE,degree => DBMS_STATS.DEFAULT_DEGREE); END;',
start_date => '01-APR-12 06.00.00 AM ASIA/CALCUTTA',
repeat_interval=> 'FREQ=HOURLY;INTERVAL=2',
end_date => '02-APR-16 11.00.00 PM ASIA/CALCUTTA',
enabled => TRUE,
comments => 'JOB to gather LDBO statistics every 2 hours');
END;
/
exec DBMS_SCHEDULER.run_job ('analyze');
Labels:
performance tuning
SESSION_CACHED_CURSORS
alter system set SESSION_CACHED_CURSORS=500 scope=spfile;
If an application repeatedly issues parse calls on the same set of SQL statements, the reopening of the session cursors can affect system performance. Session cursors can be stored in a
session cursor cache. This feature can be particularly useful for applications designed using Oracle Forms because switching from one form to another closes all session cursors associated with
the first form.
The Oracle Server uses the shared SQL area to determine whether more than three parse requests have been issued on a given statement. If so, the Oracle Server assumes the session cursor
associated with the statement should be cached and moves the cursor into the session cursor cache. Subsequent requests to parse that SQL statement by the same session will then find the
cursor in the session cursor cache.
To enable caching of session cursors, use SESSION_CACHED_CURSORS to specify the maximum number of session cursors kept in the cache. A Least Recently Used (LRU) algorithm
removes entries in the session cursor cache to make room for new entries when required.
To determine whether the session cursor cache is sufficiently large for your instance, examine the session statistic "session cursor cache hits" in the V$SESSTAT view. This statistic counts the
number of times a parse call found a cursor in the session cursor cache. If this statistic is a relatively low percentage of the total parse call count for the session, you should consider setting
SESSION_CACHED_CURSORS to a larger value.
if SESSION_CACHED_CURSORS is not set, it defaults to 0 and no cursors will be cached for your session. (Your cursors will still be cached in the shared pool, but your session will have to find
them there.) If it is set, then when a parse request is issued, Oracle checks the library cache to see whether more than 3 parse requests have been issued for that statement. If so, Oracle moves
the session cursor associated with that statement into the session cursor cache. Subsequent parse requests for that statement by the same session are then filled from the session cursor cache,
thus avoiding even a soft parse. (Technically, a parse can't be completely avoided; a "softer" soft parse is done that's faster and requires less CPU.)
In the session cursor cache, Oracle manages the cached cursors using a LRU list. Once more than SESSION_CACHED_CURSORS closed cursors are cached, Oracle starts dropping cached
cursors off the LRU end of the list whenever it needs to make room to cache a new cursor.
If the session cursor cache count is maxed out, 'session cursor cache hits' is low compared to total parses, and you suspect that the application is re-submitting the same queries for parsing repeatedly, then increasing SESSION_CACHED_CURSORS may help with latch contention and give a slight boost to performance. Note that if your application is not resubmitting the same queries for parsing repeatedly, then 'session cursor cache hits' will be low and the session cursor cache count may be maxed out, but caching cursors by session won't help at all. For example, if your application uses a lot of unsharable SQL, raising this parameter won't gain you anything.
If this parameter is set to a high value, the amount of fragmentation in the shared pool may increase. The primary cause of library cache latch contention is fragmentation of the shared pool, a common symptom of which is the ORA-04031 error.
select 'session_cached_cursors' parameter, lpad(value, 5) value,
decode(value, 0, ' n/a', to_char(100 * used / value, '9990') || '%') usage
from ( select max(s.value) used from V$STATNAME n, V$SESSTAT s
where n.name = 'session cursor cache count' and s.statistic# = n.statistic# ),
( select value from V$PARAMETER where name = 'session_cached_cursors' )
union all
select 'open_cursors', lpad(value, 5), to_char(100 * used / value, '9990') || '%'
from ( select max(sum(s.value)) used from V$STATNAME n, V$SESSTAT s
where n.name in ('opened cursors current', 'session cursor cache count') and s.statistic# = n.statistic# group by s.sid ),
( Select Value From V$parameter Where Name = 'open_cursors' );
SELECT a.value curr_cached, p.value max_cached,
s.username, s.sid, s.serial#
FROM v$sesstat a, v$statname b, v$session s, v$parameter2 p
Where A.Statistic# = B.Statistic# And S.Sid=A.Sid
---and a.sid=&sid
And P.Name='session_cached_cursors'
AND b.name = 'session cursor cache count';
The query above may show CURR_CACHED = MAX_CACHED. If the session cursor cache count is maxed out, 'session cursor cache hits' is low compared to total parses, and you suspect that the application is re-submitting the same queries for parsing repeatedly, then increasing SESSION_CACHED_CURSORS may help with latch contention and give a boost to performance.
Evaluate cached cursors for sessions as compared to parsing
select a.sid, a.value parse_cnt,
(select x.value
from v$sesstat x, v$statname y
where x.sid = a.sid
and y.statistic# = x.statistic#
and y.name = 'session cursor cache hits') cache_cnt
from v$sesstat a, v$statname b
where b.statistic# = a.statistic#
and b.name = 'parse count (total)'
and value > 0;
The CACHE_CNT ('session cursor cache hits') of a session should be compared to its PARSE_CNT ('parse count (total)'); if the gap between them is large, consider increasing the SESSION_CACHED_CURSORS parameter.
------------------------------------------
If you run this query against your database you'll see what kind of cursor caching your system is doing.
The three values should sum to 100%:
select
to_char(100 * sess / calls, '999999999990.00') || '%' cursor_cache_hits,
to_char(100 * (calls - sess - hard) / calls, '999990.00') || '%' soft_parses,
to_char(100 * hard / calls, '999990.00') || '%' hard_parses
from
( select value calls from v$sysstat where name = 'parse count (total)' ),
( select value hard from v$sysstat where name = 'parse count (hard)' ),
( select value sess from v$sysstat where name = 'session cursor cache hits' );
If the value of the first column is below 80%, consider raising the SESSION_CACHED_CURSORS parameter.
----------------------------------------
Steps for executing a SELECT query in Oracle:
1) Parse
1.1) Syntactic check - Oracle checks whether this is valid SQL.
1.2) Semantical check - Oracle checks the SQL in terms of data structures, privileges, etc.
1.3) Shared pool check - Oracle checks whether the statement is already parsed and optimized. If so, it goes directly to step (4) - a "soft parse". If not, it goes through steps (2) and (3) - a "hard parse".
2) Optimize - the CBO generates different execution plans and chooses the one with the lowest cost;
3) Generate source - a raw-source generator prepares the code to be executed;
4) Execute;
5) Output the result.
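To get a feel for how often statements are being re-parsed rather than reused, you can look at the shared SQL area; a sketch (thresholds and interpretation are up to you):

```sql
-- Statements with many parse calls relative to executions are
-- candidates for session cursor caching or bind-variable fixes.
SELECT sql_id, parse_calls, executions, SUBSTR(sql_text, 1, 40) sql_text
FROM   v$sql
WHERE  parse_calls > executions
ORDER  BY parse_calls DESC;
```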
Steps 1 through 3 form the parse phase of the query; step 4 is the execution itself and step 5 returns the results. When you run a query, Oracle always performs at least step 1. It transforms the query text into a hash value and searches the library cache of the shared pool for a statement with the same hash value (left there by any session that ran it at some point). If one is found, Oracle compares the query text with the cached text to validate that they are exactly the same (this extra step is needed because several different statements can have the same hash value); if they match, it proceeds to execute the query. This is what we call a soft parse. If the query is not found, Oracle also performs steps 2 and 3. This is known as a hard parse. A hard parse is very costly for the Oracle server, since it involves taking a number of latches in the SGA and is CPU-intensive. As we know, every query execution involves a cursor (a cursor is a memory area allocated to the execution of the query). Ideally, an application opens the cursors it needs, executes the statements many times, and only then closes them. Many applications (Forms, for example) do not work this way, so cursors cannot be reused and must be reopened on every execution. To reduce this problem, we can use the SESSION_CACHED_CURSORS initialization parameter, which enables a "softer" soft parse. If we set the parameter to 100, Oracle keeps up to 100 closed cursors cached for reuse, so we avoid having to reopen them each time. This memory space for cursor handling is maintained with an LRU list. Oracle recommends setting the parameter to 50 in the first instance and monitoring to see whether it should be increased. This parameter must be set with the value of OPEN_CURSORS in mind.
Q: A soft parse always consumes fewer resources, less time, and fewer memory latches than a hard parse, which is a good thing, and many DBAs expect sessions connecting to the database to issue SQL statements that are already present in the shared pool. So why would many soft parses reduce the scalability of the system?
A: Even a soft parse must take shared pool and library cache latches, so many concurrent soft parses still serialize on those latches. Ideally a SQL statement should be parsed once and executed many times.
Labels:
parameter,
performance tuning
Thursday, June 7, 2012
Perform the following steps to clean up the SYSAUX tablespace.
SELECT * FROM dba_jobs WHERE SCHEMA_USER='SYSMAN';
SELECT * FROM SYSMAN.mgmt_metrics_1hour;
SELECT * FROM SYSMAN.MGMT_SEVERITY WHERE LOAD_TIMESTAMP<TRUNC(ADD_MONTHS(SYSDATE,-2));
DELETE FROM SYSMAN.MGMT_SEVERITY WHERE LOAD_TIMESTAMP<TRUNC(ADD_MONTHS(SYSDATE,-2));
COMMIT;
SELECT trigger_name FROM dba_triggers WHERE trigger_name LIKE '%SEV%DEL%';
spool c:\del_severity.sql
SELECT 'exec sysman.em_severity.delete_current_severity('''|| target_guid || ''','''|| metric_guid ||''','''|| key_value ||''');' FROM sysman.MGMT_SEVERITY;
spool off
@c:\del_severity.sql
commit;
-------------
EXEC em_severity.delete_current_severity('142E273EE3BDA54ECF9C42EF7CCB7616','6E65075DA52ACA744B4B8C3FCB018289','/billing01');
EXEC em_severity.delete_current_severity('85DE1181E5B760BBB8F70F97FFBD7070','10633143D11A8FCF6CB1A732CEE5352A','Sat Dec 31 20:03:38 2011/115843');
COMMIT;
EXEC em_severity.delete_current_severity('85DE1181E5B760BBB8F70F97FFBD7070','10633143D11A8FCF6CB1A732CEE5352A','Sat Dec 31 20:47:33 2011/134506');
EXEC em_severity.delete_current_severity('85DE1181E5B760BBB8F70F97FFBD7070','3E6F70DB22758B7B9756EF342180E7BB','SYSAUX');
EXEC em_severity.delete_current_severity('85DE1181E5B760BBB8F70F97FFBD7070','F95BA0D95585002889E1ABF92B2DA7C3','ARBORAD');
EXEC em_severity.delete_current_severity('85DE1181E5B760BBB8F70F97FFBD7070','F95BA0D95585002889E1ABF92B2DA7C3','SYSMAN');
COMMIT;
-----------------------------------
SELECT s.target_guid,
s.metric_guid,
s.key_value
FROM mgmt_targets t JOIN mgmt_current_severity s
ON s.target_guid = t.target_guid
WHERE LOWER(t.target_name) LIKE '%myhost%';
SELECT DBMS_STATS.get_stats_history_retention FROM dual;
EXEC DBMS_STATS.alter_stats_history_retention(15);
EXEC DBMS_STATS.PURGE_STATS(SYSDATE-15);
col Mb FORM 9,999,999
col SEGMENT_NAME FORM a40
col SEGMENT_TYPE FORM a6
SET lines 120
SELECT SUM(bytes/1024/1024)Mb,segment_name,segment_type FROM dba_segments
WHERE tablespace_name='SYSAUX'
AND segment_name LIKE 'WRI$_OPTSTAT%'
AND segment_type='TABLE'
GROUP BY segment_name,segment_type ORDER BY 1 ASC;
spool c:\rebuild_index.sql
SELECT 'alter index '||segment_name||' rebuild online parallel (degree 14);' FROM dba_segments WHERE tablespace_name='SYSAUX'
AND segment_name LIKE '%OPT%' AND segment_type='INDEX';
spool off
@c:\rebuild_index.sql
SELECT * FROM dba_indexes WHERE owner='SYS' AND status!='VALID';
Labels:
performance tuning
Clear Table records after startup database
If a user does not log out properly (killed task, services restarted), the record of that user remains in the tblrklogindetails table. We can create a trigger that deletes all records whenever the database service is restarted:
create or replace trigger tg_clear_logindetails
after startup on database
declare
j integer;
pragma autonomous_transaction;
begin
dbms_job.submit (j, 'begin execute immediate ''truncate table ldbo.tblrklogindetails''; end;');
commit;
end tg_clear_logindetails;
/
If there are no records in the table, the following error message will pop up. Please change this message to a more user-friendly one, such as "Session is disconnected from the server. Please log in again.", or something similar.
In the web application, the client connects to the server through IIS.
1) A user logs into the RK application and runs a report.
2) The Oracle service is restarted in between. An "Oracle not connected" error appears because the connection between client and server is broken.
3) The error keeps appearing until the user logs out of the application and logs in again.
4) An iisreset reconnects client and server without logging out of the application.
Resolution
When the database is shut down, the application should shut down automatically when it shows "Oracle not connected", and the tblrklogindetails records should be cleared.
Wednesday, June 6, 2012
Schedule Job for specific time interval
BEGIN
DBMS_SCHEDULER.create_job (
job_name => 'Test',
job_type => 'PLSQL_BLOCK',
job_action => 'BEGIN sp_rkexecutejob(13,'' ''); END;',
start_date => '01-APR-12 09.00.00 AM ASIA/CALCUTTA',
repeat_interval=> 'freq=minutely;bysecond=0;byminute=00,15,30,45;byhour=9,10,11,12,13,14,15,16;byday=MON,TUE,WED,THU,FRI',
enabled => TRUE,
comments => 'JOB for rk test');
END;
/
The repeat_interval parameter of the scheduler configuration should have a freq value of minutely, an interval value of 5, and a byhour value of 9-17.
BEGIN
DBMS_SCHEDULER.create_job (
job_name => 'analyze_queue',
job_type => 'PLSQL_BLOCK',
job_action => 'BEGIN dbms_stats.gather_table_stats(''LDBO'',''TBLRKQUEUE'' ,force=>TRUE); END;',
start_date => '01-APR-12 10.00.00 AM ASIA/CALCUTTA',
repeat_interval=> 'freq=minutely;interval=5;byhour=9,10,11,12,13,14,15,16,17;byday=MON,TUE,WED,THU,FRI',
enabled => TRUE,
comments => 'JOB to gather Queue Table statistics');
END;
/
Friday, June 1, 2012
IIS Performance Tuning
1) Recycle the application pool
Daily they have to restart IIS to clear the application pool. An application pool is a set of one or more applications assigned to a set of one or more worker processes. It is useful to recycle periodically because web apps might have memory leaks or other conditions that get worse over time. Schedule an application pool recycle once a day during off-peak hours.
2) Dedicated application pools for different applications
3) Web garden on IIS (maximum worker processes)
Allocate worker processes to different CPU cores. By default each application pool runs with a single worker process (w3wp.exe). We can assign multiple worker processes to a single application pool; an application pool with multiple worker processes is called a "web garden". Multiple worker processes in the same application pool can sometimes provide better throughput and application response time, and each worker process has its own threads and its own memory space.
Application Pool > Maximum Worker Processes
IIS has a Maximum Worker Processes setting per application pool. For my app pool, I increased this value to 16 on our 8-core (16 with hyperthreading) Windows 2008 Xeon server in hopes of getting more parallel processing of client requests for our IIS-hosted WCF service (net.tcp, .NET 4.0). I am not sure this is having the desired effect, as I don't see 16 worker processes in Task Manager despite a heavy load from my multi-threaded test client.
Web gardens were designed for one single reason: offering applications that are not CPU-bound but execute long-running requests the ability to scale without using up all threads available in the worker process. Examples might be things like:
- applications that make long-running database requests (e.g. highly computational database transactions)
- applications that have threads occupied by long-running synchronous and network-intensive transactions
Rather than an arbitrary value such as 10, set Maximum Worker Processes to the number of CPUs.
4) Limit the queue length for application pools
When running an IIS server in worker process isolation mode, limit the queue length for application pools to prevent large numbers of requests from queuing and overloading the IIS web server. Adding a new request to the queue can exceed the maximum queue length; in that case the IIS server rejects the request and sends a 503 error to the client. Keep the queue length short to boost IIS performance.
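In IIS 7, the queue length is a property of the application pool; in applicationHost.config a sketch might look like this (the pool name and the value 1000, which is also the default, are illustrative):

```xml
<system.applicationHost>
  <applicationPools>
    <!-- Requests beyond 1000 queued entries are rejected with HTTP 503 -->
    <add name="MyAppPool" queueLength="1000" />
  </applicationPools>
</system.applicationHost>
```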
5) IIS HTTP compression
Enable HTTP compression on the Microsoft IIS server to use bandwidth more effectively. This results in faster response times between the IIS web server and compression-enabled browsers. IIS provides three compression options: static files only, dynamic application responses, and both static and dynamic application responses. Enable dynamic compression for the server when bandwidth is an issue (as indicated by IIS server monitoring), but remember that compression consumes significant CPU time and memory.
Sites > Compression
6) Use the Expires header in IIS 7
Configure the HTTP Expires header to improve IIS server performance. Continuous HTTP requests sent from new website visitors to display pages create a massive load on the IIS server. Configuring the Expires header allows caching pages (and page elements such as images, scripts, CSS, etc.) on the client side and helps avoid unnecessary HTTP requests. In IIS 7, Expires headers can be enabled in the system.webServer section of the web.config file. Using the Expires header, you can determine how long files may be cached on the client side: a certain number of hours or days in the future, or a fixed date.
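A minimal web.config fragment for this might look as follows (the seven-day max-age is an arbitrary example value):

```xml
<configuration>
  <system.webServer>
    <staticContent>
      <!-- Cache static files on the client for 7 days -->
      <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
    </staticContent>
  </system.webServer>
</configuration>
```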
7) Windows 2008 network settings
8)
Change max
concurrent requests per CPU setting
Max Concurrent Requests is calculated by the following formula:
(maxWorkerThreads * numberOfCPUs) - (minFreeThreads * numberOfCPUs)
This is 12 by default on a single-processor machine, so performance does improve by adding more cores/CPUs (hardware) to your system.
For example, if you have two processors with the configuration below:
maxWorkerThreads 100
minFreeThreads 88
then the maximum concurrent requests would be (100*2) - (88*2) = 24.
If you add two more cores, then the maximum concurrent requests would be (100*4) - (88*4) = 48.
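The arithmetic above can be sketched as a small helper (the function name is mine for illustration, not a .NET API):

```python
def max_concurrent_requests(max_worker_threads: int, min_free_threads: int, cpus: int) -> int:
    """Default concurrent-request ceiling:
    (maxWorkerThreads * CPUs) - (minFreeThreads * CPUs)."""
    return (max_worker_threads * cpus) - (min_free_threads * cpus)

print(max_concurrent_requests(100, 88, 1))  # 12, the single-proc default
print(max_concurrent_requests(100, 88, 2))  # 24
print(max_concurrent_requests(100, 88, 4))  # 48
```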
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\ASP.NET\2.0.50727\MaxConcurrentRequestsPerCPU
Set this [DWORD] value higher than 12 (the default). For example, changing maxConcurrentRequestsPerCPU from 12 to 5000 allows IIS to process far more requests concurrently.
Thursday, May 31, 2012
Improve windows networking
Improve windows networking (especially if hosting webservers)
First of all, I forgive you for having to use Windows to host your web server :)
Now that that's out of the way, here's how to speed up networking performance on Windows (especially if you happen to host Apache, IBM HTTP Server, or any other web server on a Windows machine).
Create a .reg file (eg: perf.reg) with the contents that follow:
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"MaxUserPort"=dword:00008000
"TcpTimedWaitDelay"=dword:0000001e
"TcpMaxDataRetransmissions"=dword:00000005

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces]
"TcpAckFrequency"=dword:00000001
"TcpDelAckTicks"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\AFD\Parameters]
"EnableDynamicBacklog"=dword:00000001
"MinimumDynamicBacklog"=dword:00000020
"MaximumDynamicBacklog"=dword:00001000
"DynamicBacklogGrowthDelta"=dword:00000010
"KeepAliveInterval"=dword:00000001

Double-click the file to merge its contents into the registry, then restart the machine. You're done.
1) HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
Value Name: MaxUserPort
Value Type: REG_DWORD
Valid Range: 5000-65534 (decimal)
Default: 0x1388 (5000 decimal)
New Value: 65534
Description: This parameter controls the maximum port number used when an application requests any available user port from the system. Normally, ephemeral (that is, short-lived) ports are allocated between the values of 1024 and 5000 inclusive.

2) HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
Value Name: TcpTimedWaitDelay
Value Type: REG_DWORD
Valid Range: 30-300 (decimal)
Default: 0xF0 (240 decimal)
New Value: 30
Description: This parameter determines the time that must elapse before TCP can release a closed connection and reuse its resources. This interval between closure and release is known as the TIME_WAIT state or 2MSL state. During this time, the connection can be reopened at much less cost to the client and server than establishing a new connection. Normally, TCP does not release closed connections until the value of this entry expires. However, TCP can release connections before this value expires if it is running out of TCP control blocks (TCBs). A lower TcpTimedWaitDelay setting means that sockets wait in the TIME_WAIT state for less time; a higher MaxUserPort setting means that you can have more sockets in the TIME_WAIT state.
Tuesday, May 29, 2012
Login Oracle database using Windows User
Create Oracle External users in Windows
1. Create the Windows user:
start --> settings --> control panel --> administrative tools --> computer management --> user
2. Add the new user to the Oracle group:
start --> settings --> control panel --> administrative tools --> computer management --> groups
3. Add user to OS Database Administrator:
start --> programs --> oracle home --> configuration and migration tools --> administration for windows NT --> OS database administrator
4. Add user to OS Database Operators:
start --> programs --> oracle home --> configuration and migration tools --> administration for windows NT --> OS database operators
5. Add user to OS Administrator:
start --> programs --> oracle home --> configuration and migration tools --> administration for windows NT --> OS administrator
6. Add user to OS Operators:
start --> programs --> oracle home --> configuration and migration tools --> administration for windows NT --> OS operators
7. Add os_authent_prefix=OPS$ to your initialization parameters (pfile or spfile) and bounce Oracle database, if necessary.
remote_os_authent FALSE
os_authent_prefix OPS$
SQLNET.AUTHENTICATION_SERVICES in file sqlnet.ora contains NTS
CREATE USER "OPS$WIN-RGWOS2P8G1H\LDUSER" IDENTIFIED EXTERNALLY;
GRANT CONNECT TO "OPS$WIN-RGWOS2P8G1H\LDUSER";
Connect from the client machine with the OS user:
sqlplusw /@nbs1112srv
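Once connected, you can confirm how the session was authenticated (a sketch; SYS_CONTEXT's userenv attributes are available in 10g and later):

```sql
-- Shows the effective OPS$ user and the authentication type of the session
SELECT SYS_CONTEXT('userenv', 'session_user')        AS session_user,
       SYS_CONTEXT('userenv', 'authentication_type') AS auth_type
FROM dual;
```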
Friday, May 25, 2012
SQL Session Tracing
1. Start a sqlplus session to the database, and enable sql tracing with this command:
alter session set events '10046 trace name context forever, level 12';
The resulting trace file will be put in the USER_DUMP_DEST directory.
To find the location of this directory, as system, do:
SQL>show parameters user_dump_dest
2. Execute your code to reproduce the error/issue.
3. Turn off the trace using this command:
alter session set events '10046 trace name context off';
4. Please upload the trace file and the alert.log file
A 10046 trace creates an output dump file. Therefore, before you enable the 10046 trace event, you will need to set some database parameters that control the output of the dump file. These parameters include:
TIMED_STATISTICS – Must be set to TRUE to get timing information in your trace files.
MAX_DUMP_FILE_SIZE – Controls the maximum size of the trace file. For 10046 trace files, the default setting is generally too small.
USER_DUMP_DEST – This is the location the 10046 trace file(s) are written to.
STATISTICS_LEVEL – This should be set, at a minimum, to TYPICAL. When set to ALL, more information will be collected in the resulting trace files. All examples in this white paper are with STATISTICS_LEVEL set to TYPICAL.
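If you cannot issue the ALTER SESSION yourself (for example, when tracing a third-party application's session), the same trace can be enabled from another session with DBMS_MONITOR; the SID and SERIAL# below are placeholders you must look up first:

```sql
-- Find the target session first:
-- SELECT sid, serial# FROM v$session WHERE username = 'APPUSER';
BEGIN
  DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123,  -- placeholder SID
                                    serial_num => 456,  -- placeholder SERIAL#
                                    waits      => TRUE,
                                    binds      => TRUE);
END;
/
-- Later, turn tracing off for that session:
BEGIN
  DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 456);
END;
/
```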
Listener Tracing
We will need to trace a connection at the client, server, and listener endpoints to see what is happening.
Please collect traces for both the good and the slow connections.
Please follow these steps:
1. Add the following parameters to the listener.ora:
TRACE_LEVEL_<listener_name>=16
TRACE_DIRECTORY_<listener_name>=<some_known_directory> (but not the root directory)
TRACE_FILE_<listener_name>=listener
TRACE_TIMESTAMP_<listener_name>=ON
Replace <listener_name> with the name of your listener, the default name is LISTENER.
2. Reload the listener (execute "lsnrctl reload <listener_name>").
3. Add the following parameters in the sqlnet.ora file on the CLIENT workstation:
TRACE_LEVEL_CLIENT=16
TRACE_DIRECTORY_CLIENT=<some_known_directory> (but not the root directory)
TRACE_FILE_CLIENT=client
TRACE_UNIQUE_CLIENT=ON
TRACE_TIMESTAMP_CLIENT=ON
4. Add the following parameters in the sqlnet.ora file on the SERVER:
TRACE_LEVEL_SERVER=16
TRACE_DIRECTORY_SERVER=<some_known_directory> (but not the root directory)
TRACE_FILE_SERVER=server
TRACE_TIMESTAMP_SERVER=ON
5. Attempt to reproduce the issue (connect from a client workstation) and verify if trace files were created.
>>> Please note that we need two groups of traces: one when the connection is fine and one when there is a delay.
6. Compress (in .zip or .tar.gz format) and send the trace files. Also, please indicate in the file name whether it is the good trace or the bad one.
7. Turn off tracing by setting all TRACE_LEVEL_<listener_name> = OFF and reload the listener.
8. Disable tracing by removing the TRACE lines from sqlnet.ora on the client and server
Tuesday, May 22, 2012
Database Link Creation Script
set pagesize 0
set linesize 30000
set long 500000
set longchunksize 500000
set trimspool on
set feed off
------------Drop Database link
Select 'DROP '||Decode(U.Name,'PUBLIC','public ')||'database link '||Chr(10)
||Decode(U.Name,'PUBLIC',Null, U.Name||'.')|| L.Name||Chr(10) ||';' TEXT
From Sys.Link$ L, Sys.User$ U
Where L.Owner# = U.User#;
------------Create Database link
SELECT 'create '||DECODE(U.NAME,'PUBLIC','public ')||'database link '||CHR(10)
||Decode(U.Name,'PUBLIC',Null, U.Name||'.')|| L.Name||Chr(10)
||' connect to ' || L.Userid || ' identified by values '''
||L.PASSWORDX||''' using ''' || L.host || ''''
||chr(10)||';' TEXT
From Sys.Link$ L, Sys.User$ U
WHERE L.OWNER# = U.USER#;
------------------------CHECK LINK
select 'SELECT INSTANCE_NAME,HOST_NAME FROM V$INSTANCE@' ||Db_Link || ';' from dba_db_links;
ORACLE 11G work with IPv6 addresses in addition to IPv4
These are the simple steps that will make your 11g database instance work in an IPv6 environment.
Configure Linux
To configure Linux with this specific IPv6 address, you modify the /etc/sysconfig/network-scripts/ifcfg-{device name} file.
In my case I modified the ifcfg-eth0 file. I had previously added an IPv6 address and will add a secondary IPv6 address for the IPv4 Mapped Address.
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
HWADDR=00:50:43:ac:11:a2
TYPE=Ethernet
USERCTL=no
IPV6INIT=yes
PEERDNS=yes
NETMASK=255.255.0.0
IPADDR=192.168.110.77
GATEWAY=192.168.255.254
IPV6ADDR=2001:DB8::200C:417A/64
IPV6ADDR_SECONDARIES=::ffff:192.168.110.77/64
Configure the Database Listener
The next step is to configure the Listener for this IPv6 address.
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 2001:db8::200c:417a)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.110.77)(PORT = 1521))
)
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
)
)
)
Configuring the Client
The last step is to create an appropriate TNS entry for your application:
AVI =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 2001:db8::200c:417a)(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = DB11g)
)
)
AVI2 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST =192.168.110.77)(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = DB11g)
)
)
Test the Connection
Use the TNSPING utility:
database-server :/u/app/oracle/v11.2.0/network/admin > tnsping AVI
database-server :/u/app/oracle/v11.2.0/network/admin > tnsping AVI2
Used parameter files:
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST =192.168.110.77)(PORT = 1521))) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = DB11g)))
OK (10 msec)
database-server :/u/app/oracle/v11.2.0/network/admin
In 11g, UTL_INADDR.GET_HOST_ADDRESS gives me an IPv6 address, while I need IPv4:
SELECT UTL_INADDR.GET_HOST_ADDRESS FROM DUAL;
Workarounds: disable IPv6 at the OS level, or map the hostname to its IPv4 address in c:\Windows\System32\drivers\etc\hosts
---------------
ORA_CLIENT_IP_ADDRESS returns the IP address of the client in a LOGON event, when the underlying protocol is TCP/IP.
select sys_context('userenv','ip_address') from dual;
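ORA_CLIENT_IP_ADDRESS is typically used inside a logon trigger; a sketch follows (the trigger and table names are mine, not from the original post):

```sql
-- Hypothetical audit table for client IPs at logon
CREATE TABLE logon_audit (
  username   VARCHAR2(128),
  client_ip  VARCHAR2(64),
  logon_time DATE
);

CREATE OR REPLACE TRIGGER trg_logon_ip
AFTER LOGON ON DATABASE
BEGIN
  INSERT INTO logon_audit
  VALUES (SYS_CONTEXT('userenv', 'session_user'),
          ora_client_ip_address,   -- NULL for non-TCP/IP connections
          SYSDATE);
END;
/
```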
Monday, May 21, 2012
ORA-12839: cannot modify an object in parallel after modifying it
ORA-12839: cannot modify an object in parallel after modifying it
Affects:
Product (Component) Oracle Server (Rdbms)
Range of versions believed to be affected Versions >= 11.1 but BELOW 12.1
Versions confirmed as being affected
11.2.0.2
11.1.0.7
Platforms affected Generic (all / most platforms affected)
Fixed:
This issue is fixed in
12.1 (Future Release)
11.2.0.3
Symptoms:
Related To:
Error May Occur
Internal Error May Occur (ORA-600)
ORA-12839
ORA-600 [qerltcFetch_20]
Constraint Related
Parallel Query (PQO)
_disable_parallel_conventional_load
DML
Description
If you have a parent table and a child table with a referential constraint to the parent,
then running parallel DML against them may fail with an unexpected ORA-12839
or even fail with an ORA-600.
Workaround
Setting "_disable_parallel_conventional_load" = true can help avoid this.
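A sketch of applying that workaround (underscore parameters should only be set in consultation with Oracle Support, and the exact scope/syntax may vary by version):

```sql
ALTER SYSTEM SET "_disable_parallel_conventional_load" = TRUE;
```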
Please note: The above is a summary description only. Actual symptoms can vary. Matching to any symptoms here does not confirm that you are encountering this problem. For questions about this bug please consult Oracle Support.
References
Bug:9831227 (This link will only work for PUBLISHED bugs)
Note:245840.1 Information on the sections in this article