Thursday, February 14, 2013

AWR Analysis


http://www.toadworld.com/Experts/GuyHarrisonsImprovingOraclePerformance/ResolvingOracleContention/tabid/257/Default.aspx
http://www.dba-oracle.com/art_sql_tune.htm
http://www.pafumi.net/Tips_for_Effective_Queries.html


Rows processed—Queries that process a large number of rows will have high I/O and may also have impact on the TEMP tablespace.

Buffer gets—High buffer gets may indicate a resource-intensive query.

Disk reads—High disk reads indicate a query that is causing excessive I/O.

Memory KB—The memory allocation of a SQL statement is useful for identifying statements that are doing in-memory table joins.

CPU secs—This identifies the SQL statements that use the most processor resources.

Sorts—Sorts can be a huge slowdown, especially if they’re being done on a disk in the TEMP tablespace.

Executions—The more frequently executed SQL statements should be tuned first, since they will have the greatest impact on overall performance.

http://docs.oracle.com/cd/B10500_01/server.920/a96536/apc2.htm
http://savvinov.com/2012/04/06/awr-reports-interpreting-cpu-usage/
http://dbaregistry.blogspot.in/2010/01/oracle-wait-events.html
http://www.dba-oracle.com/t_online_redo_logs_raid.htm
http://samadhandba.wordpress.com/


STATISTICS_LEVEL should be TYPICAL or ALL for getting AWR report
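A quick check and fix from SQL*Plus (standard commands):

show parameter statistics_level
alter system set statistics_level = TYPICAL scope=both;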

---------------How to reduce buffer gets
Buffer gets—High buffer gets may indicate a resource-intensive query.

You should be able to reduce buffer gets by reducing read operations. This can sometimes be done with indexed lookups, sometimes not; it depends on the situation.

A buffer get means the data block is already in memory, so Oracle returns it from the buffer cache without having to go to disk. That is good for physical I/O, but every buffer get is a logical read that consumes CPU, so a high buffer-get count still marks an expensive statement.

As the number of buffer gets for a SQL statement increases, its Oracle CPU consumption increases as well.
-------------------

RAM is inversely proportional to disk I/O: the more memory available for caching, the lower the disk I/O.

RAID 1 (mirroring) is generally the best choice for Oracle database files, particularly redo logs, as it avoids the RAID 5 write penalty.
----------------
Check DB Time
If DB time is significantly higher than the elapsed time, sessions are spending time waiting.
For example, if the Elapsed Time is around 60 minutes while the DB Time is around 700 minutes, then roughly 700 minutes of session time was spent working and waiting inside the database during a 60-minute window.
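A quick way to see how DB time and DB CPU are accumulating since instance startup (v$sys_time_model reports values in microseconds):

select stat_name, round(value/1000000/60, 1) minutes
from v$sys_time_model
where stat_name in ('DB time', 'DB CPU');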
---------------------

Look at the Top 5 Timed Foreground Events and investigate what is triggering them (ignore idle events).

SQL ordered by Elapsed Time => always helps to find the time-consuming SQL (observe the elapsed time as well as the number of executions).
SQL ordered by Reads should also be checked for I/O-intensive queries.


------------------------
PGA Memory Advisory
When using automatic PGA memory management, choose at minimum a pga_aggregate_target value at which Estd PGA Overalloc Count is 0.
The point where Estd PGA Cache Hit % levels off around 99% indicates the PGA is sized sufficiently for the database.
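The advisory itself can be queried directly; a minimal sketch:

select round(pga_target_for_estimate/1024/1024) target_mb,
       pga_target_factor,
       estd_pga_cache_hit_percentage,
       estd_overalloc_count
from v$pga_target_advice
order by pga_target_factor;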
------------------------
Tablespace IO Stats

5-10 ms is normal; it should not exceed 20 ms.

------------------------
File IO Stats depend on the disk storage.
8-10 ms is normal.

------------------------
Enqueue Activity
Enqueue Type (Request Reason)
TM lock
This can be caused by missing indexes on foreign key columns: the foreign-key issue is that DML on the child table locks the parent table. (This is not edition-specific; unindexed foreign keys cause TM contention on Standard and Enterprise Edition alike.)
-----------------

“Starting with Oracle 10.2,  you may notice a significant degradation in relative performance when you combine a GROUP BY with an ORDER BY on the same columns.

When a GROUP BY is associated with an ORDER BY on the same columns,  the Oracle optimizer may choose a SORT GROUP BY rather than the usually more efficient HASH GROUP BY.

Using the SORT GROUP BY avoids adding a SORT ORDER BY to the plan, but the overall result is usually disappointing.

To get a better result, you can perform the GROUP BY in an in-line view and perform the ORDER BY in the outer query.  Use the NO_MERGE hint to prevent the two operations from being combined.”
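A minimal sketch of the rewrite (EMP and its columns are just illustrative sample-schema names):

select *
from (select /*+ no_merge */ deptno, sum(sal) sum_sal
        from emp
       group by deptno)
order by deptno;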
-------------

A performance issue can be caused by soft parsing, which can be mitigated by holding cursors open (SESSION_CACHED_CURSORS). It may also be due to an oversized shared pool, which delays the search for child cursors. A second possibility is a large number of reloads: if cursors are being flushed frequently even though no invalidation activity (CBO statistics collection, DDL, etc.) is occurring, the shared pool may be undersized.

-------------
Recursive calls are also generated by the unavailability of dictionary info in the dictionary cache, firing of database triggers, execution of DDL, execution of SQL within PL/SQL blocks, functions or stored procedures, and enforcement of referential integrity constraints.

If the recursive CPU usage percentage is a large proportion of total CPU usage, you may want to make sure the shared pool memory allocation is adequate. However, a PL/SQL-based application will always have a significant amount of recursive CPU usage.

To reduce the recursive CPU usage, make sure that shared pool is sized correctly. Also you can check the parameters like

CURSOR_SHARING – values can be EXACT, SIMILAR, FORCE
SESSION_CACHED_CURSORS – value can be anything numeric like 500.
-------------------
Remove unnecessary large-table full table scans Unnecessary full table scans cause a huge amount of unnecessary I/O, and can drag down an entire database.
---------------------
Load Profile

Parses should be near 0.

Parse activity statistics should be checked carefully because they can immediately indicate a problem within the application. For example, once a database has been running for several days with a fixed set of applications, it should have parsed most SQL issued by those applications, and these statistics should be near zero.

High values for Soft Parses, or especially Hard Parses, indicate that the applications make little use of bind variables and produce large numbers of unique SQL statements. However, if the database serves development purposes, high values of these statistics are not necessarily bad.
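The parse statistics behind these ratios can be pulled from v$sysstat; a quick sketch of the soft parse percentage:

select round((1 - hard.value / total.value) * 100, 2) soft_parse_pct
from v$sysstat total, v$sysstat hard
where total.name = 'parse count (total)'
  and hard.name  = 'parse count (hard)';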

% Blocks changed per Read:
The % Blocks changed per Read statistic here indicates that only 4.85 percent of all blocks retrieved were retrieved for update. In this example, the Recursive Call % statistic is extremely high, at about 90 percent; this does not necessarily mean that nearly all SQL statements executed by the database stem from parsing activity, data dictionary management, space management, and so on, since PL/SQL execution also shows up as recursive calls.

It is also useful to check the Rollback per transaction % statistic, which reports the percentage of transactions rolled back. In a production system this value should be low. A high percentage means the database is expending considerable work rolling back changes, and the applications should be investigated to see why they roll back so often.

---------------------

CPU time represents time spent on CPU and does not include time waiting for CPU.

CPU usage is described by the "CPU time" (or "DB CPU") statistics. Somewhat counterintuitively, an AWR report showing CPU time close to 100% in the top timed events section does not necessarily indicate a problem; it simply means the database is busy using CPU to do work for its users. However, if CPU time (expressed in CPU seconds) becomes commensurate with the total CPU power available on the host (or shows consistent growth), then it becomes a problem, and a serious one: at best, Oracle processes will spend a lot of time waiting to get onto the CPU run queue. In the worst case, the host OS won't have adequate resources to run and may eventually hang.

DB CPU usage (% of CPU power available) = CPU time / NUM_CPUS / elapsed time

If DB CPU usage is at 80-90% of capacity (or 70-80% and growing), then try to reduce CPU usage or, if that is not possible, buy more CPU power before the system freezes.

----------------------Instance Efficiency Percentage
Ideally these should all be above 90%.

Parse CPU to Parse Elapsd % shows what fraction of parse elapsed time was actually spent on CPU rather than waiting; a value near 0.00 % means parses spend almost all their time waiting, so the higher the better.

The Instance Efficiency Percentage report section contains ratios or calculations that may provide information regarding different structures and operations in the Oracle instance. Database tuning must never be driven by hit ratios. Hit ratios only provide additional information to help the DBA understand how the instance is operating.

http://www.dba-oracle.com/oracle10g_tuning/t_instance_efficiency_percentage.htm


The following list includes the meanings of particular hit ratios:
Buffer Hit Ratio: Measures how many times a required block was found in memory rather than having to execute an expensive read operation on disk to get the block.
Buffer Nowait %: Shows the percentage of times when data buffers were accessed directly without any wait time.
Library Hit %: Shows the percentage of times when SQL statements and PL/SQL packages were found in the shared pool.
Execute to Parse %: Shows how often parsed SQL statements are reused without reparsing.
Parse CPU to Parse Elapsd %: Gives the ratio of CPU time spent to parse SQL statements.
Redo NoWait %: Shows whether the redo log buffer has sufficient size.
In-memory Sort %: Shows the percentage of times when sorts are performed in memory instead of using temporary tablespaces.
Soft Parse %: Shows how often an issued SQL statement was already in the shared pool, so an existing cursor could be reused rather than hard parsed.
Latch Hit %: Shows how often latches were acquired without having to wait.
% Non-Parse CPU: Shows the percentage of how much CPU resources were spent on the actual SQL execution.

In the above list of statistics, special attention should be paid to parse-related statistics. The Instance Efficiency Percentage report provided previously shows that about 95 percent of the parses are soft as indicated by the Soft Parse %. This is good enough, indicating that the SQL statements are actively reused by Oracle.


The next interesting item to review is the Parse CPU to Parse Elapsd % statistic. In this case, it is about three percent, which is very low. This fact reveals that Oracle waits for some resources during parsing of SQL statements. This should be investigated further to find the cause.

In this case, % Non-Parse CPU statistic is about 97 percent, which is quite high.  This indicates Oracle utilizes the CPU mostly for statement execution but not for parsing.

As a rule of thumb, one should always minimize the number of hard parses in your production database. This reduction yields the benefit of minimizing CPU overhead spent performing costly parse work.


------------------Shared Pool Statistics ------------------------

Ideally this should be low. If it is very high, say beyond 90%, it indicates contention in the shared pool.

This following sample report section shows shared pool related statistics:

Shared Pool Statistics        Begin    End
                              ------  ------
             Memory Usage %:   92.70   92.49
    % SQL with executions>1:   86.73   84.20
  % Memory for SQL w/exec>1:   84.12   71.86

In this example, the Memory Usage % statistic shows that almost all of the shared pool memory, approximately 92 percent, is consumed. This can indicate that the system experiences overhead while aging out old shared memory structures such as cursors and PL/SQL programs, which places additional load on the CPU for reparsing. In general this statistic should settle near 70 percent after the database has been running a long time; if it is much lower, memory is being wasted.

The sweet spot for shared pool Memory Usage % is roughly 70%-80% because of aging out. At ~90% or above it is advisable to increase the shared pool; if you are using 10g SGA_TARGET and SGA_MAX_SIZE, that may require increasing those parameters as well.

The % SQL with executions>1 statistic indicates how many SQL statements are executed more than one time. This measures how well production applications are tuned and how well they make use of bind variables.

% Memory for SQL w/exec>1: the percentage of shared pool memory consumed by SQL statements that were executed more than once.


-------------------------------------------Buffer Pool Advisory-----------
If you check the column Size for Est (M) at Size Factor 1.0, it shows 596M, which means 596MB of memory has been allocated by Oracle: either automatically (when sga_target > 0) or manually via the parameter file.

At the same time, check the column "Estimated Physical Reads", which has a value of 37,612,741. Size Factor 1.0 has an Estimated Phys Read Factor of 1.0 with Estimated Physical Reads of 37,612,741.

You will notice that increasing the buffer cache size reduces the number of estimated physical reads: as Size Factor increases, Estimated Phys Read Factor decreases.

When sga_target > 0 and db_cache_size is determined automatically, data like this means the buffer cache size Oracle has allocated is fairly well balanced: not so small that there is an excessive number of physical reads, and not so big that there is unnecessary overhead in serving a larger cache.
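The same advisory data can be queried from v$db_cache_advice; a minimal sketch for the DEFAULT pool:

select size_for_estimate size_mb, size_factor,
       estd_physical_read_factor, estd_physical_reads
from v$db_cache_advice
where name = 'DEFAULT'
order by size_factor;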


--------------------------------Top 5 Timed Events
Sometimes, a major external problem such as bad disk RAID can cause the entire database to run slowly, so it is always a good idea to start with a listing of the AWR top five wait event listing.

This tells you where the most resources are being consumed.

A high db file scattered read figure here would indicate full table scans being performed; db file sequential read, by contrast, points to single-block (typically index) reads.

Interpretation of CPU Time(s) 1033

A single CPU provides 60*60 = 3600 CPU seconds in a one-hour snapshot interval.

In the example we have 8 CPUs (NUM_CPUS under the Operating System Statistics section of AWR), which gives 60*60*8 = 28800 CPU seconds available in a 1-hour interval (assuming a single database instance is running on the machine).

(1033/28800)*100 = 3.59% of total CPU
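NUM_CPUS (and the host CPU time counters) come from v$osstat:

select stat_name, value
from v$osstat
where stat_name in ('NUM_CPUS', 'BUSY_TIME', 'IDLE_TIME');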

Network Bottleneck
A slow or un-tuned network can be a nightmare.  Network overload can also be caused by Oracle TNS issues or by an inefficient application that makes too many trips to Oracle to fetch screen data.

A CPU-bound database may be due to a real overload of the CPU banks, in which case the problem can be remedied with more or faster processors.  High CPU consumption can also be due to un-tuned SQL causing excessive logical I/O in the form of consistent gets or by contention within the shared pool, such as the total lack of SQL bind variables.



----------------------------Tablespace IO Stats
This shows the I/O statistics for each tablespace in the database.

As a rule of thumb, the Av Rd(ms) [average read time in milliseconds] should not exceed 30; anything beyond that is considered an I/O bottleneck.

Tablespace  Reads    Av Reads/s  Av Rd(ms)  Av Blks/Rd  Writes   Av Writes/s  Buffer Waits  Av Buf Wt(ms)
TEMP        3,316    0           4.91       1           28,840   8            0             0
DAT1        520,120  146         16.06      1.21        185,846  52           902           13
DAT3        93,411   26          42.82      2.98        13,442   4            16            23.13
DAT2        98,171   28          91.97      7.97        5,333    2            325           34.89

In the above example, the Av Rd(ms) is high for the DAT tablespaces, indicating I/O contention.

------------------------------------------------Wait Events
---------------db file sequential reads
Reason:
Use of an unselective index
Fragmented Indexes
High I/O on a particular disk or mount point
Bad application design
Index read performance can be affected by a slow I/O subsystem and/or a poor database file layout, both of which result in a higher average wait time

Action:
Check indexes on the table to ensure that the right index is being used
Check the column order of the index with the WHERE clause of the Top SQL statements
Rebuild indexes with a high clustering factor
Use partitioning to reduce the amount of blocks being visited
Make sure optimizer statistics are up to date
Relocate ‘hot’ datafiles
Consider the usage of multiple buffer pools and cache frequently used indexes/tables in the KEEP pool

Suggestion
The Oracle process wants a block that is currently not in the SGA, and it is waiting for the database block to be read into the SGA from disk.
Significant db file sequential read wait time is most likely an application issue.
If the DBA_INDEXES.CLUSTERING_FACTOR of the index approaches the number of blocks in the table, then most of the rows in the table are ordered. This is desirable.
However, if the clustering factor approaches the number of rows in the table, it means the rows in the table are randomly ordered and thus it requires more I/Os to complete the operation. You can improve the index’s clustering factor by rebuilding the table so that rows are ordered according to the index key and rebuilding the index thereafter.
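A sketch for comparing an index's clustering factor against its table's blocks and rows (MYTAB is an illustrative table name):

select i.index_name, i.clustering_factor, t.blocks, t.num_rows
from   user_indexes i, user_tables t
where  i.table_name = t.table_name
and    t.table_name = 'MYTAB'
order  by i.clustering_factor desc;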

----------------------db file scattered reads
Reason:
The Oracle session has requested and is waiting for multiple contiguous database blocks (up to DB_FILE_MULTIBLOCK_READ_COUNT) to be  read into the SGA from disk.
Full Table scans
Fast Full Index Scans

Action:
Optimize multi-block I/O by setting the parameter DB_FILE_MULTIBLOCK_READ_COUNT
Partition pruning to reduce number of blocks visited
Optimize the SQL statement that initiated most of the waits. The goal is to minimize the number of physical and logical reads.
The objective is to reduce the demands for both the logical and physical I/Os, and this is best achieved through SQL and application tuning.
Make sure all statistics are representative of the actual data. Check the LAST_ANALYZED date
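A quick sketch for the LAST_ANALYZED check (APPUSER is an illustrative schema name):

select table_name, num_rows, last_analyzed
from   dba_tables
where  owner = 'APPUSER'
order  by last_analyzed nulls first;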

----------------log file parallel write
Reason:
LGWR waits while writing contents of the redo log buffer cache to the online log files on disk
I/O wait on sub system holding the online redo log files

Action:
Reduce the amount of redo being generated
Do not leave tablespaces in hot backup mode for longer than necessary
Do not use RAID 5 for redo log files
Use faster disks for redo log files
Ensure that the disks holding the archived redo log files and the online redo log files are separate so as to avoid contention
Consider using NOLOGGING or UNRECOVERABLE options in SQL statements

----------------------------------------------log file sync
Reason:
Oracle foreground processes are waiting for a COMMIT or ROLLBACK to complete
This Oracle metric indicates the process is waiting for LGWR to finish flushing the log buffer to disk.

Action:
Tune LGWR to get good throughput to disk, e.g. do not put redo logs on RAID 5.
Reduce the overall number of commits by batching transactions so that there are fewer distinct COMMIT operations.
Move the redo logs to faster disks or a faster I/O subsystem (for example, switch from RAID 5 to RAID 1).
See if any activity can safely be done with NOLOGGING / UNRECOVERABLE options in order to reduce the amount of redo being written.
Check the size of the log buffer; it may be so large that LGWR is writing too many blocks at one time.


Log file sync waits occur when sessions wait for redo data to be written to disk.
Typically this is caused by slow writes,
or committing too frequently in the application,
CPU starvation (very high demand leaving LGWR on the run queue),
improper operating system configuration (check Note 169706.1),
or bugs in Oracle (especially with the RAC option) and 3rd-party software (like ODM/DISM).


Recommendations:
Tune the LGWR process to get good throughput, especially when 'log file parallel write' is high too:
do not put redo logs on RAID 5 without a good write cache
do not put redo logs on Solid State Disk (SSD)
If CPUs are overloaded (check the run queue with vmstat):
check for non-Oracle system activity, like GZIP or BZIP2 running in business hours
lower the instance's CPU usage (for example, tune SQL to reduce logical I/O)
increase LGWR priority (renice or _high_priority_processes)
Decrease the COMMIT count for applications with many short transactions:
use COMMIT [BATCH] NOWAIT (10g+) when possible (see the sketch after this list)
do some processing with NOLOGGING (or even with _disable_logging=TRUE if just testing a performance benchmark), but think about database recoverability
Check whether 3rd-party software, or utilities like RMAN, are active on the same disks as the redo logs (trace/systemstate dump files, etc.)
Trace LGWR as the last option for troubleshooting OS/3rd-party issues
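A minimal sketch of the asynchronous commit (10g+ syntax; it trades the durability guarantee of a synchronous commit for throughput, and AUDIT_LOG is an illustrative table):

begin
  insert into audit_log values (sysdate, 'batch row');
  commit write batch nowait;  -- LGWR flushes later; the session does not wait
end;
/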


---------------buffer busy waits
Reason:
Buffer busy waits are common in an I/O-bound Oracle system.
The two main cases where this can occur are:
Another session is reading the block into the buffer
Another session holds the buffer in an incompatible mode to our request
These waits indicate read/read, read/write, or write/write contention.
The Oracle session is waiting to pin a buffer.
A buffer must be pinned before it can be read or modified. Only one process can pin a buffer at any one time.
This wait can be intensified by a large block size, as more rows fit within each block.
This wait happens when a session wants to access a database block in the buffer cache but cannot, because the buffer is "busy".
It is also often due to several processes repeatedly reading the same blocks (e.g. many sessions scanning the same index or data block).

Action:
The main way to reduce buffer busy waits is to reduce the total I/O on the system
Depending on the block type, the actions will differ
Eliminate HOT blocks from the application.
Check for repeatedly scanned / unselective indexes.
Try rebuilding the object with a higher PCTFREE to reduce the number of rows per block.
Check for 'right-hand indexes' (indexes that many processes insert into at the same point).
Increase INITRANS and MAXTRANS and reduce PCTUSED; this will make the table less dense.
Reduce the number of rows per block.
Increase the number of FREELISTs and FREELIST GROUPs.
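To see which block class the waits are on (data block, segment header, undo header, etc.), check v$waitstat:

select class, count, time
from v$waitstat
order by time desc;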

----------------------------------------------free buffer waits
Reason:
This means we are waiting for a free buffer but there are none available in the cache because there are too many dirty buffers in the cache
Either the buffer cache is too small or the DBWR is slow in writing modified buffers to disk
DBWR is unable to keep up with the write requests
Checkpoints happening too fast – maybe due to high database activity and under-sized online redo log files
Large sorts and full table scans are filling the cache with modified blocks faster than the DBWR is able to write to disk
If the  number of dirty buffers that need to be written to disk is larger than the number that DBWR can write per batch, then these waits can be observed

Action:
Reduce checkpoint frequency  - increase the size of the online redo log files
Examine the size of the buffer cache – consider increasing the size of the buffer cache in the SGA
Set disk_asynch_io = true; if asynchronous I/O is not available, increase the number of database writer processes (DBWR) or DBWR slaves
Pre-sorting or reorganizing data can help

------------------------enqueue waits
Reason:
This wait event indicates a wait for a lock that is held by another session (or sessions) in an incompatible mode to the requested mode.
TX Transaction Lock
Generally due to table or application set up issues
This indicates contention for row-level lock. This wait occurs when a transaction tries to update or delete rows that are currently locked by another transaction.
This usually is an application issue.

TM DML enqueue lock
Generally due to application issues, particularly if foreign key constraints have not been indexed.

ST lock
Database actions that modify the UET$ (used extent) and FET$ (free extent) tables require the ST lock; this includes actions such as drop, truncate, and coalesce.
Contention for the ST lock indicates there are multiple sessions actively performing dynamic disk space allocation or deallocation in dictionary managed tablespaces

Action:
Reduce waits and wait times
Whenever you see an enqueue wait event for the TX enqueue, the first step is to find out who the blocker is and if there are multiple waiters for the same resource
Waits for TM enqueue in Mode 3 are primarily due to unindexed foreign key columns
Create indexes on foreign keys (especially important in releases before 10g)
Following are some of the things you can do to minimize ST lock contention in your database:
Use locally managed tablespaces
Recreate all temporary tablespaces using the CREATE TEMPORARY TABLESPACE TEMPFILE… command.
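A quick sketch for finding the blockers (v$session.blocking_session is available from 10g):

select sid, serial#, username, blocking_session, event, seconds_in_wait
from v$session
where blocking_session is not null;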

--------------------------------Cache buffer chain latch
Reason:
This latch is acquired when searching for data blocks. The buffer cache is organized as chains of blocks, and each chain is protected by a child latch that must be taken when the chain is scanned.
Hot blocks are another common cause of cache buffers chains latch contention. This happens when multiple sessions repeatedly access one or more blocks that are protected by the same child cache buffers chains latch.
SQL statements with high BUFFER_GETS (logical reads) per EXECUTIONS are the main culprits
Multiple concurrent sessions are executing the same inefficient SQL that is going after the same data set

Action:
Reducing contention for the cache buffer chains latch will usually require reducing logical I/O rates by tuning and minimizing the I/O requirements of  the SQL involved. High I/O rates could be a sign of a hot block (meaning a block highly accessed).
Export the table, increase PCTFREE significantly, and import the data. This minimizes the number of rows per block, spreading them over many blocks. Of course, this comes at the expense of storage, and full table scan operations will be slower.
Minimizing the number of records per block in the table
For indexes, you can rebuild them with higher PCTFREE values, bearing in mind that this may increase the height of the index.
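A sketch for spotting those culprits, ranking cursors by logical reads per execution:

select *
from (select sql_id, executions, buffer_gets,
             round(buffer_gets / decode(executions, 0, 1, executions)) gets_per_exec
      from v$sqlarea
      order by 4 desc)
where rownum <= 10;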


--------------------Direct Path Reads
Reason:
These waits are associated with direct read operations, which read data directly into the session's PGA, bypassing the SGA
The "direct path read" and "direct path write" wait events are related to operations that are performed in PGA like sorting, group by operation, hash join
In DSS type systems, or during heavy batch periods, waits on "direct path read" are quite normal
However, for an OLTP system these waits are significant
These wait events can occur during sorting operations, which is not surprising as direct path reads and writes usually occur in connection with temporary segments
SQL statements with functions that require sorts, such as ORDER BY, GROUP BY, UNION, DISTINCT, and ROLLUP, write sort runs to the temporary tablespace when the input size is larger than the work area in the PGA

Action:
Ensure the OS asynchronous IO is configured correctly.
Check for IO heavy sessions / SQL and see if the amount of IO can be reduced.
Ensure no disks are IO bound.
Set your PGA_AGGREGATE_TARGET to appropriate value (if the parameter WORKAREA_SIZE_POLICY = AUTO)
Whenever possible use UNION ALL instead of UNION, and where applicable use HASH JOIN instead of SORT MERGE and NESTED LOOPS instead of HASH JOIN.
Make sure the optimizer selects the right driving table. Check to see if the composite index’s columns can be rearranged to match the ORDER BY clause to avoid sort entirely.
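To check how many sorts spill to disk (and how many work areas go one-pass or multi-pass), query v$sysstat:

select name, value
from v$sysstat
where name in ('sorts (memory)', 'sorts (disk)',
               'workarea executions - optimal',
               'workarea executions - onepass',
               'workarea executions - multipass');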


------------------------Direct Path  Writes
These are waits that are associated with direct write operations that write data from users’ PGAs to data files or temporary tablespaces
Direct load operations (eg: Create Table as Select (CTAS) may use this)
Parallel DML operations
Sort I/O (when a sort does not fit in memory)

Action:
If the file indicates a temporary tablespace check for unexpected disk sort operations.
Ensure <Parameter:DISK_ASYNCH_IO> is TRUE . This is unlikely to reduce wait times from the wait event timings but may reduce sessions elapsed times (as synchronous direct IO is not accounted for in wait event timings).
Ensure the OS asynchronous IO is configured correctly.
Ensure no disks are IO bound


-------------------------Latch Free Waits
This wait indicates that the process is waiting for a latch that is currently busy (held by another process).
A latch is a fast, inexpensive and non-sophisticated lock. The latch is used when you need to serialize access to operations, functions and data structures in Oracle.
Check shared pool tuning to determine whether increasing the size of the shared pool is a valid suggestion. Cursor sharing can be implemented to prevent thrashing within the shared pool, and SESSION_CACHED_CURSORS can be used to reduce soft parses. The following Oracle initialization parameter changes resolved a performance degradation that resulted from the latch free wait event.


alter system set session_cached_cursors=150 scope=spfile;
alter system set shared_pool_size=900M scope=spfile;
alter system set cursor_sharing='FORCE' scope=spfile;

--------------------Library cache latch
The library cache latches protect the cached SQL statements and object definitions held in the library cache within the shared pool. The library cache latch must be acquired in order to add a new statement to the library cache.
If the application makes heavy use of literal SQL, the use of bind variables will reduce contention on this latch considerably.
Action:
You can reduce the library cache latch hold time by properly setting the SESSION_CACHED_CURSORS parameter
Note:
Larger shared pools tend to have long free lists, and processes that need to allocate space must spend extra time scanning these long free lists while holding the shared pool latch. An oversized shared pool is therefore also a problem.

--------------------Shared pool latch
The shared pool latch is used to protect critical operations when allocating and freeing memory in the shared pool

Action:
Ways to reduce shared pool latch contention: avoid hard parses where possible; parse once, execute many.
The workaround is to set the initialization parameter CURSOR_SHARING to FORCE. This allows statements that differ in literal values but are otherwise identical to share a cursor and therefore reduce latch contention, memory usage, and hard parse.

-------------Log Buffer Space
Log Buffer Space wait event occurs when server processes write data into the log buffer faster than the LGWR process can write it out. The LGWR process begins writing entries to the online redo log file if any of the following conditions are true:

The log buffer reaches the _log_io_size threshold. By default, this parameter is set to one third of the log buffer size.
A server process performing a COMMIT or ROLLBACK posts the LGWR process.
The DBWR process posts to the LGWR process before it begins writing.


Solution
Put log files on faster disks.
Look at tuning checkpoint or archive processes (log files switches).
Review application design, use NOLOGGING operations where appropriate, and avoid changing more data than required.
Finally, check refresh methods when using Materialized Views to reduce logging activity.
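Log switch frequency is easy to check from v$log_history; more than a few switches per hour usually points at undersized online redo logs:

select to_char(first_time, 'YYYY-MM-DD HH24') hour, count(*) switches
from v$log_history
group by to_char(first_time, 'YYYY-MM-DD HH24')
order by 1;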


----------------------------------------------------------------------Wait Event Meaning----------------------------------------------------------------


db file sequential read  => tune indexing, tune SQL (to do less I/O), tune disks, increase buffer cache. This event is indicative of disk contention on index reads. Make sure all objects are analyzed. Redistribute I/O across disks. This wait comes from the physical side of the database; it is related to memory starvation and non-selective index use. A sequential read is an index read followed by a table read, because the index lookup tells Oracle exactly which block to go to.
db file scattered read => disk contention on full table scans. Add indexes, tune SQL, tune disks, refresh statistics, and create materialized views. Caused by full table scans, possibly due to insufficient indexes or unavailability of updated statistics.
db file parallel read  => tune SQL, tune indexing, tune disk I/O, increase buffer cache. If you are doing a lot of partition activity then expect to see this wait event; it could be a table or index partition.
db file parallel write  => if you are doing a lot of partition activity then expect to see this wait event; it could be a table or index partition.
db file single write  => if you see this event then you probably have a lot of data files in your database.

control file sequential read
control file parallel write

log file sync    => committing too often, or heavy archive log generation. Tune applications to commit less, tune disks where redo logs exist, try using NOLOGGING/UNRECOVERABLE options; the log buffer could also be too large.
log file switch completion => May need more log files per group.
log file parallel write  => Deals with flushing out the redo log buffer to disk. Disks may be too slow or have an I/O bottleneck. Look for log file contention.
log buffer space   => Increase LOG_BUFFER parameter or move log files to faster disks. Tune application, use NOLOGGING, and look for poor behavior that updates an entire row when only a few columns change.
log file switch (checkpoint incomplete) => May indicate excessive db files or slow IO subsystem.
log file switch (archiving needed)    => Indicates archive files are written too slowly.
redo buffer allocation retries  => shows the number of times a user process waited for space in the redo log buffer.
redo log space wait time  => shows cumulative time (in 10s of milliseconds) waited by all processes waiting for space in the log buffer.

buffer busy waits/ read by other session  => Increase DB_CACHE_SIZE. Tune SQL, tune indexing, we often see this event along with full table scans, if the SQL is inserting data, consider increasing FREELISTS and/or INITRANS, if the waits are on segment header blocks, consider increasing extent sizes.
free buffer waits  => insufficient buffers, processes holding buffers too long, or an overloaded I/O subsystem. Also check whether database writes are getting clogged up.
cache buffers lru chain  => Freelist issues, hot blocks.
no free buffers   => Insufficient buffers, dbwr contention.

latch free
latch: session allocation
latch: in memory undo latch  => If excessive could be bug, check for your version, may have to turn off in memory undo.
latch: cache buffer chains  => check hot objects.
latch: cache buffer handles  => Freelist issues, hot blocks.

direct path write => you won't see these unless you are doing appends or data loads.
direct path reads => can happen if you are doing a lot of parallel query activity.
direct path read temp or direct path write temp => this wait event shows temp file activity (sorts, hashes, temp tables, bitmaps); check the PGA parameter or the sort area or hash area parameters. You might want to increase them.

library cache load lock
library cache pin => if many sessions are waiting, tune shared pool, if few sessions are waiting, lock is session specific.
library cache lock  => need to find the session holding the lock, look for DML manipulating an object being accessed, if the session is trying to recompile PL/SQL, look for other sessions executing the code.

undo segment extension  => If excessive, tune undo.
wait for a undo record   => Usually only during recovery of large transactions, look at turning off parallel undo recovery.

enqueue wait events   => Look at V$ENQUEUE_STAT





Trace Connect Hang at startup


 http://www.oracleangels.com/2011/05/useful-tracing-commands-oradebug-oracle.html
-- There is an option that is helpful when the database is hanging and we can't connect
-- to the database but we need to execute commands like oradebug

-- With sqlplus -prelim
-- we can connect to the sga but not to the database, in this case no session is created.


1) sqlplus -prelim / as sysdba

2) sqlplus /nolog
set _prelim on
conn / as sysdba

Example:

$sqlplus -prelim /nolog

SQL*Plus: Release 10.2.0.4.0 - Production on Wed Oct 26 19:35:42 2011

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.

SQL> connect / as sysdba
Prelim connection established
SQL> oradebug setmypid
Statement processed.
SQL> oradebug hanganalyze 12
Hang Analysis in C:\oracle\admin\orcl\udump\orcl_ora_5564.trc
SQL> oradebug dump systemstate 10
Statement processed.
SQL> oradebug tracefile_name
C:\oracle\admin\orcl\udump\orcl_ora_5598.trc



sqlplus '/ as sysdba'
oradebug unlimit
oradebug dump systemstate 266
wait 90 seconds
oradebug dump systemstate 266
wait 90 seconds
oradebug dump systemstate 266

oradebug tracefile_name




-----------------------------------------Identify the process id consuming 100% CPU and take errorstack as below

connect / as sysdba
oradebug setospid 1234 << replace 1234 with actual process id
oradebug unlimit
oradebug dump errorstack 3
wait 1 min
oradebug dump errorstack 3
wait 1 min
oradebug dump errorstack 3
oradebug tracefile_name

Killed Session Hung Consuming 100% CPU
Doc ID: Note:458881.1
215858.1



Migration using database link Performance


alter database noarchivelog;

Mode       Insert path  Log mode      Redo
NOLOGGING  APPEND       noarchivelog  no redo
LOGGING    no APPEND    noarchivelog  redo generated
NOLOGGING  no APPEND    noarchivelog  redo generated



DB1 : LD
DB2: DPNSDL
Database link created at LD
Processes are running on DPNSDL DB.

Process:

In the old procedure we simply selected the data from DB1 and DB2, manipulated it, and inserted into DB1. It took 5-6 hours.

INSERT INTO LEDGER@DPLDLINK
INSERT INTO JOURNDC@DPLDLINK
INSERT INTO FALEDDESCRIPT@DPLDLINK
INSERT INTO JOURNDC@DPLDLINK

In the new procedure, we create temp tables on DPNSDL matching the LD tables the data will eventually be inserted into, manipulate and store the data in those temp tables, and finally insert the data into LD over the database link.
It takes 20 minutes.

INSERT INTO TBLTEMPLEDGERSUM
          (COOWNCODE, NBALANCE)
             SELECT OOWNCODE, NVL(SUM(CAMOUNT - DAMOUNT), 0) AS BAL
             FROM LEDGER@DPLDLINK
WHERE FIRMNUMBER = ILDFIRMNO
AND DTOFTRAN <= TO_DATE(ILDBALDATE, 'dd/mm/yyyy')
GROUP BY OOWNCODE;

INSERT INTO TBLTEMPLEDGERSUM
INSERT INTO TBLTEMPLDLEDGER
INSERT INTO TBLTEMPJOURNDC
INSERT INTO TBLTEMPFALEDDESCRIPT




Changes

Create the LD ledger table in the NSDL database, store the data at NSDL, and later insert it into the LD db



               The NSDL financial transmission, which was taking around 5 hours, has been brought down to only 20 minutes at Aditya Birla. All programmers need to remember one thing:

1) If a problem arises at a client's site, it does not necessarily mean there was a recent change in the program. The problem could be that the data size has increased considerably, and a change is required in the query or PL/SQL execution to complete the same work.

The above is very important: once LD moves to Silverlight, the onus will be on the programming team, as I will be coming out of programming. Remember, LD is the flagship product of the company.

Kudos to the programming and support team for tackling this and solving it on a priority basis.




Generally speaking, dblink performance is limited by network speed, but there are some pitfalls leading to performance issues:

unnecessary joins between local and remote tables that leads to transferring large amounts of data;
lack of parallelism built into the query (unions help in this case);
implicit sorting on remote database side;
failure to comply with Oracle recommendations such as using of collocated views and hints (mainly DRIVING_SITE and NO_MERGE).




NO_UNNEST hint


USE_NL


http://www.unitask.com/oracledaily/2012/03/01/tuning-query-with-database-link-using-use_nl-hint/
https://cn.forums.oracle.com/forums/thread.jspa?threadID=2256982


create sequence to get import speed

select * from dba_sequences where sequence_name



I was in a very large volume shop pulling very large volumes of data from a collector system to be rolled into reporting tables. They selected from a materialized view on the remote collector system, as that was the more efficient way to build the result set, which was then inserted (no APPEND) into a local temp table with no constraints, from which the data was rolled into the local tables.


http://msutic.blogspot.in/2012/03/tuning-distributed-query-using-usenl.html
http://msutic.blogspot.in/2009/08/sql-tuning-using-usehash-hint-dblink.html


I once solved a similar issue using the USE_HASH hint (SQL Tuning - using USE_HASH hint - dblink issue), so I tried the USE_NL hint in this case.
It helped! I received the result in under a second.

With the USE_HASH hint I had resolved a network bottleneck, as much less data was sent over the network.

SELECT /*+ use_nl(cc cfp) */ *
  FROM TAB1 cc, TAB2@DB2 cfp
 WHERE cc.C_ID = cfp.B_ID AND cc.CODE = '1234567890'

Reference both the local and the remote table in the hint.


The following may be slow:

/*+ use_nl(CLDP LDFIBS) OPTIMIZER_FEATURES_ENABLE('10.2.0.4') */


----------What is OPAQUE_TRANSFORM Hint and how to Control it (Doc ID 780503.1)


-----------------------------------------Remote tables have a number of performance implications:

If all of the tables from which you are selecting are on the same remote database, then Oracle will get the remote database to run the query and send back the results. The only problems are the cost of shipping the results back, and the fact that you cannot see how the remote database is executing the query in Explain Plan (from Oracle 10g onwards, Explain Plan will get the remote database to plan its bit of the query and pass back the results to your Explain Plan). Make sure that the whole query is being performed remotely - the Explain Plan output should just have the one step - "REMOTE" - or in 10g+, every line of the plan contains the word "REMOTE".

If some tables are local - or you are joining tables over two or more remote databases - then Oracle will need to ship all of the data to one database before it can perform a join.

If you want to use an Indexed Nested Loop (low volume) join, then the outer (2nd) table cannot be remote; if it is, you will get a full table scan and a hash join. You could use the DRIVING_SITE hint to make the remote database the driving site of the query. Then the inner (1st) table will be sent over there, and an index can be used on the outer table. The remote database will then send back the results (see the sketch below).
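A minimal sketch of the hint (table and database link names are illustrative):

select /*+ driving_site(r) */ l.id, r.descr
from   local_tab l, remote_tab@remote_db r
where  l.id = r.id;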

Even better, speak to your DBA about having the remote table(s) replicated on your database so that you can avoid the remote join altogether.



Public synonym performance



How to reference a different schema without hard coding it


Oracle resolves all names while parsing, and the query execution plan generated is the same whether or not those names were resolved via synonyms. So there can be no difference in query execution performance. There is a small difference in parsing, but it goes the other way. A reference via a public synonym requires additional library cache and dictionary cache lookups. However, the performance impact should not be noticeable.

What may be more significant is that public synonym usage clutters the library cache and dictionary cache with information needed to track the negative dependencies on non-existent objects. If JONES and KING both refer to SCOTT.EMP and SCOTT.DEPT via public synonyms, then cache entries are needed to represent the non-existent tables JONES.EMP, KING.EMP, JONES.DEPT and KING.DEPT, and all dependent SQL statements must have negative dependencies on all these objects. With say 1000 users, 200 tables, and 1000 SQL statements each accessing 2 tables on average, you would have 200,000 non-existent object records and 2,000,000 negative dependency records. This clutter can cause latch contention in the library cache, dictionary cache and shared pool.


http://www.techrepublic.com/article/choose-alternatives-for-public-and-private-synonyms/5693565

In many large applications, a common way of separating the end user from the schema owner is to use private synonyms for application schema objects. For example, user A and user B both have their own login accounts. Schema user X has two tables, T1 and T2, so the application designers set up private synonyms for both A and B that reference X.T1 and X.T2, like this:

connect X/X
 create table T1(…);
 create table T2(…);
 connect A/A
 create synonym T1 for X.T1;
 create synonym T2 for X.T2;
 connect B/B
 create synonym T1 for X.T1;
 create synonym T2 for X.T2;
With two schema objects and two users, you have only four private synonyms. When you have a large number of users and application objects, you'll probably see a slow degradation of database performance. The performance problem will mostly be during the parse phase.

Every time a user wants to query T1, the parser must query across a large set of synonyms and put each synonym in the library cache along with its dependency structure. This would even affect queries such as select * from dual. You should avoid synonyms and use alternatives that depend on your application design.

The best alternative is to always fully qualify a table with its schema name. This alternative is only effective if you can guarantee the schema name will not change between installations and that there's only one schema with the given name in the database. In most applications where the designers carefully hide the SQL from the end user, there is no real benefit to coding select * from T1 in the code when select * from X.T1 would work in your situation.

In cases where there may be multiple schemas but each end user only accesses one schema at a time, the ALTER SESSION SET CURRENT_SCHEMA command is better than a synonym. This command makes the default schema for unqualified tables go to a particular schema. For example:

connect A/A
alter session set current_schema = X;
select * from T1;

This final query will select values from X.T1 and doesn't require any synonyms, while still obeying granted privileges on database objects. Even when it isn't possible to update an application so it always issues the alter session after a connect, you can code a database LOGON trigger to automatically set the current schema for a user.
==========================
create or replace trigger logon_trg
   after logon on database
 begin
    if user in ('A','B') then
        execute immediate 'alter session set current_schema=X';
    end if;
 end;
=======================

CREATE OR REPLACE TRIGGER LOGON_TRG
  AFTER LOGON ON SCHEMA
BEGIN
     EXECUTE IMMEDIATE 'ALTER SESSION SET CURRENT_SCHEMA = LDBO';
EXCEPTION
  when others
    then null;
END;
/

============================

Following Alex's suggestion, here is a logon trigger that checks the role rather than a username:

CREATE OR REPLACE TRIGGER LOGON_TRG
  AFTER LOGON ON DATABASE
declare
  has_role boolean;
BEGIN

    has_role := dbms_session.is_role_enabled('FOOBAR_ROLE');

    if (has_role) then
      EXECUTE IMMEDIATE 'ALTER SESSION SET CURRENT_SCHEMA = foobar';
    end if;
exception
   when others
      then null; -- prevent a login failure due to an exception  
END logon_trg;
/



============================
When user A or B logs in, his default schema will set automatically to X. The query select * from T1 will choose X.T1 automatically without requiring a synonym lookup or lock in the library.

Even in cases where the end user is accessing multiple schemas, or the names of database objects change between the end user and the application, you can still use views, which users can share in a central account:

create view T1 as select * from X.T1;
All of these alternatives rely on privileges the user is granted to access a specific table. However, this may be more access than the application needs to give: a user who has SELECT on a table in another schema can view all the columns and rows of that table from any utility that can execute generic SQL.

An alternative that restricts exactly what data and operations the user can perform against the data is to define PL/SQL functions, procedure, and packages to wrap up specific access to an object. The PL/SQL code runs with the owner's privilege, and unqualified names resolve according to the code owner's schema. If the code owner is also the schema object owner, this eliminates the need for synonyms or even hard-coded schema names.

create or replace function t1_get_count return integer
 as
    l_count integer;
 begin
    select count(*) into l_count from t1;
    return l_count;
 exception
    when others then return 0;
 end;
 /
If you use "invoker rights" with AUTHID CURRENT_USER, then unqualified tables will resolve in the current user's schema, and you'll have to use another way of resolving those names at runtime.

Top Buffer Gets


set serverout on size 1000000

declare
   top5  number;
   text1 varchar2(4000);
   x     number;
   len1  number;
   cursor c1 is
      select buffer_gets, substr(sql_text,1,4000)
        from v$sqlarea
       order by buffer_gets desc;
begin
   dbms_output.put_line('Reads'||'  '||'                          Text');
   dbms_output.put_line('-----'||'  '||'---------------------------------------------------');
   dbms_output.put_line('      ');
   open c1;
   for i in 1 .. 5 loop
      fetch c1 into top5, text1;
      dbms_output.put_line(rpad(to_char(top5),9)||' '||substr(text1,1,66));
      -- print the remainder of long SQL text in 64-character chunks
      len1 := length(text1);
      x := 66;
      while len1 > x-1 loop
         dbms_output.put_line('"         '||substr(text1,x,64));
         x := x+64;
      end loop;
   end loop;
   close c1;
end;
/

Update on Large Table


http://jonathanlewis.wordpress.com/2008/07/28/big-update/
UPDATE is the heaviest DML operation and it will take a long time, regardless of whether you COMMIT after each update operation (for one table) or at end only.



=============

1. create table tabc as select * from tabb;

2. truncate table tabb;

2.1 drop indexes from tabb;

3. insert into tabb (new_id,old_id, col1 ..... col 99)
select decode(a.old_id,null,c.new_id,a.new_id) as new_id, old_id, col1
........ col99
from tabc c, (select old_id,new_id from taba group by old_id,
new_id) a
where c.old_id = a.old_id(+);


4. create indexes on tabb;


===========

At a high level…
Create a new table by selecting everything from the old table and performing the calculation in the select clause
Create table TABLE_NEW nologging as select <perform calculation here> from TABLE_OLD;
Apply all constraints, indexes, grants, etc. to the new table
Drop the old table
Rename TABLE_NEW to TABLE_OLD;
Note the nologging option bypasses generating any redo and will result in significant performance improvement.





Use CTAS in lieu of large updates

When you are updating the majority of rows in a table, using Create Table As Select (CTAS) often performs better than a standard update.  For example, assume that the following update changed 75% of the table rows:

update
   mytab
set
   status = 'new'
where
   status = 'old';

In this case, a parallelized CTAS may perform far faster (note: make sure that you have an SMP server before using the parallel degree option):

create table new_mytab NOLOGGING as
select  /*+ full(mytab) parallel(mytab,35) */
   decode(status, 'old', 'new', status) status,
   col2, col3, col4
from mytab;

-- rebuild indexes, triggers and constraints to new_mytab

rename mytab to bkup_mytab;
rename new_mytab to mytab;


CPU taking SQL


If you noticed that a session is using too much CPU, you can identify the actions performed by that session using top and Explain Plan.
So first, use TOP to identify the session using high CPU and take a note of the PID.

set linesize 140
set pagesize 100
col username format a15
col machine  format a20
ACCEPT proceso prompt 'Pid : '
select s.inst_id, p.spid, s.sid, s.serial#, s.username, s.machine
from gv$session s, gv$process p
where s.paddr = p.addr
and p.spid = &proceso;

Once you got the SID associated to that PID, then you can use it with explain plan:
set lines 140
set pages 10000
set long 1000000
ACCEPT proceso prompt 'Sid : '
SELECT a.sql_id, a.sql_fulltext
FROM v$sqlarea a, v$session s
WHERE a.address = s.sql_address
 AND s.sid = &proceso;


set lines 150
set pages 40000
col operation format a55
col object format a25
ACCEPT sentencia prompt 'Identificador de SQL ejecutado : '
select lpad(' ',2*depth)||operation||' '||options||
       decode(id, 0, substr(optimizer,1,6)||' Cost='||to_char(cost)) operation,
       object_name object, cpu_cost, io_cost
from v$sql_plan where sql_id='&sentencia';



Synonyms: even after a user is granted DBA privileges, he may not be able to access the application objects without qualifying them; only the schema owner can reference them unqualified (hence the logon triggers below).


create or replace trigger logon_trg
   after logon on database
 begin
    if user in ('A','B') then
        execute immediate 'alter session set current_schema=X';
    end if;
 end;
/
----------------------

create or replace trigger logon_trg
   after logon on database
declare
usr varchar2(1000);
 begin
select wm_concat(''''|| username || '''') into usr from dba_users;
    if user in (usr) then
        execute immediate 'alter session set current_schema=LDBO';
    end if;
 end;
/

SYS@NBS11G> select wm_concat(''''|| username || '''') from dba_users;
select wm_concat(''''|| username || '''') from dba_users
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at "WMSYS.WM_CONCAT_IMPL", line 30

(This approach fails on two counts: WM_CONCAT overflows the VARCHAR2 buffer on a database with many users, and even if it did not, IN (usr) would compare USER against one long concatenated string rather than a list of values.)

Who is using database link


Select /*+ ORDERED */
substr(s.ksusemnm,1,10)||'-'|| substr(s.ksusepid,1,10)      "ORIGIN",
substr(g.K2GTITID_ORA,1,35) "GTXID",
substr(s.indx,1,4)||'.'|| substr(s.ksuseser,1,5) "LSESSION" ,
s2.username,
substr(
   decode(bitand(ksuseidl,11),
      1,'ACTIVE',
      0, decode( bitand(ksuseflg,4096) , 0,'INACTIVE','CACHED'),
      2,'SNIPED',
      3,'SNIPED',
      'KILLED'
   ),1,1
) "S",
substr(w.event,1,10) "WAITING"
from  x$k2gte g, x$ktcxb t, x$ksuse s, v$session_wait w, v$session s2
where  g.K2GTDXCB =t.ktcxbxba
and   g.K2GTDSES=t.ktcxbses
and  s.addr=g.K2GTDSES
And  W.Sid=S.Indx
and s2.sid = W.Sid;


partition in a partitioned table truncated by mistake


Import partition data
A partition in a partitioned table was truncated by mistake and data needs to be imported from an export dump.

However instead of importing the data directly into the original table, the plan is to import the partition data into the temporary user and ratify the data before importing it to the original table using partition exchange.

Steps -

1. Create a temporary user.
2. Import partition data only into table in the temporary user
3. Ratify data.
4. Move segments in temporary table into new non-partitioned table.
5. Move this non-partitioned table into the original users tablespace.
6. Exchange partition between temporary and original partition and clean up.

NOTE -

ORIGINAL OWNER is PART_OWNER
TABLESPACE_NAME IS TEST_TBS
TABLE_NAME is TEST
PARTITION IS TEST_PART_DEC

1. Create temporary user with separate tablespace -

Temporary user called PART_RESTORE.

2. Import partition data only into table in the temporary user

imp userid/password file=test_export.dmp log=test_import.log fromuser=PART_OWNER touser=PART_RESTORE tables=TEST:TEST_PART_DEC feedback=10000 buffer = 64000 ignore=y &

3. RATIFY DATA IN PART_RESTORE SCHEMA.

4. Move segments fron temporary table into new non-partitioned table.

CREATE TABLE PART_RESTORE.TEST_TEMP
AS SELECT * FROM PART_RESTORE.TEST
WHERE ROWNUM<1;

5. Move this non-partitioned table into the original users tablespace.

ALTER TABLE PART_RESTORE.TEST_TEMP MOVE TABLESPACE TEST_TBS;

6. Exchange partition between temporary and original partition and clean up.

-- set it as a nologging table

ALTER TABLE PART_RESTORE.TEST_TEMP NOLOGGING;

-- move the data into the temp table

INSERT /*+ APPEND */ INTO PART_RESTORE.TEST_TEMP
SELECT *
FROM PART_RESTORE.TEST;
COMMIT;

-- exchange the partition into the final PART_OWNER table

ALTER TABLE PART_OWNER.TEST
EXCHANGE PARTITION TEST_PART_DEC
WITH TABLE PART_RESTORE.TEST_TEMP
UPDATE GLOBAL INDEXES;

-- rebuild any unusable local indexes
ALTER TABLE PART_OWNER.TEST
MODIFY PARTITION TEST_PART_DEC
REBUILD UNUSABLE LOCAL INDEXES;

-- gather stats on the new partition
begin
DBMS_STATS.GATHER_TABLE_STATS (ownname => 'PART_OWNER', tabname => 'TEST', partname => 'TEST_PART_DEC', estimate_percent => 5, degree => 1, granularity => 'ALL', cascade => FALSE);
end;
/

-- drop the restored temp table

DROP TABLE PART_RESTORE.TEST_TEMP;

temporary tablespace usage


-- Listing of temp segments --

SELECT A.tablespace_name tablespace, D.mb_total,
SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_used,
D.mb_total - SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_free
FROM v$sort_segment A,
(
SELECT B.name, C.block_size, SUM (C.bytes) / 1024 / 1024 mb_total
FROM v$tablespace B, v$tempfile C
WHERE B.ts# = C.ts#
GROUP BY B.name, C.block_size
) D
WHERE A.tablespace_name = D.name
GROUP BY A.tablespace_name, D.mb_total;

-- Temp segment usage per session --

SELECT S.sid || ',' || S.serial# sid_serial, S.username, S.osuser, P.spid, S.module,
P.program, SUM (T.blocks) * TBS.block_size / 1024 / 1024 mb_used, T.tablespace,
COUNT(*) statements
FROM v$sort_usage T, v$session S, dba_tablespaces TBS, v$process P
WHERE T.session_addr = S.saddr
AND S.paddr = P.addr
AND T.tablespace = TBS.tablespace_name
GROUP BY S.sid, S.serial#, S.username, S.osuser, P.spid, S.module,
P.program, TBS.block_size, T.tablespace
ORDER BY sid_serial;

remove last two chars from a string


select substr(oowncode,1,length(oowncode)-2) from ldfibs where oowncode='25TAV014M';

Top 30 I/O Intensive SQL Statements Identification


-- Top 30 I/O Intensive SQL Statements Identification : top30iosql.sql
set linesize 80
set pagesize 58
set heading on
set serveroutput on
spool c:\temp\top30iosql.txt
declare
  cursor curs1 is
    select executions, disk_reads, buffer_gets, first_load_time, sql_text
    from v$sqlarea
    order by disk_reads / decode(executions,0,1,executions) desc;
  stmnt_ctr number(3);
begin
  dbms_output.enable(50000);
  stmnt_ctr := 0;
  for inrec in curs1 loop
    stmnt_ctr := stmnt_ctr + 1;
    if stmnt_ctr >= 31 then
      exit;
    end if;
    dbms_output.put_line('--------------------------------------' || '--------------------------------------');
    dbms_output.put_line('SQL Stmnt Number: ' || to_char(stmnt_ctr));
    dbms_output.put_line('--------------------------------------' || '--------------------------------------');
    dbms_output.put_line('Executions : ' || to_char(inrec.executions));
    dbms_output.put_line('Disk Reads : ' || to_char(inrec.disk_reads) || ' Buffer Gets : ' || to_char(inrec.buffer_gets));
    dbms_output.put_line('First Load Time: ' || inrec.first_load_time);
    dbms_output.put_line('SQL Statement-------->');
    -- the original block never printed the statement itself; show its first 255 chars
    dbms_output.put_line(substr(inrec.sql_text, 1, 255));
  end loop;
end;
/

Who called me


http://asktom.oracle.com/tkyte/who_called_me
How can I find out who called me, or what my name is?

Many times we are asked "in a procedure/function, can I find out who called me" or "can I dynamically figure out the name of the procedure or package that is currently executing".

You can find it in the call stack returned by dbms_utility.format_call_stack. I wrote a small routine called who_called_me that returns this sort of information (it doesn't tell you who you are, it lets you know who called you). If you wrap who_called_me with a function who_am_i, you'll get what you need. If you create the who_called_me/who_am_i routines, you'll be able to:



SQL> create or replace procedure demo
2  as
3  begin
4     dbms_output.put_line( who_am_i );
5  end;
6  /

Procedure created.

SQL> exec demo;
TKYTE.DEMO



In current releases of the database, this code has been incorporated into the OWA_UTIL package - you probably already have it in your database. If not, you can use this really old version that might need a tweak or two to work in your database release:


create or replace procedure who_called_me( owner      out varchar2,
                        name       out varchar2,
                        lineno     out number,
                        caller_t   out varchar2 )
as
   call_stack  varchar2(4096) default dbms_utility.format_call_stack;
   n           number;
   found_stack BOOLEAN default FALSE;
   line        varchar2(255);
   cnt         number := 0;
begin
--
   loop
       n := instr( call_stack, chr(10) );
       exit when ( cnt = 3 or n is NULL or n = 0 );
--
       line := substr( call_stack, 1, n-1 );
       call_stack := substr( call_stack, n+1 );
--
       if ( NOT found_stack ) then
           if ( line like '%handle%number%name%' ) then
               found_stack := TRUE;
           end if;
       else
           cnt := cnt + 1;
           -- cnt = 1 is ME
           -- cnt = 2 is MY Caller
           -- cnt = 3 is Their Caller
           if ( cnt = 3 ) then
               lineno := to_number(substr( line, 13, 6 ));
               line   := substr( line, 21 );
               if ( line like 'pr%' ) then
                   n := length( 'procedure ' );
               elsif ( line like 'fun%' ) then
                   n := length( 'function ' );
               elsif ( line like 'package body%' ) then
                   n := length( 'package body ' );
               elsif ( line like 'pack%' ) then
                   n := length( 'package ' );
               elsif ( line like 'anonymous%' ) then
                   n := length( 'anonymous block ' );
               else
                   n := null;
               end if;
               if ( n is not null ) then
                  caller_t := ltrim(rtrim(upper(substr( line, 1, n-1 ))));
               else
                  caller_t := 'TRIGGER';
               end if;

               line := substr( line, nvl(n,1) );
               n := instr( line, '.' );
               owner := ltrim(rtrim(substr( line, 1, n-1 )));
               name  := ltrim(rtrim(substr( line, n+1 )));
           end if;
       end if;
   end loop;
end;
/

create or replace function who_am_i return varchar2
is
   l_owner        varchar2(30);
   l_name      varchar2(30);
   l_lineno    number;
   l_type      varchar2(30);
begin
  who_called_me( l_owner, l_name, l_lineno, l_type );
  return l_owner || '.' || l_name;
end;
/
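
For comparison: on databases where OWA_UTIL is installed (as noted above, current releases ship this code in that package), the same information should be available without creating anything. A minimal, untested sketch of the direct call:

declare
  l_owner  varchar2(30);
  l_name   varchar2(30);
  l_lineno number;
  l_type   varchar2(30);
begin
  -- OWA_UTIL.WHO_CALLED_ME has the same OUT-parameter shape as the procedure above
  owa_util.who_called_me( l_owner, l_name, l_lineno, l_type );
  dbms_output.put_line( l_owner || '.' || l_name || ' at line ' || l_lineno );
end;
/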

Find string in Database


create or replace procedure sp_ldsearchengine
(val varchar2)
is
  v_old_table user_tab_columns.table_name%type;
  v_where     Varchar2(32766);
  v_first_col boolean := true;
  type rc     is ref cursor;
  c           rc;
  v_rowid     varchar2(20);

begin
  for r in (
    select
      t.*
    from
      user_tab_cols t, user_all_tables a
    where t.table_name = a.table_name
    and t.data_type like '%CHAR%'
    order by t.table_name) loop

    if v_old_table is null then
      v_old_table := r.table_name;
    end if;

    if v_old_table <> r.table_name then
      v_first_col := true;

      -- dbms_output.put_line('searching ' || v_old_table);

      open c for 'select rowid from "' || v_old_table || '" ' || v_where;

      fetch c into v_rowid;
      loop
        exit when c%notfound;
        dbms_output.put_line('  select * from  ' || v_old_table || ' where rowid=''' || v_rowid ||''';');
        fetch c into v_rowid;
      end loop;
      close c;

      v_old_table := r.table_name;
    end if;

    if v_first_col then
      v_where := ' where ' || r.column_name || ' like ''%' || val || '%''';
      v_first_col := false;
    else
      v_where := v_where || ' or ' || r.column_name || ' like ''%' || val || '%''';
    end if;

  end loop;

  -- search the last table too: the block above only fires when the table name
  -- changes, so without this the final table would never be searched
  if v_old_table is not null and v_where is not null then
    open c for 'select rowid from "' || v_old_table || '" ' || v_where;
    fetch c into v_rowid;
    loop
      exit when c%notfound;
      dbms_output.put_line('  select * from  ' || v_old_table || ' where rowid=''' || v_rowid ||''';');
      fetch c into v_rowid;
    end loop;
    close c;
  end if;
end;
/

UTL_FILE_DIR Checking


I found some sample code provided by Oracle for testing utl_file_dir.

Metalink Note 45327.1 gives some good procedures to test whether PL/SQL code calling the UTL_FILE package is able to write to the directories defined in utl_file_dir.

Metalink Note 1016653.4 gives this code to verify the setup for the UTL_FILE package.
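
Before running it, it's worth confirming what utl_file_dir is actually set to, e.g.:

SELECT value FROM v$parameter WHERE name = 'utl_file_dir';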



-----------The following will create a file, write into it, read it back, then create a second file and write into it--------------------


SET SERVEROUTPUT ON
DECLARE
  fid UTL_FILE.FILE_TYPE;
  v   VARCHAR2(32767);
  PROCEDURE recNgo (str IN VARCHAR2)
  IS
  BEGIN
    DBMS_OUTPUT.PUT_LINE ('UTL_FILE error ' || str);
    UTL_FILE.FCLOSE (fid);
  END;
BEGIN
  /* Change the directory name to one to which you at least
  || THINK you have read/write access.
  */
  fid := UTL_FILE.FOPEN ('d:\ldoutput', 'utl_file_test', 'W');
  UTL_FILE.PUT_LINE (fid, ' Attempt to write to test the file...Kshitij');
  UTL_FILE.FCLOSE (fid);

  fid := UTL_FILE.FOPEN ('d:\ldoutput', 'utl_file_test', 'R');
  UTL_FILE.GET_LINE (fid, v);
  DBMS_OUTPUT.PUT_LINE (v);
  UTL_FILE.FCLOSE (fid);

  fid := UTL_FILE.FOPEN ('d:\ldoutput', 'utl_file_test_10', 'W');
  UTL_FILE.PUT_LINE (fid, v);
  UTL_FILE.FCLOSE (fid);
EXCEPTION
  WHEN UTL_FILE.INVALID_PATH       THEN recNgo ('invalid_path');
  WHEN UTL_FILE.INVALID_MODE       THEN recNgo ('invalid_mode');
  WHEN UTL_FILE.INVALID_FILEHANDLE THEN recNgo ('invalid_filehandle');
  WHEN UTL_FILE.INVALID_OPERATION  THEN recNgo ('invalid_operation');
  WHEN UTL_FILE.READ_ERROR         THEN recNgo ('read_error');
  WHEN UTL_FILE.WRITE_ERROR        THEN recNgo ('write_error');
  WHEN UTL_FILE.INTERNAL_ERROR     THEN recNgo ('internal_error');
END;
/
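
Note that utl_file_dir is the old mechanism; directory objects are the preferred route on later releases. A minimal sketch of the same write test against a directory object (LDOUTPUT_DIR is a hypothetical name; creating it needs the CREATE ANY DIRECTORY privilege):

CREATE OR REPLACE DIRECTORY ldoutput_dir AS 'd:\ldoutput';
GRANT READ, WRITE ON DIRECTORY ldoutput_dir TO ldbo;

DECLARE
  fid UTL_FILE.FILE_TYPE;
BEGIN
  -- with directory objects, FOPEN takes the directory object name, not the OS path
  fid := UTL_FILE.FOPEN ('LDOUTPUT_DIR', 'utl_file_test', 'W');
  UTL_FILE.PUT_LINE (fid, 'directory object write test');
  UTL_FILE.FCLOSE (fid);
END;
/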

Purge Queue


BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'purge_boqueue',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'DECLARE  po_t dbms_aqadm.aq$_purge_options_t;
    BEGIN dbms_aqadm.purge_queue_table(''LDBO.TBLDIGITALBOQUEUE'', NULL, po_t); END;',
        start_date      => '28-APR-13 04.20.00 AM ASIA/CALCUTTA',
    repeat_interval => 'freq=DAILY',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'purge queue table');
END;
/


BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'purge_mainboqueue',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'DECLARE  po_t dbms_aqadm.aq$_purge_options_t;
    BEGIN dbms_aqadm.purge_queue_table(''LDBO.TBLMAINBOQUEUE'', NULL, po_t); END;',
        start_date      => '28-APR-13 04.30.00 AM ASIA/CALCUTTA',
    repeat_interval => 'freq=DAILY',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'purge queue table');
END;
/
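
A quick way to confirm both purge jobs were created and picked up a next run date (assuming access to DBA_SCHEDULER_JOBS; USER_SCHEDULER_JOBS works too):

select job_name, enabled, state, next_run_date
from dba_scheduler_jobs
where job_name in ('PURGE_BOQUEUE','PURGE_MAINBOQUEUE');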

Job Analyze Temp tables


CREATE OR REPLACE PROCEDURE Analyzetemp AS
BEGIN
 FOR CUR_REC IN (SELECT DISTINCT OWNER,TABLE_NAME
                  FROM   DBA_TABLES where table_name like '%TEMP%') LOOP
    BEGIN
      EXECUTE IMMEDIATE 'ANALYZE TABLE '  || CUR_REC.OWNER || '.' || CUR_REC.TABLE_NAME ||' COMPUTE STATISTICS' ;
    EXCEPTION
      WHEN OTHERS THEN
        NULL;
    END;
  END LOOP;
 FOR CUR_REC IN (SELECT DISTINCT OWNER,INDEX_NAME
                  FROM   DBA_INDEXES where table_name like '%TEMP%') LOOP
    BEGIN
      EXECUTE IMMEDIATE 'ANALYZE INDEX ' || CUR_REC.OWNER || '.' || CUR_REC.INDEX_NAME ||' COMPUTE STATISTICS' ;
    EXCEPTION
      WHEN OTHERS THEN
        NULL;
    END;
  END LOOP;
END;
/
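
ANALYZE ... COMPUTE STATISTICS still works here but is deprecated for gathering optimizer statistics; a sketch of the same loop using DBMS_STATS instead (the procedure name AnalyzetempStats is hypothetical, and CASCADE => TRUE covers the indexes, so the second loop isn't needed):

CREATE OR REPLACE PROCEDURE AnalyzetempStats AS
BEGIN
 FOR CUR_REC IN (SELECT DISTINCT OWNER,TABLE_NAME
                  FROM   DBA_TABLES where table_name like '%TEMP%') LOOP
    BEGIN
      -- gathers table and index stats in one call
      DBMS_STATS.GATHER_TABLE_STATS(ownname => CUR_REC.OWNER, tabname => CUR_REC.TABLE_NAME, cascade => TRUE);
    EXCEPTION
      WHEN OTHERS THEN
        NULL;
    END;
  END LOOP;
END;
/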


Analyze Full


CREATE OR REPLACE PROCEDURE AnalyzeFull AS
BEGIN
 FOR CUR_REC IN (SELECT DISTINCT OWNER,TABLE_NAME
                  FROM   DBA_TABLES) LOOP
    BEGIN
      EXECUTE IMMEDIATE 'ANALYZE TABLE '  || CUR_REC.OWNER || '.' || CUR_REC.TABLE_NAME ||' COMPUTE STATISTICS' ;
    EXCEPTION
      WHEN OTHERS THEN
        NULL;
    END;
  END LOOP;
 FOR CUR_REC IN (SELECT DISTINCT OWNER, INDEX_NAME
                  FROM   DBA_INDEXES) LOOP
    BEGIN
      EXECUTE IMMEDIATE 'ANALYZE INDEX ' || CUR_REC.OWNER || '.' || CUR_REC.INDEX_NAME ||' COMPUTE STATISTICS' ;
    EXCEPTION
      WHEN OTHERS THEN
        NULL;
    END;
  END LOOP;
END;
/


User Creation With job


host mkdir c:\yep1314

spool c:\yep1314\yep1314.log

--------------User Creation

CREATE USER LDBO PROFILE "DEFAULT" IDENTIFIED BY ldbo DEFAULT TABLESPACE "USR" TEMPORARY TABLESPACE "TEMPORARY" ACCOUNT UNLOCK ;


BEGIN
DBMS_WM.GrantSystemPriv('ACCESS_ANY_WORKSPACE', 'LDBO', 'NO');
END;
/

GRANT ADMINISTER ANY SQL TUNING SET TO "LDBO" ;
GRANT ADMINISTER DATABASE TRIGGER TO "LDBO" ;

BEGIN
dbms_resource_manager_privs.grant_system_privilege(privilege_name=>'ADMINISTER_RESOURCE_MANAGER', grantee_name=>'LDBO', admin_option=>FALSE);
END;
/
GRANT ADMINISTER SQL TUNING SET TO "LDBO" ;
GRANT ADVISOR TO "LDBO" ;
GRANT ALTER ANY CLUSTER TO "LDBO" ;
GRANT ALTER ANY DIMENSION TO "LDBO" ;
GRANT ALTER ANY EVALUATION CONTEXT TO "LDBO" ;
GRANT ALTER ANY INDEX TO "LDBO" ;
GRANT ALTER ANY INDEXTYPE TO "LDBO" ;
GRANT ALTER ANY LIBRARY TO "LDBO" ;
GRANT ALTER ANY MATERIALIZED VIEW TO "LDBO" ;
GRANT ALTER ANY OUTLINE TO "LDBO" ;
GRANT ALTER ANY PROCEDURE TO "LDBO" ;
GRANT ALTER ANY ROLE TO "LDBO" ;
GRANT ALTER ANY RULE TO "LDBO" ;
GRANT ALTER ANY RULE SET TO "LDBO" ;
GRANT ALTER ANY SEQUENCE TO "LDBO" ;
GRANT ALTER ANY SQL PROFILE TO "LDBO" ;
GRANT ALTER ANY TABLE TO "LDBO" ;
GRANT ALTER ANY TRIGGER TO "LDBO" ;
GRANT ALTER ANY TYPE TO "LDBO" ;
GRANT ALTER DATABASE TO "LDBO" ;
GRANT ALTER PROFILE TO "LDBO" ;
GRANT ALTER RESOURCE COST TO "LDBO" ;
GRANT ALTER ROLLBACK SEGMENT TO "LDBO" ;
GRANT ALTER SESSION TO "LDBO" ;
GRANT ALTER SYSTEM TO "LDBO" ;
GRANT ALTER TABLESPACE TO "LDBO" ;
GRANT ALTER USER TO "LDBO" ;
GRANT ANALYZE ANY TO "LDBO" ;
GRANT ANALYZE ANY DICTIONARY TO "LDBO" ;
GRANT AUDIT ANY TO "LDBO" ;
GRANT AUDIT SYSTEM TO "LDBO" ;
GRANT BACKUP ANY TABLE TO "LDBO" ;
GRANT BECOME USER TO "LDBO" ;
GRANT CHANGE NOTIFICATION TO "LDBO" ;
GRANT COMMENT ANY TABLE TO "LDBO" ;
GRANT CREATE ANY CLUSTER TO "LDBO" ;
GRANT CREATE ANY CONTEXT TO "LDBO" ;
GRANT CREATE ANY DIMENSION TO "LDBO" ;
GRANT CREATE ANY DIRECTORY TO "LDBO" ;
GRANT CREATE ANY EVALUATION CONTEXT TO "LDBO" ;
GRANT CREATE ANY INDEX TO "LDBO" ;
GRANT CREATE ANY INDEXTYPE TO "LDBO" ;
GRANT CREATE ANY JOB TO "LDBO" ;
GRANT CREATE ANY LIBRARY TO "LDBO" ;
GRANT CREATE ANY MATERIALIZED VIEW TO "LDBO" ;
GRANT CREATE ANY OPERATOR TO "LDBO" ;
GRANT CREATE ANY OUTLINE TO "LDBO" ;
GRANT CREATE ANY PROCEDURE TO "LDBO" ;
GRANT CREATE ANY RULE TO "LDBO" ;
GRANT CREATE ANY RULE SET TO "LDBO" ;
GRANT CREATE ANY SEQUENCE TO "LDBO" ;
GRANT CREATE ANY SQL PROFILE TO "LDBO" ;
GRANT CREATE ANY SYNONYM TO "LDBO" ;
GRANT CREATE ANY TABLE TO "LDBO" ;
GRANT CREATE ANY TRIGGER TO "LDBO" ;
GRANT CREATE ANY TYPE TO "LDBO" ;
GRANT CREATE ANY VIEW TO "LDBO" ;
GRANT CREATE CLUSTER TO "LDBO" ;
GRANT CREATE DATABASE LINK TO "LDBO" ;
GRANT CREATE DIMENSION TO "LDBO" ;
GRANT CREATE EVALUATION CONTEXT TO "LDBO" ;
GRANT CREATE EXTERNAL JOB TO "LDBO" ;
GRANT CREATE INDEXTYPE TO "LDBO" ;
GRANT CREATE JOB TO "LDBO" ;
GRANT CREATE LIBRARY TO "LDBO" ;
GRANT CREATE MATERIALIZED VIEW TO "LDBO" ;
GRANT CREATE OPERATOR TO "LDBO" ;
GRANT CREATE PROCEDURE TO "LDBO" ;
GRANT CREATE PROFILE TO "LDBO" ;
GRANT CREATE PUBLIC DATABASE LINK TO "LDBO" ;
GRANT CREATE PUBLIC SYNONYM TO "LDBO" ;
GRANT CREATE ROLE TO "LDBO" ;
GRANT CREATE ROLLBACK SEGMENT TO "LDBO" ;
GRANT CREATE RULE TO "LDBO" ;
GRANT CREATE RULE SET TO "LDBO" ;
GRANT CREATE SEQUENCE TO "LDBO" ;
GRANT CREATE SESSION TO "LDBO" ;
GRANT CREATE SYNONYM TO "LDBO" ;
GRANT CREATE TABLE TO "LDBO" ;
GRANT CREATE TABLESPACE TO "LDBO" ;
GRANT CREATE TRIGGER TO "LDBO" ;
GRANT CREATE TYPE TO "LDBO" ;
GRANT CREATE USER TO "LDBO" ;
GRANT CREATE VIEW TO "LDBO" ;
BEGIN
DBMS_WM.GrantSystemPriv('CREATE_ANY_WORKSPACE', 'LDBO', 'NO');
END;
/
GRANT DEBUG ANY PROCEDURE TO "LDBO" ;
GRANT DEBUG CONNECT SESSION TO "LDBO" ;
GRANT DELETE ANY TABLE TO "LDBO" ;
BEGIN
dbms_aqadm.grant_system_privilege(privilege=>'DEQUEUE_ANY', grantee=>'LDBO', admin_option=>FALSE);
COMMIT;
END;
/
GRANT DROP ANY CLUSTER TO "LDBO" ;
GRANT DROP ANY CONTEXT TO "LDBO" ;
GRANT DROP ANY DIMENSION TO "LDBO" ;
GRANT DROP ANY DIRECTORY TO "LDBO" ;
GRANT DROP ANY EVALUATION CONTEXT TO "LDBO" ;
GRANT DROP ANY INDEX TO "LDBO" ;
GRANT DROP ANY INDEXTYPE TO "LDBO" ;
GRANT DROP ANY LIBRARY TO "LDBO" ;
GRANT DROP ANY MATERIALIZED VIEW TO "LDBO" ;
GRANT DROP ANY OPERATOR TO "LDBO" ;
GRANT DROP ANY OUTLINE TO "LDBO" ;
GRANT DROP ANY PROCEDURE TO "LDBO" ;
GRANT DROP ANY ROLE TO "LDBO" ;
GRANT DROP ANY RULE TO "LDBO" ;
GRANT DROP ANY RULE SET TO "LDBO" ;
GRANT DROP ANY SEQUENCE TO "LDBO" ;
GRANT DROP ANY SQL PROFILE TO "LDBO" ;
GRANT DROP ANY SYNONYM TO "LDBO" ;
GRANT DROP ANY TABLE TO "LDBO" ;
GRANT DROP ANY TRIGGER TO "LDBO" ;
GRANT DROP ANY TYPE TO "LDBO" ;
GRANT DROP ANY VIEW TO "LDBO" ;
GRANT DROP PROFILE TO "LDBO" ;
GRANT DROP PUBLIC DATABASE LINK TO "LDBO" ;
GRANT DROP PUBLIC SYNONYM TO "LDBO" ;
GRANT DROP ROLLBACK SEGMENT TO "LDBO" ;
GRANT DROP TABLESPACE TO "LDBO" ;
GRANT DROP USER TO "LDBO" ;
BEGIN
dbms_aqadm.grant_system_privilege(privilege=>'ENQUEUE_ANY', grantee=>'LDBO', admin_option=>FALSE);
COMMIT;
END;
/
GRANT EXECUTE ANY CLASS TO "LDBO" ;
GRANT EXECUTE ANY EVALUATION CONTEXT TO "LDBO" ;
GRANT EXECUTE ANY INDEXTYPE TO "LDBO" ;
GRANT EXECUTE ANY LIBRARY TO "LDBO" ;
GRANT EXECUTE ANY OPERATOR TO "LDBO" ;
GRANT EXECUTE ANY PROCEDURE TO "LDBO" ;
GRANT EXECUTE ANY PROGRAM TO "LDBO" ;
GRANT EXECUTE ANY RULE TO "LDBO" ;
GRANT EXECUTE ANY RULE SET TO "LDBO" ;
GRANT EXECUTE ANY TYPE TO "LDBO" ;
GRANT EXPORT FULL DATABASE TO "LDBO" ;
GRANT FLASHBACK ANY TABLE TO "LDBO" ;
GRANT FORCE ANY TRANSACTION TO "LDBO" ;
GRANT FORCE TRANSACTION TO "LDBO" ;
GRANT GRANT ANY OBJECT PRIVILEGE TO "LDBO" ;
GRANT GRANT ANY PRIVILEGE TO "LDBO" ;
GRANT GRANT ANY ROLE TO "LDBO" ;
GRANT IMPORT FULL DATABASE TO "LDBO" ;
GRANT INSERT ANY TABLE TO "LDBO" ;
GRANT LOCK ANY TABLE TO "LDBO" ;
GRANT EXECUTE ON DBMS_LOCK TO "LDBO" ;
GRANT MANAGE ANY FILE GROUP TO "LDBO" ;
BEGIN
dbms_aqadm.grant_system_privilege(privilege=>'MANAGE_ANY', grantee=>'LDBO', admin_option=>FALSE);
COMMIT;
END;
/
GRANT MANAGE FILE GROUP TO "LDBO" ;
GRANT MANAGE SCHEDULER TO "LDBO" ;
GRANT MANAGE TABLESPACE TO "LDBO" ;
GRANT MERGE ANY VIEW TO "LDBO" ;
BEGIN
DBMS_WM.GrantSystemPriv('MERGE_ANY_WORKSPACE', 'LDBO', 'NO');
END;
/
GRANT ON COMMIT REFRESH TO "LDBO" ;
GRANT QUERY REWRITE TO "LDBO" ;
GRANT READ ANY FILE GROUP TO "LDBO" ;
BEGIN
DBMS_WM.GrantSystemPriv('REMOVE_ANY_WORKSPACE', 'LDBO', 'NO');
END;
/
GRANT RESTRICTED SESSION TO "LDBO" ;
GRANT RESUMABLE TO "LDBO" ;
BEGIN
DBMS_WM.GrantSystemPriv('ROLLBACK_ANY_WORKSPACE', 'LDBO', 'NO');
END;
/
GRANT SELECT ANY DICTIONARY TO "LDBO" ;
GRANT SELECT ANY SEQUENCE TO "LDBO" ;
GRANT SELECT ANY TABLE TO "LDBO" ;
GRANT SELECT ANY TRANSACTION TO "LDBO" ;
GRANT SYSDBA TO "LDBO" ;
GRANT SYSOPER TO "LDBO" ;
GRANT UNDER ANY TABLE TO "LDBO" ;
GRANT UNDER ANY TYPE TO "LDBO" ;
GRANT UNDER ANY VIEW TO "LDBO" ;
GRANT UNLIMITED TABLESPACE TO "LDBO" ;
GRANT UPDATE ANY TABLE TO "LDBO" ;

GRANT "AQ_ADMINISTRATOR_ROLE" TO "LDBO" ;
GRANT "AQ_USER_ROLE" TO "LDBO" ;
GRANT EXECUTE ON DBMS_AQ TO "LDBO";
GRANT EXECUTE ON DBMS_AQADM TO "LDBO";


GRANT EXECUTE ON SYS.DBMS_DEFER_IMPORT_INTERNAL TO "LDBO";
GRANT EXECUTE ON SYS.DBMS_EXPORT_EXTENSION TO "LDBO";


BEGIN
DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(privilege => 'ENQUEUE_ANY',grantee => 'LDBO',admin_option => FALSE);
END;
/

BEGIN
DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(privilege => 'DEQUEUE_ANY',grantee => 'LDBO',admin_option => FALSE);
END;
/

GRANT "CONNECT" TO "LDBO" ;
GRANT "DBA" TO "LDBO" ;
GRANT "EXP_FULL_DATABASE" TO "LDBO" ;
GRANT "IMP_FULL_DATABASE" TO "LDBO" ;

ALTER USER LDBO QUOTA UNLIMITED on USR;
ALTER USER LDBO QUOTA UNLIMITED on INDX;



GRANT execute ON utl_recomp TO "LDBO";

alter system set job_queue_processes=1000;
alter system set sec_case_sensitive_logon=False;
alter system set open_cursors=1000 scope=spfile;
alter system set session_cached_cursors=500 scope=spfile;


CREATE USER CMUSER PROFILE "DEFAULT" IDENTIFIED BY cmuser DEFAULT TABLESPACE "USR" TEMPORARY TABLESPACE "TEMPORARY" ACCOUNT UNLOCK ;
GRANT CREATE SESSION TO cmuser;
GRANT CONNECT TO cmuser;

create or replace directory LDDIGITAL as 'd:\ldoutput\lddigital';

grant all on directory LDDIGITAL to public;



-----------------Auditing

audit user, system grant, role, alter system, profile whenever successful;
audit create session by access;
audit audit system by access;
audit grant any privilege by access;
audit grant any object privilege by access;
audit grant any role by access;
audit system grant by access;
audit create user by access;
audit create any table by access;
audit create public database link by access;
audit create any procedure by access;
audit alter user by access;
audit alter any table by access;
audit alter any procedure by access;
audit alter database by access;
audit alter system by access;
audit alter profile by access;
audit drop user by access;
audit drop any procedure by access;
audit drop any table by access;
audit drop profile by access;
audit drop any index by access;

-----------------Profile Setting

ALTER PROFILE DEFAULT LIMIT
FAILED_LOGIN_ATTEMPTS UNLIMITED
PASSWORD_LIFE_TIME UNLIMITED
PASSWORD_LOCK_TIME UNLIMITED
PASSWORD_GRACE_TIME UNLIMITED;
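
To confirm the profile change took effect (DBA_PROFILES is the standard dictionary view for this):

SELECT resource_name, limit
FROM dba_profiles
WHERE profile = 'DEFAULT'
AND resource_type = 'PASSWORD';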

-----------------Network Access

Exec dbms_network_acl_admin.create_acl ('utl_http_access.xml','Normal Access','LDBO',TRUE,'connect',NULL,NULL);
Exec dbms_network_acl_admin.add_privilege (acl => 'utl_http_access.xml', principal =>  'LDBO',is_grant => TRUE, privilege => 'resolve');
Exec dbms_network_acl_admin.assign_acl ('utl_http_access.xml', '*',NULL,NULL);
commit ;
               
Exec dbms_network_acl_admin.create_acl ('utl_inaddr_access.xml','Normal Access','LDBO',TRUE,'resolve',NULL, NULL);
Exec dbms_network_acl_admin.add_privilege (acl => 'utl_inaddr_access.xml', principal =>  'LDBO',is_grant => TRUE, privilege => 'resolve');
Exec dbms_network_acl_admin.assign_acl ('utl_inaddr_access.xml', '*',NULL,NULL);
commit;

Exec dbms_network_acl_admin.create_acl ('utl_mail.xml','Allow mail to be sent','LDBO',TRUE,'connect' );
Exec dbms_network_acl_admin.add_privilege ('utl_mail.xml','LDBO',TRUE,'resolve');
Exec dbms_network_acl_admin.assign_acl('utl_mail.xml','*',NULL,NULL);
commit ;


Exec dbms_network_acl_admin.create_acl ('utl_http.xml','HTTP Access','LDBO',TRUE,'connect',null,null);
Exec dbms_network_acl_admin.add_privilege ('utl_http.xml','LDBO',TRUE,'resolve',null,null);
Exec dbms_network_acl_admin.assign_acl ('utl_http.xml','*',NULL,NULL);
commit;

Grant Execute on utl_inaddr to ldbo ;
Grant Execute on utl_http to ldbo ;
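
The ACL setup can be verified from the dictionary (these 11g views should exist wherever dbms_network_acl_admin does):

SELECT host, lower_port, upper_port, acl FROM dba_network_acls;

SELECT acl, principal, privilege, is_grant FROM dba_network_acl_privileges;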


--------------Jobs


BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'compile',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN UTL_RECOMP.recomp_serial; END;',
    start_date      => '01-APR-12 06.31.00 AM ASIA/CALCUTTA',
    repeat_interval => 'freq=DAILY',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'JOB to compile invalid objects');
END;
/


BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'analyze',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_STATS.gather_schema_stats(''LDBO'',CASCADE=>TRUE); END;',
    start_date      => '01-APR-12 04.00.00 AM ASIA/CALCUTTA',
    repeat_interval=> 'FREQ=DAILY',
    enabled         => TRUE,
    comments        => 'JOB to gather LDBO statistics every DAY');
END;
/


CREATE OR REPLACE PROCEDURE Analyzetemp wrapped
a000000
b2
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
7
2a2 168
oUpO82c0m3kRqNYu+ieu9+lzKuMwg+1pLdwdf3QBvGQwDJTIaaoOXtFSqGZSURSwYVZJQEip
BFRn/ggU7vLVLWjhHH7ZwpoSP6gj6SPjNjqS0uQIQs8kvDQ16OKw2mqkYSLmslJfGnAEsfRW
JnR5Cd9xq50LGMSTM6dgp0p75Bh50uKOVdktzWyKuSYvQdBw1x012GW+S9N4SxkHOSFtETUS
CYQbBMFa/ZN7qJODaoZfNvtZ66GKRb0KHS1vnE6ZPN7TaRkCGOzCR4FJjTbfVXhPUcjwJAgf
s3RPbdJG8TrBXhzYkK5UseiJMxXJJhBAKs0ftLEz+d2dQJkt27wDd6pLZMvivKZk5hqg

/

show errors

BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'ANALYZE_TEMP',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN Analyzetemp; END;',
    start_date      => '01-APR-12 08.01.00 AM ASIA/CALCUTTA',
    repeat_interval=> 'FREQ=DAILY',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'Analyze TEMP tables indexes');
END;
/

CREATE OR REPLACE PROCEDURE AnalyzeFull wrapped
a000000
b2
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
abcd
7
265 154
+WnW35aMpBSqlXDoysgRZr/L8jowg+1pLdwdf3QBvGQwumfmczfcvtuA72wzPvomAkPb3Z/g
Hml30Em9YungrX+PqGGHfp3BeGB/kbBaULtk2SmGeVfRmfX8IyPAaDjI54KVrfE3C2hJm5Fl
2AwxUj6Rco4sA7kA19gdMs6UhIlp8c2BaJIlHTSFtHuWVmJ9Z8mXNh/hk7AMYyH2teTmEcdp
3TrPXAPeR5Rp/YG2nd4+Taz3fDw7Ph5x6RMaS828znTwMfqNqp3vQF2klUabm/4+ekKqiDi6
bfGDvQUN+YmWDAeUOQ28nVd3OWs8BPXrmyg5XI+5tpcsjUQ=

/

show errors

BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'ANALYZE_FULL',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN analyzefull; END;',
    start_date      => '01-APR-12 01.00.00 PM ASIA/CALCUTTA',
    repeat_interval => 'FREQ=WEEKLY; BYDAY=SUN; BYHOUR=01; BYMINUTE=01;',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'Analyze all tables indexes');
END;
/


BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'purge_rkqueue',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'DECLARE  po_t dbms_aqadm.aq$_purge_options_t;
    BEGIN dbms_aqadm.purge_queue_table(''LDBO.TBLRKQUEUE'', NULL, po_t); END;',
        start_date      => '28-APR-13 04.10.00 AM ASIA/CALCUTTA',
    repeat_interval => 'freq=DAILY',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'purge queue table');
END;
/


BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'purge_boqueue',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'DECLARE  po_t dbms_aqadm.aq$_purge_options_t;
    BEGIN dbms_aqadm.purge_queue_table(''LDBO.TBLDIGITALBOQUEUE'', NULL, po_t); END;',
        start_date      => '28-APR-13 04.20.00 AM ASIA/CALCUTTA',
    repeat_interval => 'freq=DAILY',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'purge queue table');
END;
/


BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'purge_mainboqueue',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'DECLARE  po_t dbms_aqadm.aq$_purge_options_t;
    BEGIN dbms_aqadm.purge_queue_table(''LDBO.TBLMAINBOQUEUE'', NULL, po_t); END;',
        start_date      => '28-APR-13 04.30.00 AM ASIA/CALCUTTA',
    repeat_interval => 'freq=DAILY',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'purge queue table');
END;
/


BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'analyze_rkqueue',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN dbms_stats.gather_table_stats(''LDBO'',''TBLRKQUEUE'' ,force=>TRUE); END;',
    start_date      => '28-APR-13 07.00.00 AM ASIA/CALCUTTA',
    repeat_interval => 'freq=DAILY',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'JOB to gather Queue Table statistics');
END;
/


BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'analyze_boqueue',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN dbms_stats.gather_table_stats(''LDBO'',''TBLDIGITALBOQUEUE'' ,force=>TRUE); END;',
    start_date      => '28-APR-13 07.11.00 AM ASIA/CALCUTTA',
    repeat_interval => 'freq=DAILY',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'JOB to gather Queue Table statistics');
END;
/


BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'analyze_mainboqueue',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN dbms_stats.gather_table_stats(''LDBO'',''TBLMAINBOQUEUE'' ,force=>TRUE); END;',
    start_date      => '28-APR-13 07.20.00 AM ASIA/CALCUTTA',
    repeat_interval => 'freq=DAILY',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'JOB to gather Queue Table statistics');
END;
/


---------------Backup job

host mkdir d:\expdp1314

create directory export_auto as 'd:\expdp1314';


CREATE USER dba_export_user PROFILE "DEFAULT" IDENTIFIED BY dba_export_user DEFAULT TABLESPACE "USR" TEMPORARY TABLESPACE "TEMPORARY" ACCOUNT UNLOCK ;

grant connect, create database link, resource, create view to dba_export_user;
grant unlimited tablespace to dba_export_user;
grant exp_full_database to dba_export_user;
grant read,write on directory export_auto to dba_export_user;
grant execute on dbms_flashback to dba_export_user;
grant create table to dba_export_user;
grant FLASHBACK ANY TABLE to dba_export_user;

ALTER USER dba_export_user QUOTA UNLIMITED on USR;
ALTER USER dba_export_user QUOTA UNLIMITED on INDX;


CREATE OR REPLACE PROCEDURE dba_export_user.start_export
IS
   hdl_job        NUMBER;
   l_cur_scn      NUMBER;
   l_job_state    VARCHAR2 (20);
   l_status       SYS.ku$_status1010;
   l_job_status   SYS.ku$_jobstatus1010;
BEGIN
   begin
      execute immediate 'drop table dba_export_user.AUTO_EXPORT';
   exception when others then null;
   end;
   hdl_job := DBMS_DATAPUMP.OPEN ( operation => 'EXPORT', job_mode => 'FULL', job_name => 'AUTO_EXPORT' );
   DBMS_DATAPUMP.add_file (handle => hdl_job,filename => 'EXPDP1314.dmp',directory => 'EXPORT_AUTO',filetype => DBMS_DATAPUMP.ku$_file_type_dump_file,reusefile => 1);
   DBMS_DATAPUMP.add_file (handle => hdl_job,filename => 'EXPDP1314.log',DIRECTORY => 'EXPORT_AUTO',filetype => DBMS_DATAPUMP.ku$_file_type_log_file,reusefile => 1);
   DBMS_DATAPUMP.start_job (handle => hdl_job);
   DBMS_DATAPUMP.wait_for_job (handle => hdl_job, job_state => l_job_state);
   DBMS_OUTPUT.put_line ('Job exited with status:' || l_job_state);
    DBMS_DATAPUMP.detach(handle => hdl_job);
END;
/

show errors
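
While the export runs, progress can be watched from another session - a minimal check (assuming SELECT access on DBA_DATAPUMP_JOBS):

SELECT owner_name, job_name, operation, job_mode, state
FROM dba_datapump_jobs;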


begin
 dbms_scheduler.create_job(
      job_name => 'EXPORT_JOB'
     ,job_type => 'STORED_PROCEDURE'
     ,job_action => 'dba_export_user.start_export'
     ,start_date => '01-MAR-13 10.00.00 PM ASIA/CALCUTTA'
     ,repeat_interval => 'FREQ=DAILY; BYDAY=MON,TUE,WED,THU,FRI,SAT,SUN;'
     ,enabled => TRUE
     ,comments => 'EXPORT_DATABASE_JOB');
end;
/


spool off

job rebuild index 11G


create or replace procedure sp_rebuildindex as
  vOwner   dba_indexes.owner%TYPE;            /* Index Owner            */
  vIdxName dba_indexes.index_name%TYPE;       /* Index Name             */
  vTbName dba_indexes.table_name%TYPE;        /* Table Name             */
  vAnalyze VARCHAR2(100);                     /* String of Analyze Stmt */
  vCursor  NUMBER;                            /* DBMS_SQL cursor        */
  vNumRows INTEGER;                           /* DBMS_SQL return rows   */
  vHeight  index_stats.height%TYPE;           /* Height of index tree   */
  vLfRows  index_stats.lf_rows%TYPE;          /* Index Leaf Rows        */
  vDLfRows index_stats.del_lf_rows%TYPE;      /* Deleted Leaf Rows      */
  vDLfPerc   NUMBER;                          /* Del lf Percentage      */
  vMaxHeight NUMBER;                          /* Max tree height        */
  vMaxDel    NUMBER;                          /* Max del lf percentage  */
  CURSOR cGetIdx IS SELECT owner,index_name,table_name
     FROM dba_indexes WHERE owner NOT IN ('DBSNMP','ORACLE_OCM','OUTLN','SYS','SYSMAN','SYSTEM','TSMSYS','WMSYS','XDB');
BEGIN
  /* Define maximums. This section can be customized. */
  vMaxHeight := 3;
  vMaxDel    := 20;

  /* For every index, validate structure */
  OPEN cGetIdx;
  LOOP
     FETCH cGetIdx INTO vOwner,vIdxName,VTbName;
     EXIT WHEN cGetIdx%NOTFOUND;
     /* Open DBMS_SQL cursor */
     vCursor := DBMS_SQL.OPEN_CURSOR;
     /* Set up dynamic string to validate structure */
     vAnalyze := 'ANALYZE INDEX ' || vOwner || '.' || vIdxName || ' VALIDATE STRUCTURE';
     DBMS_SQL.PARSE(vCursor,vAnalyze,DBMS_SQL.V7);
     vNumRows := DBMS_SQL.EXECUTE(vCursor);
     /* Close DBMS_SQL cursor */
     DBMS_SQL.CLOSE_CURSOR(vCursor);
     /* Does index need rebuilding?  */
     /* If so, then generate command */
     SELECT height,lf_rows,del_lf_rows INTO vHeight,vLfRows,vDLfRows
        FROM INDEX_STATS;
     IF vDLfRows = 0 THEN         /* handle case where div by zero */
        vDLfPerc := 0;
     ELSE
        vDLfPerc := (vDLfRows / vLfRows) * 100;
     END IF;
     IF (vHeight > vMaxHeight) OR (vDLfPerc > vMaxDel) THEN
        /* qualify with the owner so the rebuild hits the right schema */
        EXECUTE IMMEDIATE 'ALTER INDEX ' || vOwner || '.' || vIdxName || ' REBUILD ONLINE';
        EXECUTE IMMEDIATE 'ANALYZE TABLE ' || vOwner || '.' || vTbName || ' compute statistics';
        EXECUTE IMMEDIATE 'ANALYZE INDEX ' || vOwner || '.' || vIdxName || ' compute statistics';
     END IF;
  END LOOP;
  CLOSE cGetIdx;
END;
/


BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'REBUILDINDEX',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN sp_rebuildindex; END;',
    start_date      => '01-APR-12 09.00.00 AM ASIA/CALCUTTA',
    repeat_interval => 'FREQ=WEEKLY; BYDAY=SUN; BYHOUR=09; BYMINUTE=01;',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'rebuild index');
END;
/

EXEC DBMS_SCHEDULER.RUN_JOB('REBUILDINDEX');


Temporary Tablespace Cleanup 11g


alter tablespace temporary shrink space keep 1G;

exec dbms_scheduler.drop_job('tempshrink');

 begin
dbms_scheduler.create_job
(job_name => 'tempshrink',
job_type => 'PLSQL_BLOCK',
job_action => 'begin execute immediate ''alter tablespace temporary shrink space keep 1G''; end;',
start_date => '01-APR-12 01.00.00 AM ASIA/CALCUTTA',
  repeat_interval => 'FREQ=WEEKLY; BYDAY=SUN;',
comments=>'Shrink Temporary Tablespace');
end;
/

exec dbms_scheduler.run_job('tempshrink');

========================

select distinct t1.tablespace_name TB, t2.file_name TempFile_name,
       t1.tablespace_size/1024/1024 Used_Allocated_size,
       t1.allocated_space/1024/1024 Allocated_size,
       t1.free_space/1024/1024 Free_space,
       t2.Autoextensible,
       t2.bytes/1024/1024 Used_space,
       t2.maxbytes/1024/1024 Max_TB_size
from dba_temp_free_space t1, dba_temp_files t2
where t1.tablespace_name = t2.tablespace_name;


select tablespace_size/1024/1024,allocated_space/1024/1024,free_space/1024/1024 from dba_temp_free_space;

select tablespace_name,Autoextensible,bytes/1024/1024,maxbytes/1024/1024 from dba_temp_files;


=================================

create or replace trigger tg_clear_tempTB
   after startup
   on database
declare
   j   integer;
   pragma autonomous_transaction;
begin
   dbms_job.submit (j, 'begin execute immediate ''alter tablespace temporary shrink space keep 5G''; end;');
   commit;
end tg_clear_tempTB;
/


job OS audit files maintenance


For Oracle on Unix, .aud files are created whether you have auditing enabled or not. They record SYS operations. If you've got a lot of shell scripts that connect / as sysdba, you are going to get a lot of .aud files. (On Windows they are written to the Event Viewer.)

They go to wherever you set audit_file_dest (aka adump). If you don't set adump, the first default is ORACLE_BASE/admin/ORACLE_SID/adump; if that doesn't exist, they go in ORACLE_HOME/rdbms/audit.
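
To see where they are going on a given instance, check the parameter, e.g.:

SQL> show parameter audit_file_dest

-- or equivalently
SELECT value FROM v$parameter WHERE name = 'audit_file_dest';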

If you didn't know about this, these files tend to raise their heads when $ORACLE_HOME fills up and you wonder why.

Please be clear, I am talking about the OS .aud files. This process will not touch the AUD$ table in your database.


Oracle at 11.2 provides a way to manage these OS .aud files using the audit management package DBMS_AUDIT_MGMT.

Here's how:

Initialize DBMS_AUDIT_MGMT

Call the initialization procedure INIT_CLEANUP once to set up the audit management infrastructure.

BEGIN
 DBMS_AUDIT_MGMT.INIT_CLEANUP(
  AUDIT_TRAIL_TYPE            => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
  DEFAULT_CLEANUP_INTERVAL    => 24 );
END;
/
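
If you're not sure whether this has already been done, IS_CLEANUP_INITIALIZED reports it (it returns a BOOLEAN, so wrap it in a block):

SET SERVEROUTPUT ON
BEGIN
  IF DBMS_AUDIT_MGMT.IS_CLEANUP_INITIALIZED(DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS) THEN
    DBMS_OUTPUT.PUT_LINE('OS audit trail cleanup is initialized');
  ELSE
    DBMS_OUTPUT.PUT_LINE('OS audit trail cleanup is NOT initialized');
  END IF;
END;
/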



Create the Procedure to delete files (over a year old) for a single instance

CREATE OR REPLACE procedure SYS.delete_OSaud_files
is
ThisProc             VARCHAR2(30) := 'delete_OSaud_files';
ThisAppErr            NUMBER := -20000;
BEGIN
  DBMS_AUDIT_MGMT.set_last_archive_timestamp(
    audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
    last_archive_time => SYSTIMESTAMP-366);
 dbms_audit_mgmt.clean_audit_trail(
   audit_trail_type => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
   use_last_arch_timestamp => TRUE
  );
EXCEPTION
WHEN OTHERS THEN
   dbms_output.put_line(ThisProc||' - '||SQLERRM);
   ROLLBACK;
   RAISE_APPLICATION_ERROR(ThisAppErr, ThisProc);
END;
/

Create the Procedure to delete files (over a year old) for RAC

CREATE OR REPLACE procedure SYS.delete_OSaud_files
is
ThisProc             VARCHAR2(30) := 'delete_OSaud_files';
ThisAppErr            NUMBER := -20000;
BEGIN
  DBMS_AUDIT_MGMT.set_last_archive_timestamp(
    audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
    last_archive_time => SYSTIMESTAMP-366,
    rac_instance_number => 1);
   DBMS_AUDIT_MGMT.set_last_archive_timestamp(
    audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
    last_archive_time => SYSTIMESTAMP-366,
    rac_instance_number => 2);
 dbms_audit_mgmt.clean_audit_trail(
   audit_trail_type => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
   use_last_arch_timestamp => TRUE);
EXCEPTION
WHEN OTHERS THEN
   dbms_output.put_line(ThisProc||' - '||SQLERRM);
   ROLLBACK;
   RAISE_APPLICATION_ERROR(ThisAppErr, ThisProc);
END;
/


If you don't need to keep a year's worth, just change SYSTIMESTAMP-366.
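
For example, to keep only roughly 90 days instead (same call, different timestamp):

BEGIN
  DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
    audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
    last_archive_time => SYSTIMESTAMP-90);
END;
/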


Feel free to moan at me about 'when others then' and post a better procedure in the comments - I'll readily admit my plsql is not what it should be and I'm happy to be corrected.


Create a Schedule
(I like this type of thing to run when I'm actually working so I don't get called out of hours if something goes wrong)

Begin
DBMS_SCHEDULER.CREATE_SCHEDULE (
   schedule_name   => 'DELETE_OSAUD_FILES_SCHED',
      repeat_interval =>'FREQ=DAILY; BYDAY=MON,TUE,WED,THU,FRI; BYHOUR=10;',
      comments        => 'Delete adump files');
END;
/

Schedule the Job

BEGIN
dbms_scheduler.create_job (
job_name =>'DELETE_OSAUD_FILES_JOB',
job_type =>'STORED_PROCEDURE',
job_action => 'SYS.delete_OSaud_files',
enabled    =>  TRUE,
auto_drop => false,
schedule_name =>  'DELETE_OSAUD_FILES_SCHED',
comments => 'Remove aud files from adump');
END;
/


Set up mail notifications
(I like to know when my jobs error)

BEGIN
DBMS_SCHEDULER.set_scheduler_attribute('email_server', 'smtp.mycompany.com:25');
DBMS_SCHEDULER.set_scheduler_attribute('email_sender', 'do_not_reply@mydatabase');
END;
/

Create the events that I want mailing about

To test that notifications work, first set up events for everything:

BEGIN
 DBMS_SCHEDULER.add_job_email_notification (
  job_name   =>  'DELETE_OSAUD_FILES_JOB',
  recipients =>  'me@mycompany.com',
  events     =>  'job_all_events');
  END;
  /

Run your job

BEGIN
dbms_scheduler.run_job (
job_name =>'DELETE_OSAUD_FILES_JOB');
END;
/

You should get an email saying it ran.
But you don't want an 'I ran' mail every day; if you only want to know when it fails, remove the success-type notifications:

BEGIN
 DBMS_SCHEDULER.remove_job_email_notification (
  job_name   =>  'DELETE_OSAUD_FILES_JOB',
  recipients =>  'me@mycompany.com',
  events     =>  'job_succeeded');
END;
/
BEGIN
 DBMS_SCHEDULER.remove_job_email_notification (
  job_name   =>  'DELETE_OSAUD_FILES_JOB',
  recipients =>  'me@mycompany.com',
  events     =>  'job_started');
END;
/
BEGIN
 DBMS_SCHEDULER.remove_job_email_notification (
  job_name   =>  'DELETE_OSAUD_FILES_JOB',
  recipients =>  'me@mycompany.com',
  events     =>  'job_completed');
END;
/


Which is the equivalent of:

BEGIN
 DBMS_SCHEDULER.add_job_email_notification (
  job_name   =>  'DELETE_OSAUD_FILES_JOB',
  recipients =>  'me@mycompany.com',
  events     =>  'job_broken,job_chain_stalled,job_completed,job_disabled,job_failed,job_over_max_dur,job_sch_lim_reached,job_stopped');
END;
/



So now your .aud files will be kept in check for you.
