
Monday, May 7, 2012

Table Reorganization, Rebuild

Tables in an Oracle database become fragmented after mass deletions, or after many delete and/or insert operations.
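To spot reorganization candidates first, a rough check (a sketch, assuming an 8 KB block size and reasonably fresh optimizer statistics) is to compare each table's allocated blocks against the size its statistics suggest:

select table_name, blocks, num_rows, avg_row_len,
       round((num_rows * avg_row_len) / 8192) estimated_blocks_needed
from   user_tables
where  blocks > 0 and num_rows > 0
order  by blocks - round((num_rows * avg_row_len) / 8192) desc;

Tables where BLOCKS is far above the estimate are the likeliest candidates for the shrink script below.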

BEGIN
  FOR cur_rec IN (SELECT owner, table_name
                  FROM   dba_tables
                  WHERE  owner NOT IN ('DBSNMP','ORACLE_OCM','OUTLN','SYS','SYSMAN','SYSTEM','TSMSYS','WMSYS','XDB')) LOOP
    BEGIN
      -- Shrink each application table, then re-gather its statistics
      EXECUTE IMMEDIATE 'ALTER TABLE '|| cur_rec.owner ||'.'|| cur_rec.table_name ||' ENABLE ROW MOVEMENT';
      EXECUTE IMMEDIATE 'ALTER TABLE '|| cur_rec.owner ||'.'|| cur_rec.table_name ||' SHRINK SPACE COMPACT';
      EXECUTE IMMEDIATE 'ALTER TABLE '|| cur_rec.owner ||'.'|| cur_rec.table_name ||' SHRINK SPACE';
      EXECUTE IMMEDIATE 'ALTER TABLE '|| cur_rec.owner ||'.'|| cur_rec.table_name ||' DISABLE ROW MOVEMENT';
      EXECUTE IMMEDIATE 'ANALYZE TABLE '|| cur_rec.owner ||'.'|| cur_rec.table_name ||' COMPUTE STATISTICS';
    EXCEPTION
      WHEN OTHERS THEN
        NULL;  -- ignore tables that cannot be shrunk (e.g. not in ASSM tablespaces)
    END;
  END LOOP;
  FOR cur_rec IN (SELECT owner, index_name
                  FROM   dba_indexes
                  WHERE  owner NOT IN ('DBSNMP','ORACLE_OCM','OUTLN','SYS','SYSMAN','SYSTEM','TSMSYS','WMSYS','XDB')) LOOP
    BEGIN
      EXECUTE IMMEDIATE 'ALTER INDEX '|| cur_rec.owner ||'.'|| cur_rec.index_name ||' REBUILD';
      EXECUTE IMMEDIATE 'ANALYZE INDEX '|| cur_rec.owner ||'.'|| cur_rec.index_name ||' COMPUTE STATISTICS';
    EXCEPTION
      WHEN OTHERS THEN
        NULL;
    END;
  END LOOP;
  BEGIN
    SYS.UTL_RECOMP.recomp_serial('LDBO');
  END;
END;
/


Thursday, March 15, 2012

Server Configuration Planning


1) Existing server configuration (processor, number of CPUs, RAM, disk capacity, …)

2) No. of running databases on the server
3) Database folder size across all years


select a.data_size + b.temp_size + c.redo_size + d.controlfile_size "DB_Folder_size_GB"
from (select sum(bytes)/1024/1024/1024 data_size
      from dba_data_files) a,
     (select nvl(sum(bytes),0)/1024/1024/1024 temp_size
      from dba_temp_files) b,
     (select sum(bytes)/1024/1024/1024 redo_size
      from sys.v_$log) c,
     (select sum(block_size*file_size_blks)/1024/1024/1024 controlfile_size
      from v$controlfile) d;


4) Max concurrent connections in the database

Maximum concurrent connections (mcc) refers to the total number of sessions (connections) about which a device can maintain state simultaneously.

select highwater from dba_high_water_mark_statistics where name = 'SESSIONS';

select sum(inuse)
from (select name, round(sum(mb),1) mb, round(sum(inuse),1) inuse
      from (select case when name = 'buffer_cache' then 'db_cache_size'
                        when name = 'log_buffer'   then 'log_buffer'
                        else pool
                   end name,
                   bytes/1024/1024 mb,
                   case when name <> 'free memory' then bytes/1024/1024 end inuse
            from v$sgastat)
      group by name);

select
  (select highwater from dba_high_water_mark_statistics where name = 'SESSIONS')
    * (2097152 + a.value + b.value) pga_size   -- 2 MB per-session overhead + sort + hash areas
from
  v$parameter a,
  v$parameter b
where
  a.name = 'sort_area_size'
and
  b.name = 'hash_area_size';

5) Connections Per Second
Connections per second (c/s) refers to the rate at which a device can establish state parameters for new connections.

6) Transactions Per Second
Transactions per second (t/s) refers to the number of complete actions of a particular type that can be performed per second.

7) Weekly or monthly growth of databases.

database_monitoring_script

8) Oracle core licensing?

9) Network Load (bandwidth, …)




-------------------------------
Memory in a data warehouse is particularly important for processing memory-intensive operations such as large sorts. Access to the data cache is less important in a data warehouse because most of the queries access vast amounts of data. Data warehouses do not have memory requirements as critical as OLTP applications.

The number of CPUs provides a good guideline for the amount of memory you need. Use the following simplified formula to derive the amount of memory from the number of CPUs you selected:

Memory (GB) = 2 * (number of CPUs)

For example, a system with 6 CPUs needs 2 * 6 = 12 GB of memory. Most standard servers fulfill this requirement.
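As a quick check against a running instance, the same rule of thumb can be evaluated from the instance's own CPU_COUNT (a sketch; v$parameter stores the value as a string):

select to_number(value) cpu_count,
       2 * to_number(value) suggested_memory_gb
from   v$parameter
where  name = 'cpu_count';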

------------------------------

select * from dba_high_water_mark_statistics where name in ('SESSIONS','DB_SIZE');
select * from v$resource_limit;


--------maximum amount of memory allocated by the currently connected sessions
SELECT SUM(value)/1024/1024 "max memory allocation (MB)"
FROM   v$sesstat ss, v$statname st
WHERE  st.name = 'session uga memory max'
AND    ss.statistic# = st.statistic#;


---------------------------Used SGA-----------------
select sum(inuse) from (
select name, round(sum(mb),1) mb, round(sum(inuse),1) inuse
from (select case when name = 'buffer_cache'
then 'db_cache_size'
when name = 'log_buffer'
then 'log_buffer'
else pool
end name,
bytes/1024/1024 mb,
case when name <> 'free memory'
then bytes/1024/1024
end inuse
from v$sgastat
)group by name );


-------------------pga requirement------------

select
  (select highwater from dba_high_water_mark_statistics where name = 'SESSIONS')
    * (2097152 + a.value + b.value) pga_size   -- 2 MB per-session overhead + sort + hash areas
from
  v$parameter a,
  v$parameter b
where
  a.name = 'sort_area_size'
and
  b.name = 'hash_area_size';
-------------------------------------
http://docs.oracle.com/cd/B28359_01/server.111/b28314/tdpdw_system.htm

http://www.wdpi.com/product/used-hp/proliant-servers/ml570
http://h18004.www1.hp.com/products/quickspecs/12474_na/12474_na.html
http://en.wikipedia.org/wiki/Intel_QuickPath_Interconnect
http://www.intel.com/content/www/us/en/io/quickpath-technology/quickpath-technology-general.html
http://www.dfisica.ubi.pt/~hgil/utils/Hyper-Threading.4_Turbo.Boost.html
http://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/hyper-threading-technology.html
http://h18000.www1.hp.com/products/quickspecs/13669_na/13669_na.html
http://www.cpubenchmark.net/multi_cpu.html

http://www.dbspecialists.com/files/presentations/mts_case_study.html

Configuration for LD DB:
2 quad-core 64-bit processors (upgradable to 4 processors)
16 or 32 GB RAM
14 * 200 GB HDD (SAN)
Operating system: Windows Server 2003 Enterprise Edition 64-bit

HP ProLiant ML570

Intel® Dual-Core 64-bit Xeon® processor 7000 sequence
Processors: 4
3.00 GHz, 800 MHz FSB
32 or 64 GB RAM

----------------------------------------------------------------------Eight Core Processor
Intel® Xeon® Processor E7-8837 (Xeon E7-8800 product family)
(24M Cache, 2.66 GHz, 6.40 GT/s Intel® QPI)
Thermal Design Power: 130 W (the maximum amount of power the cooling system in a computer is required to dissipate)

Clock speed: 2.66 GHz
QPI (QuickPath Interconnect) is a point-to-point processor interconnect developed by Intel which replaces the Front Side Bus (FSB) in Xeon, Itanium, and certain desktop platforms.
Processors: 2
Processor core: octa-core (8-core) / quad-core (4-core)

32 or 64 GB RAM

---------------------------
64-bit Intel® Xeon® Processor, 24M Cache, 2.66 GHz, 6.40 GT/s Intel® QPI
with Intel Turbo Boost Technology / Hyper-Threading Technology

Most used processor: Intel® Xeon® Processor E7-8837 @ 2.66 GHz

Processors: 2, quad-core (4 cores)
32 or 64 GB RAM

-------------------------------------------------------------------Quad-Core Processors-----------------------

Intel® Xeon® E7520 (1.86GHz/4-core/18MB/95W) Processor
Memory 16 or 32 GB
Storage 8TB



--------------------------------------


The key to this dramatic claim is a feature called Turbo Boost technology. Basically, if the current application workload isn't keeping all four cores fully busy and pushing right up against the chip's TDP (Thermal Design Power) limit, Turbo Boost can increase the clock speed of each core individually to get more performance out of the chip.

---------------------------------------

Ten-Core Processors
Intel® Xeon® E7-4870 (2.40GHz/10-core/30MB/130W) Processor
Intel® Xeon® E7-4860 (2.26GHz/10-core/24MB/130W) Processor
Intel® Xeon® E7-4850 (2.00GHz/10-core/24MB/130W) Processor
Intel® Xeon® E7-8867L (2.13GHz/10-core/30MB/105W) Processor
Eight-Core Processors
Intel® Xeon® E7-8837 (2.67GHz/8-core/24MB/130W) Processor
Intel® Xeon® E7-4830 (2.13GHz/8-core/24MB/105W) Processor
Intel® Xeon® E7-4820 (2.0GHz/8-core/18MB/105W) Processor
Intel® Xeon® X7560 (2.26GHz/8-core/24MB/130W) Processor
Intel® Xeon® X7550 (2.0GHz/8-core/18MB/130W) Processor
Intel® Xeon® L7555 (1.86GHz/8-core/24MB/95W) Processor
Six-Core Processors
Intel® Xeon® E7-4807 (1.86GHz/6-core/18MB/95W) Processor
Intel® Xeon® E7540 (2.0GHz/6-core/18MB/105W) Processor
Intel® Xeon® E7530 (1.86GHz/6-core/12MB/105W) Processor
Intel® Xeon® X7542 (2.66GHz/6-core/18MB/130W) Processor
Quad-Core Processors
Intel® Xeon® E7520 (1.86GHz/4-core/18MB/95W) Processor

NOTE: New Intel Microarchitecture with Intel Virtualization Technology FlexMigration. Industry Standard Intel® 7500 Chipset with four high-speed interconnects up to 6.4GT/s.


Hardware Requirement

Minimum Hardware Requirements
On small instances, server load is primarily driven by peak visitors.

5 Concurrent Users:
2 GHz+ CPU
512 MB RAM
5 GB database space

25 Concurrent Users:
Quad 2 GHz+ CPU
2 GB+ RAM
10 GB database space
----------------------------------------------------------RAM---------

Memory (GB) = 2 * (number of CPUs)

-------------------------------------Disk-----------

Use the following formula to determine the number of disk arrays you need:

Number of disk arrays = (total required throughput in MB/s) / (throughput per disk array in MB/s)

For example, a system with 1200 MB per second throughput requires at least 1200 / 180 = 7 disk arrays (at 180 MB/s per array).

Ensure you have enough physical disks to sustain the throughput you require. Ask your disk vendor for the throughput numbers of the disks.
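The rounding in the example can be made explicit in SQL (the 180 MB/s per-array figure is the assumed vendor number from above):

select ceil(1200 / 180) disk_arrays_needed from dual;   -- 6.67 rounded up = 7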

---------------------------
PGA_AGGREGATE_TARGET = 3 * SGA_TARGET.
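To see how far the current instance is from this rule of thumb (a quick check, not a recommendation):

select name, round(to_number(value)/1024/1024) mb
from   v$parameter
where  name in ('sga_target', 'pga_aggregate_target');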
----------------



-------------------Measure the Cost of Each Operation---------------

Cost per request. You can calculate the cost in terms of processor cycles required for processing a request by using the following formula:
Cost (Mcycles/request) = ((number of processors x processor speed) x processor use) / number of requests per second

For example, using the values identified for the performance counters in Step 2, where processor speed is 1.3 GHz or 1300 Mcycles/sec, processor usage is 90 percent, and Requests/Sec is 441, you can calculate the page cost as:

((2 x 1,300 Mcycles/sec) x 0.90) / (441 Requests/Sec) = 5.30 Mcycles/request

Cost per operation. You can calculate the cost for each operation by using the following formula:
Cost per operation = (number of Mcycles/request) x number of pages for an operation

The cost of the Login operation is:

5.30 x 3 = 15.9 Mcycles
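The same arithmetic, reproduced in SQL so the two formulas can be checked side by side (2 CPUs, 1300 Mcycles/sec, 90 percent utilization, 441 requests/sec and 3 pages per Login are the sample values above):

select trunc((2 * 1300 * 0.90) / 441, 2)     mcycles_per_request,   -- 5.30
       trunc((2 * 1300 * 0.90) / 441, 2) * 3 login_cost_mcycles     -- 15.90
from   dual;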


---------------Calculate the Cost of an Average User Profile

Average cost of profile in Mcycles/sec = Total cost for a profile / session length in seconds
Thus, the average cost for the profile is:

147.52/600 = 0.245 Mcycles/sec


---------------------------------Calculate Site Capacity


To calculate these values, use the following formulas:

Simultaneous users with a given profile that your application can currently support. After you determine the cost of the average user profile, you can calculate how many simultaneous users with a given profile your application can support given a certain CPU configuration. The formula is as follows:
Maximum number of simultaneous users with a given profile = (number of CPUs) x (CPU speed in Mcycles/sec) x (maximum CPU utilization) / (cost of user profile in Mcycles/sec)

Therefore, the maximum number of simultaneous users with a given profile that the sample application can support is:

(2 x 1300 x 0.75)/0.245 = 7,959 users

Future resource estimates for your site. Calculate the scalability requirements for the finite resources that need to be scaled up as the number of users visiting the site increases. Prepare a chart that gives you the resource estimates as the number of users increases.
Based on the formulas used earlier, you can calculate the number of CPUs required for a given number of users as follows:

Number of CPUs = (Number of users) x (Total cost of user profile in Mcycles/sec) / ((CPU speed in MHz) x (Maximum CPU utilization))

If you want to plan for 10,000 users for the sample application and have a threshold limit of 75 percent defined for the processor, the number of CPUs required is:

(10,000 x 0.245) / (1300 x 0.75) = 2.51 processors
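Both capacity formulas, evaluated in SQL with the sample numbers (2 CPUs at 1300 Mcycles/sec, 75 percent threshold, 0.245 Mcycles/sec profile cost):

select floor((2 * 1300 * 0.75) / 0.245)      max_simultaneous_users,   -- 7,959
       ceil((10000 * 0.245) / (1300 * 0.75)) cpus_for_10000_users      -- 2.51 rounded up = 3
from   dual;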

Your resource estimates should also factor in the impact of possible code changes or functionality additions in future versions of the application. These versions may require more resources than estimated for the current version.



-------------------------------------------------------------------------------
Assessing Your Application Performance Objectives
At this stage in capacity planning, you gather information about the level of activity expected on your server, the anticipated number of users, the number of requests, acceptable response time, and preferred hardware configuration. Capacity planning for server hardware should focus on maximum performance requirements and set measurable objectives for capacity.
For your application, take the information that you derive from Examining Results from the Baseline Applications, to see how your application differs from one of the baseline applications. For example, if you are using the HTTPS protocol for a business application similar to MedRec, you should examine the metrics provided for the heavy MedRec application. Perform the same logical process for all of the factors listed in Capacity Planning Factors.
The numbers that you calculate from using one of our sample applications are of course just a rough approximation of what you may see with your application. There is no substitute for benchmarking with the actual production application using production hardware. In particular, your application may reveal subtle contention or other issues not captured by our test applications.


Calculating Hardware Requirements
To calculate hardware capacity requirements:
Evaluate the complexity of your application, comparing it to one or more of the applications described in Examining Results from the Baseline Applications. The example in Guidelines for Calculating Hardware Requirements identifies this value as the Complexity Factor. If your application is about as complex as one of the baselines, your Complexity Factor = 1.
Consider what throughput is required for your application. In the example, this is called the Required TPS (transactions per second).
Take the preferred hardware TPS value from the appropriate table. The example in Guidelines for Calculating Hardware Requirements identifies this value as the Reference TPS.



Guidelines for Calculating Hardware Requirements
The number of computers required is calculated as follows:
Number of boxes = (Required TPS) / (Reference TPS / Complexity Factor)
For example, if your assessment shows:
Your application is twice as complex as the Light MedRec application; the Complexity Factor = 2.
The requirement is for a 400 TPS; the Required TPS = 400.
The preferred hardware configuration is Windows 2000 using 4x700 MHz processors.
The Reference TPS is 205, from Table 2-3, configuration number lmW1.
The number of boxes required is approximately equal to:
400/(205/2) = 400/102.5 = 3.90, rounded up to the next whole number = 4 boxes.
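The rounding step can be made explicit (same sample numbers):

select ceil(400 / (205 / 2)) boxes_needed from dual;   -- 3.90 rounded up = 4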
Always test the capacity of your system before relying on it for production deployments.
-----------------------


For a data-warehouse project, hard disk performance is everything: database cache, indexes, execution plans, memory, and the number of processors will make no difference if your server's hard disks are slow.

For example, suppose you have a table with 10 GB of data and no indexes. Running SELECT COUNT(*) against the table requires a full scan of the table. If your hard disk reads 100 MB per second:

10 GB / 100 MB/s = 10,000 / 100 = 100 seconds

Is 100 seconds acceptable to you?
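The same back-of-the-envelope arithmetic in SQL (10 GB taken as 10,000 MB for simplicity):

select 10000 / 100 full_scan_seconds from dual;   -- 100 seconds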

---------------------

Disk size depends on database size.

Disk speed depends on TPS (transactions per second).
--------------
RAM depends on sort operations, merge joins, the number of concurrent sessions, and so on.
Using the Sort transform is a fully blocking operation: whatever rows you ask the Sort to work on are held in the data flow until the sort completes. The Sort is significantly slower if it can't fit all that data into RAM, so that's a critical limit to design for. Take the maximum number of rows you expect into a Sort multiplied by the row length (worked through below).
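For example, with hypothetical numbers - 50 million rows of 200 bytes each - the memory the Sort would want is:

select (50000000 * 200) / 1024 / 1024 / 1024 sort_buffer_gb from dual;   -- about 9.3 GB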

The Merge Join is a partially blocking operation. You don't have to be as concerned about this as with the Sort, but if your join is particularly "malformed", it could require a lot of RAM to buffer one of the inputs while the other waits for rows.

--------------------------------
The number of processors and their speed depend on your data processing.

----------------------

Factors Affecting Capacity Planning
There are various factors to consider when conducting a capacity-planning exercise. Each of the following factors has a significant impact on system performance (and on system capacity as well).
- Operational load at backend
- Front end load
- Number of concurrent users/requests
- Base load and peak load
- Number of processes and Instances of processes
- Log size
- Archival requirements
- Persistence requirements
- Base recommendations from vendor
- Installation requirements
- Test results and extrapolation
- Interface Architecture and Performance Tuning
- Processing requirements and I/O operations
- Network bandwidth and latency
- Architecture resilience
- Network/Transmission losses
- Load factor loss
- Legacy interfacing loss/overheads
- Complexity of events and mapping
- Factor of safety

--------------

Hardware capacity determination
The hardware requirements can be evaluated based on test results for a given set of conditions. Several tools are available to simulate clients (LoadRunner, WebLOAD, etc.). By simulating the transaction mix, client load can be generated, and the load can be increased by adding more concurrent users. This is an iterative process, and the goal is to achieve as high a CPU utilization as possible. If CPU utilization doesn't increase (and hasn't yet peaked) with the addition of more users, database or application bottlenecks are analyzed. Several commercially available profilers (IntroScope, OptimizeIt, and JProbe) can be used to identify these hot spots. In a finely tuned system, the CPU utilization (at steady state) in the ideal case is usually less than 70%. Throughput won't increase with the addition of more load, but response times will increase as more clients are added. The capacity of the hardware is the point at which response times increase for additional load without any gain in throughput.






Database Growth Monitoring

Step : 1 Calculate the total size of all datafiles

select sum(bytes)/1024/1024 "TOTAL SIZE (MB)" from dba_Data_files;


Step : 2 Calculate the free space across all tablespaces

select sum(bytes)/1024/1024 "FREE SPACE (MB)" from dba_free_space;

Step : 3 Calculate total size, free space and used space

select t2.total "TOTAL DISK USAGE",t1.free "FREE SPACE",(t1.free/t2.total)*100 "FREE (%)",(t2.total-t1.free) "USED SPACE", (1-t1.free/t2.total)*100 "USED (%)"
from (select sum(bytes)/1024/1024 free from dba_free_space) t1 , (select sum(bytes)/1024/1024 total from dba_Data_files) t2 ;


Step : 4 Create a table to store all the free/used space information

create table db_growth
as select *
from (
  select sysdate measure_date,
         t2.total "TOTAL_SIZE",
         t1.free  "FREE_SPACE",
         (t2.total-t1.free)     "USED_SPACE",
         (t1.free/t2.total)*100 "FREE_PCT"
  from (select sum(bytes)/1024/1024 free  from dba_free_space) t1,
       (select sum(bytes)/1024/1024 total from dba_data_files) t2
);

Step : 5 Insert free-space information into the DB_GROWTH table (if you want to populate the data manually)

insert into db_growth
select *
from (
  select sysdate,
         t2.total,
         t1.free,
         (t2.total-t1.free),
         (t1.free/t2.total)*100
  from (select sum(bytes)/1024/1024 free  from dba_free_space) t1,
       (select sum(bytes)/1024/1024 total from dba_data_files) t2
);

COMMIT;


Step : 6 Create a view over the same query (this step is required if you want to populate the data automatically)

create view v_db_growth
as select *
from (
  select sysdate measure_date,
         t2.total "TOTAL_SIZE",
         t1.free  "FREE_SPACE",
         (t2.total-t1.free)     "USED_SPACE",
         (t1.free/t2.total)*100 "FREE_PCT"
  from (select sum(bytes)/1024/1024 free  from dba_free_space) t1,
       (select sum(bytes)/1024/1024 total from dba_data_files) t2
);

Step : 7 Insert data into the DB_GROWTH table from the V_DB_GROWTH view


insert into db_growth select *
from v_db_growth;
COMMIT;


Step : 8 Check that everything went fine.

select * from db_growth;


Step : 9 Execute the following SQL for more timestamp information

alter session set nls_date_format ='dd-mon-yyyy hh24:mi:ss' ;
Session altered.

Step : 10 Create a DBMS job that executes every 24 hours

declare
  jobno number;
begin
  dbms_job.submit(
    jobno,
    'begin insert into db_growth select * from v_db_growth; commit; end;',
    sysdate,
    'SYSDATE + 1',
    TRUE);
  commit;
end;
/


PL/SQL procedure successfully completed.

Step: 11 View your DBMS jobs and related information

select * from user_jobs;


-----If you want to execute the DBMS job manually, run the following command; otherwise the job executes automatically

exec dbms_job.run(ENTER_JOB_NUMBER)
exec dbms_job.run(23);



PL/SQL procedure successfully completed.

exec dbms_job.remove(21); ------to remove a job


Step: 12 Finally, all data is populated in the db_growth table

select * from db_growth;

Table Actual Size

SELECT
owner, table_name, TRUNC(sum(bytes)/1024/1024) Meg
FROM
(SELECT segment_name table_name, owner, bytes
FROM dba_segments
WHERE segment_type = 'TABLE'
UNION ALL
SELECT i.table_name, i.owner, s.bytes
FROM dba_indexes i, dba_segments s
WHERE s.segment_name = i.index_name
AND s.owner = i.owner
AND s.segment_type = 'INDEX'
UNION ALL
SELECT l.table_name, l.owner, s.bytes
FROM dba_lobs l, dba_segments s
WHERE s.segment_name = l.segment_name
AND s.owner = l.owner
AND s.segment_type = 'LOBSEGMENT'
UNION ALL
SELECT l.table_name, l.owner, s.bytes
FROM dba_lobs l, dba_segments s
WHERE s.segment_name = l.index_name
AND s.owner = l.owner
AND s.segment_type = 'LOBINDEX')
WHERE owner ='LDBO'
GROUP BY table_name, owner
HAVING SUM(bytes)/1024/1024 > 10 /* Ignore really small tables */
ORDER BY SUM(bytes) desc
;

Monday, February 13, 2012

Server Capacity Planning

1) Existing server configuration (processor, number of CPUs, RAM, disk capacity, …)
2) No. of running databases on the server
3) Database folder size across all years
4) No. of users (concurrent connections) in the database
5) Weekly or monthly growth of databases.
6) Oracle core licensing?

----------------------------Memory----------------
select * from dba_high_water_mark_statistics where name in ('SESSIONS','DB_SIZE');
select * from v$resource_limit;
--------maximum amount of memory allocated by the currently connected sessions
SELECT SUM (value) "max memory allocation" FROM v$sesstat ss, v$statname st WHERE st.name = 'session uga memory max' AND ss.statistic# = st.statistic#;

------------------pga requirement------------

select
  (select highwater from dba_high_water_mark_statistics where name = 'SESSIONS')
    * (2097152 + a.value + b.value) pga_size   -- 2 MB per-session overhead + sort + hash areas
from
  v$parameter a,
  v$parameter b
where
  a.name = 'sort_area_size'
and
  b.name = 'hash_area_size';


-----------------------CPU Benchmark------------------------------
http://www.cpubenchmark.net/multi_cpu.html

-----------------------Space Management---------------

As per database growth weekly /monthly and planning for how many year

SGA PGA measuring

select * from v$sgastat order by 1;
select * from v$pgastat order by 1;


I noticed that you have some pools in your SGA which are not used:

large pool free memory 209715200

But your PGA could reach about 340 MB.

So, you may decrease the large_pool_size parameter by about 160 MB (you have 200 MB free).

It will decrease the SGA by about 160 MB.

Then you may increase PGA_AGGREGATE_TARGET to 512 MB.

The most important thing is that SGA + PGA remains below 2 GB (except if you use the /3GB switch,
which may help you get 1 GB more).

------------------------Used SGA-----------------

select name, round(sum(mb),1) mb, round(sum(inuse),1) inuse
from (select case when name = 'buffer_cache'
then 'db_cache_size'
when name = 'log_buffer'
then 'log_buffer'
else pool
end name,
bytes/1024/1024 mb,
case when name <> 'free memory'
then bytes/1024/1024
end inuse
from v$sgastat
)group by name;


------------------------Free SGA-----------------

select name, round(sum(mb),1) mb, round(sum(free),1) free
from (select case when name = 'buffer_cache' then 'db_cache_size'
                  when name = 'log_buffer'   then 'log_buffer'
                  else pool
             end name,
             bytes/1024/1024 mb,
             case when name = 'free memory' then bytes/1024/1024 end free
      from v$sgastat)
group by name;

--------------------

select name,value from v$parameter where name ='sort_area_size';
---------------------------------- maximum PGA usage per process:--
select
max(pga_used_mem) max_pga_used_mem
, max(pga_alloc_mem) max_pga_alloc_mem
, max(pga_max_mem) max_pga_max_mem
from v$process
/

-----------sum of all current PGA usage per process---------
select
sum(pga_used_mem) sum_pga_used_mem
, sum(pga_alloc_mem) sum_pga_alloc_mem
, sum(pga_max_mem) sum_pga_max_mem
from v$process
/

-----------pga requirement as per high water mark

select
  (select highwater from dba_high_water_mark_statistics where name = 'SESSIONS')
    * (2097152 + a.value + b.value)/1024/1024 pga_size_MB   -- 2 MB per-session overhead + sort + hash areas
from
  v$parameter a,
  v$parameter b
where
  a.name = 'sort_area_size'
and
  b.name = 'hash_area_size';

Tuesday, December 13, 2011

Resizing / Recreating Redo Logs; Increasing Redo Log Size When Archivelog Generation Is Fast


--------------------------------------------final steps--------------------------
select group#, status from v$log;

ALTER SYSTEM CHECKPOINT GLOBAL;

select group#, status from v$log;

alter database drop logfile group 1;

alter database add logfile group 1 ('F:\NBSD1112\REDO01.LOG') size 200M reuse ;

alter system switch logfile;
alter system switch logfile;

select group#, status from v$log;


1)
SELECT a.group#, a.member, b.bytes FROM v$logfile a, v$log b WHERE a.group# = b.group#;


Make the last redo log CURRENT

select group#, status from v$log;

alter system switch logfile;

select group#, status from v$log;


ALTER SYSTEM CHECKPOINT GLOBAL;

ALTER DATABASE DROP LOGFILE GROUP 1;



2) Re-create dropped online redo log group


alter database add logfile group 1 ('F:\NBSD1112\REDO01.LOG' ) size 200m reuse;



3)
select group#, status from v$log;


GROUP# STATUS
---------- ----------------
1 UNUSED
2 INACTIVE
3 CURRENT


Force another log switch

alter system switch logfile;



select group#, status from v$log;

GROUP# STATUS
---------- ----------------
1 CURRENT
2 INACTIVE
3 ACTIVE





4)
Loop back to Step 2 until all logs are rebuilt

alter database add logfile group 2 ('F:\NBSD1112\REDO02.LOG' ) size 200m reuse;




-----------------------------------SECOND METHOD-------------------

SELECT a.group#, a.member, b.bytes FROM v$logfile a, v$log b WHERE a.group# = b.group#;

GROUP#  MEMBER                   BYTES
1       F:\NBSD1112\REDO01.LOG   52428800
2       F:\NBSD1112\REDO02.LOG   52428800
3       F:\NBSD1112\REDO03.LOG   52428800


Here is how I changed this to five 200 MB redo logs:

SQL> alter database add logfile group 4 ('F:\NBSD1112\REDO04.LOG') size 200M;
SQL> alter database add logfile group 5 ('F:\NBSD1112\REDO05.LOG') size 200M;

While running the following SQL commands, if you hit an error like this:

ORA-01623: log 3 is current log for instance RPTDB (thread 1) - cannot drop
ORA-00312: online log 3 thread 1: 'F:\NBSD1112\REDO03.LOG'

you should run "alter system switch logfile;" until the current log is 4 or 5.

Then execute "alter system checkpoint;"

SQL> alter database drop logfile group 1;
SQL> alter database drop logfile group 2;
SQL> alter database drop logfile group 3;

Then move (or delete) the old redo log files at the OS level:

RENAME F:\NBSD1112\REDO01.LOG F:\NBSD1112\REDO01_OLD.LOG
RENAME F:\NBSD1112\REDO02.LOG F:\NBSD1112\REDO02_OLD.LOG
RENAME F:\NBSD1112\REDO03.LOG F:\NBSD1112\REDO03_OLD.LOG

Finally, re-create groups 1-3 at the new size:

SQL> alter database add logfile group 1 ('F:\NBSD1112\REDO01.LOG') size 200M;
SQL> alter database add logfile group 2 ('F:\NBSD1112\REDO02.LOG') size 200M;
SQL> alter database add logfile group 3 ('F:\NBSD1112\REDO03.LOG') size 200M;

Monday, December 12, 2011

Archivelog Frequency

A script to check the frequency of log switches


col MidN format 999
col 1AM format 999
col 2AM format 999
col 3AM format 999
col 4AM format 999
col 5AM format 999
col 6AM format 999
col 7AM format 999
col 8AM format 999
col 9AM format 999
col 10AM format 999
col 11AM format 999
col Noon format 999
col 1PM format 999
col 2PM format 999
col 3PM format 999
col 4PM format 999
col 5PM format 999
col 6PM format 999
col 7PM format 999
col 8PM format 999
col 9PM format 999
col 10PM format 999
col 11PM format 999
select to_char(first_time,'mm/dd/yy') logdate,
sum(decode(to_char(first_time,'hh24'),'00',1,0)) "MidN",
sum(decode(to_char(first_time,'hh24'),'01',1,0)) "1AM",
sum(decode(to_char(first_time,'hh24'),'02',1,0)) "2AM",
sum(decode(to_char(first_time,'hh24'),'03',1,0)) "3AM",
sum(decode(to_char(first_time,'hh24'),'04',1,0)) "4AM",
sum(decode(to_char(first_time,'hh24'),'05',1,0)) "5AM",
sum(decode(to_char(first_time,'hh24'),'06',1,0)) "6AM",
sum(decode(to_char(first_time,'hh24'),'07',1,0)) "7AM",
sum(decode(to_char(first_time,'hh24'),'08',1,0)) "8AM",
sum(decode(to_char(first_time,'hh24'),'09',1,0)) "9AM",
sum(decode(to_char(first_time,'hh24'),'10',1,0)) "10AM",
sum(decode(to_char(first_time,'hh24'),'11',1,0)) "11AM",
sum(decode(to_char(first_time,'hh24'),'12',1,0)) "Noon",
sum(decode(to_char(first_time,'hh24'),'13',1,0)) "1PM",
sum(decode(to_char(first_time,'hh24'),'14',1,0)) "2PM",
sum(decode(to_char(first_time,'hh24'),'15',1,0)) "3PM",
sum(decode(to_char(first_time,'hh24'),'16',1,0)) "4PM",
sum(decode(to_char(first_time,'hh24'),'17',1,0)) "5PM",
sum(decode(to_char(first_time,'hh24'),'18',1,0)) "6PM",
sum(decode(to_char(first_time,'hh24'),'19',1,0)) "7PM",
sum(decode(to_char(first_time,'hh24'),'20',1,0)) "8PM",
sum(decode(to_char(first_time,'hh24'),'21',1,0)) "9PM",
sum(decode(to_char(first_time,'hh24'),'22',1,0)) "10PM",
sum(decode(to_char(first_time,'hh24'),'23',1,0)) "11PM"
from v$log_history
group by to_char(first_time,'mm/dd/yy')
order by 1
/

Saturday, December 3, 2011

Buffer Cache Size

A high CPU_COUNT combined with an increased granule size can cause an ORA-04031 error.

------CPU_COUNT specifies the number of CPUs available to Oracle. On single-CPU computers, the value of CPU_COUNT is 1.

Memory sizing depends on CPU_COUNT (the number of CPUs visible to Oracle).
Use the formulas below to calculate the minimum buffer cache size.

--Minimum Buffer Cache Size
10g : max(CPU_COUNT) * max(Granule size)
11g : max(4MB * CPU_COUNT)

Note that if SGA_MAX_SIZE < 1 GB the granule size is 4 MB; if SGA_MAX_SIZE > 1 GB the granule size is 8 MB.
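On 10g and later you can read the granule size the instance actually chose, rather than inferring it from SGA_MAX_SIZE:

select name, bytes/1024/1024 mb from v$sgainfo where name = 'Granule Size';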

-- _PARALLEL_MIN_MESSAGE_POOL Size
If PARALLEL_AUTOMATIC_TUNING = TRUE, the large pool is used for this area; otherwise the shared pool is used.

CPU_COUNT * PARALLEL_MAX_SERVERS * 1.5 * (OS message buffer size), or CPU_COUNT * 5 * 1.5 * (OS message size)

-- Add an extra 2 MB per CPU_COUNT for the shared pool.

Here is an example:

A Sun Solaris server has threaded CPUs: 2 physical CPUs, each with 8 cores, and each core has 8 threads, so Oracle evaluates CPU_COUNT = 2*8*8 = 128.

When SGA_MAX_SIZE = 900 MB:
Minimum buffer cache = CPU_COUNT * granule size = 128 * 4 MB = 512 MB
The shared pool can use 338 MB.

When SGA_MAX_SIZE = 1200 MB:
Minimum buffer cache = CPU_COUNT * granule size = 128 * 8 MB = 1024 MB
The shared pool can use only 176 MB, so ORA-04031 occurs despite the larger SGA_MAX_SIZE.

You need to manually tune the CPU_COUNT parameter to resolve this error.
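For example (the value 16 here is purely illustrative - pick one appropriate for your workload; with SCOPE=SPFILE the change takes effect at the next restart):

alter system set cpu_count = 16 scope=spfile;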

Saturday, April 9, 2011

Memory management SGA PGA

RAM = 16 GB
Windows processes = 16 * 20% = 3.2 GB

Oracle services = 12.8 GB

PGA_AGGREGATE_TARGET = (RAM * 80%) * 50% for DSS
PGA_AGGREGATE_TARGET = (RAM * 80%) * 20% for OLTP
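Worked through for the 16 GB server above:

select (16 * 0.80) * 0.50 dss_pga_gb,    -- 6.4 GB
       (16 * 0.80) * 0.20 oltp_pga_gb    -- 2.56 GB
from   dual;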


SGA_TARGET + PGA_AGGREGATE_TARGET determine how much memory Oracle is going to use.

SGA_MAX_SIZE is only an upper limit, not how much Oracle is going to allocate.

PGA_AGGREGATE_TARGET is not allocated at startup; it is allocated on an as-needed basis, as server processes request memory from the PGA. By setting PGA_AGGREGATE_TARGET to 1 GB you set the target size for this region, so once the instance has reached 1 GB, Oracle will try not to allocate more even if some process asks for memory (the parameter is a target, not a hard limit).
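You can watch this behaviour in v$pgastat - the target versus what is currently and maximally allocated:

select name, round(value/1024/1024) mb
from   v$pgastat
where  name in ('aggregate PGA target parameter',
                'total PGA allocated',
                'maximum PGA allocated');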

INSTANCES = 3

           SGA   PGA   TOTAL (GB)
YR 0910     1    0.5    1.5
YR 1011     1    0.5    1.5
YR 1112     6    3      9











Friday, April 8, 2011

SGA Tuning

There are two parameters:
SGA_TARGET: the RAM assigned to the Oracle service; you can see it against the Oracle process in Task Manager.

This parameter is new with Oracle 10g. It specifies the total amount of SGA memory available to an instance. Setting this parameter makes Oracle distribute the available memory among various components - such as the shared pool (for SQL and PL/SQL), Java pool, large pool and buffer cache - as required.

sga_target cannot be higher than sga_max_size.


If sga_max_size is less than the sum of db_cache_size + log_buffer + shared_pool_size + large_pool_size at initialization time, then the value of sga_max_size is ignored.
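You can sum those components yourself to check (note that the values will be 0 for components left to automatic management):

select round(sum(to_number(value))/1024/1024) configured_components_mb
from   v$parameter
where  name in ('db_cache_size','log_buffer','shared_pool_size','large_pool_size');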



SGA_MAX_SIZE

This parameter sets the hard limit up to which sga_target can dynamically adjust sizes. Usually, sga_max_size and sga_target will be the same value, but there may be times when you want to have the capability to adjust for peak loads. By setting this parameter higher than sga_target, you allow dynamic adjustment of the sga_target parameter.
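To compare the two settings on the current instance:

select name, round(to_number(value)/1024/1024) mb
from   v$parameter
where  name in ('sga_target','sga_max_size');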

SGA Sizing on a dedicated server
OS Reserved RAM – This is RAM required to run the OS kernel and system functions, 20% of total RAM for MS-Windows, and 10% of total RAM for UNIX/Linux

Oracle Database Connections RAM – Each Oracle connection requires OS RAM regions for sorting and hash joins. (This does not apply when using the Oracle multi-threaded server or pga_aggregate_target .) The maximum amount of RAM required for a session is as follows:

2 MB RAM session overhead + sort_area_size + hash_area_size

Oracle SGA Sizing for RAM – This is determined by the Oracle parameter settings. The total is easily found by either the show sga command or the value of the sga_max_size parameter.


E.g. RAM = 16 GB

For Windows processes: 16 * 20% = 3.2 GB reserved for Windows.

Per-session RAM: 2 MB overhead + 64 KB sort_area_size + 128 KB hash_area_size




ALTER SYSTEM SET SGA_TARGET=6512M;

ALTER SYSTEM SET SGA_MAX_SIZE=8152M SCOPE=SPFILE;

PGA Tuning

Select Name,Value/1024/1024 From V$parameter where name like '%pga%';

Make a first estimate for PGA_AGGREGATE_TARGET, based on a rule of thumb. By default, Oracle uses 20% of the SGA size. However, this initial setting may be too low for a large DSS system.

You must then divide the resulting memory between the SGA and the PGA.

  • For OLTP systems, the PGA memory typically accounts for a small fraction of the total memory available (for example, 20%), leaving 80% for the SGA.

  • For DSS systems running large, memory-intensive queries, PGA memory can typically use up to 70% of that total (up to 2.2 GB in this example).

Good initial values for the parameter PGA_AGGREGATE_TARGET might be:

  • For OLTP: PGA_AGGREGATE_TARGET = (total_mem * 80%) * 20%

  • For DSS: PGA_AGGREGATE_TARGET = (total_mem * 80%) * 50%

    where total_mem is the total amount of physical memory available on the system.





The PGA_AGGREGATE_TARGET should be set to keep the ESTD_PGA_CACHE_HIT_PERCENTAGE greater than 95 percent. Set appropriately, more data will be sorted in memory that would otherwise have been sorted on disk. The next query returns the minimum value for PGA_AGGREGATE_TARGET that is projected to yield a 95 percent or greater cache hit ratio:



Select Min(Pga_Target_For_Estimate/1024/1024) "recommended_pga"
from v$pga_target_advice
Where Estd_Pga_Cache_Hit_Percentage > 95;




alter system set pga_aggregate_target = <recommended_pga>M;   -- substitute the value (in MB) returned by the query above

Shrink datafile space from dropped table

Shrink datafile space from dropped table
==================================================

select sum(bytes) / 1024 / 1024 / 1024 from dba_segments where tablespace_name='USR';
114.127 GB

Select Sum(Bytes) / 1024 / 1024 / 1024 From V$datafile Where Name Like '%USERS01%';

251.13 GB

Select Sum(Bytes) / 1024 / 1024 / 1024 From Dba_Free_Space where Tablespace_Name='USR';
137 GB

There is 137 GB of free space in the tablespace; how do we shrink the datafile?



==================================================

ALTER TABLE XYZ ENABLE ROW MOVEMENT;
ALTER TABLE XYZ SHRINK SPACE CASCADE;

alter database datafile 'D:\ARID0910\USERS01.ORA' resize 102400M;


===================
Create a temporary tablespace.
Move all three users to the new tablespace.
Move all the tables to the new tablespace.
Now drop the old tablespace,
create a new tablespace with the same name, and
restore all your users and tables.



====================================================



-- Enable row movement.
ALTER TABLE scott.emp ENABLE ROW MOVEMENT;

-- Recover space and amend the high water mark (HWM).
ALTER TABLE scott.emp SHRINK SPACE;

-- Recover space, but don't amend the high water mark (HWM).
ALTER TABLE scott.emp SHRINK SPACE COMPACT;

-- Recover space for the object and all dependent objects.
ALTER TABLE scott.emp SHRINK SPACE CASCADE;


====================================================

You have two options besides import/export. Basically, only the first step differs:
1. As Douglas Paiva de Sousa stated, you can use DBMS_REDEFINITION to re-create the objects left in the tablespace in the same tablespace or a new one;
2. Without DBMS_REDEFINITION, you can create an old-fashioned script based on the dictionary views that moves all the tables (and indexes, if there are any) left in that tablespace to another one with ALTER TABLE ... MOVE TABLESPACE. The indexes must be rebuilt afterwards, as they become invalid in the process.

If you moved the objects to a new tablespace, the next step is to rename the new tablespace to the old name.

As a last step, the datafile size should be reduced; only at that point will the Oracle DB release physical disk space to the OS.

The first option is better from the point of view of system accessibility, as it can be done while the system using the given tables is online;
the second may render that system unusable, so it requires the system to be offline.


=======================================================


You can use the free space in the tablespace for new extents, but if you want to release it to the operating system you must do some kind of migration of the data, either while the system(s) using the tablespace are online or while they are offline.

The second is much easier, as you can generate the script to do it from the data dictionary - but you must have enough free space in the OS to hold one more copy of the data, and the accessing system(s) must be offline.
So this will be planned downtime for them...

The steps in more detail:
create tablespace users2 ...

run the following query, then execute its results:

select 'ALTER TABLE ' || o.owner || '.' || o.TABLE_NAME || ' enable row movement; '
from dba_tables o
where o.TABLESPACE_NAME = 'USERS'

then execute the following query's results:

select 'ALTER TABLE ' || o.owner || '.' || o.TABLE_NAME || ' move tablespace users2; '
from dba_tables o
where o.TABLESPACE_NAME = 'USERS'

in case there are indexes in the TS also, run query and execute results:
select 'ALTER INDEX ' || o.owner || '.' || o.INDEX_NAME || ' rebuild tablespace users2; '
from dba_indexes o
where o.TABLESPACE_NAME = 'USERS'

then run the last two queries again, changed to reference the original USERS tablespace, in order to move the tables and indexes back (or simply rename the new tablespace to the old name if you have checked that it contains nothing).
After that you can do the datafile resize if it is necessary.

As a last step you should check the stored procedures, packages and functions for invalidity.


==============================================================

If the recycle bin is enabled, then when you drop a table it goes to the "recyclebin" (for flashback drop), and you need to PURGE the table from the recycle bin to actually release the space.

to list tables on recyclebin:
select object_name from recyclebin;

to clear your own user's area (ALL of your objects in the recycle bin):
PURGE RECYCLEBIN; (use PURGE DBA_RECYCLEBIN for all users; that needs SYSDBA privs)

to clear only a table:
PURGE TABLE TABLE_NAME;

==============================================================


To resize a datafile, you need to free the last blocks in the file, like this:
A = table A
B = table B
C = index C
F = free space
Datafile_blocks = AAABBBBBAABBCFFFFCCB

Dropping table "A" you get:
FFFBBBBBFFBBCFFFFCCB

Then you have free space in the database, but not in the filesystem.

By moving segments "C" and "B" to another tablespace, or to the first blocks, you can resize the datafile.

Use this script to map the datafile segments:
select file_id, block_id first_block, block_id+blocks-1 last_block, substr(segment_name,1,20) segname
from dba_extents
where tablespace_name = 'USR' /* tablespace name */
and file_id = 5 /* id of datafile, see dba_data_files */
union all
select file_id, block_id, block_id+blocks-1, 'FREE'
from dba_free_space
where tablespace_name = 'USR'
and file_id = 5 /* id of datafile */
order by file_id, first_block;

==============================================================

Wednesday, April 6, 2011

Check Unused Space

SQL> set serveroutput on
SQL> set pages 1000
SQL> set lines 160
SQL> DECLARE
       alc_bks  NUMBER;
       alc_bts  NUMBER;
       unsd_bks NUMBER;
       unsd_bts NUMBER;
       luefi    NUMBER;
       luebi    NUMBER;
       lub      NUMBER;
     BEGIN
       DBMS_SPACE.UNUSED_SPACE (
         segment_owner             => 'RNCRY'
       , segment_name              => 'COMS'
       , segment_type              => 'TABLE'
       , total_blocks              => alc_bks
       , total_bytes               => alc_bts
       , unused_blocks             => unsd_bks
       , unused_bytes              => unsd_bts
       , last_used_extent_file_id  => luefi
       , last_used_extent_block_id => luebi
       , last_used_block           => lub
       );
       DBMS_OUTPUT.PUT_LINE('Allocated space = '|| alc_bts);
       DBMS_OUTPUT.PUT_LINE('Unused space    = '|| unsd_bts);
     EXCEPTION
       WHEN OTHERS THEN
         DBMS_OUTPUT.PUT_LINE(SUBSTR(SQLERRM,1,250));
     END;
     /

Allocated space = 8534360064
Unused space    = 46874624

PL/SQL procedure successfully completed.
