Wednesday, August 29, 2012

Oracle 11g Advanced Compression


You can implement the Advanced Compression option in databases whose tables see heavy INSERT and UPDATE activity, to manage fast-growing volumes of data.

The following are simple commands to implement advanced compression:

alter table tablename compress for all operations;

alter index index1 rebuild compress;
alter index index2 rebuild compress;


SELECT table_name, compression, compress_for FROM user_tables where table_name='tablename';

select index_name, compression, status from dba_indexes where table_name='tablename';
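
To quantify the savings, you can compare segment sizes before and after the rebuilds. A minimal sketch (the object names are placeholders for your own table and indexes):

select segment_name, segment_type, round(bytes/1024/1024) as mb
from user_segments
where segment_name in ('TABLENAME', 'INDEX1', 'INDEX2');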


Satish,

Please implement the same on UAT first to measure the performance impact.



The following is a test case showing the difference between the 10gR2 Table Compression feature and 11g's Advanced Compression. Oracle provided a table-level compression feature in 10gR2. While this compression provided some storage reduction, 10g's table compression only compressed data during BULK LOAD operations. New and updated data were not compressed.

With 11g's Advanced Compression, new and updated data are also compressed, achieving a higher level of storage reduction while also providing performance improvements, since compressed blocks result in more data being moved per I/O.

Note 1: Basic compression comes with Oracle 11g Enterprise Edition. Making a table OLTP-compressed requires the separately licensed Advanced Compression option on top of Enterprise Edition, at extra cost (US$11,500.00 per processor, perpetual).

Note 2: There is a tradeoff between disk I/O and CPU, and it depends on how your system is configured. If your performance bottleneck is disk I/O, you will almost certainly benefit from compression, because it saves a lot of disk reads. If, on the other hand, you are low on CPU, you might not.
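
As a rough first check of which side you are on, the cumulative I/O wait events give a hint. This is a sketch only; interpret the numbers against your own baseline:

select event, total_waits, time_waited
from v$system_event
where wait_class = 'User I/O'
order by time_waited desc;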

------------------------------Test Case-----------------------------------------------

The following test case was executed on a 10g database server.

A table called TEST was created without the COMPRESS option.
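
The transcript does not show how TEST was populated; a common way to build such a test table (an assumption, not the author's exact script) is to duplicate DBA_OBJECTS until the segment reaches the desired size:

create table TEST as select * from dba_objects;
insert into TEST select * from TEST;   -- repeat until dba_segments shows ~90MB
commit;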

SQL> select table_name,compression from dba_tables where table_name = 'TEST';

TABLE_NAME                     COMPRESS
------------------------------ --------
TEST                           DISABLED


SQL> select bytes from dba_segments where segment_name = 'TEST';

BYTES
----------
92274688


The size of the table was around 92MB.

Now create another table called TEST_COMPRESSED, with the COMPRESS option.

SQL> create table TEST_COMPRESSED COMPRESS as select * from test;

Table created.

SQL> select table_name, compression from dba_tables where table_name like 'TEST%';

TABLE_NAME                     COMPRESS
------------------------------ --------
TEST_COMPRESSED                ENABLED
TEST                           DISABLED


Now let’s check the size of the COMPRESSED table.


SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';

BYTES
----------
30408704

Check out the size of the COMPRESSED table. It is only 30MB, roughly a 67% reduction in size (about a third of the original). So far so good.

Now let’s do a plain insert into the COMPRESSED table.

SQL> insert into TEST_COMPRESSED select * from TEST;

805040 rows created.

SQL> commit;

Commit complete.

Let’s check the size of the COMPRESSED table. 

SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';

BYTES
----------
117440512


Wow! From 30MB to 117MB? So a plain INSERT statement does not COMPRESS the data in 10g.

(You will see this is not the case with 11g)

Now let's do the same insert as a BULK LOAD (direct-path insert).

SQL> insert /*+ APPEND */ into TEST_COMPRESSED select * from TEST;

805040 rows created.

SQL> commit;

Commit complete.

SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';


BYTES
----------
142606336

OK, now the size of the COMPRESSED table went from 117MB to 142MB. For the same number of rows, the table grew by only about 25MB, so a BULK LOAD does compress the data. (Besides INSERT /*+ APPEND */, the other direct-path operations that store data compressed in 10g are CTAS, parallel DML, direct-path SQL*Loader and ALTER TABLE ... MOVE.)

Let’s check other DML statements such as DELETE and UPDATE against the COMPRESSED table. 


SQL> delete from test_compressed where rownum < 100000;

99999 rows deleted.

SQL> commit;

Commit complete.

SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';

BYTES
----------
142606336

No change in the total size of the table. DELETE has no impact, as expected.


Let’s check UPDATE. 

SQL> update test_compressed set object_name = 'XXXXXXXXXXXXXXXXXXXXXXXXXX' where rownum < 100000;

99999 rows updated.

SQL> commit;


SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';

BYTES
----------
150994944

The table size increased by 8MB. No compression for the UPDATE statement either.


All this clearly shows that 10g's table COMPRESSION works well for initial BULK LOADS; however, subsequent UPDATEs, DELETEs and INSERTs will not result in COMPRESSED blocks.
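
If a 10g table has accumulated uncompressed rows through ordinary DML, one standard way to re-compress it (not part of the original test; index names are placeholders) is to rebuild the segment:

alter table test_compressed move compress;
alter index index1 rebuild;   -- a MOVE changes rowids and invalidates indexes, so rebuild each one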





Now, let’s see 11g’s Test Results.

The following SQL statements were executed against an 11.2.0.1 database.


A TEST table of about 100MB was created, as before.



SQL> select bytes from dba_segments where segment_name = 'TEST';

BYTES
----------
100663296

So a 100MB table was created.

Let's create a table with the COMPRESS FOR ALL OPERATIONS option. This is only available in 11g (in 11gR2 the equivalent, preferred syntax is COMPRESS FOR OLTP).


SQL> create table test_compressed compress for all operations as select * from test;

Table created.

SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';

BYTES
----------
31457280


Check out the size of the compressed table vs. the uncompressed table: about 70% less space used. The compression ratio itself is not much different from 10g.




Let’s check other DML statements.

Let’s do a plain insert to the compressed table.

SQL> insert into TEST_COMPRESSED select * from test;

789757 rows created.

SQL> commit;

Commit complete.

SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';

BYTES
----------
75497472



11g's Advanced Compression compressed the 100MB of inserted data down to about 44MB and stored it in the compressed table, WITHOUT the BULK LOAD hint.

Now let's do the BULK LOAD into 11g's COMPRESSED table.


SQL> insert /*+ APPEND */ into test_compressed select * from TEST;

789757 rows created.

SQL> commit;

Commit complete.

SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';

BYTES
----------
109051904

It has about the same impact as the plain INSERT.

What about deletes and updates?


SQL> delete from test_compressed where rownum < 100000;

99999 rows deleted.

SQL> commit;

Commit complete.

SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';

BYTES
----------
109051904


No change for deletes. This is expected, as blocks are compressed when newly added rows fill a block up to the PCTFREE threshold.
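
You can check the PCTFREE setting that drives this block-level compression trigger with a quick dictionary query (a sketch; run as the table owner):

select table_name, pct_free from user_tables where table_name = 'TEST_COMPRESSED';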


SQL> update test_compressed set object_name = 'XXXXXXXXXXXXXXXXXXXXXXXXXX' where rownum < 100000;

99999 rows updated.

SQL> commit;

Commit complete.

SQL> select bytes from dba_segments where segment_name = 'TEST_COMPRESSED';

BYTES
----------
109051904


There is no change in this case, as the existing blocks were able to accommodate the updates. The same update, however, generated more data in 10g.
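
To confirm the compression mode of the 11g table, you can check the COMPRESS_FOR column, which is new in 11g. The expected values in the comment are an assumption based on the syntax used above:

select table_name, compression, compress_for
from user_tables
where table_name = 'TEST_COMPRESSED';
-- expected: COMPRESSION = ENABLED, COMPRESS_FOR = 'FOR ALL OPERATIONS' ('OLTP' in 11gR2)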

