Monday, January 2, 2012
ORA-01114: IO error writing block to file 202 (block # 1473756)
Friday, December 30, 2011
Oracle 11g new features
Automatic Memory Tuning - Automatic PGA tuning was introduced in Oracle 9i. Automatic SGA tuning was introduced in Oracle 10g. In 11g, all memory can be tuned automatically by setting one parameter.
SQL Performance Analyzer (Fully Automatic SQL Tuning) - Using SPA, you can tell 11g to automatically apply SQL profiles for statements where the suggested profile gives three times better performance than the existing statement. The performance comparisons are done by a new administrative task during a user-specified maintenance window.
Automated Storage Load balancing - Oracle’s Automatic Storage Management (ASM) now enables a single storage pool to be shared by multiple databases for optimal load balancing. Shared disk storage resources can alternatively be assigned to individual databases and easily moved from one database to another as processing requirements change.
Enhanced ILM - Information Lifecycle Management (ILM) has been around for decades, but Oracle has made a push to codify the approach in 11g.
File Group Repository - Oracle introduced an exciting new feature in 10gR2 dubbed the Oracle File Group Repository (FGR). The FGR allows the DBA to define a logically-related group of files and build a version control infrastructure. The workings of the Oracle file group repository were created to support Oracle Streams, and they mimic the functionality of an IBM mainframe generation data group (GDG), in that you can specify relative incarnations of the file sets (e.g. generation 0, generation -3).
Interval partitioning for tables - This is a new 11g partitioning scheme that automatically creates time-based partitions as new data is added. You can now partition by date, one partition per month for example, with automatic partition creation.
New load balancing utilities -There are several new load balancing utilities in 11g.
Web server load balancing - The web cache component includes Apache extension to load-balance transactions to the least-highly-loaded Oracle HTTP server (OHS).
RAC instance load balancing - Starting in Oracle 10g release 2, Oracle JDBC and ODP.NET provide connection pool load balancing facilities through integration with the new "load balancing advisory" tool. This replaces the more cumbersome listener-based load balancing technique.
Automated Storage Load balancing - Oracle's Automatic Storage Management (ASM) now enables a single storage pool to be shared by multiple databases for optimal load balancing. Shared disk storage resources can alternatively be assigned to individual databases and easily moved from one database to another as processing requirements change.
Data Guard Load Balancing – Oracle Data Guard allows for load balancing between standby databases.
Listener Load Balancing - If advanced features such as load balancing and automatic failover are desired, there are optional sections of the listener.ora file that must be present.
New PL/SQL data type "simple_integer" - A new 11g datatype dubbed simple_integer is introduced. The simple_integer data type is always NOT NULL, wraps instead of overflowing, and is faster than PLS_INTEGER.
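A quick sketch of simple_integer in an anonymous block (purely illustrative; it must be initialized because it cannot be null):
declare
l_counter simple_integer := 0; -- NOT NULL, so it must be initialized
begin
for i in 1 .. 1000 loop
l_counter := l_counter + 1; -- wraps on overflow instead of raising an error
end loop;
dbms_output.put_line('Counter = ' || l_counter);
end;
/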
Improved table/index compression - Segment compression now works for all DML, not just direct-path loads, so you can create tables compressed and use them for regular OLTP work. Also supports column add/drop.
Faster DML triggers - DML triggers are up to 25% faster. This especially impacts row level triggers doing updates against other tables (think Audit trigger).
Server-side connection pooling - In 11g, server-side connection pooling adds an additional layer to the shared server to enable faster session creation (actually, to bypass it). Server-side connection pooling allows multiple Oracle clients to share a server-side pool of sessions (USERIDs must match). Clients can connect and disconnect (think PHP applications) at will without the cost of creating a new server session - shared server removes the process creation cost but not the session creation cost.
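This appears to be Database Resident Connection Pooling (DRCP). A minimal sketch of starting it, assuming the default pool; the host and service in the connect string are placeholders:
-- as SYSDBA: start the default server-side connection pool
exec dbms_connection_pool.start_pool;
-- optionally size it (values here are purely illustrative)
exec dbms_connection_pool.configure_pool(pool_name => 'SYS_DEFAULT_CONNECTION_POOL', minsize => 4, maxsize => 40);
-- clients then request a pooled server, e.g. sqlplus scott/tiger@//dbhost:1521/orcl:POOLED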
RMAN UNDO bypass - RMAN backup can bypass undo. Undo tablespaces are getting huge, but contain lots of useless information. Now RMAN can bypass those types of tablespace. Great for exporting a tablespace from backup.
Capture/replay database workloads - Sounds appealing. You can capture the workload in prod and apply it in development. Oracle is moving toward more workload-based optimization, adjusting SQL execution plans based on existing server-side stress. This can be very useful for Oracle regression testing.
Scalability Enhancements - The features in 11g focused on scalability and performance can be grouped into four areas: Scalable execution, scalable storage, scalable availability and scalable management.
Virtual columns - Oracle 11g virtual columns are table columns whose values are computed from an expression ("create table t1 (c1 number, c2 number, c3 number generated always as (c1+c2) virtual)"), and similarly, virtual indexes that are based on functions.
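A hedged sketch of the full syntax (table, column and index names are made up):
create table t1
( c1 number,
c2 number,
c3 number generated always as (c1 + c2) virtual
);
-- the virtual column can be indexed like a function-based index
create index t1_c3_idx on t1 (c3);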
REF partitioning - The 11g REF partitioning allows you to partition a table based on the values of columns within other tables.
A "super" object-oriented DDL keyword - This is used with OO Oracle when instantiating a derivative type (overloading), to refer to the superclass from whence the class was derived.
Oracle 11g XML data storage - Starting in 11g, you can store XML either as a CLOB or a binary data type, adding flexibility. Oracle11g will support query mechanisms for XML including XQuery and SQL XML, emerging standards for querying XML data stored inside tables.
New Trigger features - A new type of "compound" trigger will have sections for BEFORE, ROW and AFTER processing, very helpful for avoiding errors, and maintaining states between each section.
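A hedged sketch of the compound trigger syntax, assuming an EMP table; the counter demonstrates state shared between sections:
create or replace trigger trg_emp_compound
for insert or update on emp
compound trigger
g_rows pls_integer := 0; -- state visible to all timing sections
before statement is
begin
g_rows := 0;
end before statement;
after each row is
begin
g_rows := g_rows + 1;
end after each row;
after statement is
begin
dbms_output.put_line(g_rows || ' row(s) processed');
end after statement;
end trg_emp_compound;
/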
Partitioning - partitioning by logical object and automated partition creation.
LOB's - New high-performance LOB features.
Automatic Diagnostic Repository (ADR) - When critical errors are detected, they automatically create an “incident”. Information relating to the incident is automatically captured, the DBA is notified and certain health checks are run automatically.
Health Monitor (HM) utility - The Health Monitor utility is an automation of the dbms_repair corruption detection utility. When a corruption-like problem happens, the HM utility checks for possible corruption within database blocks, redo log blocks, undo segments, or dictionary table blocks.
Incident Packaging Service (IPS) - This wraps up all information about an incident, requests further tests and information if necessary, and allows you to send the whole package to Oracle Support.
Feature Based Patching - All one-off patches will be classified as to which feature they affect. This allows you to easily identify which patches are necessary for the features you are using. EM will allow you to subscribe to a feature based patching service, so EM automatically scans for available patches for the features you are using.
New Oracle11g Advisors - New 11g Oracle Streams Performance Advisor and Partitioning Advisor.
Table trigger firing order - Oracle 11g PL/SQL allows you to specify trigger firing order (via the FOLLOWS clause).
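A minimal sketch of the FOLLOWS clause; the EMP table and the existing trigger trg_emp_first are assumed for illustration:
create or replace trigger trg_emp_second
before insert on emp
for each row
follows trg_emp_first -- guarantees trg_emp_first fires before this trigger
begin
:new.sal := nvl(:new.sal, 0);
end;
/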
Invisible indexes - Rich Niemiec claims that the new 11g "invisible indexes" are a great new feature. An invisible index still exists and is still maintained for every DML operation; it is simply marked "invisible" so that the SQL optimizer will not consider it. The overhead of maintaining the index therefore remains.
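A short sketch (table and index names are placeholders):
create index emp_ename_idx on emp (ename) invisible;
-- the index is still maintained by DML, but the optimizer ignores it until:
alter index emp_ename_idx visible;
-- or, per session, allow the optimizer to consider invisible indexes:
alter session set optimizer_use_invisible_indexes = true;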
-------------Oracle11g High Availability & RAC new features-----------------
Oracle continues to enhance Real Application Clusters in Oracle11g and we see some exciting new features in RAC manageability and enhanced performance:
Oracle 11g RAC parallel upgrades - Oracle 11g promises a rolling upgrade feature whereby a RAC database can be upgraded without any downtime.
Oracle RAC load balancing advisor - Starting in 10gr2 we see a RAC load balancing advisor utility. Oracle says that the 11g RAC load balancing advisor is only available with clients which use .NET, ODBC, or the Oracle Call Interface (OCI).
ADDM for RAC - Oracle will incorporate RAC into the automatic database diagnostic monitor, for cross-node advisories.
Interval Partitioning - Robert Freeman notes that 11g "interval partitioning makes it easier to manage partitions:
wouldn't it be nice if you could tell Oracle that you want to partition every month and it would create the partitions for you? That is exactly what interval partitioning does. Here is an example:
create table selling_stuff_daily
( prod_id number not null, cust_id number not null
, sale_dt date not null, qty_sold number(3) not null
, unit_sale_pr number(10,2) not null
, total_sale_pr number(10,2) not null
, total_disc number(10,2) not null)
partition by range (sale_dt)
interval (numtoyminterval(1,'MONTH'))
( partition p_before_1_jan_2007 values
less than (to_date('01-01-2007','dd-mm-yyyy')));
Note the interval keyword. This defines the interval that you want each partition to represent. In this case, Oracle will create the next partition for dates less than 02-01-2007 when the first record that belongs in that partition is created."
Optimized RAC cache fusion protocols - moves on from the general cache fusion protocols in 10g to deal with specific scenarios where the protocols could be further optimized.
Oracle 11g RAC Grid provisioning - The Oracle grid control provisioning pack allows you to "blow-out" a RAC node without the time-consuming install, using a pre-installed "footprint".
Data Guard - Standby snapshot - The new standby snapshot feature allows you to encapsulate a snapshot for regression testing. You can collect a standby snapshot and move it into your QA database, ensuring that your regression test uses real production data.
Quick Fault Resolution - Automatic capture of diagnostics (dumps) for a fault.
-----------------Oracle 11g programming language support New Features------------
PHP - Improved PHP driver for Oracle.
Compilers - Improved native Java & PL/SQL compilers.
Oracle 11g XML Enhancements - Oracle 11g will also support Content Repository API for Java Technology (JSR 170). Oracle 11g has XML "duality", meaning that you can also embed XML directives inside PL/SQL and embed PL/SQL inside XML code. Oracle 11g XML will also support schema-based document Type Definitions (DTD's), to describe internal structure of the XML document.
Scalable Java - The next scalable execution feature is automatic creation of "native" Java code, with just one parameter for each type with an "on/off" value. This apparently provides a 100% performance boost for pure Java code, and a 10%-30% boost for code containing SQL.
Improved sequence management - A new feature of Oracle 11g bypasses DML (sequence.nextval) and allows normal PL/SQL assignments of sequence values.
Intra-unit inlining. In C, you can write a macro that gets inlined when called. Now any stored procedure is eligible for inlining if Oracle thinks it will improve performance. No change to your code is required.
-----------------Oracle 11g PL/SQL New Features-------------------------
PL/SQL "continue" keyword - This allows a C-like continue in a loop, skipping the rest of the current iteration. A nasty PL/SQL GOTO statement is no longer required to skip code within a loop. Oracle professional Jurgen Kemmelings has an excellent PL/SQL example of the continue statement in action:
begin
for i in 1..3
loop
dbms_output.put_line('i='||to_char(i));
if ( i = 2 )
then
continue;
end if;
dbms_output.put_line('Only if i is not equal to 2');
end loop;
end;
Disabled state for PL/SQL - Another 11g new feature is a "disabled" state for PL/SQL (as opposed to "enabled" and "invalid" in dba_objects).
Easy PL/SQL compiling - Native Compilation no longer requires a C compiler to compile your PL/SQL. Your code goes directly to a shared library.
Improved PL/SQL stored procedure invalidation mechanism - A new 11g feature is fine-grained dependency tracking, reducing the number of objects which become invalid as a result of DDL.
Scalable PL/SQL - The next scalable execution feature is automatic creation of "native" PL/SQL (and Java code), with just one parameter for each type with an "on/off" value. This apparently provides a 100% performance boost for pure PL/SQL code, and a 10%-30% performance boost for code containing SQL.
Enhanced PL/SQL warnings - The 11g PL/SQL compiler will issue a warning for a "when others" with no raise.
Stored Procedure named notation - Named notation is now supported when calling a stored procedure from SQL.
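A hedged example of named notation in a SQL call; the function is invented for illustration:
create or replace function f_tax (p_amount number, p_rate number default 0.10)
return number is
begin
return p_amount * p_rate;
end;
/
-- 11g: named (and mixed) notation is now legal inside SQL statements
select f_tax(p_amount => 1000, p_rate => 0.15) as tax from dual;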
--------------Oracle 11g SQL New Features-------------------------
New "pivot" SQL clause - The new "pivot" SQL clause will allow quick rollup, similar to an MS-Excel pivot table, where you can display multiple rows on one column with SQL. MS SQL Server 2005 also introduced a pivot clause.
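A sketch against the classic SCOTT.EMP table (assumed to exist):
select *
from (select deptno, job, sal from emp)
pivot (sum(sal) for job in ('CLERK' as clerk, 'MANAGER' as manager, 'ANALYST' as analyst))
order by deptno;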
The /*+result_cache*/ SQL hint - This requests that the query results themselves be cached, not the intermediate data blocks that were accessed to obtain them. You can cache SQL and PL/SQL results for super-fast subsequent retrieval. The "result cache" ties into the "scalable execution" concept. There are three areas of the result cache (a short sketch follows this list):
The SQL query result cache - This is an area of SGA memory for storing query results.
The PL/SQL function result cache - This result cache can store the results from a PL/SQL function call.
The OCI client result cache - This cache retains results from OCI calls, both for SQL queries or PL/SQL functions.
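A hedged sketch of the first two areas, assuming the SCOTT.EMP table (the RELIES_ON clause is optional and is ignored from 11gR2 onward):
-- SQL query result cache via hint
select /*+ result_cache */ deptno, count(*) from emp group by deptno;
-- PL/SQL function result cache
create or replace function f_emp_count (p_deptno number)
return number
result_cache relies_on (emp)
is
l_cnt number;
begin
select count(*) into l_cnt from emp where deptno = p_deptno;
return l_cnt;
end;
/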
Scalable Execution - This 11g feature consists of a number of sub-features, the first of which is query result caching. It automatically caches the results of a SQL query, as opposed to the data blocks normally cached by the buffer cache, and it works both client-side (OCI) and server-side - this was described as "the buffer cache taken to the next level". The DBA sets the size of the result cache and turns the feature on at the table level with a command such as "alter table DEPT cache results"; the per-process cache is shared across multiple sessions and, at the client level, is available with all 11g OCI-based clients.
XML SQL queries - Oracle11g will support query mechanisms for XML including XQuery and SQL XML, emerging standards for querying XML data stored inside tables.
SQL Replay - Similar to the previous feature, but this only captures and applies the SQL workload, not total workload.
Improved optimizer statistics collection speed - Oracle 11g has improved the dbms_stats performance, allowing for an order of magnitude faster CBO statistics creation. Oracle 11g has also separated-out the "gather" and "publish" operations, allowing CBO statistics to be retained for later use. Also, Oracle 11g introduces multi-column statistics to give the CBO the ability to more accurately select rows when the WHERE clause contains multi-column conditions or joins.
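A sketch of the gather-now/publish-later workflow with dbms_stats; the owner and table names are placeholders:
exec dbms_stats.set_table_prefs('SCOTT', 'EMP', 'PUBLISH', 'FALSE');
exec dbms_stats.gather_table_stats('SCOTT', 'EMP');
-- test the pending statistics in this session only
alter session set optimizer_use_pending_statistics = true;
-- when satisfied, make them visible to all sessions
exec dbms_stats.publish_pending_stats('SCOTT', 'EMP');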
SQL execution Plan Management - Oracle 11g SQL will allow you to fix execution plans (explain plan) for specific statements, regardless of statistics or database version changes.
Dynamic SQL. DBMS_SQL is here to stay. It's faster and is being enhanced. DBMS_SQL and NDS can now accept CLOBs (no more 32k limit on NDS). A ref cursor can become a DBMS_SQL cursor and vice versa. DBMS_SQL now supports user defined types and bulk operations.
SQL Performance Advisor - You can tell 11g to automatically apply SQL profiles for statements where the suggested profile gives three times better performance than the existing statement. The performance comparisons are done by a new administrative task during a user-specified maintenance window.
Improved SQL Access Advisor - The 11g SQL Access Advisor gives partitioning advice, including advice on the new interval partitioning. Interval partitioning is an automated version of range partitioning, where new equally-sized partitions are automatically created when needed. Both range and interval partitions can exist for a single table, and range partitioned tables can be converted to interval partitioned tables.
Automatic Memory Tuning - Automatic PGA tuning was introduced in Oracle 9i. Automatic SGA tuning was introduced in Oracle 10g. In 11g, all memory can be tuned automatically by setting one parameter. You literally tell Oracle how much memory it has and it determines how much to use for PGA, SGA and OS Processes. Maximum and minimum thresholds can be set. This is controlled by the Oracle 11g memory_target parameter.
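For example (sizes are arbitrary; memory_max_target requires a restart to take effect):
alter system set memory_max_target = 2G scope = spfile;
alter system set memory_target = 1536M scope = spfile;
-- after a bounce, review the advisor
select * from v$memory_target_advice order by memory_size;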
Resource Manager - The 11g Resource Manager can manage I/O, not just CPU. You can set the priority associated with specific files, file types or ASM disk groups.
ADDM - The ADDM in 11g can give advice on the whole RAC (database level), not just at the instance level. Directives have been added to ADDM so it can ignore issues you are not concerned about. For example, if you know you need more memory and are sick of being told it, you can ask ADDM not to report those messages anymore.
Faster sorting - Starting in 10gR2 we see an improved sort algorithm: "Oracle 10gR2 introduced a new sort algorithm which uses less memory and CPU resources. A hidden parameter _newsort_enabled = {TRUE|FALSE} governs whether the new sort algorithm will be used."
AWR Baselines - The AWR baselines of 10g have been extended to allow automatic creation of baselines for use in other features. A rolling week baseline is created by default.
Adaptive Metric Baselines - Notification thresholds in 10g were based on a fixed point. In 11g, notification thresholds can be associated with a baseline, so the notification thresholds vary throughout the day in line with the baseline.
Enhanced Passwords - Oracle 11g has case-sensitive passwords, and the password algorithm has changed to SHA-1 instead of the old DES-based hashing.
Oracle SecureFiles - replacement for LOBs that are faster than Unix files to read/write. Lots of potential benefit for OLAP analytic workspaces, as the LOBs used to hold AWs have historically been slower to write to than the old Express .db files.
Oracle 11g audit vault - Oracle Audit Vault is a new feature that will provide a solution to help customers address the most difficult security problems remaining today, protecting against insider threat and meeting regulatory compliance requirements.
FGAC for UTL_SMTP, UTL_TCP and UTL_HTTP. You can define security on ports and URLs.
Fine Grained Dependency Tracking (FGDT). This means that when you add a column to a table, or a cursor to a package spec, you don't invalidate objects that are dependent on them.
Database Workload Replay - Oracle "Replay" allows the total database workload to be captured, transferred to a test database created from a backup or standby database, then replayed to test the affects of an upgrade or system change.
You specify SQL tuning sets much as in the 10g offering and use the dbms_sqlpa package (SQL Performance Analyzer) to manage each analysis task with the dbms_sqlpa procedures (create_analysis_task, cancel_analysis_task, drop_analysis_task, reset_analysis_task, report_analysis_task, resume_analysis_task, interrupt_analysis_task).
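A hedged sketch of driving SPA from PL/SQL, assuming an existing SQL tuning set named MY_STS (names are placeholders):
declare
l_task varchar2(64);
begin
l_task := dbms_sqlpa.create_analysis_task(sqlset_name => 'MY_STS', task_name => 'SPA_TASK_1');
dbms_sqlpa.execute_analysis_task(task_name => 'SPA_TASK_1', execution_type => 'TEST EXECUTE', execution_name => 'before_change');
end;
/
select dbms_sqlpa.report_analysis_task('SPA_TASK_1', 'TEXT') from dual;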
Oracle 11g PLSQL Native Compilation
Machine code is sometimes called native code when referring to platform-dependent parts of language features or libraries.
Change the parameter value:
SQL> alter system set plsql_code_type=native scope=spfile;
When PLSQL_CODE_TYPE='NATIVE' is used, arithmetic operations are done directly in hardware, which provides significantly better performance.
To compile a PL/SQL package to native code without setting the plsql_code_type parameter at the system level:
ALTER PACKAGE <package_name> COMPILE PLSQL_CODE_TYPE=NATIVE;
To compile a PL/SQL procedure to native code without setting the plsql_code_type parameter:
ALTER PROCEDURE <procedure_name> COMPILE PLSQL_CODE_TYPE=NATIVE;
Procedure to convert the entire database and recompile all PL/SQL modules into native mode
1) Shut down database
2) Set PLSQL_CODE_TYPE=native and PLSQL_OPTIMIZE_LEVEL=2 in the pfile, or via ALTER SYSTEM ... SCOPE=SPFILE if an spfile is used
3) connect sys/password as sysdba
startup upgrade
4) @$ORACLE_HOME/rdbms/admin/dbmsupgnv.sql (which updates the execution mode of all PL/SQL modules to native) (You can use the TRUE command line parameter with the dbmsupgnv.sql script to exclude package specs from recompilation to NATIVE, saving time in the conversion process.)
5) shutdown immediate
startup
@$ORACLE_HOME/rdbms/admin/utlrp.sql (to recompile all invalid objects)
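After the recompile, a hedged check of which of your units are now natively compiled:
select name, type, plsql_code_type
from user_plsql_object_settings
where plsql_code_type = 'NATIVE';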
Thursday, December 29, 2011
Oracle Auditing
Database Hardening
Wednesday, December 28, 2011
ORA-01652 unable to extend temp segment by 64 in tablespace USR
ORA-1653: unable to extend table by 4096 in tablespace USR
Saturday, December 24, 2011
ORA-01114: IO error writing block to file 201 (block # 763489) ORA-27072: I/O error Linux Error: 28: No space left on device
Create / Clear Temporary tablespace
1)
CREATE TEMPORARY TABLESPACE temp2
TEMPFILE 'E:\SNSD1011\TEMP02.ORA' SIZE 5M REUSE
AUTOEXTEND ON NEXT 1M MAXSIZE unlimited
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;
2)
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp2;
3)
DROP TABLESPACE temporary INCLUDING CONTENTS AND DATAFILES;
4)
CREATE TEMPORARY TABLESPACE temporary
TEMPFILE 'E:\SNSD1011\TEMP01.ORA' SIZE 500M REUSE
AUTOEXTEND ON NEXT 100M MAXSIZE unlimited
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;
5)
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temporary;
6)
DROP TABLESPACE temp2 INCLUDING CONTENTS AND DATAFILES;
7)
SELECT tablespace_name, file_name, bytes
FROM dba_temp_files WHERE tablespace_name = 'TEMPORARY';
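If temp keeps filling up, a hedged query to see which sessions are currently consuming temp space:
SELECT s.sid, s.username, u.tablespace, u.segtype, u.blocks
FROM v$session s, v$tempseg_usage u
WHERE s.saddr = u.session_addr
ORDER BY u.blocks DESC;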
TNS-12518: TNS:listener could not hand off client connection TNS-12560 TNS-00530 32-bit Windows Error: 2: No such file or directory
Friday, December 23, 2011
Transparent Data Encryption (TDE)
Thursday, December 22, 2011
Find and Delete duplicate check Constraints
Oracle Connection taking long time to establish / tnsping taking too long
Running SQL query
Microsoft ODBC driver for Oracle on 64 bit Machine
You will be unable to use the driver until the required client software is installed.
The Microsoft ODBC Driver can be installed under 64-bit Windows but 64-bit applications cannot access MS ODBC driver because it comes only in 32-bit version. For 32-bit applications under 64-bit Windows there's ODBC Data Source Administrator for the 32-bit ODBC drivers %systemdrive%\Windows\SysWoW64\odbcad32.exe (usually C:\WINDOWS\SysWOW64\odbcad32.exe).
They put the 32 bit odbcad32.exe in the syswow64 directory. They put the 64 bit odbcad32.exe in the system32 directory. 32 bit apps will pick up the 32 bit registry setting and 64 bit will pick up the 64 bit registry setting. system32 comes before syswow64 in the system path so the 64bit software runs before the 32 bit software.
Install Oracle Server 10.2.0.4
Solution: also install Oracle client version 10.2.0.3 or above to get the required driver.
Table Defragmentation / Table Reorganization / Table Rebuilding
You have the option to reorganize (or defragment) the table by the traditional export/truncate/import method, i.e., exporting data from the affected table, truncating the table, then importing the data back into the table.
There is an “alter table table_name move” command that you can use to defragment tables.
Note: This method does not apply to tables with 'LONG' columns.
--------detecting chained row-----
SELECT owner, table_name, chain_cnt FROM dba_tables WHERE chain_cnt > 0;
List Chained Rows
Creating a CHAINED_ROWS Table
@D:\oracle\product\10.2.0\db_1\RDBMS\ADMIN\utlchain.sql will create the following table:
create table CHAINED_ROWS (
owner_name varchar2(30),
table_name varchar2(30),
cluster_name varchar2(30),
partition_name varchar2(30),
subpartition_name varchar2(30),
head_rowid rowid,
analyze_timestamp date
);
SELECT owner_name,table_name, head_rowid FROM chained_rows;
-------------------
SELECT dbms_rowid.rowid_block_number(rowid) "Block-Nr", count(*) "Rows"
FROM <table_name>
GROUP BY dbms_rowid.rowid_block_number(rowid) ORDER BY 1;
SELECT chain_cnt,
round(chain_cnt/num_rows*100,2) pct_chained,
avg_row_len, pct_free , pct_used
FROM user_tables
WHERE TABLE_NAME IN (
SELECT distinct table_name FROM CHAINED_ROWS);
If the table includes LOB column(s), this statement can be used to move the table along with the LOB data and LOB index segments (associated with this table) which the user explicitly specifies. If not specified, the default is to not move the LOB data and LOB index segments.
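A hedged example of the syntax; the table, LOB column and tablespace names are placeholders:
ALTER TABLE tbldocuments MOVE TABLESPACE users
LOB (doc_blob) STORE AS (TABLESPACE users_lob);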
---------------------------Detect all Tables with Chained and Migrated Rows------------------------
1) Analyze all or only your Tables
SELECT 'ANALYZE TABLE '||table_name||' LIST CHAINED ROWS INTO CHAINED_ROWS;' FROM user_tables;
Or analyze only the tables that already show chained rows:
SELECT owner, table_name, chain_cnt FROM dba_tables WHERE owner='LDBO' and chain_cnt > 0;
set heading off;
set feedback off;
set pagesize 1000;
spool C:\temp\chained_statistics.sql;
SELECT 'ANALYZE TABLE ' ||table_name||' LIST CHAINED ROWS INTO CHAINED_ROWS;'
FROM dba_tables WHERE owner='LDBO' and chain_cnt > 0;
spool off
2) Alter Table ......Move
set heading off;
set feedback off;
set pagesize 1000;
spool C:\temp\defrag.sql;
SELECT DISTINCT 'ALTER TABLE ' ||table_name||' MOVE;' FROM CHAINED_ROWS;
spool off
or
select sum(bytes/1024/1024) "FOR INITIAL VALUE OR MORE"
from dba_segments
where owner = 'LDBO'
and segment_name = 'TBLOPTIONACCESSHISTORY';
SELECT DISTINCT 'ALTER TABLE ' ||table_name||' MOVE PCTFREE 20 PCTUSED 40 STORAGE (INITIAL 20K NEXT 40K MINEXTENTS 2 MAXEXTENTS 20 PCTINCREASE 0);' FROM
CHAINED_ROWS;
3) Rebuild indexes, because these tables' indexes are left in an unusable state after the move.
connect deltek/xxx@fin;
set heading off;
set feedback off;
set pagesize 1000;
spool C:\temp\rebuild_index.sql;
SELECT 'ALTER INDEX ' ||INDEX_NAME||' REBUILD;' FROM DBA_INDEXES WHERE TABLE_NAME IN ( SELECT distinct table_name FROM CHAINED_ROWS);
spool off
4) Analyze Tables for compute statistics after defragmentation
set heading off;
set feedback off;
set pagesize 1000;
spool C:\temp\compute_stat.sql;
SELECT 'ANALYZE TABLE '||table_name||' COMPUTE STATISTICS;' FROM user_tables WHERE TABLE_NAME IN ( SELECT distinct table_name FROM CHAINED_ROWS);
spool off
5) Show the RowIDs for all chained rows
This will allow you to quickly see how much of a problem chaining is in each table. If chaining is prevalent in a table, then that table should be rebuilt with a higher value for PCTFREE.
SELECT owner_name,
table_name,
count(head_rowid) row_count
FROM chained_rows
GROUP BY owner_name,table_name
/
6) SELECT owner, table_name, chain_cnt FROM dba_tables WHERE chain_cnt > 0;
SELECT dbms_rowid.rowid_block_number(rowid) "Block-Nr", count(*) "Rows"
FROM row_mig_chain_demo
GROUP BY dbms_rowid.rowid_block_number(rowid) order by 1;
delete FROM chained_rows;
Wednesday, December 21, 2011
Change Snapshot Setting
BEGIN
DBMS_WORKLOAD_REPOSITORY.modify_snapshot_settings(
retention => 66240, -- = 46 Days
interval => 15) -- = 15 Minutes
;
END;
/
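To confirm the new settings afterwards (both values are reported as day-to-second intervals):
SELECT snap_interval, retention FROM dba_hist_wr_control;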
Tuesday, December 20, 2011
Detect Row Chaining, Migrated Row and Avoid it
This query will show how many chained (and migrated) rows each table has:
SELECT owner, table_name, chain_cnt FROM dba_tables WHERE chain_cnt > 0;
-------------------
SELECT a.name, b.value
FROM v$statname a, v$mystat b
WHERE a.statistic# = b.statistic#
AND lower(a.name) = 'table fetch continued row';
----------------------------------------------------------------------------------------
SELECT 'Chained or Migrated Rows = '||value FROM v$sysstat WHERE name = 'table fetch continued row';
Result:
Chained or Migrated Rows = 31637
Explain:
You could have 1 table with 1 chained row that was fetched 31,637 times. You could have 31,637 tables, each with a chained row, each of which was fetched once. You could have any combination of the above.
---------------------------------------------------------------------------------
How many Rows in a Table are chained?
ANALYZE TABLE row_mig_chain_demo COMPUTE STATISTICS;
SELECT chain_cnt,
round(chain_cnt/num_rows*100,2) pct_chained,
avg_row_len, pct_free , pct_used
FROM user_tables
WHERE table_name = 'ROW_MIG_CHAIN_DEMO';
CHAIN_CNT PCT_CHAINED AVG_ROW_LEN PCT_FREE PCT_USED
---------- ----------- ----------- ---------- ----------
3 100 3691 10 40
PCT_CHAINED shows 100% which means all rows are chained or migrated.
------------------------------------------------------------------------------------------------
List Chained Rows
You can look at the chained and migrated rows of a table using the ANALYZE statement with the LIST CHAINED ROWS clause. The results of this statement are stored in a specified table created explicitly to accept the information returned by the LIST CHAINED ROWS clause. These results are useful in determining whether you have enough room for updates to rows.
Creating a CHAINED_ROWS Table
To create the table to accept data returned by an ANALYZE ... LIST CHAINED ROWS statement, execute the UTLCHAIN.SQL or UTLCHN1.SQL script in $ORACLE_HOME/rdbms/admin. These scripts are provided by the database. They create a table named CHAINED_ROWS in the schema of the user submitting the script.
D:\oracle\product\10.2.0\db_1\RDBMS\ADMIN\utlchain.sql will create following table
create table CHAINED_ROWS (
owner_name varchar2(30),
table_name varchar2(30),
cluster_name varchar2(30),
partition_name varchar2(30),
subpartition_name varchar2(30),
head_rowid rowid,
analyze_timestamp date
);
After a CHAINED_ROWS table is created, you specify it in the INTO clause of the ANALYZE statement.
ANALYZE TABLE row_mig_chain_demo LIST CHAINED ROWS;
SELECT owner_name,table_name, head_rowid FROM chained_rows;
-----------------------------How to avoid Chained and Migrated Rows?--------------------------
Increasing PCTFREE can help to avoid migrated rows. If you leave more free space available in the block, then the row has room to grow. You can also reorganize or re-create tables and indexes that have high deletion rates. If tables frequently have rows deleted, then data blocks can have partially free space in them. If rows are inserted and later expanded, then the inserted rows might land in blocks with deleted rows but still not have enough room to expand. Reorganizing the table ensures that the main free space is totally empty blocks.
The ALTER TABLE ... MOVE statement enables you to relocate data of a nonpartitioned table or of a partition of a partitioned table into a new segment, and optionally into a different tablespace for which you have quota. This statement also lets you modify any of the storage attributes of the table or partition, including those which cannot be modified using ALTER TABLE. You can also use the ALTER TABLE ... MOVE statement with the COMPRESS keyword to store the new segment using table compression.
ALTER TABLE MOVE
First count the number of Rows per Block before the ALTER TABLE MOVE
SELECT dbms_rowid.rowid_block_number(rowid) "Block-Nr", count(*) "Rows"
FROM row_mig_chain_demo
GROUP BY dbms_rowid.rowid_block_number(rowid) order by 1;
Block-Nr Rows
---------- ----------
2066 3
Now, de-chain the table, the ALTER TABLE MOVE rebuilds the row_mig_chain_demo table in a new segment, specifying new storage parameters:
SELECT distinct table_name FROM CHAINED_ROWS;
ALTER TABLE tbloptionaccesshistory MOVE
PCTFREE 20
PCTUSED 40
STORAGE (INITIAL 20K
NEXT 40K
MINEXTENTS 2
MAXEXTENTS 20
PCTINCREASE 0);
Table altered.
Again count the number of Rows per Block after the ALTER TABLE MOVE
SELECT dbms_rowid.rowid_block_number(rowid) "Block-Nr", count(*) "Rows"
FROM tbloptionaccesshistory
GROUP BY dbms_rowid.rowid_block_number(rowid) order by 1;
Rebuild the Indexes for the Table
Moving a table changes the rowids of the rows in the table. This causes indexes on the table to be marked UNUSABLE, and DML accessing the table using these indexes will receive an ORA-01502 error. The indexes on the table must be dropped or rebuilt. Likewise, any statistics for the table become invalid and new statistics should be collected after moving the table.
ANALYZE TABLE row_mig_chain_demo COMPUTE STATISTICS;
ERROR at line 1:
ORA-01502: index 'SCOTT.SYS_C003228' or partition of such index is in unusable state
This is the primary key of the table which must be rebuilt.
ALTER INDEX SYS_C003228 REBUILD;
Index altered.
------------
SELECT 'ALTER INDEX ' ||INDEX_NAME||' REBUILD;' FROM DBA_INDEXES WHERE TABLE_NAME IN ( SELECT distinct table_name FROM CHAINED_ROWS);
-------------
ANALYZE TABLE row_mig_chain_demo COMPUTE STATISTICS;
Table analyzed.
---------------------
SELECT 'ANALYZE TABLE '||table_name||' COMPUTE STATISTICS;' FROM user_tables WHERE TABLE_NAME IN ( SELECT distinct table_name FROM CHAINED_ROWS);
------------------------
SELECT chain_cnt,
round(chain_cnt/num_rows*100,2) pct_chained,
avg_row_len, pct_free , pct_used
FROM user_tables
WHERE TABLE_NAME IN (
SELECT distinct table_name FROM CHAINED_ROWS);
CHAIN_CNT PCT_CHAINED AVG_ROW_LEN PCT_FREE PCT_USED
---------- ----------- ----------- ---------- ----------
If the table includes LOB column(s), this statement can be used to move the table along with LOB data and LOB index segments (associated with this table) which the user explicitly specifies. If not specified, the default is to not move the LOB data and LOB index segments.
---------------
SELECT owner, table_name, chain_cnt FROM dba_tables WHERE chain_cnt > 0;
-----------------
---------------------------Detect all Tables with Chained and Migrated Rows------------------------
1) Analyze all or only your tables
SELECT 'ANALYZE TABLE '||table_name||' LIST CHAINED ROWS INTO CHAINED_ROWS;'
FROM user_tables
/
SELECT owner, table_name, chain_cnt FROM dba_tables WHERE owner='LDBO' and chain_cnt > 0;
SELECT 'ANALYZE TABLE ' ||table_name||' LIST CHAINED ROWS INTO CHAINED_ROWS;'
FROM dba_tables WHERE owner='LDBO' and chain_cnt > 0
/
SELECT distinct table_name FROM CHAINED_ROWS;
2) Alter Table ......Move
SELECT DISTINCT 'ALTER TABLE ' ||table_name||' MOVE PCTFREE 20 PCTUSED 40 STORAGE (INITIAL 20K NEXT 40K MINEXTENTS 2 MAXEXTENTS 20 PCTINCREASE 0);' FROM CHAINED_ROWS;
3) Rebuild Indexes
SELECT 'ALTER INDEX ' ||INDEX_NAME||' REBUILD;' FROM DBA_INDEXES WHERE TABLE_NAME IN ( SELECT distinct table_name FROM CHAINED_ROWS);
4) Analyze Tables
SELECT 'ANALYZE TABLE '||table_name||' COMPUTE STATISTICS;' FROM user_tables WHERE TABLE_NAME IN ( SELECT distinct table_name FROM CHAINED_ROWS);
5) Show the RowIDs for all chained rows
This will allow you to quickly see how much of a problem chaining is in each table. If chaining is prevalent in a table, then that table should be rebuilt with a higher value for PCTFREE.
SELECT owner_name,
table_name,
count(head_rowid) row_count
FROM chained_rows
GROUP BY owner_name,table_name
/
6) SELECT owner, table_name, chain_cnt FROM dba_tables WHERE chain_cnt > 0;
Conclusion
Migrated rows affect OLTP systems which use indexed reads to read singleton rows. In the worst case, every such read incurs an extra I/O, which would be really bad. Truly chained rows affect index reads and full table scans.
Row migration is typically caused by UPDATE operations.
Row chaining is typically caused by INSERT operations.
SQL statements which are creating/querying these chained/migrated rows will degrade the performance due to more I/O work.
To diagnose chained/migrated rows, use the ANALYZE command and query the V$SYSSTAT view.
To remove chained/migrated rows, rebuild the table with a higher PCTFREE using ALTER TABLE ... MOVE.
Saturday, December 17, 2011
Move segments from one Tablespace to another
Tables + indexes of tables EMP,PRODUCTS,CUSTOMERS into tablespace TBS1.
All the other tables + indexes of this user into tablespace TBS2.
set serveroutput on
--***********************************************
-- (Run the script as DBA user)
-- Parameters:
---------------
-- user_name : owner to which to move segments
-- TBS1 : Tablespace-A
-- Tables_TBS1 : list of tables to move to tablespace-A
-- TBS2 : tablespace to move all tables NOT in the list
-- put_Output : if 'true' - create output of operations (dbms_output)
-- put_Execute : if 'true' - execute the move operations
--***********************************************
declare
User_Name varchar2(20) default 'PROD_USER';
TBS1 varchar2(20) default 'TBS1';
Tables_TBS1 varchar2(1000) default 'EMP,PRODUCTS,CUSTOMERS';
TBS2 varchar2(20) default 'TBS2';
put_Output boolean default true;
put_Execute boolean default true;
Sort_memory number default 10000000;
TBS varchar2(20);
begin
Tables_TBS1 := upper(','||Tables_TBS1||',');
execute immediate 'alter session set sort_area_size = '||to_char(Sort_memory);
for crs in (select distinct s.owner, s.segment_name, s.partition_name, s.tablespace_name, s.segment_type from dba_segments s where owner like User_Name and segment_type in ('TABLE','TABLE PARTITION','TABLE SUBPARTITION')) loop
if instr(Tables_TBS1,','||crs.segment_name||',') != 0 then
TBS := TBS1;
else
TBS := TBS2;
end if;
if crs.tablespace_name = TBS then
--------------------------------------------------
-- Table is already in the correct tablespace.
-- check only indexes.
--------------------------------------------------
for crs2 in (select distinct s.owner, s.segment_name, s.partition_name, s.tablespace_name, s.segment_type from dba_indexes i, dba_segments s
where i.table_owner=crs.owner and i.table_name = crs.segment_name and s.segment_type in ('INDEX','INDEX PARTITION','INDEX SUBPARTITION')
and s.owner = i.owner and s.segment_name = i.index_name and (s.partition_name = crs.partition_name or s.partition_name is null and crs.partition_name is null)) loop
if instr(Tables_TBS1,','||crs.segment_name||',') != 0 then
TBS := TBS1;
else
TBS := TBS2;
end if;
if crs2.tablespace_name != TBS then
if crs2.segment_type in ('INDEX PARTITION') then
if put_Output then dbms_output.put_line ('> INDEX PARTITION '||crs2.owner||'.'||crs2.segment_name||':'||crs2.partition_name||' -> '||TBS); end if;
if put_Execute then execute immediate 'alter index '||crs2.owner||'.'||crs2.segment_name||' rebuild partition '||crs2.partition_name ||' tablespace '||TBS; end if;
elsif crs2.segment_type in ('INDEX SUBPARTITION') then
if put_Output then dbms_output.put_line ('> INDEX SUBPARTITION '||crs2.owner||'.'||crs2.segment_name||':'||crs2.partition_name||' -> '||TBS); end if;
if put_Execute then execute immediate 'alter index '||crs2.owner||'.'||crs2.segment_name||' rebuild subpartition '||crs2.partition_name ||' tablespace '||TBS; end if;
elsif crs2.segment_type = 'INDEX' then
if put_Output then dbms_output.put_line ('> INDEX '||crs2.owner||'.'||crs2.segment_name||' -> '||TBS); end if;
if put_Execute then execute immediate 'alter index '||crs2.owner||'.'||crs2.segment_name||' rebuild tablespace '||TBS; end if;
end if;
end if;
end loop;
else
--------------------------------------------------
-- Move Table AND all rebuild ALL the indexes.
--------------------------------------------------
if crs.segment_type in ('TABLE PARTITION') then
if put_Output then dbms_output.put_line ('TABLE PARTITION '||crs.owner||'.'||crs.segment_name||':'||crs.partition_name||' -> '||TBS); end if;
if put_Execute then execute immediate 'alter table '||crs.owner||'.'||crs.segment_name||' move partition '||crs.partition_name ||' tablespace '||TBS; end if;
elsif crs.segment_type in ('TABLE SUBPARTITION') then
if put_Output then dbms_output.put_line ('TABLE SUBPARTITION '||crs.owner||'.'||crs.segment_name||':'||crs.partition_name||' -> '||TBS); end if;
if put_Execute then execute immediate 'alter table '||crs.owner||'.'||crs.segment_name||' move subpartition '||crs.partition_name ||' tablespace '||TBS; end if;
elsif crs.segment_type = 'TABLE' then
if put_Output then dbms_output.put_line ('TABLE '||crs.owner||'.'||crs.segment_name||' -> '||TBS); end if;
if put_Execute then execute immediate 'alter table '||crs.owner||'.'||crs.segment_name||' move tablespace '||TBS; end if;
end if;
for crs2 in (select distinct s.owner, s.segment_name, s.partition_name, s.tablespace_name, s.segment_type from dba_indexes i, dba_segments s
where i.table_owner=crs.owner and i.table_name = crs.segment_name and s.segment_type in ('INDEX','INDEX PARTITION','INDEX SUBPARTITION')
and s.owner = i.owner and s.segment_name = i.index_name and (s.partition_name = crs.partition_name or s.partition_name is null and crs.partition_name is null)) loop
if crs2.segment_type in ('INDEX PARTITION') then
if put_Output then dbms_output.put_line ('> INDEX PARTITION '||crs2.owner||'.'||crs2.segment_name||':'||crs2.partition_name||' -> '||TBS); end if;
if put_Execute then execute immediate 'alter index '||crs2.owner||'.'||crs2.segment_name||' rebuild partition '||crs2.partition_name ||' tablespace '||TBS; end if;
elsif crs2.segment_type in ('INDEX SUBPARTITION') then
if put_Output then dbms_output.put_line ('> INDEX SUBPARTITION '||crs2.owner||'.'||crs2.segment_name||':'||crs2.partition_name||' -> '||TBS); end if;
if put_Execute then execute immediate 'alter index '||crs2.owner||'.'||crs2.segment_name||' rebuild subpartition '||crs2.partition_name ||' tablespace '||TBS; end if;
elsif crs2.segment_type = 'INDEX' then
if put_Output then dbms_output.put_line ('> INDEX '||crs2.owner||'.'||crs2.segment_name||' -> '||TBS); end if;
if put_Execute then execute immediate 'alter index '||crs2.owner||'.'||crs2.segment_name||' rebuild tablespace '||TBS; end if;
end if;
end loop;
end if;
end loop;
end;
/
Thursday, December 15, 2011
EMAIL NOTIFICATION changes in init.ora parameters
Auditing changes to init.ora parameters (via pfile or spfile) is an important DBA task. Sometimes, users who have the "alter system" privilege can make unauthorized changes to the initialization parameters in the spfile on a production database. Hence, auditing changes to parameters is a critical DBA task. Fortunately, it's quite simple to audit these changes by setting audit_sys_operations=true.
Here is a method to track changes to the initialization parameters. In order to track all changes to parameters, we can audit the ALTER SYSTEM statement for any specific user.
Follow the steps below to track changes to init.ora parameters:
1. ALTER SYSTEM SET audit_trail=db SCOPE=SPFILE;
2. SHUTDOWN IMMEDIATE
3. STARTUP
4. CREATE USER TEST IDENTIFIED BY TEST;
5. GRANT DBA TO TEST;
6. AUDIT ALTER SYSTEM BY test;
7. CONN TEST/TEST
8. ALTER SYSTEM SET AUDIT_TRAIL=db SCOPE=SPFILE;
9. Create an alert script to notify the DBA when a parameter has changed.
Let's start by finding the action_name in the dba_audit_trail view for the alter system command:
SQL> select username, timestamp, action_name from dba_audit_trail;
USERNAME TIMESTAMP ACTION_NAME
------------------------------ ------------- ----------------------------
TEST 29-MAY-09 ALTER SYSTEM
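A hedged query that an alert script could run to list ALTER SYSTEM commands issued in the last day (SQL_TEXT is populated only with audit_trail=db,extended):
SELECT username, timestamp, action_name, sql_text
FROM dba_audit_trail
WHERE action_name = 'ALTER SYSTEM'
AND timestamp > SYSDATE - 1
ORDER BY timestamp;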
STEP 1 - We can track changes made by the SYS user by setting the audit_sys_operations parameter to TRUE.
SQL> alter system set audit_sys_operations=true scope=spfile;
System altered.
STEP 2 - Next, we bounce the instance to make the change take effect:
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area 285212672 bytes
Fixed Size 1218992 bytes
Variable Size 92276304 bytes
Database Buffers 188743680 bytes
Redo Buffers 2973696 bytes
Database mounted.
Database opened.
Here we see our auditing parameters:
SQL> show parameter audit
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
audit_file_dest string /home/oracle/oracle/product/10.2.0/db_1/admin/fkhalid/adump
audit_sys_operations boolean TRUE
audit_syslog_level string
audit_trail string DB
SQL> alter system set audit_trail=db scope=spfile;
System altered.
STEP 3 - Here we go to the adump directory and examine the audit files:
SQL> host
[oracle@localhost bin]$ cd /home/oracle/oracle/product/10.2.0/db_1/admin/kam/adump/
[oracle@localhost adump]$ ls
ora_5449.aud ora_5476.aud ora_5477.aud ora_5548.aud ora_5575.aud ora_5576.aud
[oracle@localhost adump]$ cat ora_5576.aud
Audit file /home/oracle/oracle/product/10.2.0/db_1/admin/kam/adump/ora_5576.aud
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
ORACLE_HOME = /home/oracle/oracle/product/10.2.0/db_1/
System name: Linux
Node name: localhost.localdomain
Release: 2.6.18-92.el5
Version: #1 SMP Tue Jun 10 18:49:47 EDT 2008
Machine: i686
Instance name: kam
Redo thread mounted by this instance: 1
Oracle process number: 15
Unix process pid: 5576, image: oracle@localhost.localdomain (TNS V1-V3)
Fri May 29 02:38:30 2009
ACTION : 'alter system set audit_trail=db scope=spfile'
DATABASE USER: '/'
PRIVILEGE : SYSDBA
CLIENT USER: oracle
CLIENT TERMINAL: pts/2
STATUS: 0
STEP 4 - Now, create a crontab job to seek new entries in the adump directory.
#******************************************************
# list the full-names of all possible adump files . . . .
#******************************************************
rm -f /tmp/audit_list.lst
find $DBA/$ORACLE_SID/adump/*.aud -mtime -1 -print >> /tmp/audit_list.lst
STEP 5 - When found, send the DBA an e-mail:
# If an initialization parameter has changed, send an e-mail
if [ -f /tmp/audit_list.lst ]
then
# Now, be sure that we don't clog the mailbox.
# the following statement checks to look for existing mail,
# and only sends mail when mailbox is empty . . .
if [ ! -s /var/spool/mail/oramy_sid ]
then
cat /oracle/MY_SID/scripts/oracheck.log | mail oramy_sid
fi
sendmail . . .
fi
Please be aware that enabling auditing imposes additional work on the production database.
How to fix - ORA-12514
1. Test communication between the client and the listener
We will use tnsping to complete this step. It's a common misconception that tnsping tests connectivity to the instance. In actual fact, it only tests connectivity to the listener.
Here, we will use it to prove that a) the tnsnames.ora has the correct hostname and port, and b) that there is a listener listening on the specified host and port. Run tnsping:
tnsping <db_name>
oracle@bloo$ tnsping scr9
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS =
(PROTOCOL = TCP) (HOST = bloo)(PORT = 1521))) (CONNECT_DATA =
(SERVER = DEDICATED) (SERVICE_NAME = scr9)))
OK (40 msec)
If not, here are some common errors, and some suggestions for fixing them:
TNS-03505: Failed to resolve name
The specified database name was not found in the tnsnames.ora, onames or ldap. This means that tnsping hasn't even got as far as trying to make contact with a server - it simply can't find any record of the database that you are trying to tnsping. Make sure that you've spelled the database name correctly, and that it has an entry in the tnsnames.ora.
If you have a sqlnet.ora, look for the NAMES.DEFAULT_DOMAIN setting. If it is set, then all entries in your tnsnames.ora must have a matching domain suffix.
TNS-12545: Connect failed because target host or object does not exist
The host specified in the tnsnames is not contactable. Verify that you have spelled the host name correctly. If you have, try pinging the host with 'ping <hostname>'.
TNS-12541: TNS:no listener
The hostname was valid but the listener was not contactable. Things to check are that the tnsnames has the correct port (and hostname) specified, and that the listener is running on the server and using the correct port.
tnsping hangs for a long time
I've seen this happen in situations where there is something listening on the host/port - but it isn't an oracle listener. Make sure you have specified the correct port, and that your listener is running. If all looks ok, try doing a 'netstat -ap | grep 1521' (or whatever port you are using) to find out what program is listening on that port.
2. Attempt a connection to the instance
Once you have proven that the tnsnames is talking to the listener properly, the next step is to attempt a full connection to the instance. To do this we'll use sqlplus:
sqlplus [username]/[password]@<database_name>
If it works you will successfully log into the instance. If not, here are some common errors:
ORA-01017: invalid username/password; logon denied
This is actually a good error in these circumstances! Even though you didn't use the correct username or password, you must have successfully made contact with the instance.
ORA-12505: TNS:listener does not currently know of SID given in connect descriptor
Either the SID is misspelled in the tnsnames, or the listener isn't listening for it. Check the tnsnames.ora first. If it looks ok, do a 'lsnrctl status' on your server, to see what databases the listener is listening for.
ORA-12514: TNS:listener could not resolve SERVICE_NAME given in connect descriptor
This is quite a common error and it means that, while the listener was contactable, the database (or rather the service) specified in the tnsnames wasn't one of the things that it was listening out for.
Begin by looking at your tnsnames.ora. In it, you will see a setting like SERVICE_NAME=<service_name>.
If you are running a single instance database (i.e. not RAC), and you are sure that you are not using services, it might be easier to change SERVICE_NAME=<service_name> to SID=<SID> in your tnsnames. Using service names is the more modern way of doing things, and it does have benefits, but SID still works perfectly well (for now anyway).
If you would prefer to continue using service names, you must first check that you have not misspelled the service name in your tnsnames. If it looks alright, next check that the listener is listening for the service. Do this by running 'lsnrctl services' on your server. If there isn't an entry for your service, you need to make sure that the service_names parameter is set correctly on the database.
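A hedged illustration of checking and, if needed, adding a service name; 'orcl' is a placeholder:
SQL> show parameter service_names
SQL> alter system set service_names = 'orcl' scope = both;
Then run 'lsnrctl services' again on the server to confirm the listener has registered it.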
Wednesday, December 14, 2011
Missing ArchiveLog at Standby server
Switchover and Failover steps
1. SELECT SWITCHOVER_STATUS FROM V$DATABASE;
SWITCHOVER_STATUS
-----------------
TO STANDBY
1 row selected
2. ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY;
3. SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
4. SELECT SWITCHOVER_STATUS FROM V$DATABASE;
SWITCHOVER_STATUS
------------
TO PRIMARY
5. At the standby: ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
6. ALTER DATABASE OPEN; (if the database has been opened read-only since it was last started, shut down and restart it instead)
7. ALTER SYSTEM SWITCH LOGFILE;
FAILOVER
First resolve gap:
A) Identify and resolve any gaps in the archived redo log files.
SQL> SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;
THREAD# LOW_SEQUENCE# HIGH_SEQUENCE#
---------- ------------- --------------
1 90 92
ALTER DATABASE REGISTER PHYSICAL LOGFILE 'filespec1';
B) Repeat A) until all gaps are resolved.
C) Copy any other missing archived redo log files.
SQL> SELECT UNIQUE THREAD# AS THREAD, MAX(SEQUENCE#)
2> OVER (PARTITION BY thread#) AS LAST from V$ARCHIVED_LOG;
THREAD LAST
---------- ----------
1 100
ALTER DATABASE REGISTER PHYSICAL LOGFILE 'filespec1';
Now initiate failover at the standby:
1. ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH FORCE;
2. ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
3. ALTER DATABASE OPEN; (if the database has been opened read-only since it was last started, shut down and restart it instead)