For example: if the existing Oracle installation is in c:\Oracle\product\10.2.0\db_1, then you have to select the same path when you run Setup.exe. After a successful installation, start the Listener, DB Console, and other services.
Issue: how to recover a deleted file from a network drive
Solution: settings on the server
1. Open the properties of the shared network drive. 2. Select the Shadow Copies tab. 3. Configure the settings as per your requirement.
Settings on the client
To recover a file that was accidentally deleted:
1. Locate the folder where the deleted file was stored (on the network), right-click the folder, and click Properties. The Properties dialog box will appear.
2. On the Previous Versions tab, double-click the most recent version of the folder that contains the file that you want to recover. A list of files that are stored in that previous version will appear.
3. Right-click the file that was deleted, click Copy, and paste it to the desired location.
2) In Oracle SQL Developer, open Tools -> Preferences -> Database -> Third Party JDBC Drivers, click Add Entry, browse to the unzipped driver, and add the jtds-1.2.jar file.
3) Error: I/O Error: SSO Failed: Native SSPI library not loaded. Check the java.library.path system property.
Copy the file 'jtds-1.2.2-dist\x86\SSO\ntlmauth.dll' from the unzipped jTDS distribution to \jdk\jre\bin.
-------------------Backup Format-----
%c The copy number of the backup piece within a set of duplexed backup pieces. If you did not duplex a backup, then this variable is 1 for backup sets and 0 for proxy copies. If duplexing is enabled, then the variable shows the copy number. The maximum value for %c is 256.
%d The name of the database.
%D The current day of the month (in format DD)
%F Combination of DBID, day, month, year, and sequence into a unique and repeatable generated name.
%M The month (format MM)
%n The name of the database, padded on the right with x characters to a total length of eight characters. For example, if scott is the database name, then %n = scottxxx.
%p The piece number within the backup set. This value starts at 1 for each backup set and is incremented by 1 as each backup piece is created. Note: If you specify PROXY, then the %p variable must be included in the FORMAT string either explicitly or implicitly within %U.
%s The backup set number. This number is a counter in the control file that is incremented for each backup set. The counter value starts at 1 and is unique for the lifetime of the control file. If you restore a backup control file, then duplicate values can result. Also, CREATE CONTROLFILE initializes the counter back to 1.
%t The backup set time stamp, which is a 4-byte value derived as the number of seconds elapsed since a fixed reference time. The combination of %s and %t can be used to form a unique name for the backup set.
%T The year, month, and day (YYYYMMDD)
%u An 8-character name constituted by compressed representations of the backup set number and the time the backup set was created.
%U A convenient shorthand for %u_%p_%c that guarantees uniqueness in generated backup filenames. If you do not specify a format, RMAN uses %U by default.
%Y The year (YYYY)
%% Specifies the '%' character. e.g. %%Y translates to %Y.
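As an illustration, the variables above can be combined in an RMAN FORMAT string (a sketch; the backup path here is hypothetical):

```
RMAN> BACKUP DATABASE FORMAT 'd:/backup/%d_%T_s%s_p%p.bkp';
```

With this format, each piece name embeds the database name (%d), the date (%T), the backup set number (%s), and the piece number (%p).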
------------ARCHIVELOG Format------
%s log sequence number
%S log sequence number, zero filled
%t thread number
%T thread number, zero filled
%a activation ID
%d database ID
%R resetlogs ID that ensures unique names are constructed for the archived log files across multiple incarnations of the database
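For example, the archived-log name template is set through the LOG_ARCHIVE_FORMAT parameter using the variables above; including the thread number, sequence number, and resetlogs ID keeps the names unique (a sketch, with a hypothetical file name pattern):

```
SQL> ALTER SYSTEM SET log_archive_format='arch_%t_%s_%R.arc' SCOPE=spfile;
```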
/f - Forces the process to be terminated.
/im - Specifies the image name of the process to be terminated.
To kill all these processes, I made a batch file that contains the forceful-termination command for each of these programs, and then I added the batch file to Windows startup.
1. Open Notepad and paste the commands, one per line.
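A sketch of such a batch file follows; the image names here are hypothetical placeholders, so substitute the processes you actually want to terminate:

```
@echo off
rem Forcefully terminate each unwanted program by image name (names below are examples)
taskkill /f /im example1.exe
taskkill /f /im example2.exe
```

Save the file with a .bat extension and place a shortcut to it in the Startup folder.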
SELECT pr.username "O/S Id", ss.username "Oracle User Id", ss.status "status",
       ss.sid "Session Id", ss.serial# "Serial No", lpad(pr.spid,7) "Process Id",
       substr(sqa.sql_text,1,900) "Sql Text", first_load_time "Load Time"
FROM v$process pr, v$session ss, v$sqlarea sqa
WHERE pr.addr = ss.paddr
  AND ss.username is not null
  AND ss.sql_address = sqa.address(+)
  AND ss.sql_hash_value = sqa.hash_value(+)
  AND ss.status = 'ACTIVE'
ORDER BY 1,2,7;
spool out;
=============
set lin 132
set pages 66
column "SID" format 999
column "SER" format 99999
column "Table" format A10
column "SPID" format A5
column "CPID" format A5
column "OS User" format A7
column "SQL Text" format A40 word_wrapped
column "Mode" format A20
column "Node" format A10
column "Terminal" format A8
rem spool /tmp/locks.lst
select s.sid "SID", s.serial# "SER", o.object_name "Table",
       s.osuser "OS User", s.machine "Node", s.terminal "Terminal",
       --p.spid "SPID", --s.process "CPID",
       decode (s.lockwait, null, 'Have Lock(s)', 'Waiting for <' || b.sid || '>') "Mode",
       substr (c.sql_text, 1, 150) "SQL Text"
from v$lock l, v$lock d, v$session s, v$session b, v$process p,
     v$transaction t, sys.dba_objects o, v$open_cursor c
where l.sid = s.sid
  and o.object_id (+) = l.id1
  and c.hash_value (+) = s.sql_hash_value
  and c.address (+) = s.sql_address
  and s.paddr = p.addr
  and d.kaddr (+) = s.lockwait
  and d.id2 = t.xidsqn (+)
  and b.taddr (+) = t.addr
  and l.type = 'TM'
group by o.object_name, s.osuser, s.machine, s.terminal, p.spid, s.process,
         s.sid, s.serial#,
         decode (s.lockwait, null, 'Have Lock(s)', 'Waiting for <' || b.sid || '>'),
         substr (c.sql_text, 1, 150)
order by decode (s.lockwait, null, 'Have Lock(s)', 'Waiting for <' || b.sid || '>') desc,
         o.object_name asc, s.sid asc;
rem spool off;
Not able to send mail to @jalindia.co.in due to a jalindia.co.in DNS issue: they use their own DNS to resolve jalindia.co.in.
They should use some other DNS server to resolve it. Resolve it on our DNS server, uniconindia.in.
============dns server 10.100.0.100============
Create a zone file for jalindia.co.in at the DNS server and make a zone file entry for jalindia.co.in.zone in /etc/named.conf.
vi /var/named/chroot/etc/named.conf
zone "jalindia.co.in" IN {
    type master;
    file "jalindia.co.in.zone";
    allow-update { none; };
};
jalindia.co.in.               86400 IN MX 1 jal-gate-svr1.jalindia.co.in.
jalindia.co.in.               86400 IN MX 2 jal-gate-svr2.jalindia.co.in.
jalindia.co.in.               86400 IN NS secondarydns.jalindia.co.in.
jalindia.co.in.               86400 IN NS primarydns.jalindia.co.in.
jal-gate-svr1.jalindia.co.in. 86400 IN A  115.119.16.172
jal-gate-svr2.jalindia.co.in. 86400 IN A  115.119.16.168
primarydns.jalindia.co.in.    86400 IN A  10.10.10.10
secondarydns.jalindia.co.in.  86400 IN A  10.10.10.11
rman> list incarnation of database;
sql> shutdown immediate
sql> startup mount
rman> reset database to incarnation 3;
rman> restore database until scn 4312345;
rman> recover database until scn 4312345;
rman> list incarnation;
rman> alter database open resetlogs;
rman> list incarnation;
================
LD ASP Error:
We hid the client, but data is still fetched from some branch.
But on the LD ASP server, no data is fetched for that hidden client.
Now configure the IIS application so that it won't cache your ASP pages. But this alone is not enough. At the top of each .asp page that you do not want cached, add the following line: <% Response.Expires=0 %>
==========
We stopped and restarted both the web site and the application pool; we also stopped and restarted the IIS Admin Service and the WWW Publishing Service.
If the problem is not solved, then it is due to the networking stack between the client browser and the server. It looks like a proxy or a browser-side cache.
=============Browser-side Cache===========
Retain server results in a browser-side cache. The cache holds query-result pairs; the queries are the cache keys and the server results are the values. So, whenever the browser performs an XMLHttpRequest call, it first checks the cache. If the query is held as a key in the cache, the corresponding value is used as the result, and there is no need to access the server.
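The idea above can be sketched in a few lines of JavaScript (illustrative only; the function and variable names are assumptions, and in a real page fetchFromServer would wrap the XMLHttpRequest call):

```javascript
// Browser-side query cache: maps query -> server result.
const cache = {};

// Return the cached result for a query if present; otherwise call the
// server (via fetchFromServer), store the result, and return it.
function cachedFetch(query, fetchFromServer) {
  if (Object.prototype.hasOwnProperty.call(cache, query)) {
    return cache[query]; // cache hit: no server round-trip
  }
  const result = fetchFromServer(query); // cache miss: go to the server
  cache[query] = result;
  return result;
}
```

Repeated lookups for the same query are then served entirely from the browser, which is what removes the load from the server.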
==========
ipconfig /flushdns
Modify the behavior of the Microsoft Windows DNS caching algorithm by setting two registry entries in the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Dnscache\Parameters registry key.
The MaxCacheTtl value represents the maximum time that the results of a DNS lookup will be cached. The default value is 86,400 seconds. If you set this value to 1, DNS entries will only be cached for a single second.
MaxNegativeCacheTtl represents the maximum time that the results of a failed DNS lookup will be cached. The default value is 900 seconds. If you set this value to 0, failed DNS lookups will not be cached.
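Both values can be set from an elevated command prompt, for example (a sketch; both entries are DWORDs and the times are in seconds):

```
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Dnscache\Parameters" /v MaxCacheTtl /t REG_DWORD /d 1 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Dnscache\Parameters" /v MaxNegativeCacheTtl /t REG_DWORD /d 0 /f
```

Restart the DNS Client service (or reboot) for the change to take effect.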
--------------A---------------------------------------
crosscheck copy of archivelog all;
crosscheck archivelog all;
resync catalog;
delete force obsolete;
delete expired archivelog all;
Note: not able to resync, because we cannot connect to the recovery catalog while the database is not open.
---------------B---------------------------------------- Point-in-Time Recovery -------------
1) restore database UNTIL TIME "TO_DATE('03/27/09 10:05:00','MM/DD/YY HH24:MI:SS')";
   recover database UNTIL TIME "TO_DATE('03/27/09 10:05:00','MM/DD/YY HH24:MI:SS')";
Errors seen: RMAN-03002, RMAN-20207
2) restore database until scn 1000;
   recover database until scn 1000;
3) restore database until sequence 923 thread 1;
   recover database until sequence 923 thread 1;
If a backup fails or stops, then I will get a notification.
Steps:
- Open a database target (SNS0910)
- Click on User-Defined Metrics
- Create
- Metric Name = Last datafile backup
- Type = Number
- Output = Single Value
- SQL Query: the time in hours since the oldest checkpoint time of the newest backup
select (sysdate-min(t))*24
from ( select max(b.CHECKPOINT_TIME) t
       from v$backup_datafile b, v$tablespace ts, v$datafile f
       where INCLUDED_IN_DATABASE_BACKUP='YES'
         and f.file#=b.file# and f.ts#=ts.ts#
       group by f.file# )
- Credentials: dbsnmp/*****
- Threshold Operator: > Warning 24 Critical 48
- Repeat every 1 hour
- OK
Do the same for redo logs, with a name of "Last redolog backup" and a query of
select (sysdate-max(NEXT_TIME))*24 from v$BACKUP_REDOLOG
It is now possible to define alerts:
- Preferences - Notification Rules - Create
- Apply to specific targets: add your productive databases group
- Deselect Availability Down
- Metric: Add: Show all: check User Defined Metric: select Last datafile backup, Last redolog backup
- Severity: Critical and Clear
- Policy: None
- Method: Email
======================
select to_char(max(completion_time) ,'DDMMYYHH24MISS') lastbackup from (SELECT completion_time FROM v$backup_set UNION SELECT completion_time FROM v$datafile_copy union select sysdate-365 from dual )
Warning 0 Critical 320000000000
===========
SELECT ELAPSED_SECONDS/60 minutes FROM V$RMAN_BACKUP_JOB_DETAILS ORDER BY SESSION_KEY desc;
OEM > Maintenance > Manage Current Backups > Backup Sets: Crosscheck > Validate > Catalog Additional Files > Crosscheck All > Delete All Obsolete > Delete All Expired
Image Copies: Crosscheck > Validate > Catalog Additional Files > Crosscheck All > Delete All Obsolete > Delete All Expired
-----------------Solution through the command prompt----------
RMAN> crosscheck archivelog all;
If the error is still there, then run the following:
RMAN> crosscheck copy of archivelog all;
RMAN> crosscheck archivelog all;
RMAN> resync catalog;
RMAN> delete force obsolete;
RMAN> delete expired archivelog all;
DBMS_DATAPUMP.metadata_filter( handle => l_dp_handle, name => 'SCHEMA_EXPR', value => '= ''LDBO''');
DBMS_DATAPUMP.start_job(l_dp_handle);
DBMS_DATAPUMP.detach(l_dp_handle); END; /
----------------ERROR---------------------------------
ORA-39001: invalid argument value
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 2926
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3162
ORA-06512: at line 14
alter system set smtp_out_server = 'mail.uniconindia.in' scope=spfile;
shutdown immediate; startup
grant execute on utl_mail to ldbo;
create or replace trigger ldbo.db_shutdown
before shutdown on database
begin
  sys.utl_mail.send (
    sender     => 'dbanotification@uniconindia.in',
    recipients => 'dbamonitoring@uniconindia.in',
    subject    => 'Oracle Database Server DOWN',
    message    => 'May be LD Server Down for maintenance'|| ' but also contact to DBA for further details. ' );
end;
/
create or replace trigger ldbo.db_startup
after startup on database
begin
  sys.utl_mail.send (
    sender     => 'dbanotification@uniconindia.in',
    recipients => 'dbamonitoring@uniconindia.in',
    subject    => 'Oracle Database Server UP',
    message    => 'LD Server OPEN for normal use.' );
end;
/
Oracle Data Guard ensures high availability, data protection, and disaster recovery for enterprise data. Data Guard provides a comprehensive set of services that create, maintain, manage, and monitor one or more standby databases to enable production Oracle databases to survive disasters and data corruptions. Data Guard maintains these standby databases as transactionally consistent copies of the production database. Then, if the production database becomes unavailable because of a planned or an unplanned outage, Data Guard can switch any standby database to the production role, minimizing the downtime associated with the outage. Data Guard can be used with traditional backup, restoration, and cluster techniques to provide a high level of data protection and data availability.
Data Guard Configurations:
A Data Guard configuration consists of one production database and one or more standby databases. The databases in a Data Guard configuration are connected by Oracle Net and may be dispersed geographically. There are no restrictions on where the databases are located, provided they can communicate with each other. For example, you can have a standby database on the same system as the production database, along with two standby databases on other systems at remote locations.
You can manage primary and standby databases using the SQL command-line interfaces or the Data Guard broker interfaces, including a command-line interface (DGMGRL) and a graphical user interface that is integrated in Oracle Enterprise Manager.
Primary Database
A Data Guard configuration contains one production database, also referred to as the primary database, that functions in the primary role. This is the database that is accessed by most of your applications.
The primary database can be either a single-instance Oracle database or an Oracle Real Application Clusters database.
Standby Database
A standby database is a transactionally consistent copy of the primary database. Using a backup copy of the primary database, you can create up to nine standby databases and incorporate them in a Data Guard configuration. Once created, Data Guard automatically maintains each standby database by transmitting redo data from the primary database and then applying the redo to the standby database.
Similar to a primary database, a standby database can be either a single-instance Oracle database or an Oracle Real Application Clusters database.
A standby database can be either a physical standby database or a logical standby database:
Physical standby database
Provides a physically identical copy of the primary database, with on disk database structures that are identical to the primary database on a block-for-block basis. The database schema, including indexes, are the same. A physical standby database is kept synchronized with the primary database by recovering the redo data received from the primary database.
Logical standby database
Contains the same logical information as the production database, although the physical organization and structure of the data can be different. The logical standby database is kept synchronized with the primary database by transforming the data in the redo received from the primary database into SQL statements and then executing the SQL statements on the standby database. A logical standby database can be used for other business purposes in addition to disaster recovery requirements. This allows users to access a logical standby database for queries and reporting purposes at any time. Also, using a logical standby database, you can upgrade Oracle Database software and patch sets with almost no downtime. Thus, a logical standby database can be used concurrently for data protection, reporting, and database upgrades.
Data Guard Services
The following sections explain how Data Guard manages the transmission of redo data, the application of redo data, and changes to the database roles:
Log Transport Services
Control the automated transfer of redo data from the production database to one or more archival destinations.
Log Apply Services
Apply redo data on the standby database to maintain transactional synchronization with the primary database. Redo data can be applied either from archived redo log files, or, if real-time apply is enabled, directly from the standby redo log files as they are being filled, without requiring the redo data to be archived first at the standby database.
Role Management Services
Change the role of a database from a standby database to a primary database or from a primary database to a standby database using either a switchover or a failover operation.
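If standby redo logs are configured, the real-time apply mode mentioned above is enabled on the standby with a command like the following (a sketch):

```
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;
```

Without USING CURRENT LOGFILE, redo is applied only from completed archived log files.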
A database can operate in one of the two mutually exclusive roles: primary or standby database.
Failover
During a failover, one of the standby databases takes the primary database role.
Switchover
Primary and standby database can continue to alternate roles. The primary database can switch the role to a standby database; and one of the standby databases can switch roles to become the primary.
The main difference between physical and logical standby databases is the manner in which log apply services apply the archived redo data:
For physical standby databases, Data Guard uses Redo Apply technology, which applies redo data on the standby database using standard recovery techniques of an Oracle database,
For logical standby databases, Data Guard uses SQL Apply technology, which first transforms the received redo data into SQL statements and then executes the generated SQL statements on the logical standby database
Data Guard Interfaces
Oracle provides three ways to manage a Data Guard environment:
1. SQL*Plus and SQL Statements
Using SQL*Plus and SQL commands to manage the Data Guard environment. The following SQL statement initiates a switchover operation:
SQL> alter database commit to switchover to physical standby;
2. Data Guard Broker GUI Interface (Data Guard Manager)
Data Guard Manager is a GUI version of the Data Guard broker interface that allows you to automate many of the tasks involved in configuring and monitoring a Data Guard environment.
3. Data Guard Broker Command-Line Interface (CLI)
It is an alternative to using Data Guard Manager. It is useful if you want to use the broker from batch programs or scripts. You can perform most of the activities required to manage and monitor the Data Guard environment using the CLI.
The Oracle Data Guard broker is a distributed management framework that automates and centralizes the creation, maintenance, and monitoring of Data Guard configurations. The following are some of the operations that the broker automates and simplifies:
Automated creation of Data Guard configurations incorporating a primary database, a new or existing (physical or logical) standby database, log transport services, and log apply services, where any of the databases could be Real Application Clusters (RAC) databases.
Adding up to 8 additional new or existing (physical or logical, RAC, or non-RAC) standby databases to each existing Data Guard configuration, for a total of one primary database, and from 1 to 9 standby databases in the same configuration.
Managing an entire Data Guard configuration, including all databases, log transport services, and log apply services, through a client connection to any database in the configuration.
Invoking switchover or failover with a single command to initiate and control complex role changes across all databases in the configuration.
Monitoring the status of the entire configuration, capturing diagnostic information, reporting statistics such as the log apply rate and the redo generation rate, and detecting problems quickly with centralized monitoring, testing, and performance tools.
You can perform all management operations locally or remotely through the broker’s easy-to-use interfaces: the Data Guard web pages of Oracle Enterprise Manager, which is the broker’s graphical user interface (GUI), and the Data Guard command-line interface (CLI) called DGMGRL.
Configuring Oracle DataGuard using SQL commands - Creating a physical standby database
Step 1) Getting the primary database ready (on Primary host)
We are assuming that you are using an SPFILE for your current (primary) instance. You can check whether your instance is using an SPFILE by querying the spfile parameter.
SQL> show parameter spfile
NAME TYPE VALUE
———————————— ———– ——————————
spfile string
You can create spfile as shown below. (on primary host)
SQL> create spfile from pfile;
File created.
The primary database must meet two conditions before a standby database can be created from it:
It must be in force logging mode and
It must be in archive log mode (automatic archiving must also be enabled and a local archiving destination must be defined).
Before putting database in force logging mode, check if the database is force logging mode using
SQL> select force_logging from v$database;
FOR
—
NO
Also the database is not in archive log mode as shown below.
SQL> archive log list
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            d:/oracle/product/10.2.0/dbs/arch
Oldest online log sequence     10
Current log sequence           12
Now we will start the database in archive log and force logging mode.
Starting database in archive log mode:-
We need to set following 2 parameters to set the database in archive log mode.
If a database is in force logging mode, all changes, except those in temporary tablespaces, will be logged, independently from any nologging specification.
nls_language = AMERICAN
nls_territory = AMERICA
background_dump_dest = d:/oracle/product/10.2.0/db_1/admin/orcl/bdump
user_dump_dest = d:/oracle/product/10.2.0/db_1/admin/orcl/udump
core_dump_dest = d:/oracle/product/10.2.0/db_1/admin/orcl/cdump
db_unique_name = 'PRIMARY'
log_archive_dest_1 = 'Location=d:/oracle/product/10.2.0/archive/orcl'
log_archive_dest_state_1 = ENABLE
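The parameters above only set up the archive destination; the switch into archive log and force logging mode itself is done with a command sequence like the following (a sketch; run on the primary while connected as SYSDBA):

```
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database force logging;
SQL> alter database open;
```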
Step 2) Creating the standby database
Since we are creating a physical stand by database we have to copy all the datafiles
of primary database to standby location. For that, you need to shutdown main database, copy the files of main database to new location and start the main database again.
Step 3) Creating a standby database control file
A control file needs to be created for the standby system. Execute the following on the primary system:
SQL> alter database create standby controlfile as 'd:/oracle/product/10.2.0/dbf/standby.ctl';
Database altered.
Step 4) Creating an init file
SQL> show parameters spfile
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      d:/oracle/product/10.2.0/dbs/spfiletest.ora
Step 5) Changing init.ora file for standby database
A pfile is created from the spfile. This pfile needs to be modified and then used on the standby system to create an spfile from it. So create a pfile from the spfile on the primary database:
create pfile='/some/path/to/a/file' from spfile
SQL> create pfile='d:/oracle/product/10.2.0/dbs/standby.ora' from spfile;
File created.
The following parameters must be modified or added:
control_files
standby_archive_dest
db_file_name_convert (only if directory structure is different on primary and standby server)
log_file_name_convert (only if directory structure is different on primary and standby server)
log_archive_format
log_archive_dest_1 — This value is used if a standby becomes primary during a switchover or a failover.
Net service names must be created on both the primary and standby database that will be used by log transport services. That is: something like to following lines must be added in the tnsnames.ora.
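For example, entries like the following could be added to tnsnames.ora on both hosts (a sketch; the host names and service name are assumptions, while the aliases to_standby and to_primary match the sqlplus connection tests later in this section):

```
to_standby =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby_host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )

to_primary =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary_host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )
```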
Total System Global Area 629145600 bytes
Fixed Size 1980744 bytes
Variable Size 171968184 bytes
Database Buffers 452984832 bytes
Redo Buffers 2211840 bytes
SQL> alter database mount standby database;
Database altered.
Try to connect to stand by database from primary database
Following connections should work now
From Primary host:
sqlplus sys/oracle@orcl as sysdba -> This will connect to the primary database
sqlplus sys/oracle@to_standby as sysdba -> This will connect to the standby database from the primary host
From Standby host:
sqlplus sys/oracle@orcl as sysdba -> This will connect to the standby database
sqlplus sys/oracle@to_primary as sysdba -> This will connect to the primary database from the standby host
LOG SHIPPING
On PRIMARY site enable Log_archive_dest_state_2 to start shipping archived redo logs.
SQL> Alter system set Log_archive_dest_state_2=ENABLE scope=both;
System Altered.
Check the sequence # and the archiving mode by executing following command.
SQL> Archive Log List
Then switch the logfile on primary side
SQL> Alter system switch logfile;
System Altered.
Start physical apply log service on standby side.
SQL> Alter Database Recover Managed Standby Database Disconnect;
Database Altered.
Now the session will be available to you and MRP will work as a background process and apply the redo logs.
You can check whether the log is applied or not by querying V$ARCHIVED_LOG.
SQL> Select Name, Applied, Archived from v$Archived_log;
This query will return the name of archived files and their status of being archived and applied.
Once you complete above step, you are done with physical standby. We can verify the log files which got applied using following commands.
On Standby side
SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
Errors:
ORA-31626: job does not exist
ORA-31633: unable to create master table "SNSEXPORT.EXPORTTEST"
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT", line 863
ORA-00955: name is already used by an existing object
Exception:
ORA-31626: job does not exist
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 911
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4356
ORA-06512: at line 2
Solution: add the following privileges for the snsexport user:
CREATE SESSION, BACKUP ANY TABLE, SELECT ANY TABLE, SELECT ANY SEQUENCE, EXECUTE ANY PROCEDURE, CREATE ANY DIRECTORY, EXECUTE ANY TYPE, ADMINISTER RESOURCE MANAGER, RESUMABLE, SELECT ANY DICTIONARY, READ ANY FILE GROUP, CREATE TABLE
------------------------------------- 3) Now the following error:
ORA-20204: User does not exist: SNSEXPORT
ORA-06512: at "SYSMAN.MGMT_USER", line 122
ORA-06512: at "SYSMAN.MGMT_JOBS", line 142
ORA-06512: at "SYSMAN.MGMT_JOBS", line 78
ORA-06512: at line 1
Solution: add the role MGMT_USER
--------------------------
4) ORA-20204: User does not exist: SNSEXPORT
ORA-06512: at "SYSMAN.MGMT_USER", line 122
ORA-06512: at "SYSMAN.MGMT_JOBS", line 142
ORA-06512: at "SYSMAN.MGMT_JOBS", line 78
ORA-06512: at line 1
Solution:
add the user snsexport
- Log in as user SYSTEM (or user SYS) to the 'Enterprise Manager 10g Database Control'
- At the top right, click on the link 'Setup'
- On the page 'Administrators', click on the button 'Create'
- On the page 'Create Administrator: Properties', add the user snsexport
- Click on the button 'Finish'
- On the page 'Create Administrator: Review', click on the button 'Finish'
- On the page 'Administrators', confirm that the user has been added
- At the top right, click on the link 'Logout'
--------------------------
RUN { allocate auxiliary channel c1 DEVICE TYPE disk; DUPLICATE TARGET DATABASE to test nofilenamecheck ; }
6) Duplicate the database to a point before the current time
RUN { allocate auxiliary channel c1 DEVICE TYPE disk; DUPLICATE TARGET DATABASE to test nofilenamecheck UNTIL TIME "TO_DATE('06/09/2009','MM/DD/YYYY')"; }
set head off
set feed off
set trimspool on
set linesize 32767
set pagesize 32767
set echo off
set termout off
spool c:\abc.xls
select * from dba_users;
spool off
exit
SPOOL TO HTML
set pagesize 9999
set feedback off
set termout on
set newpage 1
set underline on
set markup html on
spool c:\abc.html
select * from dba_users;
spool off
exit
OEM OS Host Credentials (User Authentication Error)
Validation Error - Connection to host as user kgupta2 failed: ERROR: Wrong password for user
Solution: OEM > Preferences > Preferred Credentials > Target Types: Host. Provide the hostname, username, and password as described in the following steps.
You have to provide the 'Log on as a batch job' privilege:
1. Go to control panel/administrative tools a. click on "local security policy" b. click on "local policies" c. click on "user rights assignments" d. double click on "log on as a batch job" e. click on "add" and add the user that was entered in the "normal username" or "privileged username" section of the EM Console.
2. Go to the Preferences link in the EM GUI a. click on Preferred Credentials (link on the left menu) b. under "Target Type: Host" click on "set credentials" c. enter the OS user who has logon as a batch job privilege into the "normal username" and "normal password" fields
3. Test the connection a. while in the Set Credentials window, click on "Test"