Hello,
This post explains how to restart Oracle Data Guard.
Restarting Data Guard
Pausing Data Guard
On standby database (BBMLDR)
[/opt/oracle/app/oracle]> sqlplus /nolog
SQL*Plus: Release 10.2.0.3.0 - Production on Fri Aug 20 10:26:10 2010
Copyright (c) 1982, 2006, Oracle. All Rights Reserved.
SQL> conn sys as sysdba
Enter password: ******
Connected.
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
Restarting
On standby database (BBMLDR)
sqlplus / as sysdba
SQL> startup nomount; (if the database server has been shut down)
SQL> alter database mount standby database; (if the database server has been shut down)
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
or
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE;
On Primary database (BBMLPROD)
sqlplus /as sysdba
SQL> Alter system switch logfile;
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
(The statement above archives the current log file and ships the previous archived log to BBMLDR.)
Verify in the alert log of the primary database: vi $HOME/admin/BBMLPROD/bdump/ALert8.log
e.g. ARC1: Standby redo logfile selected for thread 3 sequence 1487 for destination LOG_ARCHIVE_DEST_2
Verify on the standby database:
select sequence#, name, first_time, next_time, completion_time, applied from v$archived_log order by first_time;
Check the last set of rows to verify that the FIRST_TIME, NEXT_TIME, and COMPLETION_TIME columns show at least the current date. The APPLIED column should read YES throughout the data set, except on the last few rows where the logs have not yet been applied to the database.
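As an additional sanity check (not part of the procedure above), you can confirm that managed recovery is running again by querying V$MANAGED_STANDBY on the standby:

```sql
-- Run on the standby: once managed recovery is active again, MRP0
-- should show a status such as APPLYING_LOG or WAIT_FOR_LOG, and the
-- RFS processes show redo being received from the primary.
SELECT process, status, thread#, sequence#
FROM   v$managed_standby
WHERE  process LIKE 'MRP%' OR process LIKE 'RFS%';
```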
Regards,
Sowmya (OSR)
Friday, August 20, 2010
Thursday, August 19, 2010
How to Open the Standby Database when the Primary is Lost - Oracle Data Guard
Hello,
Here are the steps to activate and open the standby database when the primary is lost.
1. Startup Mount
2. Check status
3. Recover if you have logs to apply
4. Finish the Recovery Process
5. Activate the Standby Database
6. Check the new status
7. Open the Database
1. Startup Mount
[/opt/oracle/app/oracle/product/10.2.0/Db_1]> sqlplus /nolog
SQL*Plus: Release 10.2.0.3.0 - Production on Wed Aug 18 10:34:16 2010
Copyright (c) 1982, 2006, Oracle. All Rights Reserved.
SQL> conn sys as sysdba
Enter password:
Connected.
SQL> Startup Mount
2. Check Status
SQL> select NAME, LOG_MODE, OPEN_MODE, PROTECTION_MODE, DATABASE_ROLE, ACTIVATION# from v$database;
NAME LOG_MODE OPEN_MODE PROTECTION_MODE DATABASE_ROLE
--------- ------------ ---------- -------------------- ----------------
ACTIVATION#
-----------
BBMLPROD ARCHIVELOG MOUNTED MAXIMUM PERFORMANCE PHYSICAL STANDBY
3731587029
3. Recover if you have logs to apply
In this example we assume the primary is lost and there are no more archived logs to apply.
SQL> Recover Standby Database;
4. Finish the Recovery Process
SQL> Alter database recover managed standby database finish;
Database altered
5. Activate the Standby Database
SQL> Alter Database Activate physical standby database;
Database altered
6. Check the new status
SQL> select NAME, OPEN_MODE, PROTECTION_MODE, DATABASE_ROLE from v$database;
NAME      OPEN_MODE  PROTECTION_MODE      DATABASE_ROLE
--------- ---------- -------------------- ----------------
BBMLPROD  MOUNTED    MAXIMUM PERFORMANCE  PRIMARY
7. Open Database
SQL> Alter database open;
Database altered
SQL> select NAME, OPEN_MODE, PROTECTION_MODE, DATABASE_ROLE from v$database;
NAME      OPEN_MODE  PROTECTION_MODE      DATABASE_ROLE
--------- ---------- -------------------- ----------------
BBMLPROD  READ WRITE MAXIMUM PERFORMANCE  PRIMARY
Regards,
Sowmya (OSR)
Thursday, August 12, 2010
Oracle DBA Check List
Hello,
Here I am going to explain the daily, weekly, and monthly responsibilities of an Oracle DBA.
Oracle DBA Checklist
Daily Procedures
1. Verify all instances are up and running: run daily reports via Enterprise Manager probe events or through Oracle Grid
2. Look for any new alert log entries: check for any ORA- errors in your alert log, $ORACLE_HOME/admin/bdump/alert_ORACLE_SID.log
3. Verify success of database backup
4. Verify success of database archiving to tape
5. Verify DBSNMP is running
6. Verify free space in tablespaces
7. Verify rollback segment
8. Identify bad growth projection
9. Verify whether archivelogs are backed up successfully to your tapes
10. Verify rollback segment
11. Identify space bound objects
12. Review contention for CPU, memory, network, or disk resources
13. Copy Archived logs to standby Database and Roll Forward
14. Read DBA manuals, forums, journal and so on.
15. Analyze tables and indexes if needed
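For item 6 above (free space in tablespaces), one common approach is a query joining DBA_DATA_FILES and DBA_FREE_SPACE; this is a generic sketch, not a script from the original checklist:

```sql
-- Free space per tablespace, as a percentage of allocated space
SELECT df.tablespace_name,
       ROUND(df.total_mb, 1)                            AS total_mb,
       ROUND(NVL(fs.free_mb, 0), 1)                     AS free_mb,
       ROUND(100 * NVL(fs.free_mb, 0) / df.total_mb, 1) AS pct_free
FROM   (SELECT tablespace_name, SUM(bytes) / 1048576 AS total_mb
        FROM   dba_data_files GROUP BY tablespace_name) df
LEFT JOIN
       (SELECT tablespace_name, SUM(bytes) / 1048576 AS free_mb
        FROM   dba_free_space GROUP BY tablespace_name) fs
ON     df.tablespace_name = fs.tablespace_name
ORDER  BY pct_free;
```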
Nightly Procedures
1. Collect volumetric data : mk_Volfact.sql, analyze_comp.sql, pop_vol.sql
Weekly Procedures
1. Look for objects that break rules
a. Check for Next_Extents
b. Check Existing Extents
c. Check missing PK
d. Check whether all indexes are using INDEXES tablespace
e. Check the consistency of the schema objects between production and test environment
2. Look for security Policy Violations
3. Look for SQL*Net logs for errors and issues
a. Check Client side logs
b. Check server side logs
4. Archive all alert logs to history
5. Attend weekly meetings (project status, discussions on issues, and so on)
Monthly Procedures
1. Look for harmful growth rates
2. Review tuning opportunities: cache hit ratio, latch contention, etc.
3. Look for I/O Contention
4. Review fragmentation: row chaining, etc.
5. Project Performance into the future
6. Perform tuning and maintenance
7. Provide monthly reports (this depends on company to company)
8. Attend Monthly meetings
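For the fragmentation check above, row chaining can be spotted from the CHAIN_CNT column of DBA_TABLES; note this column is only populated after statistics have been gathered with ANALYZE. A sketch:

```sql
-- Tables with chained or migrated rows; CHAIN_CNT is populated only
-- after ANALYZE TABLE ... COMPUTE STATISTICS has been run.
SELECT owner, table_name, chain_cnt, num_rows
FROM   dba_tables
WHERE  chain_cnt > 0
ORDER  BY chain_cnt DESC;
```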
Quarterly Procedures
1. Provide quarterly audit reports if needed (this varies from company to company)
This covers checking all users, roles, privileges, and so on
Regards,
Sowmya (OSR)
Wednesday, August 11, 2010
Oracle Grid - Creating Notification Rules (2)
Hello,
In this post I am going to specify some more notification rules.
6. Setting up notification rule for Monitoring Cluster_Database_Critical
Login into Oracle Grid using Dbwatch
Click on preferences -> Rules
Click on Create like ->Provide Name: MONITOR_Cluster_Database_Critical, enter Description, make the rule as public, target type –cluster and select apply rule to all agent targets
Select Availability Tab-> select Agent Unreachable and Agent Unreachable Resolved
Click on Metrics Tab and select the metrics you need (I have mentioned a few of them below)
a. Blocking Session Count
b. Broken Job Count
c. Failed Job Count
d. Failed Login Count
e. Segment Approaching Maximum Extents Count
f. Segment Not Able to Extend Count
Select Severity as Critical
Click ok and finish
7. Setting up notification rule for Monitoring Cluster_Database_Warning
Login into Oracle Grid using Dbwatch
Click on preferences ->Rules
Click on Create like ->Provide Name: MONITOR_Cluster_Database_Warning, enter Description, make the rule as public, target type –cluster and select apply rule to all agent targets
Select Availability Tab-> select Agent Unreachable and Agent Unreachable Resolved
Click on Metrics Tab and select the following metrics (select the necessary metrics)
a. Blocking Session Count
b. Broken Job Count
c. Failed Job Count
d. Failed Login Count
e. Segment Approaching Maximum Extents Count
f. Segment Not Able to Extend Count
Select Severity as Warning
Click ok and finish
8. Setting up notification rule for Monitoring Database_Instance_Critical
Login into Oracle Grid using Dbwatch
Click on preferences -> Rules
Click on Create like ->Provide Name: MONITOR_Database_Instance_Critical, enter Description, make the rule as public, target type –Database Instance and select apply rule to all agent targets
Select Availability Tab-> select Agent Unreachable and Agent Unreachable Resolved
Click on Metrics Tab and select the metrics you need (a few are listed below)
a. Archive Area Used (%)
b. Archiver Hung alert Log Error
c. Archiver Hung Alert Log Error Status
d. Blocking Session Count
e. Broken Job Count
f. Data Block Corruption Alert Log Error
g. Failed Job Count
h. Generic Alert Log Error
i. Process Limit Usage (%)
j. Segment Approaching Maximum Extents Count
Select Severity as Critical
Click ok and finish
9. Setting up notification rule for Monitoring Database_Instance_Warning
Login into Oracle Grid using Dbwatch
Click on preferences -> Rules
Click on Create like -> Provide Name: MONITOR_Database_Instance_Warning, enter Description, make the rule as public, target type –Database Instance and select apply rule to all agent targets
Select Availability Tab-> select Agent Unreachable and Agent Unreachable Resolved
Click on Metrics Tab and select the metrics you need (a few are listed below)
a. Archive Area Used (%)
b. Archiver Hung alert Log Error
c. Archiver Hung Alert Log Error Status
d. Blocking Session Count
e. Broken Job Count
f. Data Block Corruption Alert Log Error
g. Failed Job Count
h. Generic Alert Log Error
i. Process Limit Usage (%)
j. Segment Approaching Maximum Extents Count
Select Severity as Warning
Click ok and finish
10. Setting up notification rule for Monitoring Host_Critical
Login into Oracle Grid using Dbwatch
Click on preferences -> Rules
Click on Create like -> Provide Name: MONITOR_HOST_CRITICAL, enter Description, make the rule as public, target type –Host and select apply rule to all agent targets
Select Availability Tab-> select Agent Unreachable and Agent Unreachable Resolved
Click on Metrics Tab and select the following metrics
a. CPU in I/O Wait (%)
b. CPU Utilization (%)
c. Filesystem Space Available (%)
d. Memory Utilization (%)
Select Severity as Critical
Click ok and finish
11. Setting up notification rule for Monitoring HOST_Warning
Login into Oracle Grid using Dbwatch
Click on preferences -> Rules
Click on Create like -> Provide Name: MONITOR_HOST_Warning, enter Description, make the rule as public, target type –Host and select apply rule to all agent targets
Select Availability Tab-> select Agent Unreachable and Agent Unreachable Resolved
Click on Metrics Tab and select the following metrics
a. CPU in I/O Wait (%)
b. CPU Utilization (%)
c. Filesystem Space Available (%)
d. Memory Utilization (%)
Select Severity as Warning
Click ok and finish
Regards,
Sowmya (OSR)
Oracle Data Guard - Introduction
Hello,
Introduction to Oracle Data Guard
Oracle Data Guard is one of the most effective and comprehensive data protection and disaster recovery solutions available for enterprise data.
Oracle Data Guard maintains, manages, and monitors one or more standby databases to protect enterprise data from failures, disasters, errors, and corruption. A standby database is a consistent copy of the production database. If the production database becomes unavailable because of a planned or an unplanned outage, Data Guard can switch any standby database to the production role, minimizing downtime and enabling zero data loss.
Data Guard Configuration
1. Primary Database (Production Database)
2. Standby Database
a. Physical Standby database (Redo Apply)
b. Logical standby database (SQL Apply)
Data Guard Services
1. Log Transport Services
2. Log Apply Services : Redo Apply or SQL Apply
3. Role Management Services
Data Guard Broker
The Data Guard broker is a distributed management framework that automates and centralizes the creation, maintenance, and monitoring of Data Guard configurations using either a GUI (in Enterprise Manager) or a CLI (the DGMGRL prompt).
Data Guard Protection Modes
1. Maximum Protection: no data loss; the primary database shuts down if it cannot transmit redo to the standby
2. Maximum Availability: no data loss; the primary database tolerates a transmission failure without shutting down
3. Maximum Performance: a transaction is committed as soon as its redo entries are written to the local online redo log
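The current protection mode can be checked, and changed while the database is mounted, with statements like the following (the target mode shown is just an example):

```sql
-- Check the current protection mode and level
SELECT protection_mode, protection_level FROM v$database;

-- Change the mode (run on the primary while mounted, not open)
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;
```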
Data Guard and complementary Technologies
1. Oracle Real Application clusters (RAC)
2. Flashback Database
3. Recovery Manager (RMAN)
Summary of Data Guard Benefits
1. Disaster recovery, data protection , and high availability
2. Complete data protection
3. Efficient use of system resources
4. Automatic gap detection and resolution
5. Centralized and simple management
6. Integration with Oracle Database
Types of Standby Databases
1. Physical Standby Databases: a physical standby works in the following modes: Redo Apply (the database cannot be opened while redo is being applied), open read-only, and open read/write.
Although the physical standby database cannot perform Redo Apply and be opened in read-only mode at the same time, you can switch between the two.
Benefits of a Physical Standby Database
1. Efficient disaster recovery and high availability
2. Data Protection
3. Reduction in primary database workload
2. Logical Standby Database: a logical standby database can be used concurrently for data protection and reporting operations. It has some restrictions on datatypes, types of tables, and types of DDL and DML operations.
Benefits of a Logical standby Database
1. It has the same benefits as a physical standby database
2. Efficient use of standby hardware resources
3. Reduction in primary database workload
Online Redo Logs, Archived Redo Logs, and Standby Redo Logs
Redo data transmitted from the primary database is received by the remote file server (RFS) process on the standby system, where the RFS process writes the redo data to archived log files or standby redo log files.
Online Redo Logs and Archived Redo Logs
1. Online redo logs are used by the primary database but are not used by a physical standby database while it is in the standby role
2. Primary database and both physical and logical standby databases each have an archived redo log.
Standby Redo Logs
1. A standby redo log is similar in all ways to an online redo log, except that it is used only when the database is running in the standby role, to store redo data received from the primary database. Configuring standby redo logs is highly recommended.
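Standby redo logs can be added with a statement like the one below; the group number, file path, and size are placeholder examples:

```sql
-- Add one standby redo log group (path and size are examples only;
-- standby redo logs should be the same size as the online redo logs,
-- with at least one more group than the number of online log groups).
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4
  ('/opt/oracle/oradata/BBMLDR/srl04.log') SIZE 50M;
```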
Regards,
Sowmya Bandaru (OSR)
Friday, July 23, 2010
Oracle Grid - Creating Notification Rules
Hello,
In this blog I am going to explain creating the notification rules.
I have created different rules for critical alerts and warning alerts, and I am going to assign all critical alerts to a critical admin and all warning alerts to a warning admin.
Here I am just creating the rules and making them public. Later I will subscribe them to email by logging in as the critical admin and the warning admin.
Notification Rules are setup for the following
1. Monitoring Agent
2. Monitoring Cluster
3. Monitoring Listener
4. Monitoring ASM_Critical
5. Monitoring ASM_Warning
6. Monitoring Cluster_Database_Critical
7. Monitoring Cluster_Database_Warning
8. Monitoring Database_Instance_Critical
9. Monitoring Database_Instance_Warning
10. Monitoring Host_Critical
11. Monitoring HOST_Warning
Creating Notification Rules
1. Setting up notification rule for Monitoring Agent
Login into Oracle Grid using Dbwatch
Click on preferences -> Rules
Click on Create Like-> Provide Name: MONITOR_AGENT, enter Description, make the rule as public, target type –agent and select apply rule to all agent targets
Select Availability Tab-> select Agent Unreachable and Agent Unreachable Resolved and click on finish.
2. Setting up notification rule for Monitoring Cluster
Login into Oracle Grid using Dbwatch
Click on preferences -> Rules
Click on Create like ->Provide Name: MONITOR_CLUSTER, enter Description, make the rule as public, target type –cluster and select apply rule to all agent targets
Select Availability Tab-> select Agent Unreachable and Agent Unreachable Resolved and click on finish.
3. Setting up notification rule for Monitoring Listener
Login into Oracle Grid using Dbwatch
Click on preferences -> Rules
Click on Create like ->Provide Name: MONITOR_LISTENER, enter Description, make the rule as public, target type –listener and select apply rule to all agent targets
Select Availability Tab-> select Agent Unreachable and Agent Unreachable Resolved and click on finish.
4. Setting up notification rule for Monitoring ASM_Critical
Login into Oracle Grid using Dbwatch
Click on preferences -> Rules
Click on Create like ->Provide Name: MONITOR_ASM_CRITICAL, enter Description, make the rule as public, target type –ASM and select apply rule to all agent targets
Select Availability Tab-> select Agent Unreachable and Agent Unreachable Resolved
Click on Metrics Tab and select all the metrics you need (I have mentioned a few of them below)
a. Alert Log Error Stack
b. Archive Hung Error Stack
c. Archiver Hung Alert Log Error Status
d. Data Block Corruption Alert Log Error Status
e. Data Block Corruption Error Stack
f. Disk Mode Status
g. Generic Alert Log Error Status
h. Media Failure Alert Log Error Status
i. Media Failure Error Stack
j. Offline Disk Count
Select Severity as Critical
Click ok and finish
5. Setting up notification rule for Monitoring ASM_Warning
Login into Oracle Grid using Dbwatch
Click on preferences -> Rules
Click on Create like ->Provide Name: MONITOR_ASM_Warning, enter Description, make the rule as public, target type –ASM and select apply rule to all agent targets
Select Availability Tab-> select Agent Unreachable and Agent Unreachable Resolved
Click on Metrics Tab and select all the metrics you need (I have mentioned a few of them below)
a. Alert Log Error Stack
b. Archive Hung Error Stack
c. Archiver Hung Alert Log Error Status
d. Data Block Corruption Alert Log Error Status
e. Data Block Corruption Error Stack
f. Disk Mode Status
g. Generic Alert Log Error Status
h. Media Failure Alert Log Error Status
i. Media Failure Error Stack
j. Offline Disk Count
Select Severity as Warning
Click ok and finish
Regards,
Sowmya (OSR)
In this blog i am going to explain creating the notification rules.
I have created different rules for critical alerts and warning rules, and i am going to assign all critical alerts to critical admin and warning alerts to warning admin.
here i am just creating the rules and assigning them to public. Later i am going to subscribe them to email by logging into critical admin and warning admin
Notification Rules are setup for the following
1. Monitoring Agent
2. Monitoring Cluster
3. Monitoring Listener
4. Monitoring ASM_Critical
5. Monitoring ASM_Warning
6. Monitoring Cluster_Database_Critical
7. Monitoring Cluster_Database_Warning
8. Monitoring Database_Instance_Critical
9. Monitoring Database_Instance_Warning
10. Monitoring Host_Critical
11. Monitoring HOST_Warning
Creating Notification Rules
1. Setting up notification rule for Monitoring Agent
Login into Oracle Grid using Dbwatch
Click on preferences -> Rules
Click on Create Like-> Provide Name: MONITOR_AGENT, enter Description, make the rule as public, target type –agent and select apply rule to all agent targets
Select Availability Tab-> select Agent Unreachable and Agent Unreachable Resolved and click on finish.
2. Setting up notification rule for Monitoring Cluster
Login into Oracle Grid using Dbwatch
Click on preferences -> Rules
Click on Create like ->Provide Name: MONITOR_CLUSTER, enter Description, make the rule as public, target type –cluster and select apply rule to all agent targets
Select Availability Tab-> select Agent Unreachable and Agent Unreachable Resolved and click on finish.
3. Setting up notification rule for Monitoring Listener
Login into Oracle Grid using Dbwatch
Click on preferences -> Rules
Click on Create like ->Provide Name: MONITOR_LISTENER, enter Description, make the rule as public, target type –listener and select apply rule to all agent targets
Select Availability Tab-> select Agent Unreachable and Agent Unreachable Resolved and click on finish.
4. Setting up notification rule for Monitoring ASM_Critical
Login into Oracle Grid using Dbwatch
Click on preferences -> Rules
Click on Create like ->Provide Name: MONITOR_ASM_CRITICAL, enter Description, make the rule as public, target type –ASM and select apply rule to all agent targets
Select Availability Tab-> select Agent Unreachable and Agent Unreachable
Click on Metrics Tab and select the following metrics (select all metrics which are needed and I have mentioned few of them)
a. Alert Log Error Stack
b. Archive Hung Error Stack
c. Archiver Hung Alert Log Error Status
d. Data Block Corruption Alert Log Error Status
e. Data Block Corruption Error Stack
f. Disk Mode Status
g. Generic Alert Log Error Status
h. Media Failure Alert Log Error Status
i. Media Failure Error Stack
j. Offline Disk Count
Select Critical as the severity.
Click OK, then Finish.
5. Setting up notification rule for Monitoring ASM_Warning
Log in to Oracle Grid Control as the Dbwatch user.
Click Preferences -> Rules.
Click Create Like -> provide the name MONITOR_ASM_WARNING, enter a description, make the rule public, set the target type to ASM, and apply the rule to all ASM targets.
On the Availability tab, select Agent Unreachable and Agent Unreachable Resolved.
Click the Metrics tab and select the metrics you need; a few of them are listed below:
a. Alert Log Error Stack
b. Archive Hung Error Stack
c. Archiver Hung Alert Log Error Status
d. Data Block Corruption Alert Log Error Status
e. Data Block Corruption Error Stack
f. Disk Mode Status
g. Generic Alert Log Error Status
h. Media Failure Alert Log Error Status
i. Media Failure Error Stack
j. Offline Disk Count
Select Warning as the severity.
Click OK, then Finish.
Regards,
Sowmya (OSR)
Tuesday, June 22, 2010
Data Guard - Setting up a Physical Standby Database
Hello,
Setting Up Physical Standby Database
Setting up a physical standby database is straightforward when all prerequisites and setup steps are met. It can also be built easily using the Oracle 10g Grid Control console.
Note:
1. Identical Hardware, OS on both Primary and Secondary
2. Oracle Database 10g Enterprise Edition on Primary and Secondary
3. SSH is configured for user oracle on both nodes
4. Oracle Home is on identical path on both nodes
Implementation Notes:
Follow these steps once you have your primary database up and running.
Primary Database
1. Enable Force Logging
2. Create a Password File
3. Configure a Standby Redo Log
4. Enable Archiving
5. Set Primary Database Initialization Parameters
Standby Database
1. Create a control file for the standby database
2. Backup the Primary Database and transfer a copy to the standby node
3. Prepare an initialization Parameter file for the Standby Database
4. Configure the listener and tnsnames to support the database on both nodes
5. Set Up the Environment to support the standby database on the standby node
6. Start the physical standby database
7. Verify the physical standby database is performing properly
Step by Step implementation of a Physical Standby environment
Primary Database Steps:
1. Check the name of the database by querying the database
SQL> desc v$database
SQL> select name from v$database;
NAME
---------
BBMLPROD
2. Check the data file location on primary side
SQL> desc dba_data_files
SQL> select file_name from dba_data_files;
3. Check the online log file location on the primary side
SQL> desc v$log
SQL> desc v$logfile
SQL> select group#, member from v$logfile;
SQL>select * from v$log;
4. SQL>show parameter unique
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_unique_name string BBMLPROD
5. Enable force Logging
To implement a standby database, enable forced logging. This ensures that even if a NOLOGGING operation is performed, force logging takes precedence and all operations are recorded in the redo logs.
SQL>alter database force logging;
Database altered
6. Create a Password File
To ship and apply archived log files from the primary to the standby database, a password file must be created on the primary and copied to the standby.
Note: The SYS password must be identical on both sites.
cd $ORACLE_HOME/dbs
[/opt/oracle]>orapwd file=orapwbbmlprod password=oracle force=y
7. Configure standby redo logs. Standby redo logs (SRLs) are required for the Data Guard Maximum Availability and Maximum Protection modes and should be the same size as the online redo logs.
SQL> select * from v$logfile;
There are three online redo log groups.
SQL> select bytes from v$log;
Bytes
--------
52443300
SQL> alter database add standby logfile group 4 size 50M;
Database altered.
SQL> alter database add standby logfile group 5 size 50M;
Database altered.
SQL> alter database add standby logfile group 6 size 50M;
Database altered.
Now make sure that you have three more log groups.
Use this query
SQL> select * from v$logfile;
6 rows selected
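The standby redo logs can also be checked directly through the v$standby_log view; their sizes should match the online redo logs:

```sql
-- Each standby redo log should be the same size as the online logs (50 MB here)
-- and show a status of UNASSIGNED until redo transport starts using it.
select group#, thread#, bytes/1024/1024 as size_mb, status
from v$standby_log;
```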
8. Enable Archive log mode
Check whether the database is in archivelog mode; if not, switch it to archivelog mode.
SQL>archive log list
Change the database to archive log mode
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL>archive log list
9. Set primary Database init.ora Parameter
Initialization parameters control the behavior of a Data Guard environment. Data Guard should be managed with an spfile; to configure it, first create a regular pfile, add the standby parameters to it, and then convert it back to an spfile.
SQL> create pfile='/oradisk/app01/oracle/product/10gDB/dbs/pfileBBMLPROD.ora' from spfile;
Edit the pfile to add the standby parameters; the parameters that need to be modified are listed below.
db_unique_name='BBMLPROD'
log_archive_config='DG_CONFIG=(BBMLPROD,BBMLDR)'
log_archive_dest_1='LOCATION=/../../archdest/
  VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
  DB_UNIQUE_NAME=BBMLPROD'
log_archive_dest_2='SERVICE=BBMLDR LGWR ASYNC
  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
  DB_UNIQUE_NAME=BBMLDR'
log_archive_dest_state_1=ENABLE
log_archive_format='%t_%s_%r.arc'
log_archive_max_processes=30
# Standby role parameters
fal_server='BBMLDR'
fal_client='BBMLPROD'
standby_file_management='AUTO'
db_file_name_convert='BBMLDR/BBMLDR','BBMLPROD/BBMLPROD'
log_file_name_convert='/../../../','/../../../'
Once the parameter file is ready, create the spfile from this new pfile:
SQL> shutdown immediate;
SQL> startup nomount pfile='/oradisk/app01/oracle/product/10gDB/dbs/pfileBBMLPROD.ora';
SQL> create spfile from pfile='/oradisk/app01/oracle/product/10gDB/dbs/pfileBBMLPROD.ora';
SQL>shutdown immediate;
SQL>startup;
Standby Database Steps
1. The standby database will use a standby control file generated on the primary database.
SQL>startup mount;
SQL> alter database create standby controlfile as '/oradisk/app01/oracle/product/10gDB/dbs/BBMLDR.ctl';
SQL> alter database open;
2. Backup the Primary Database and transfer a copy to the standby node.
Note: The backup of the primary database here is simply its datafiles (.dbf), online redo logs (.log), and tempfiles (.tmp). Copy those files and transfer them to the standby host using scp.
3. On the standby node create necessary directories to get the datafiles, logfiles and so on.
[/opt/oracle]# mkdir -p /oradisk/oracle/BBMLDR/datafile/
[/opt/oracle]# mkdir -p /oradisk/oracle/BBMLDR/onlinelog/
[/opt/oracle]# chown -R oracle:dba /oradisk/oracle
4. Now copy the entire database from the primary to the standby while the primary database is shut down or in backup mode. This includes all .dbf, .log, and .tmp files.
5. Prepare an initialization parameter file for the standby database.
Edit the following parameters:
db_unique_name='BBMLDR'
audit_file_dest='/oradisk/app01/oracle/admin/BBMLDR/adump'
background_dump_dest='/oradisk/app01/oracle/admin/BBMLDR/bdump'
core_dump_dest='/oradisk/app01/oracle/admin/BBMLDR/cdump'
user_dump_dest='/oradisk/app01/oracle/admin/BBMLDR/udump'
control_files='/oradisk/BBMLDR/controlfile/bbmldr_01.ctl','/oradisk/BBMLDR/controlfile/bbmldr_02.ctl'
db_create_file_dest='/oradisk/oracle/BBMLDR'
db_file_name_convert='/../../../bbmlprod/datafile/','/oradisk/oracle/bbmldr/datafile/'
log_file_name_convert='/../../../bbmlprod/onlinelog/','/oradisk/oracle/bbmldr/onlinelog/'
fal_server='BBMLPROD'
fal_client='BBMLDR'
log_archive_dest_1='LOCATION=/oradisk/oracle/bbmldr/archives/
  VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
  DB_UNIQUE_NAME=BBMLDR'
log_archive_dest_2='SERVICE=BBMLPROD LGWR ASYNC
  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
  DB_UNIQUE_NAME=BBMLPROD'
(Note: on the standby, each *_file_name_convert pair lists the primary path pattern first, then the standby path it converts to.)
6. Create all required directories for dump directories and archived log destination
[/opt/oracle]# mkdir -p /oradisk/app01/oracle/admin/BBMLDR/adump
[/opt/oracle]# mkdir -p /oradisk/app01/oracle/admin/BBMLDR/bdump
[/opt/oracle]# mkdir -p /oradisk/app01/oracle/admin/BBMLDR/cdump
[/opt/oracle]# mkdir -p /oradisk/app01/oracle/admin/BBMLDR/udump
[/opt/oracle]# mkdir -p /oradisk/app01/oracle/admin/BBMLDR/archives/
7. Copy the standby control file created on the primary to its destination on the standby using scp -p:
[sblbgp01]> scp -p bbmldr.ctl drsblbgp01:/oradisk/BBMLDR/controlfile/bbmldr_01.ctl
[sblbgp01]> scp -p bbmldr.ctl drsblbgp01:/oradisk/BBMLDR/controlfile/bbmldr_02.ctl
8. Configure the listener and tnsnames to support the database on both nodes
#ON sblbgp01
LISTENER_BBMLPROD =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = sblbgp01)(PORT = 1520)(IP = FIRST))
    )
  )
SID_LIST_LISTENER_BBMLPROD =
(SID_LIST=
(SID_DESC=
(GLOBAL_DBNAME= BBMLPROD)
(ORACLE_HOME = /oradisk/appl01/oracle/product/10gDB)
(SID_NAME = bbmlprod)
)
)
#drsblbgp01
LISTENER_BBMLDR =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = drsblbgp01)(PORT = 1520)(IP = FIRST))
    )
  )
SID_LIST_LISTENER_BBMLDR =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = BBMLDR)
(ORACLE_HOME = /oradisk/app01/oracle/product/10gDB)
(SID_NAME = BBMLDR)
)
)
Configure tnsnames.ora on both servers to hold entries for both databases.
# ON sblbgp01
LISTENER_BBMLPROD =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = sblbgp01)(PORT = 1520)(IP = FIRST))
    )
  )
BBMLPROD =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP) (HOST =sblbgp01) (PORT = 1520))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = BBMLPROD)
)
)
BBMLDR =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP) (HOST = drsblbgp01) (PORT = 1520))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME =BBMLDR)
)
)
#ON drsblbgp01
LISTENER_BBMLDR =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = drsblbgp01)(PORT = 1520)(IP = FIRST))
    )
  )
BBMLDR =
(DESCRIPTION =
(ADDRESS = (PROTOCOL =TCP) (HOST = drsblbgp01) (PORT =1520))
(CONNECT_DATA=
(SERVER = DEDICATED)
(SERVICE_NAME = BBMLDR)
)
)
BBMLPROD =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP) (HOST =SBLBGP01) (PORT =1520))
(CONNECT_DATA =
(SERVER =DEDICATED)
(SERVICE_NAME =BBMLPROD)
)
)
9. Start the listeners and run tnsping from both nodes to both databases.
On node 1 (sblbgp01):
[sblbgp01]> lsnrctl start listener_bbmlprod
[sblbgp01]> tnsping bbmlprod
[sblbgp01]> tnsping bbmldr
On node 2 (drsblbgp01):
[drsblbgp01]> lsnrctl start listener_bbmldr
[drsblbgp01]> tnsping bbmlprod
[drsblbgp01]> tnsping bbmldr
10. Set up the environment to support the standby database on the standby node.
Copy the password file from the primary to the standby; the SYS password must be identical:
[sblbgp01]>scp orapwbbmlprod drsblbgp01:/oradisk/app01/oracle/product/10gDB/dbs/orapwbbmldr
Set up the environment variables to point to the standby database:
ORACLE_HOME=/oradisk/app01/oracle/product/10gDB
ORACLE_SID=bbmldr
PATH=$ORACLE_HOME/bin:$PATH
Start the standby database in nomount mode and generate an spfile:
[drsblbgp01]> cd $ORACLE_HOME
[drsblbgp01]> sqlplus /nolog
SQL> conn sys as sysdba
SQL> startup nomount pfile='/oradisk/app01/oracle/product/10gDB/dbs/initbbmldr.ora';
SQL> create spfile from pfile='/oradisk/app01/oracle/product/10gDB/dbs/initbbmldr.ora';
SQL> shutdown immediate;
SQL> startup mount
SQL> alter database recover managed standby database disconnect from session;
Database altered.
Check the alert logs of both the primary and the standby; they give complete information about redo transport and apply.
Verifying that the physical standby database is performing properly
Check archived redo log on standby
SQL> show parameter db_unique_name;
SQL> Select thread#, sequence#, name, first_time, next_time, completion_time, applied from V$ARCHIVED_LOG ORDER BY first_time, thread#;
Switch logfile on primary
SQL>alter system switch logfile;
SQL> archive log list
SQL> alter system switch logfile;
SQL>archive log list
Check archived redo log on standby
SQL>Select thread#, sequence#, name, first_time, next_time, completion_time, applied from V$ARCHIVED_LOG ORDER BY first_time, thread#;
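The transport and apply processes themselves can also be checked on the standby through v$managed_standby:

```sql
-- MRP0 applying redo and RFS receiving it indicate a healthy configuration.
select process, status, thread#, sequence#
from v$managed_standby
where process like 'MRP%' or process like 'RFS%';
```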
Regards,
Sowmya Bandaru (OSR)
Friday, May 28, 2010
Queries Running Very Slow
Query Running very slow:
This is a question we often get, and the immediate answer is usually to look at the query's execution plan and create indexes where necessary. Often that alone will not help; there are many other possible causes to examine.
If a query is running slowly, collect some basic information before doing anything else; this is the first step in resolving the issue.
Information to collect when a query is running slowly
1. First execute the query and check how long it takes. Has it been slow ever since it was developed, or did it become slow only today? If it was running fine yesterday and is slow now, first check for network traffic or network errors. The easiest checks are to access a shared drive somewhere on the network, or to use tnsping, traceroute, or ping, or to ask the network admin to verify the packet size. If you notice slow access, contact network support. Network problems are outside the scope of the Oracle database but can still make the database look slow; I recently faced exactly this kind of issue.
2. Network Issue:
If the network is good and no problems are noticed, next check for massive data loads done the previous night, recent code changes, and when the tables were last analyzed. Check that the tables in the query have proper indexes; check tables, indexes, and other objects for recent statistics; avoid database functions and full table scans where possible; create partitions if necessary for faster access; and use MERGE statements for huge updates and deletes. Also check whether database statistics were gathered after the data load; a simple way to gather them is DBMS_STATS.GATHER_TABLE_STATS.
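For example, statistics for a single table can be gathered with DBMS_STATS.GATHER_TABLE_STATS (SCOTT.EMP below is a hypothetical schema and table; substitute your own):

```sql
-- Gather optimizer statistics for one table and, via cascade, its indexes.
exec dbms_stats.gather_table_stats(ownname => 'SCOTT', tabname => 'EMP', cascade => true);
```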
3. If statistics are fine and the query is still slow, check whether any database upgrades or patches were applied recently. The same query against the same data can run much slower in a newer version after an upgrade; in that case you may have hit a CBO bug. Contact Oracle Support for details.
4. I/O Issue:
Sometimes queries are slow because of excessive I/O between the database and the client.
Mechanisms to reduce disk I/O and make physical reads and writes more efficient:
First find the statements with the largest numbers of physical reads and logical reads; simple ways to get them are:
For Physical Reads:
Select disk_reads, sql_text from v$sqlarea where disk_reads > 10000 order by disk_reads desc;
For Logical Reads:
Select buffer_gets, sql_text from v$sqlarea where buffer_gets > 200000 order by buffer_gets desc;
Multiple approaches must be made to cover all types of queries
5. Memory Issue:
Pin (cache) the most-used PL/SQL objects in memory.
Syntax: dbms_shared_pool.keep('object_name')
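A sketch, assuming the DBMS_SHARED_POOL package has been installed (it is created by the dbmspool.sql script under $ORACLE_HOME/rdbms/admin) and that SCOTT.MY_PKG is a hypothetical heavily used package:

```sql
-- Pin the package so it is not aged out of the shared pool.
exec dbms_shared_pool.keep('SCOTT.MY_PKG', 'P');

-- Verify: pinned objects show KEPT = 'YES'.
select owner, name, type, kept from v$db_object_cache where name = 'MY_PKG';
```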
Few queries which would help you to resolve the memory related issues are:
Select name, value from gv$parameter where name in ('db_block_buffers', 'db_block_size', 'shared_pool_size', 'sort_area_size');
Check the following:
select 1-(sum(decode(name, 'physical reads', value,0))/ (sum(decode(name, 'db block gets', value,0)) + (sum(decode(name, 'consistent gets', value,0))))) * 100 "Read Hit Ratio" from gv$sysstat;
Read Hit Ratio
98.99
A hit ratio below 90-95% is a sign of poor indexing.
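The read-hit-ratio arithmetic in the query above can be sanity-checked outside the database. A minimal sketch, with statistic values made up purely for illustration:

```python
# Hedged sketch: reproduces the read hit ratio computed by the
# gv$sysstat query above, using hypothetical statistic values.

def read_hit_ratio(physical_reads, db_block_gets, consistent_gets):
    """Read hit ratio (%) = (1 - physical reads / logical reads) * 100."""
    logical_reads = db_block_gets + consistent_gets
    return (1 - physical_reads / logical_reads) * 100

# Made-up numbers: 1,010 physical reads out of 100,000 logical reads.
ratio = read_hit_ratio(physical_reads=1_010,
                       db_block_gets=40_000,
                       consistent_gets=60_000)
print(round(ratio, 2))  # 98.99 - comfortably above the 90-95% threshold
```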
Check the Shared Pool Size:
It is important to look at the shared_pool_size for proper sizing. With a greater amount of procedures, packages and triggers being utilized with Oracle, the shared_pool_size makes up a much greater portion of the Oracle SGA. This is the memory allocated for the library and data dictionary cache. If the shared_pool_size is set too low then you will not get the full advantage of your DB_BLOCK_BUFFERS.
How to find the data dictionary cache miss ratio?
Select sum(gets) "Gets", sum(getmisses) "Misses", (1-(sum(getmisses)/(sum(gets)+sum(getmisses))))*100 "HitRate"
from gv$rowcache;
Gets Misses HitRate
11000 558 96.5
This is a good ratio and does not require any further action.
How to find library Cache Hit Ratio?
select sum(pins) Executions, sum(pinhits) "Execution Hits", ((sum(pinhits)/sum(pins)) * 100)phitrat, sum(reloads)Misses, ((Sum(pins)/(sum(pins) + sum(reloads))) * 100) hitrat from gv$librarycache;
EXECUTIONS Execution Hits PHITRAT MISSES HITRAT
---------- -------------- ---------- ---------- ----------
1126456014 1121589121 99.5679465 102177 99.9909302
How much memory is left for shared_pool_size?
select to_number(gv$parameter.value) value, gv$sgastat.bytes, (gv$sgastat.bytes/to_number(gv$parameter.value))*100 "Percent Free"
from gv$sgastat, gv$parameter where gv$sgastat.name = 'free memory' and gv$parameter.name = 'shared_pool_size';
Shared Pool Size Free Bytes Percent Free
100,000,000 82,278,960 82.278
Regards,
Sowmya (OSR)
Wednesday, May 12, 2010
ORA-0600: Internal Code Error
Explanation: Any ORA-600 error indicates that Oracle has detected an internal inconsistency or a problem it does not know how best to address. These are not necessarily bugs and can occur for reasons such as operating system issues, I/O problems, running out of some resource, and so on.
Cause: This is the generic internal error number for Oracle program exceptions. It indicates that a process has encountered an exceptional condition.
Note: This type of error is raised from the kernel code of the Oracle RDBMS software when an inconsistency is detected or an unexpected condition is noticed. It need not be a bug; it might be caused by operating system problems, lack of resources, etc.
The ORA-600 error comes with a list of arguments in square brackets. The first of these arguments tells us where in the code the error was caught and is the key information for identifying the problem. The arguments can be numbers or character strings.
Whenever an ORA-600 error is raised, a trace file is generated in either udump or bdump, depending on whether the error was caught in a user or a background process. The information is also written to the alert log.
Remedy: Contact Oracle Support.
• Report it as a bug if the first argument is the internal error number
The most useful items are:
• Check the alert log for the instance (and any other instances in a parallel server environment). The alert log is in background_dump_dest
• Check the trace file for the failing session and other related trace files
• Check whether anything in the system environment changed recently, such as an OS upgrade, patches, or restored database files
• Determine whether the problem is reproducible or a one-time occurrence
The following is a partial list of known ORA-00600 bugs and issues. You can find the complete list and details about these bugs on Oracle Metalink.
1. An Oracle ORA-00600 [729]: indicates a memory leak in the UGA. Setting diagnostic events 10262 ignores this error, that is, no ORA-00600 is produced even if a memory leak is detected.
2. ORA-00600 [16365]: indicates a Half Duplex Violation. Shared servers and dispatchers cannot both be writing to the same virtual circuit at the same time because they communicate with a half-duplex protocol (TTC protocol).
3. ORA-00600 [kghpih:ds] and ORA-00600 [17175]: may occur in Oracle 9.2.0.6 when two processes work on the same library cache object. Fixed in Oracle 9.2.0.7. See Note 300604.1.
4. An Oracle ORA-00600 [3020]: Bug.2322620 - RECOVERY HALTED ON PHYSICAL STANDBY WITH ORA-600 AND ORA-10564 Details: ORA-600 [3020] can occur on recovery of LOB data. Fixed in 8.1.7.5 Patch available. This error occurred on our database recovery on the primary server.
5. ORA-00600 [12333]: Most probably a network error that may have caused bad data to be received, or the client application may have sent the wrong data in the network buffer.
6. ORA-00600 [12700]: Indicates corruption in an index, a table, or the mapping between them. Upon this discovery, Oracle marks the whole block as corrupt and users will not have normal access to the data in the corrupted block. This can also break referential integrity and should be referred to Oracle.
7. ORA-00600 [lxhcsn]: Shutdown may dump ORA-7445. Known Bug 2311529
Conclusion: This must be taken as a serious issue; contact Oracle Support or raise an SR for unknown issues.
Regards,
Sowmya (OSR)
Wednesday, April 28, 2010
Oracle Grid- Creating User Accounts
Hello,
I am going to discuss more about Oracle Grid: creating administrator users, super administrator users, creating targets, generating reports, scheduling jobs, setting alerts, setting email notifications, and so on.
In this post I am going to create two administrator user accounts (warning_admin, critical_admin) and a super administrator user account (super_admin) using Oracle Grid.
Administrator User Accounts: Administrators are database users defined in the management repository who can log into Enterprise Manager to perform management tasks.
Super Administrator User Account: The Super Administrator can grant certain privileges to certain administrators to access and manage certain targets based on the requirement. Thus, the super administrator can divide the work intelligently among the administrators.
Creating Super Administrator User Super_Admin using Oracle Grid
Super_Admin
• Go to Setup -> click on Administrators
• Click Create Like -> provide the following information
• Name: Super_Admin
• Password:
• Confirm password:
• Check on Super Administrator
• Click Finish
Creating Administrator User Accounts Warning_Admin and Critical_Admin
Warning_Admin
Log in to Grid using the Super Administrator account Super_Admin
Create two user accounts
Note: These two users are created just to differentiate critical alerts from warning alerts and to send email notifications to groups based on priority. It is sometimes necessary to keep your boss from getting all the warning emails, so make sure you have two email groups, with your boss included in critical alerts but not in warning alerts.
Warning_Admin: This account is created for all the alerts that come under warning messages. A warning email is sent to everyone in the team except your boss.
• Go to Setup -> Administrators
• Click on Create Like -> provide the following information
• Name: warning_admin
• Password:
• Confirm Password:
• Email:
• Don’t check on super administrator
• Click next Roles Page
• Make the role as public
• Click next to system privileges
• Check on view any target
• Click next to Target
• Browse Type-> All target type and move all to selected targets
• Click next to target privileges
• Click next to Review and click Finish
Critical_Admin
Login into Oracle Grid using Super_Admin
Create administrator user Critical_Admin
Critical_Admin: This account is created for all the alerts that come under critical messages. A critical email is sent to everyone in the team including your boss. The critical email is .......
• Go to setup -> Administrators
• Click on Create Like -> provide the following information
• Name: critical_admin
• Password:
• Confirm Password:
• Email:
• Don’t check on super administrator
• Click next roles Page
• Make the role as Public
• Click Next to system privileges
• Check on view any target
• Click next to Target
• Browse Type -> All target type and move all to selected targets
• Click next to target privileges
• Click next to Review and click finish
Regards,
Sowmya (OSR)
Tuesday, April 20, 2010
RMAN- Block Change Tracking
Hello,
Block Change Tracking
As data blocks change, the background process change tracking writer (CTWR) records all the changes made to the data blocks in a private area of memory. When these changes are committed, the block change tracking information is copied to a shared area in the large pool called the CTWR buffer, and the CTWR process writes it from the buffer to the change tracking file. Without BCT enabled, or in cases when the change tracking information cannot be used, RMAN has to read every block in the datafile during an incremental backup. Each block contains the system change number (SCN) of its last modification, and the block is copied only if that SCN is higher than or equal to the base backup SCN. To be precise, the backup SCN is a checkpoint SCN taken right before the backup. Since usually only a handful of blocks change between incremental backups, RMAN does a lot of useless work reading blocks not required for the backup.
Block change tracking provides a way to identify the blocks required for backup without scanning the whole datafile, so RMAN needs to read only the blocks that are really required for the incremental backup.
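The block-selection rule described above can be sketched in a few lines of Python. This is an illustration of the logic only (the function, block IDs, and SCN values are hypothetical, not RMAN internals):

```python
# Each block carries the SCN of its last modification; an incremental backup
# copies a block only if that SCN >= the parent backup's checkpoint SCN.

def blocks_to_back_up(block_scns, parent_ckpt_scn, changed_blocks=None):
    """block_scns: {block_id: last_change_scn}.
    changed_blocks: set of block ids from the change tracking file,
    or None when BCT is disabled (every block must be read)."""
    candidates = block_scns if changed_blocks is None else {
        b: scn for b, scn in block_scns.items() if b in changed_blocks
    }
    return sorted(b for b, scn in candidates.items() if scn >= parent_ckpt_scn)

scns = {1: 900, 2: 1010, 3: 1200, 4: 880}
print(blocks_to_back_up(scns, 1000))          # [2, 3] -- scans all 4 blocks
print(blocks_to_back_up(scns, 1000, {2, 3}))  # [2, 3] -- reads only 2 blocks
```

The backup contents are identical either way; BCT only shrinks the set of blocks RMAN has to read to find them.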
How to Enable/Disable Block change tracking?
Enable
Syntax: SQL> alter database enable block change tracking; (syntax when OMF is enabled)
Or
Alter database enable block change tracking using file os_file_name;
Using the os_file_name syntax allows you to define the location of the change tracking file on the OS; you can omit it when using OMF (Oracle Managed Files).
By default the change tracking file is created as an Oracle Managed File (OMF) in the location pointed to by the DB_CREATE_FILE_DEST parameter.
Disable
SQL>alter database disable block change tracking;
How to monitor the status of block change tracking?
SQL> select filename, status, bytes from v$block_change_tracking;
How can we view the size of the CTWR buffer?
select * from v$sgastat where name like 'CTWR%';
Regards,
Sowmya (OSR)
Monday, April 19, 2010
RMAN-Incremental Backup
Hello,
Incremental Backups using RMAN:
Incremental Backups: RMAN incremental backups back up only data file blocks that have changed since the last backup. One can make incremental backups at the database, tablespace, or datafile level.
The main theme of incremental backup is to back up only those data blocks that have changed since the previous backup.
The reasons for including incremental backups in our backup strategy are:
1. To reduce the amount of time needed for taking backups.
2. To save network bandwidth when backing up over the network.
3. For better backup performance.
4. To reduce backup size for non-archivelog databases; instead of a full database backup one can take incremental backups.
Incremental Backup Algorithm:
Data is stored in data blocks. Each data block in a data file contains a system change number (SCN), which is the SCN of the most recent change made to the block. During an incremental backup, RMAN reads the SCN of each data block in the input file and compares it with the checkpoint SCN of the parent incremental backup. If the SCN in the input data block is greater than or equal to the checkpoint SCN of the parent, then RMAN copies the block.
Level 0 and Level 1 Backups:
Level 0: A level 0 backup is just like a full backup and is the base of the subsequent incremental backups; it copies all blocks containing data, backing the data file up into a backup set.
The main difference between a full backup and an incremental level 0 backup is that a full backup is never included in the incremental backup strategy.
Level 1: Backs up the data blocks modified since the previous level 0 or level 1 backup.
Types of Level 1 incremental backups:
Differential backup: Backs up all blocks changed after the most recent incremental backup at level 1 or 0.
Cumulative backup: Backs up all blocks changed after the most recent incremental backup at level 0.
By default, incremental backups are differential.
RMAN> Backup Incremental level 1 Database; (example of differential backup)
RMAN> Backup Incremental level 1 cumulative database; (blocks changed since level 0 - cumulative incremental backup)
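The difference between the two level 1 variants is simply which earlier backup they measure changes against. A hypothetical Python sketch of that rule (not RMAN's actual implementation):

```python
# backup_history: list of (level, checkpoint_scn) tuples in time order.

def parent_scn(backup_history, cumulative):
    """Return the checkpoint SCN the level 1 backup compares blocks against."""
    if cumulative:
        levels = (0,)      # cumulative: most recent level 0 only
    else:
        levels = (0, 1)    # differential: most recent level 0 OR level 1
    for level, scn in reversed(backup_history):
        if level in levels:
            return scn
    raise ValueError("no level 0 base backup found")

history = [(0, 1000), (1, 1500), (1, 2000)]
print(parent_scn(history, cumulative=False))  # 2000 -- since the last level 1
print(parent_scn(history, cumulative=True))   # 1000 -- since the last level 0
```

A cumulative backup therefore copies more blocks but makes recovery simpler, since only one level 1 backup needs to be applied on top of the level 0.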
Incremental Backup Strategy:
Choose a backup scheme according to an acceptable MTTR (mean time to recover). For example, you can implement a three-level backup scheme so that a full or level 0 backup is taken monthly, a cumulative level 1 weekly, and a differential level 1 daily.
How often should we take full or incremental backups?
When deciding how often to take full or incremental backups, a good rule of thumb is to take a new level 0 backup whenever 50% or more of the data has changed. If the rate of change to your database is predictable, you can observe the size of your incremental backups to determine when a new level 0 is appropriate.
The following query will help you to determine the rate of changed blocks.
SELECT FILE#, INCREMENTAL_LEVEL, COMPLETION_TIME, BLOCKS, DATAFILE_BLOCKS
FROM V$BACKUP_DATAFILE
WHERE INCREMENTAL_LEVEL > 0
AND BLOCKS / DATAFILE_BLOCKS > .5
ORDER BY COMPLETION_TIME;
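The 50% rule of thumb behind this query can be shown with a worked example. The figures below are hypothetical stand-ins for V$BACKUP_DATAFILE rows:

```python
# Flag datafiles whose incremental backups changed half or more of their blocks
# (BLOCKS / DATAFILE_BLOCKS >= 0.5), i.e. candidates for a fresh level 0.
backups = [
    {"file": 1, "blocks": 5200, "datafile_blocks": 10000},  # 52% changed
    {"file": 2, "blocks": 900,  "datafile_blocks": 10000},  # 9% changed
]

needs_new_level0 = [b["file"] for b in backups
                    if b["blocks"] / b["datafile_blocks"] >= 0.5]
print(needs_new_level0)  # [1]
```

Only file 1 crosses the threshold, so by the rule of thumb it is time to take a new level 0 backup of that datafile.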
Making Incremental Backups: BACKUP INCREMENTAL
After starting RMAN, run the BACKUP INCREMENTAL command at the RMAN prompt. This example makes a level 0 incremental backup of the database:
BACKUP INCREMENTAL LEVEL 0 DATABASE;
This example makes a differential level 1 backup of the SYSTEM tablespace and datafile tools01.dbf. It will only back up those data blocks changed since the most recent level 1 or level 0 backup:
BACKUP INCREMENTAL LEVEL 1
TABLESPACE SYSTEM
DATAFILE 'ora_home/oradata/trgt/tools01.dbf';
This example makes a cumulative level 1 backup of the tablespace users, backing up all blocks changed since the most recent level 0 backup.
BACKUP INCREMENTAL LEVEL = 1 CUMULATIVE
TABLESPACE users;
Regards,
Sowmya (OSR)
http://usdba.blogspot.com/2010/04/rman-incremental-backup.html
Thursday, April 1, 2010
RMAN -Practicals
Hello,
Connecting to RMAN
1.Open the DOS command prompt.
2.C:>rman
3.RMAN>
4.RMAN>exit
Connecting to the target database
1.Open the DOS command prompt.
2.C:>SET ORACLE_SID=kendb
3.RMAN> connect target
Or
rman target / NOCATALOG
Or
rman target/
4.RMAN>exit
Creating the Repository in the Recovery Catalog Database
RMAN repository in a separate database called RCMDB.
Add entry in tnsnames.ora file.
Make sure RCMDB is in ArchiveLog Mode.
Make sure Target Database (KENDB) is in archivelog mode.
1. sqlplus to RCMDB as SYSDBA
Set oracle_sid=RCMDB
sqlplus "/as sysdba"
SQL> select name from v$database; (verify you are connected to RCMDB)
2. CREATE USER RMANUSER
IDENTIFIED BY RMANUSER
DEFAULT TABLESPACE USERS
TEMPORARY TABLESPACE TEMP;
3. sql>grant connect, resource,recovery_catalog_owner to rmanuser;
4.sql>exit;
5. Invoke RMAN executable
RMAN
RMAN>connect catalog rmanuser/rmanuser@RCMDB
6. Create catalog for RMANUSER
RMAN>CREATE CATALOG;
7. RMAN>EXIT;
Connecting to Target and Recovery Catalog Database
1. C:>set oracle_sid=TESTDB
2. C:>RMAN
3. RMAN>CONNECT TARGET
4. RMAN>CONNECT CATALOG RMANUSER/RMANUSER@RCMDB
5. RMAN>EXIT;
Or
Instead of steps 3, 4 and 5 you can use the following:
C:>RMAN TARGET / CATALOG RMANUSER/RMANUSER@RCMDB
Registering the Target Database.
1. C:>SET ORACLE_SID=TESTDB
2. C:>RMAN
3. RMAN>CONNECT TARGET
4. RMAN>CONNECT CATALOG RMANUSER/RMANUSER@RCMDB
5. RMAN>REGISTER DATABASE;
6. RMAN>EXIT;
Setting the Retention Policy
1. Launch RMAN again
2. C:>rman target/
3. RMAN>configure retention policy to redundancy 2;
4. RMAN>show retention policy;
5. RMAN>configure retention policy to recovery window of 3 days;
6. RMAN>configure retention policy clear;
7. RMAN>exit;
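What REDUNDANCY 2 means in practice can be sketched outside RMAN: keep the two most recent backups of each file and report anything older as obsolete. This is an illustrative model only (the function and sample backup list are hypothetical, not RMAN behavior in every detail):

```python
# backups: list of (file_id, completion_time) tuples.

def obsolete(backups, redundancy=2):
    """Return backups beyond the `redundancy` most recent copies per file."""
    by_file = {}
    for file_id, t in sorted(backups, key=lambda b: b[1], reverse=True):
        by_file.setdefault(file_id, []).append((file_id, t))
    return sorted(b for copies in by_file.values() for b in copies[redundancy:])

bkps = [(1, "2010-03-01"), (1, "2010-03-08"), (1, "2010-03-15"), (2, "2010-03-15")]
print(obsolete(bkps))  # [(1, '2010-03-01')] -- third-newest copy of file 1
```

This mirrors the REPORT OBSOLETE / DELETE OBSOLETE pair shown later: the retention policy decides which copies are obsolete, and DELETE OBSOLETE removes them.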
Configuring the Default Device Type
1. C>rman target/
2. RMAN>configure default device type to sbt; (for tape)
3. RMAN>configure default device type to disk;
4. RMAN>show default device type;
5. RMAN>configure default device type clear;
6. RMAN>exit;
Configuring Controlfile Autobackup
1. C>rman target/
2. RMAN>configure controlfile autobackup on;
3. RMAN> configure controlfile autobackup format for device type disk to 'f:\backup\kendb\rman\%F';
4. RMAN>show controlfile autobackup;
5. RMAN>show controlfile autobackup format;
6. RMAN>configure controlfile autobackup clear;
7. RMAN>configure controlfile autobackup format for device type disk clear;
8. RMAN>exit;
Configuring the Default Backup Type
1. C>rman target/
2. RMAN>configure device type disk backup type to backupset;
3. RMAN>configure device type disk backup type to copy;
4. RMAN>configure device type disk backup type to compressed backupset;
5. RMAN>show device type;
6. RMAN>backup database;
7. RMAN>list backup;
8. RMAN>configure device type disk clear;
9. RMAN>exit;
Parallelizing backups
1. C>set oracle_sid=kendb
2. C>rman target/
3. RMAN>configure device type disk parallelism 4;
4. RMAN>backup datafile 2, 3;
RMAN>exit
Backup entire Database
1. C>set oracle_sid=kendb
2. C>rman
3. RMAN>CONNECT TARGET
4. RMAN>CONNECT CATALOG RMANUSER/RMANUSER@RCMDB;
5. RMAN>CONFIGURE DEVICE TYPE DISK CLEAR;
6. RMAN>BACKUP DATABASE;
OR
7. RMAN> BACKUP DATABASE FORMAT 'O:\ORABACKUP\KENDB\BACKUP.RMAN\db_%u_%d_%s_%p';
OR
8. RMAN>BACKUP DATABASE MAXSETSIZE=500M;
9. RMAN>EXIT;
Backup Tablespace
1. C>set oracle_sid=kendb
2. C>rman target/
3. RMAN>backup tablespace system;
4. RMAN>backup tablespace system, sysaux;
Backup Datafile
1. C>set oracle_sid=kendb
2. C>rman target/
3. RMAN>report schema;
4. RMAN>backup datafile 3;
5. RMAN>backup datafile 2,4;
6. RMAN>exit;
Backup Controlfile
1. C>rman target/
2. RMAN>backup current controlfile;
3. RMAN>exit;
Creating Compressed Backups
1. C>set oracle_sid=kendb
2. C>rman target/
3. RMAN>backup as compressed backupset datafile 3;
RMAN>exit;
Backup ArchiveLogs
1. C>set oracle_sid=kendb
2. C>rman
3. RMAN>backup archivelog all;
OR
4. RMAN>backup archivelog all delete input;
(This deletes archivelog files after they have been backed up.)
5. Backup Archivelogs generated in the past 2 days.
6. RMAN>backup archivelog from time 'sysdate-2' until time 'sysdate';
7. RMAN>exit;
Performing Incremental Backups
1. C>set oracle_sid=kendb
2. C>rman target/
3. RMAN>backup incremental level 0 database;
Incremental backups may be taken at Tablespace or datafile level.
OR
4. RMAN>BACKUP INCREMENTAL LEVEL 0 DATAFILE 2, 4;
5. RMAN>BACKUP INCREMENTAL LEVEL 0 TABLESPACE USERS;
Create Incremental Level 1 backups
6. RMAN>backup incremental level 1 tablespace users;
7. RMAN>backup incremental level 1 database;
Create Cumulative Level 1 backups
8. RMAN>backup incremental level 1 cumulative tablespace users;
9. RMAN>backup incremental level 1 cumulative database;
10. RMAN>backup incremental level 1 cumulative datafile 2, 4;
11. RMAN>exit;
Create Image Copies
1. C>set oracle_sid=kendb
2. C>rman target/
3. RMAN>backup as copy database;
OR
4. RMAN>backup as copy tag 'users_wkly_bkp' tablespace users;
OR
5. RMAN>backup as copy datafile 4;
6. RMAN>exit;
LIST COMMAND
Which backups are currently available.
1. C>set oracle_sid=kendb
2. C>rman target/
3. rman>list backup by backup;
4. rman>list backupset by backup;
5. rman>list backup by backup verbose;
6. rman>list backup by backup summary;
7. rman>list backup by file;
8. rman>list copy of database; (image copies)
9. rman>list copy of tablespace users;
10. rman>list copy of datafile 3;
11. rman>list incarnation; (to view different incarnations)
Identify files that are in need of backup
1. c>set oracle_sid=kendb
2. c>rman target/
3. rman>configure retention policy to redundancy 2;
4. rman> report need backup;
5. rman> report need backup days=7 database; (files that would need a minimum of 7 days of logfiles applied in the event of recovery)
6. rman>exit;
Report Obsolete
1. c>set oracle_sid=kendb
2. c> rman target/
3. rman> report obsolete;
4. rman> delete obsolete;
5. rman>exit;
Regards,
Sowmya (OSR)
Connecting to RMAN
1.Open the DOS command prompt.
2.C:>rman
3.RMAN>
4.RMAN>exit
Connecting to the target database
1.Open the DOS command prompt.
2.C:>SET ORACLE_SID=kendb
3.RMAN> connect target
Or
rman target / NOCATALOG
Or
rman target/
4.RMAN>exit
Creating the Repository in the Recovery Catalog Database
RMAN repository in a separate database called RCMDB.
Add entry in tnsnames.ora file.
Make sure RCMDB is in ArchiveLog Mode.
Make sure Target Database (KENDB) is in archivelog mode.
1. sqlplus to RCMDB as SYSDBA
Set oracle_sid=RCMDB
Sqlplus “/as sysdba”
Sql>select name from v$database; (Verify you connected to rcmdb)
2. CREATE USER RMANUSER
IDENTIFIED BY RMANUSER
DEFAULT TABLESPACE USERS
TEMPORARY TABLESPACE TEMP;
3. sql>grant connect, resource,recovery_catalog_owner to rmanuser;
4.sql>exit;
5. Invoke RMAN executable
RMAN
RMAN>connect catalog rmanuser/rmanuser@RCMDB
6. Create catalog for RMANUSER
RMAN>CREATE CATALOG;
7. RMAN>EXIT;
Connecting to Target and Recovery Catalog Database
1. C:>set oracle_sid=TESTDB
2. C:>RMAN
3. RMAN>CONNECT TARGET
4. RMAN>CONNECT CATALOG RMANUSER/RMANUSER@RCMDB
5. RMAN>EXIT;
Or
Instead of steps 3, 4 and 5 you can use the following:
C:>RMAN TARGET / CATALOG RMANUSER/RMANUSER@RCMDB
Registering the Target Database.
1. C:>SET ORACLE_SID=TESTDB
2. C:>RMAN
3. RMAN>CONNECT TARGET
4. RMAN>CONNECT CATALOG RMANUSER/RMANUSER@RCMDB
5. RMAN>REGISTER DATABASE;
6. RMAN>EXIT;
Setting the Retention Policy
1. Launch RMAN again
2. C:>rman target/
3. RMAN>configure retention policy to redundancy 2;
4. RMAN>show retention policy;
5. RMAN>configure retention policy to recovery window of 3 days;
6. RMAN>configure retention policy clear;
7. RMAN>exit;
Configuring the Default Device Type
1. C>rman target/
2. RMAN>configure default device type to sbt; (for tape)
3. RMAN>configure default device type to disk;
4. RMAN>show default device type;
5. RMAN>configure default device type clear;
6. RMAN>exit;
Configuring Controlfile Autobackup
1. C>rman target/
2. RMAN>configure controlfile autobackup on;
3. RMAN> configure controlfile autobackup format for device type disk to 'f:\backup\kendb\rman\%F';
4. RMAN>show controlfile autobackup;
5. RMAN>show controlfile autobackup format;
6. RMAN>configure controlfile autobackup clear;
7. RMAN>configure controlfile autobackup format for device type disk clear;
8. RMAN>exit;
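If every copy of the controlfile is lost, the autobackup configured above is what makes recovery possible. A minimal sketch of that restore (the DBID shown is hypothetical; RMAN finds the autobackup using the configured format):

```sql
-- Sketch: restoring the controlfile from an autobackup after losing
-- all controlfile copies. Run against the target instance.
SET DBID 1234567890;                     -- hypothetical DBID of the target
STARTUP NOMOUNT;
RESTORE CONTROLFILE FROM AUTOBACKUP;     -- uses the configured autobackup format
ALTER DATABASE MOUNT;
```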
Configuring the Default Backup Type
1. C>rman target/
2. RMAN>configure device type disk backup type to backupset;
3. RMAN>configure device type disk backup type to copy;
4. RMAN>configure device type disk backup type to compressed backupset;
5. RMAN>show device type;
6. RMAN>backup database;
7. RMAN>list backup;
8. RMAN>configure device type disk clear;
9. RMAN>exit;
Parallelizing backups
1. C>set oracle_sid=kendb
2. C>rman target/
3. RMAN>configure device type disk parallelism 4;
4. RMAN>backup datafile 2, 3;
RMAN>exit
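Instead of CONFIGURE ... PARALLELISM, channels can also be allocated by hand inside a RUN block. A sketch, with illustrative disk paths:

```sql
-- Sketch: two manually allocated disk channels, spreading the backup
-- pieces over two drives (paths are assumptions, not from the post).
RUN {
  ALLOCATE CHANNEL c1 DEVICE TYPE DISK FORMAT 'f:\backup\kendb\rman\bk1_%U';
  ALLOCATE CHANNEL c2 DEVICE TYPE DISK FORMAT 'g:\backup\kendb\rman\bk2_%U';
  BACKUP DATAFILE 2, 3;
}
```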
Backup entire Database
1. C>set oracle_sid=kendb
2. C>rman
3. RMAN>CONNECT TARGET
4. RMAN>CONNECT CATALOG RMANUSER/RMANUSER@RCMDB;
5. RMAN>CONFIGURE DEVICE TYPE DISK CLEAR;
6. RMAN>BACKUP DATABASE;
OR
7. RMAN> BACKUP DATABASE FORMAT 'O:\ORABACKUP\KENDB\BACKUP.RMAN\db_%u_%d_%s_%p';
OR
8. RMAN>BACKUP DATABASE MAXSETSIZE=500M;
9. RMAN>EXIT;
Backup Tablespace
1. C>set oracle_sid=kendb
2. C>rman target/
3. RMAN>backup tablespace system;
4. RMAN>backup tablespace system, sysaux;
Backup Datafile
1. C>set oracle_sid=kendb
2. C>rman target/
3. RMAN>report schema;
4. RMAN>backup datafile 3;
5. RMAN>backup datafile 2,4;
6. RMAN>exit;
Backup Controlfile
1. C>rman target/
2. RMAN>backup current controlfile;
3. RMAN>exit;
Creating Compressed Backups
1. C>set oracle_sid=kendb
2. C>rman target/
3. RMAN>backup as compressed backupset datafile 3;
RMAN>exit;
Backup ArchiveLogs
1. C>set oracle_sid=kendb
2. C>rman target/
3. RMAN>backup archivelog all;
OR
4. RMAN>backup archivelog all delete input;
(This deletes archivelog files after they have been backed up.)
5. Backup archivelogs generated in the past 2 days:
6. RMAN>backup archivelog from time 'sysdate-2' until time 'sysdate';
7. RMAN>exit;
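The database and archivelog backups above can also be combined into a single command; this is standard RMAN syntax:

```sql
-- Back up the database and all archivelogs in one command;
-- DELETE INPUT removes each archivelog once it has been backed up.
BACKUP DATABASE PLUS ARCHIVELOG DELETE INPUT;
```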
Performing Incremental Backups
1. C>set oracle_sid=kendb
2. C>rman target/
3. RMAN>backup incremental level 0 database;
Incremental backups may be taken at Tablespace or datafile level.
OR
4. RMAN>BACKUP INCREMENTAL LEVEL 0 DATAFILE 2, 4;
5. RMAN>BACKUP INCREMENTAL LEVEL 0 TABLESPACE USERS;
Create Incremental Level 1 backups
6. RMAN>backup incremental level 1 tablespace users;
7. RMAN>backup incremental level 1 database;
Create Cumulative Level 1 backups
8. RMAN>backup incremental level 1 cumulative tablespace users;
9. RMAN>backup incremental level 1 cumulative database;
10. RMAN>backup incremental level 1 cumulative datafile 2, 4;
11. RMAN>exit;
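The level 0 / level 1 commands above are typically combined into a schedule. One possible weekly strategy, as a sketch (the schedule itself is an assumption, not from the post):

```sql
-- Sunday: level 0 (baseline) backup of the whole database
BACKUP INCREMENTAL LEVEL 0 DATABASE;

-- Monday..Saturday: differential level 1 -- blocks changed since the
-- most recent level 0 or level 1 backup
BACKUP INCREMENTAL LEVEL 1 DATABASE;

-- Optionally mid-week: cumulative level 1 -- blocks changed since the
-- last level 0; larger backup, but fewer sets to apply at recovery time
BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;
```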
Create Image Copies
1. C>set oracle_sid=kendb
2. C>rman target/
3. RMAN>backup as copy database;
OR
4. RMAN>backup as copy tag 'users_wkly_bkp' tablespace users;
OR
5. RMAN>backup as copy datafile 4;
6. RMAN>exit;
LIST COMMAND
Which backups are currently available.
1. C>set oracle_sid=kendb
2. C>rman target/
3. rman>list backup by backup;
4. rman>list backupset by backup;
5. rman>list backup by backup verbose;
6. rman>list backup by backup summary;
7. rman>list backup by file;
8. rman>list copy of database; (lists image copies)
9. rman>list copy of tablespace users;
10. rman>list copy of datafile 3;
11. rman>list incarnation; (to view the different database incarnations)
Identify files that are in need of backup
1. c>set oracle_sid=kendb
2. c>rman target/
3. rman>configure retention policy to redundancy 2;
4. rman> report need backup;
5. rman> report need backup days=7 database; (files that would need more than 7 days of archived logs applied in the event of recovery)
6. rman>exit;
Report Obsolete
1. c>set oracle_sid=kendb
2. c> rman target/
3. rman> report obsolete;
4. rman> delete obsolete;
5. rman>exit;
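REPORT OBSOLETE and DELETE OBSOLETE are usually paired with a crosscheck, so that records of backup pieces that have gone missing from disk are cleaned up first. A typical maintenance sequence:

```sql
-- Typical cleanup sequence under the configured retention policy.
CROSSCHECK BACKUP;       -- mark records whose files are missing as EXPIRED
DELETE EXPIRED BACKUP;   -- remove records of backups no longer on disk
REPORT OBSOLETE;         -- list backups no longer needed by the policy
DELETE OBSOLETE;         -- remove them (prompts for confirmation)
```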
Regards,
Sowmya (OSR)
RMAN - Introduction
Hello,
RMAN : Recovery Manager
RMAN Components
RMAN is an Oracle Utility that can backup, restore and recover database files. The RMAN executable is named rman and can be found in $ORACLE_HOME/bin directory and is installed by default with Oracle 10g, 9i and 8i Standard and Enterprise editions.
Every Target Database has to be registered in the catalog database. We can invoke command line executables from the operating system.
In the case of a cold or hot backup, the backup is a blind, mirror-image copy,
i.e. the size of the backup equals the size of the database.
In the case of RMAN, it is a content copy, i.e. only used blocks are backed up.
Example:
In 100 GB database only 60% are used blocks i.e used blocks =60 GB
In that 60 GB of used blocks, changed or modified blocks are of only 1 GB
Complete backup includes 60GB
Incremental backup includes 1GB.
Concepts:
1. RMAN keeps a record of all target databases that are registered with the catalog database.
2. When RMAN is invoked to back up or recover a database, it verifies the target database against the database information stored in the RMAN catalog.
3. If it matches, RMAN establishes two server sessions with the target database.
4. A channel is a remote procedure call to the target database, using a PL/SQL interface, to perform backup and recovery operations.
5. RMAN compiles the command and executes it on the target database, producing a summary of actions. The files backed up this way are called backup pieces. To back up to a tape drive/tape library we need a media manager.
6. Every execution of an RMAN backup command produces a backup set, which is a logical grouping of one or more physical files called backup pieces.
7. An RMAN backup is a content backup, not a mirror image: only used blocks are backed up. Thus an RMAN backup is smaller and much faster compared to other backups.
Advantages:
1. Centralized backup and recovery, across enterprise databases.
2. Incremental backups – only those blocks which have changed since the last backup.
3. Corrupt block identification.
4. Automated scheduled backups.
5. Crosscheck – once a backup is over, checking the backup records against the actual files for consistency.
6. Can generate logs of backup and recovery operations.
7. Can report summary information.
8. Can store preconfigured scripts which can be run at any time.
9. A tag can be assigned to a backup file/file set and used at restore time instead of the filename.
10. Tablespace point-in-time recovery is possible.
11. Can create a duplicate of the production database for developers/QA people.
12. Multiple destinations can be specified.
13. Cleanup procedures can be implemented after a successful RMAN backup, e.g. getting rid of obsolete archived log files.
Pre-requisites:
1. Every target database must be registered in the catalog database.
2. In the catalog database a separate tablespace must be created to hold the catalog schema owner.
3. A user must be created in the catalog database with the connect, resource and recovery_catalog_owner privileges.
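The prerequisites above can be sketched as one SQL*Plus session on the catalog database (the tablespace name and datafile path are illustrative assumptions; the earlier walkthrough simply reuses the USERS tablespace):

```sql
-- Run as SYSDBA on the catalog database (RCMDB).
-- Tablespace name and datafile path below are hypothetical.
CREATE TABLESPACE rman_cat DATAFILE 'f:\oradata\rcmdb\rman_cat01.dbf' SIZE 200M;
CREATE USER rmanuser IDENTIFIED BY rmanuser
  DEFAULT TABLESPACE rman_cat TEMPORARY TABLESPACE temp;
GRANT connect, resource, recovery_catalog_owner TO rmanuser;
-- Then, from the OS prompt:
--   rman catalog rmanuser/rmanuser@RCMDB
--   RMAN> CREATE CATALOG;
--   RMAN> EXIT;
```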
Regards,
Sowmya (OSR)
Monday, March 8, 2010
Oracle V$ Views
Oracle's V$ Views
The following V$ views are part of the data dictionary and are commonly used by DBAs.
One can view the columns of these views with:
SQL> desc v$view_name
1. V$archive_dest: shows all archived redo log destinations. Use this view to find out where archived redo logs are copied: select dest_id, destination from v$archive_dest;
2. V$archive_dest_status: this view allows you to find the status and error for each of the defined archive destinations.
3. V$archived_log: displays successfully archived redo logs; on a physical standby database it shows the received logs.
4. V$archive_gap: lists sequence numbers of the archived logs that are known to be missing for each thread on a physical standby database (highest gap only).
5. V$archive_processes: This view provides information on the archive processes. It can be used to find out if an ARCH process is active or not.
6. V$Controlfile: Displays the location and status of each controlfile in the database.
7. V$bh: This dynamic view has an entry for each block in the database buffer cache.
The column status can be:
Free: this block is not in use
Xcur: block held exclusively by this instance
Scur: block held in cache, shared with other instances
Cr: block for consistent read
Read: block being read from disk
Mrec: block in media recovery mode
Irec: block in instance (crash) recovery mode
8. V$buffer_pool
9. V$buffer_pool_statistics
10. V$database: this view allows you to access database information. For example, you can check whether the database is in archivelog mode:
SQL> select log_mode from v$database;
LOG_MODE
------------
ARCHIVELOG
11. V$datafile: This view contains an entry for each datafile of the database.
SQL> select name from v$datafile;
12. V$datafile_header:
13. V$dataguard_status: shows error messages in the dataguard environment.
14. V$db_object_cache: this view displays objects that are cached (pinned) in the library cache.
15. V$enqueue_stat: if there are a lot of enqueue waits in v$session_event or v$system_event, v$enqueue_stat allows you to break those enqueues down into enqueue classes. For each such class, the gets, waits, failures and the cumulative sum of waited time can be found.
16. V$eventmetric: This view is new in Oracle 10g and allows improved timings and statistics.
17. V$event_name: Contains a record for each wait event
18. V$filemetric: This view is new in Oracle 10g and allows improved timings and statistics.
19. V$filestat
20. V$fixed_table: This view contains the name of all v$, GV$, X$ tables.
21. V$flash_recovery_area_usage: it contains the following columns:
file_type, percent_space_used, percent_space_reclaimable, number_of_files.
22. V$instance: it contains information about the instance. These are the columns present in v$instance: instance_number, instance_name, host_name, version, startup_time, status, parallel, thread#, archiver, log_switch_wait, logins, shutdown_pending, database_status, instance_status, instance_role, active_state, blocked.
23. V$instance_recovery: It can be used to determine the optimal size of the redo logs.
24. V$latch: Oracle collects statistics for the activity of all latches and stores them in this view. Gets is the number of successful willing-to-wait requests for a latch. Similarly, misses is how many times a process failed to obtain a latch on the first request. Spin_gets is the number of times a latch was obtained after spinning at least once. Sleeps indicates how many times a willing-to-wait process slept. Waiters_woken tells how often a sleeping process was 'disturbed'.
25. V$librarycache
26. V$lock: this view stores all information relating to locks in the database. The interesting columns are sid (identifying the session holding or acquiring the lock), type, and the lmode/request pair. Important possible values of type are TM (DML or table lock), TX (transaction), MR (media recovery) and ST (disk space transaction). Exactly one of the lmode/request pair is 0 or 1 while the other indicates the lock mode. If lmode is not 0 or 1, the session has acquired the lock; if request is other than 0 or 1, it is waiting to acquire the lock. The possible values for lmode and request are: null, Row Share (SS), Row Exclusive (SX), Share (S), Share Row Exclusive (SSX) and Exclusive (X). If the lock type is TM, the column id1 is the object's id, and the name of the object can then be queried like so: select name from sys.obj$ where obj# = id1;
27. V$locked_object: who is locking what?
Use this query to find out who is locking what:
select oracle_username, os_user_name, locked_mode, object_name, object_type from v$locked_object a, dba_objects b where a.object_id = b.object_id;
28. V$log : contains information on each log group. Common values for status column are unused, current, active, inactive.
29. V$logfile: this view can be queried to find the filenames, group numbers, and states of redo logfiles. For example to find all files of group 2 use
select member from v$logfile where group# = 2;
30. V$log_history: this view contains an entry for each log switch that occurred. The column first_time indicates the time of the first entry.
On a physical standby database, this view shows applied logs.
31. V$logstdby: Can be used to verify that archived redo logs are being applied to standby databases.
32. V$managed_standby: monitors the progress of a standby database in managed recovery mode, it displays information about the activities of log transport service and log apply service.
33. V$nls_parameters: the NLS parameters that are in effect for the session querying this view. The view NLS_SESSION_PARAMETERS is based on v$nls_parameters.
34. V$object_usage: This gathers information about used indexes when an index is monitored using alter index monitoring usage.
35. V$open_cursor
36. V$option: this view lets you see which options are installed in the server.
37. V$parameter: lists the name-value pairs of the init.ora file. For example, to find the db block size use the following query:
select value from v$parameter where name = 'db_block_size';
39. V$pgastat
40. V$process: join v$process's addr with v$session's paddr.
The column traceid is equal to the value used in alter session set tracefile_identifier.
41. V$pwfile_users: Lists all users who have been granted sysdba or sysoper privileges.
42. V$recover_file: useful to find out which datafiles need recovery.
43. V$recovery_file_dest:
44. V$reserved_words
45. V$resource_limit: it has the following columns: resource_name, current_utilization, max_utilization, initial_allocation, limit_value.
46. V$rollname: the names of the online rollback segments. This view's usn field can be joined with v$rollstat's usn field and with v$transaction's xidusn field.
47. V$transaction: can be used to track undo by session.
48. V$rollstat: statistics for rollback segments
49. V$session: it contains all information about the present sessions. A record in v$session contains sid and serial#. These numbers can be used to kill a session (alter system kill session). Query v$session to find the necessary columns.
Some of the important columns are: sid, serial#, user#, username, lockwait, server, status, osuser, process, blocking_session, blocking_instance, event, sql_trace.
Join sid with v$sesstat if you want to get some statistical information for a particular session.
50. V$sessmetric: This view is new in Oracle 10g and allows improved timing and statistics.
51. V$session_event: this view is similar to v$system_event; however, it breaks the waits down to the currently connected sessions.
V$session_event also has the column max_wait, which shows the maximum time waited for a wait event.
52. V$session_longops: use v$session_longops if you have a long-running pl/sql procedure and want feedback on how far the procedure has proceeded. A running procedure can report its progress in v$session_longops. The procedure can also set the module attribute in v$session, which makes it possible to find the sid and serial# of the session.
For example, if the procedure long_proc is run, you can issue the following query to get feedback on its progress:
select time_remaining, sofar, elapsed_seconds from v$session_longops l, v$session s where l.sid = s.sid and l.serial# = s.serial# and s.module = 'long_proc';
53. V$session_wait: this view shows what wait event each session is waiting for, or what the last event was that it waited for.
In contrast, v$session_event lists the cumulative history of events waited for in a session. The columns P1, P2 and P3 are parameters that are dependent on the event. Since Oracle 10g, v$session_wait's information is also exposed in the v$session view.
54. V$session_wait_history: This view is new in oracle 10g and allows improved timing and statistics.
55. V$sesstat: this view is similar to v$mystat except that it shows the cumulated statistics for all sessions.
Join sid with v$session and join statistic# with v$statname.
V$sesstat is also similar to v$sysstat, except that v$sysstat accumulates the statistics of sessions as soon as they terminate.
56. V$sga: shows how much memory the shared global area uses.
Selecting * from v$sga is roughly the same as typing show sga
57. V$sgastat: shows free space in sga
58. V$sga_dynamic_components: it contains information about sga resize operations since startup. It can also be used to find the granule size of the SGA.
59. V$sga_resize_ops
60. V$sort_usage:
61. V$sort_segments
62. V$spparameter: Returns the values for spfile.
63. V$sql: v$sql is similar to v$sqlarea; the main difference is that v$sql has one row per child cursor, whereas v$sqlarea aggregates (groups) those rows per sql statement.
64. V$sqlarea: it shows the sql text of currently cached sql statements. The field version_count indicates how many versions an sql statement has.
select sql_text from v$sqlarea where users_executing > 0;
65. V$sqltext
66. V$sql_plan
67. V$sql_text_with_newlines
68. V$sql_bind_data:
69. V$sql_bind_capture: This view captures bind variables for all sessions.
70. V$sql_cursor
71. V$sql_workarea
72. V$standby_log
73. V$statname: use this view to get the decoded names for the statistic# field of v$sysstat, v$mystat and v$sesstat.
74. V$sysmetric: This view is new in oracle 10g and allows improved timing and statistics.
75. V$sysmetric_history: new in Oracle 10g and allows improved timings and statistics.
76. V$sysstat: v$sysstat is similar to v$sesstat. While v$sesstat displays per-session statistics, v$sysstat displays the cumulated statistics since startup of the database.
For example, it is possible to find out the CPU time (name = 'CPU used by this session')
This view is (among others) used to calculate the Hit Ratio.
77. v$system_event
This view displays the count (total_waits) of all wait events since startup of the instance. If timed_statistics is set to true, the sum of the wait times for all events are also displayed in the column time_waited.
The unit of time_waited is one hundredth of a second. Since 10g, an additional column (time_waited_micro) measures wait times in millionths of a second.
total_waits where event = 'buffer busy waits' equals the sum of count in v$waitstat.
v$enqueue_stat can be used to break down waits on the enqueue wait event.
While this view totals all events in an instance, v$session_event breaks it down to all currently connected sessions.
78. V$undostat
79. V$tempfile
80. V$tempstat
81. V$thread
82. V$timer
83. V$transaction
84. V$transportable_platform
85. V$version: use this view to find out what version you are actually working on:
select * from v$version;
86. V$waitstat
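Two example queries tying several of the views above together (the username filter is a hypothetical example):

```sql
-- Top statistics for one session: join v$session, v$sesstat, v$statname.
SELECT s.sid, n.name, t.value
FROM   v$session s, v$sesstat t, v$statname n
WHERE  s.sid = t.sid
AND    t.statistic# = n.statistic#
AND    s.username = 'SCOTT'          -- hypothetical user
ORDER  BY t.value DESC;

-- Instance-wide top wait events from v$system_event.
SELECT event, total_waits, time_waited
FROM   v$system_event
ORDER  BY time_waited DESC;
```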
Regards,
Sowmya (OSR)
http://usdba.blogspot.com/2010/03/oracle-v-views.html
The following V$ Views are part of the data dictionary which are commonly used by DBA’s
One can view the columns of these views by
SQL>desc v$
1. V$archive_dest: Shows all archive redo log destinations. Use this view to find out which place archived redo logs are copied: select dest_id, destination from v$archive_dest.
2. V$archive_dest_status: this view allows to find status and error for each of the defined.
3. V$archived_log: Displays successfully archived redo logs.shows received logs on a primary standby database.
4. V$archive_gap: Lists sequence numbers of the archived logs that are known to be missing for each thread on a physical standby database (highest gap pnly)
5. V$archive_processes: This view provides information on the archive processes. It can be used to find out if an ARCH process is active or not.
6. V$Controlfile: Displays the location and status of each controlfile in the database.
7. V$bh: This dynamic view has an entry for each block in the database buffer cache.
The column status can be:
Free:This block is not in use
Xcur:Block held exclusively by this instance
Scur:Block held in cache, shared with other instance
Cr:Block for consistent read
Read:Block being read from disk
Mrec:Block in media recovery mode
Irec:Block in instance (crash) recovery mode
8. V$buffer_pool
9. V$buffer_pool_statistics
10. V$database: This views allows you to access database information. For example you can check whether the database is in archivelog mode or not.
SQL> select log_mode from v$database;
Log mode
Archivelog
11. V$datafile: This view contains an entry for each datafile of the database.
SQL> select name from v$datafile;
12. V$datafile_header:
13. V$dataguard_status: shows error messages in the dataguard environment.
14. V$db_object_cache:this view displays objects that are cached(pinned) in the library cache.
15. V$enqueue_stat: If there are a lot of enqueue waits in v$session_event or v$system_event, v$enqueue_stst allows to break down those enqueues in enqueue classes. For each such class, the gets, waits, failures and the cumulative sum of waited time can be found.
16. V$eventmetric: This view is new in Oracle 10g and allows improved timings and statistics.
17. V$event_name: Contains a record for each wait event
18. V$filemetric: This view is new in Oracle 10g and allows improved timings and statistics.
19. V$filestat
20. V$fixed_table: This view contains the name of all v$, GV$, X$ tables.
21. V$flash_recovery_area_usage: It contains following columns
File_type, percent_space_used, percent_space_reclaimabl,number_of_files.
22. V$instance: it contains information about instance. These are the columns present in v$instance—instance_number, instance_name,host_name,version,startup_time,status,paralled,thread#,archiver,log_switch_wait,logins,shutdown_pending, database_status,instance_status,instance_role, active_state,blocked.
23. V$instance_recovery: It can be used to determine the optimal size of the redo logs.
24. V$latch: Oracle collects statistics for the activity of all latches and stores these in this view. Gets is the number of successful willing to wait requests for a latch. Similarly, misses is how many times a process didn't successfully request a latch. Spin_gets: number of times a latch is obtained after spinning at least once. Sleeps indicates how many times a willing to wait process slept. Waiters_woken tells how often a sleeping process was 'disturbed'
25. V$librarycache
26. V$lock: This view stores all information relating to locks in the database. The interesting columns in this view are sid (identifying the session holding or aquiring the lock), type, and the lmode/request pair. Important possible values of type are TM (DML or Table Lock), TX (Transaction), MR (Media Recovery), ST (Disk Space Transaction). Exactly one of the lmode, request pair is either 0 or 1 while the other indicates the lock mode. If lmode is not 0 or 1, then the session has aquired the lock, while it waits to aquire the lock if request is other than 0 or 1. The possible values for lmode and request are: null, row share(SS), Row Exclusive (SX), share(S), Sahre Row Exclusive (SSX) and Exclusive (X). If the lock type is TM, the column id1 is the object's id and the name of the object can then be queried like so: select name from sys.obj$ where obj# = id1
27. V$locked_object: Who is locking and what?
Use this query to find out who is locking and wht
Select oracle_username, os_user_name, locked_mode, object_name.object_type from v$locked_object a, dba_objects b where a.object_id = b.object_id.
28. V$log : contains information on each log group. Common values for status column are unused, current, active, inactive.
29. V$logfile: this view can be queried to find the filenames, group numbers, and states of redo logfiles. For example to find all files of group 2 use
Select member from v$logfile wher group# = 2
30. V$log_history: This view contains an entry for each log switch that occurred. The column first_time indicates the time of the first entry
On physical standby database , this view shows applied logs.
31. V$logstdby: Can be used to verify that archived redo logs are being applied to standby databases.
32. V$managed_standby: monitors the progress of a standby database in managed recovery mode, it displays information about the activities of log transport service and log apply service.
33. V$nls_parameters: The NLS parameters that are in effect for the session quering this view. This view NLS_SESSION_PARAMETERS is based on v$nls_parameters.
34. V$object_usage: This gathers information about used indexes when an index is monitored using alter index monitoring usage.
35. V$open_cursor
36. V$option: This view lets you see which option are installed in the server.
37. V$Parameter: lists the name-value pairs of the init.ora file. For example if you want to find the db block size use the following query
Select value from v$parameter where name =’db_blokc_size.
38.
39. V$pgastat
40. V$Process: join v$process’s addr with v$session paddr
The column traceif is equal to the value used in alter session set.
41. V$pwfile_users: Lists all users who have been granted sysdba or sysoper privileges.
42. V$recover_file: useful to find out which datafiles need recovery.
43. V$recovery_file_dest:
44. V$reserved_words
45. V$resource_limit: these are the following columns resource_name, current_utilization, max_utilization, initial_allocation, limit_value.
46. V$rollname: the name of the online rollback segments. .This view’s usn filed can be joined with v$rollstats’s usn field and with v$transaction’s xidusn field
47. V$transaction can be used to tract undo by session.
48. V$rollstat: statistics for rollback segments
49. V$session: It contains all information about present session. A record in v$session contains sid and serial#. These numbers can be used kill a session (alter system kill session). Query v$session to find the necessary columns.
Some of the important columns are : SID, Serial#, user, username, lockwait,server,status,osuser,process, blocking_session, blocking_instance, event, sql_trace.
Join sid with v$sesstat if you want to get some statistical information for a particular sesssion.
50. V$sessmetric: This view is new in Oracle 10g and allows improved timing and statistics.
51. V$session_event: this view is similar to v$system_event. However, it breaks it down to currently connected sessions.
V$session_event has also the columns max_wait that shows the maximum time waited for a wait event.
52. V$session_longops: use v$session_longops if you have a long running pl/sql procedure and want to give feedback on how far the procedure proceeded. If there is any procedure which is running it will report its progress in v$session_longops. The procedure will also set the module attribute in v$session which makes it possible to find the sid, serial # of the session.
For example : If the procedure long_proc is run, you can issue the following query to get feedback on its progress:
Use the following sql statement:
Select time_remaining, sofar,elapsed_seconds from v$session_lonops l, v$session s where l.sid =s.sid and l.serial# =s.serial# and s.module =’long_proc’
53. V$session_wait: This views shows what wait event each session is waiting for, or what the last event was that it waited for.
In contrast, v$session_event lists the cumulative history of events waited for in a session. The columns P1, P2 and P3 are parameters that are dependant on the event. With Oracle 10g, v$session_wait's information will be exposed within v$session as well. Since 10g, Oracle displays the v$session_wait information also in the v$session view.
54. V$session_wait_history: This view is new in oracle 10g and allows improved timing and statistics.
55. V$sesstat: This view is similar to v$mystat except that it shows cumulated statistics for all sessions.
Join sid with v$session and join statistic# with v$statname.
V$sesstat is also similar to v$systat except that v$sysstat accumulated the statistics as soon as a session terminated.
56. V$sga: shows how much memory the shared global area uses.
Selecting * from v$sga is roughly the same as typing show sga
57. V$sgastat: shows free space in sga
58. V$sga_dynamic_components: it contains information about sga resize operation since startup. This is also used to find the granule size of SGA.
59. V$sga_resize_ops
60. V$sort_usage:
61. V$sort_segments
62. V$spparameter: Returns the values for spfile.
63. V$SQL: v$sql is same as v$sqlarea, the main difference being that v$sql drills down to select * from x$kglob whereas v$sqlarea drills down to select sum from x$kglob.
64. V$sqlarea: It shows the SQL text of currently cached SQL statements. The column version_count indicates how many versions a SQL statement has.
select sql_text from v$sqlarea where users_executing > 0;
65. V$sqltext
66. V$sql_plan
67. V$sql_text_with_newlines
68. V$sql_bind_data:
69. V$sql_bind_capture: This view captures bind variables for all sessions.
70. V$sql_cursor
71. V$sql_workarea
72. V$standby_log
73. V$statname: use this view to decode the statistic# field of v$sysstat, v$mystat and v$sesstat into statistic names.
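A typical decode against v$mystat (the current session's own statistics) looks like this:

```sql
SELECT n.name, m.value
  FROM v$mystat m, v$statname n
 WHERE m.statistic# = n.statistic#
   AND m.value > 0
 ORDER BY n.name;
```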
74. V$sysmetric: This view is new in oracle 10g and allows improved timing and statistics.
75. V$sysmetric_history: New in Oracle 10g; allows improved timing and statistics.
76. V$sysstat: v$sysstat is similar to v$sesstat. While v$sesstat displays statistics for individual sessions, v$sysstat displays the statistics accumulated since startup of the database.
For example, it is possible to find out the CPU time (name = 'CPU used by this session')
This view is (among others) used to calculate the Hit Ratio.
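The classic buffer cache hit ratio mentioned above can be computed from three v$sysstat counters (this is one common formulation; interpret the result with care, as a high ratio alone does not prove good performance):

```sql
SELECT 1 - (phy.value / (cur.value + con.value)) AS buffer_cache_hit_ratio
  FROM v$sysstat cur, v$sysstat con, v$sysstat phy
 WHERE cur.name = 'db block gets'
   AND con.name = 'consistent gets'
   AND phy.name = 'physical reads';
```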
77. v$system_event
This view displays the count (total_waits) of all wait events since startup of the instance. If timed_statistics is set to true, the sum of the wait times for all events are also displayed in the column time_waited.
The unit of time_waited is one hundredth of a second. Since 10g, an additional column (time_waited_micro) measures wait times in millionths of a second.
total_waits where event = 'buffer busy waits' equals the sum of the count column in v$waitstat.
v$enqueue_stat can be used to break down waits on the enqueue wait event.
While this view totals all events in an instance, v$session_event breaks it down to all currently connected sessions.
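A common use is to list the top instance-wide waits since startup (again filtering SQL*Net idle events is an assumption):

```sql
SELECT event, total_waits, time_waited
  FROM v$system_event
 WHERE event NOT LIKE 'SQL*Net%'
 ORDER BY time_waited DESC;
```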
78. V$undostat
79. V$tempfile
80. V$tempstat
81. V$thread
82. V$timer
83. V$transaction
84. V$transportable_platform
85. V$version: use this view to find out what version you actually work on
select * from v$version;
86. V$waitstat
Regards,
Sowmya (OSR)
http://usdba.blogspot.com/2010/03/oracle-v-views.html
Tuesday, March 2, 2010
SQL * Plus
Hello Everyone,
Here is my post for the week
SQL * Plus
SQL*Plus is a command-line interface and the most familiar face of the Oracle database. It is a client-server application that allows you to execute SQL statements and PL/SQL blocks. One of the most common uses of SQL*Plus is as an ad hoc query tool. It is also used for database administration and operational tasks, including creating database objects and structures, managing Oracle user IDs and object security, and starting up and shutting down database instances. You can type SQL statements, execute them, and see what results come back from the database.
Different ways of invoking SQL*Plus
Starting SQL*Plus:
1. Make sure that SQL*Plus is installed on your computer.
2. Log on to the host operating system.
3. Enter the command sqlplus and press Return.
SQL*Plus displays its version number, the current date and copyright information, and prompts you for authentication (username and password).
4. Enter the username and password and press Return.
The process of entering the username and password is called logging in.
5. SQL*Plus then displays its command prompt:
SQL>
Starting SQL*Plus with a password on the command line
Most of us start SQL*Plus with the password on the command line on UNIX and Linux platforms without considering the security threat.
For example:
$ sqlplus ml/ml@testdb
Here the password entered on the command line is visible to every operating-system user on the host machine, so this method is not recommended.
An intruder can easily read the password by running:
$ ps -ef | grep sqlplus
oracle 14490 2190 0 16:31:53 pts/5 0:00 sqlplus ml/ml@testdb
oracle 14493 14491 0 16:32:01 pts/5 0:00 grep sqlplus
The best way of invoking SQL*Plus without exposing the password is:
$ sqlplus ml@testdb
Enter password: **
If you want to make it even safer, you can also invoke SQL*Plus using:
$ sqlplus
Enter username: ml@testdb
Enter password: **
Or
$ sqlplus /nolog
SQL> connect ml/ml@testdb
Or
$ sqlplus /nolog
SQL> connect ml@testdb
Enter password: **
Note: all of this depends on the OS, so make sure that you use a secure operating system. In late 9i releases and in 10g, the sqlplus executable was hardened so that the password is no longer exposed in the process list.
I will appreciate your comments.
Regards,
Sowmya(OSR)
Monday, March 1, 2010