Hello,
In this post I am going to explain how to create notification rules. I have created separate rules for critical alerts and warning alerts: all critical alerts will be assigned to a critical admin, and all warning alerts to a warning admin.
Here I am only creating the rules and making them public. Later I will subscribe them to e-mail notifications by logging in as the critical admin and the warning admin.
Notification rules are set up for the following:
1. Monitoring Agent
2. Monitoring Cluster
3. Monitoring Listener
4. Monitoring ASM_Critical
5. Monitoring ASM_Warning
6. Monitoring Cluster_Database_Critical
7. Monitoring Cluster_Database_Warning
8. Monitoring Database_Instance_Critical
9. Monitoring Database_Instance_Warning
10. Monitoring Host_Critical
11. Monitoring Host_Warning
Creating Notification Rules
1. Setting up notification rule for Monitoring Agent
Log in to Oracle Grid Control as Dbwatch.
Click Preferences -> Rules.
Click Create Like. Provide Name: MONITOR_AGENT, enter a Description, make the rule public, set Target Type to Agent, and select Apply rule to all Agent targets.
Select the Availability tab, select Agent Unreachable and Agent Unreachable Resolved, and click Finish.
2. Setting up notification rule for Monitoring Cluster
Log in to Oracle Grid Control as Dbwatch.
Click Preferences -> Rules.
Click Create Like. Provide Name: MONITOR_CLUSTER, enter a Description, make the rule public, set Target Type to Cluster, and select Apply rule to all Cluster targets.
Select the Availability tab, select the availability events (Agent Unreachable and Agent Unreachable Resolved), and click Finish.
3. Setting up notification rule for Monitoring Listener
Log in to Oracle Grid Control as Dbwatch.
Click Preferences -> Rules.
Click Create Like. Provide Name: MONITOR_LISTENER, enter a Description, make the rule public, set Target Type to Listener, and select Apply rule to all Listener targets.
Select the Availability tab, select the availability events (Agent Unreachable and Agent Unreachable Resolved), and click Finish.
4. Setting up notification rule for Monitoring ASM_Critical
Log in to Oracle Grid Control as Dbwatch.
Click Preferences -> Rules.
Click Create Like. Provide Name: MONITOR_ASM_CRITICAL, enter a Description, make the rule public, set Target Type to ASM, and select Apply rule to all ASM targets.
Select the Availability tab and select Agent Unreachable and Agent Unreachable Resolved.
Click the Metrics tab and select the metrics you need (I have listed a few of them):
a. Alert Log Error Stack
b. Archive Hung Error Stack
c. Archiver Hung Alert Log Error Status
d. Data Block Corruption Alert Log Error Status
e. Data Block Corruption Error Stack
f. Disk Mode Status
g. Generic Alert Log Error Status
h. Media Failure Alert Log Error Status
i. Media Failure Error Stack
j. Offline Disk Count
Select Severity as Critical.
Click OK, then Finish.
5. Setting up notification rule for Monitoring ASM_Warning
Log in to Oracle Grid Control as Dbwatch.
Click Preferences -> Rules.
Click Create Like. Provide Name: MONITOR_ASM_WARNING, enter a Description, make the rule public, set Target Type to ASM, and select Apply rule to all ASM targets.
Select the Availability tab and select Agent Unreachable and Agent Unreachable Resolved.
Click the Metrics tab and select the metrics you need (I have listed a few of them):
a. Alert Log Error Stack
b. Archive Hung Error Stack
c. Archiver Hung Alert Log Error Status
d. Data Block Corruption Alert Log Error Status
e. Data Block Corruption Error Stack
f. Disk Mode Status
g. Generic Alert Log Error Status
h. Media Failure Alert Log Error Status
i. Media Failure Error Stack
j. Offline Disk Count
Select Severity as Warning.
Click OK, then Finish.
Regards,
Sowmya (OSR)
Friday, July 23, 2010
Tuesday, June 22, 2010
Data Guard - Setting up Physical Standby Database
Hello,
Setting Up Physical Standby Database
Setting up a physical standby database is straightforward once all prerequisites and setup steps are met. It is also easy to build a Data Guard physical standby database using the Oracle 10g Grid Control Console.
Note:
1. Identical Hardware, OS on both Primary and Secondary
2. Oracle Database 10g Enterprise Edition on Primary and Secondary
3. SSH is configured for user oracle on both nodes
4. Oracle Home is on identical path on both nodes
Implementation Notes:
Follow these steps once you have your primary database up and running.
Primary Database
1. Enable Force Logging
2. Create a Password File
3. Configure a Standby Redo Log
4. Enable Archiving
5. Set Primary Database Initialization Parameters
Standby Database
1. Create a control file for the standby database
2. Backup the Primary Database and transfer a copy to the standby node
3. Prepare an initialization Parameter file for the Standby Database
4. Configure the listener and tnsnames to support the database on both nodes
5. Set Up the Environment to support the standby database on the standby node
6. Start the physical standby database
7. Verify the physical standby database is performing properly
Step by Step implementation of a Physical Standby environment
Primary Database Steps:
1. Check the name of the database:
SQL> desc v$database
SQL> select name from v$database;
BBMLPROD
2. Check the datafile locations on the primary side:
SQL> desc dba_data_files
SQL> select file_name from dba_data_files;
3. Check the online redo log file locations on the primary side:
SQL> desc v$log
SQL> desc v$logfile
SQL> select group#, member from v$logfile;
SQL> select * from v$log;
4. Check the database unique name:
SQL> show parameter unique
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_unique_name string BBMLPROD
5. Enable Force Logging
To implement a standby database, enable force logging. This ensures that even if a NOLOGGING operation is performed, force logging takes precedence and all operations are recorded in the redo logs.
SQL> alter database force logging;
Database altered.
6. Create a Password File
To be able to ship and apply archived log files from the primary to the standby, a password file must be created on the primary and copied to the standby.
Note: The SYS password must be identical on both sites.
cd $ORACLE_HOME/dbs
[/opt/oracle]> orapwd file=orapwbbmlprod password=oracle force=y
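On Unix the password file in $ORACLE_HOME/dbs follows the orapw<sid> naming convention, which is why the file above is called orapwbbmlprod. A tiny sketch (my own hypothetical helper, not part of the original post) that derives the expected file name for each instance:

```python
def password_file_name(sid: str) -> str:
    """Return the conventional Oracle password-file name for a SID.

    On Unix the file in $ORACLE_HOME/dbs is named 'orapw' + SID.
    """
    return f"orapw{sid.lower()}"

# The primary and standby instances in this walkthrough:
for sid in ("bbmlprod", "bbmldr"):
    print(password_file_name(sid))
```

This is also why, in step 10 later, the file is renamed to orapwbbmldr when it is copied to the standby host.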
7. Configure a Standby Redo Log
Standby redo logs (SRLs) are required to enable the Data Guard Maximum Availability and Maximum Protection modes. Create the standby redo logs with the same size as the online redo logs.
SQL> select * from v$logfile;
I noticed there are 3 groups.
SQL> select bytes from v$log;
BYTES
--------
52443300
SQL> alter database add standby logfile group 4 size 50M;
Database altered.
SQL> alter database add standby logfile group 5 size 50M;
Database altered.
SQL> alter database add standby logfile group 6 size 50M;
Database altered.
Now make sure that you have three more log groups:
SQL> select * from v$logfile;
6 rows selected.
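As a side note, Oracle's general guideline is to create one more standby redo log group per thread than there are online groups, so the apply process never has to wait for a free group. A small sketch (my own helper, not from the post) of that rule of thumb:

```python
def recommended_srl_groups(online_groups_per_thread: int, threads: int = 1) -> int:
    """Oracle's usual guideline for standby redo logs:
    (online redo groups per thread + 1) * number of threads."""
    return (online_groups_per_thread + 1) * threads

# A single-instance primary with 3 online groups, as in this walkthrough:
print(recommended_srl_groups(3))  # guideline suggests 4 SRL groups
```

The post creates three SRL groups to match the three online groups, which works; the guideline above simply adds one spare group.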
8. Enable Archive Log Mode
Check whether the database is in archive log mode; if not, alter it:
SQL> archive log list
Change the database to archive log mode:
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list
9. Set Primary Database Initialization Parameters
Initialization parameters control the behavior of a Data Guard environment. Data Guard must use an spfile; to configure it, we first create a regular pfile, add the standby parameters to it, and then convert it back to an spfile.
SQL> create pfile='/oradisk/app01/oracle/product/10gDB/dbs/pfileBBMLPROD.ora' from spfile;
Edit the pfile to add the standby parameters. A few of the parameters that need to be modified are listed below.
db_unique_name='BBMLPROD'
log_archive_config='DG_CONFIG=(BBMLPROD,BBMLDR)'
log_archive_dest_1='LOCATION=/../../archdest/
  VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
  DB_UNIQUE_NAME=BBMLPROD'
log_archive_dest_2='SERVICE=BBMLDR LGWR ASYNC
  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
  DB_UNIQUE_NAME=BBMLDR'
log_archive_dest_state_1=ENABLE
log_archive_format=%t_%s_%r.arc
log_archive_max_processes=30
# Standby role parameters
fal_server=BBMLDR
fal_client=BBMLPROD
standby_file_management=AUTO
db_file_name_convert='BBMLDR/BBMLDR','BBMLPROD/BBMLPROD'
log_file_name_convert='/../../../','/../../../'
Once the parameter file is ready, create the spfile from the new pfile:
SQL> shutdown immediate;
SQL>startup nomount pfile=’/oradisk/app01/oracle/product/10gDB/dbs/pfileBBMLPROD.ora’;
SQL> create spfile from pfile=’/oradisk/app01/oracle/product/10gDB/dbs/pfileBBMLPROD.ora’;
SQL>shutdown immediate;
SQL>startup;
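Since the pfile is hand-edited, it can help to sanity-check it before converting it back to an spfile. A rough sketch (hypothetical and deliberately simplified: it skips comments and does not handle values that continue across lines, such as the multi-line log_archive_dest_n entries) of parsing the name=value pairs:

```python
def parse_pfile(text: str) -> dict:
    """Parse simple name=value lines of an init.ora-style file.

    Simplified: skips blank/comment lines and ignores values that
    continue across multiple lines.
    """
    params = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        name, sep, value = line.partition("=")
        if sep:
            params[name.strip().lower()] = value.strip().strip("'")
    return params

sample = """
db_unique_name='BBMLPROD'
log_archive_max_processes=30
# standby role parameters
fal_server=BBMLDR
"""
print(parse_pfile(sample))
```

A quick scan like this catches typos such as a stray comma in a parameter name before the database refuses to start on the new file.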
Standby Database Steps
1. The standby database will use a control file generated on the primary database:
SQL> startup mount;
SQL> alter database create standby controlfile as '/oradisk/app01/oracle/product/10gDB/dbs/BBMLDR.ctl';
SQL> alter database open;
2. Backup the Primary Database and transfer a copy to the standby node.
Note: Backing up the primary database here simply means copying its .dbf, .log, and .tmp files. Copy those files and move them to the standby host using scp.
3. On the standby node, create the necessary directories to hold the datafiles, log files, and so on:
[/opt/oracle]# mkdir -p /oradisk/oracle/BBMLDR/datafile/
[/opt/oracle]# mkdir -p /oradisk/oracle/BBMLDR/onlinelog/
[/opt/oracle]# chown -R oracle:dba /oradisk/oracle
4. Now copy the entire database from the primary to the standby while the primary is down or in backup mode. Copying the database includes copying all .dbf, .log, and .tmp files.
5. Prepare an initialization parameter file for the standby database.
Edit the following parameters:
db_unique_name='BBMLDR'
audit_file_dest='/oradisk/app01/oracle/admin/bbmldr/adump'
background_dump_dest='/oradisk/app01/oracle/admin/BBMLDR/bdump'
core_dump_dest='/oradisk/app01/oracle/admin/BBMLDR/cdump'
user_dump_dest='/oradisk/app01/oracle/admin/BBMLDR/udump'
control_files='/oradisk/BBMLDR/controlfile/bbmldr_01.ctl','/oradisk/BBMLDR/controlfile/bbmldr_02.ctl'
db_create_file_dest='/oradisk/oracle/BBMLDR'
db_file_name_convert='/../../../bbmlprod/datafile/','/oradisk/oracle/bbmldr/datafile/'
log_file_name_convert='/../../../bbmlprod/onlinelog/','/oradisk/oracle/bbmldr/onlinelog/'
fal_server='bbmlprod'
fal_client='bbmldr'
log_archive_dest_1='LOCATION=/oradisk/oracle/bbmldr/archives/
  VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
  DB_UNIQUE_NAME=bbmldr'
log_archive_dest_2='SERVICE=bbmlprod LGWR ASYNC
  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
  DB_UNIQUE_NAME=bbmlprod'
6. Create all the required dump directories and the archived log destination:
[/opt/oracle]# mkdir -p /oradisk/app01/oracle/admin/bbmldr/adump
[/opt/oracle]# mkdir -p /oradisk/app01/oracle/admin/BBMLDR/bdump
[/opt/oracle]# mkdir -p /oradisk/app01/oracle/admin/BBMLDR/cdump
[/opt/oracle]# mkdir -p /oradisk/app01/oracle/admin/BBMLDR/udump
[/opt/oracle]# mkdir -p /oradisk/oracle/bbmldr/archives/
7. Copy the standby control file created on the primary to its destinations on the standby using scp -p:
[sblbgp01]> scp -p bbmldr.ctl drsblbgp01:/oradisk/BBMLDR/controlfile/bbmldr_01.ctl
[sblbgp01]> scp -p bbmldr.ctl drsblbgp01:/oradisk/BBMLDR/controlfile/bbmldr_02.ctl
8. Configure the listener and tnsnames to support the database on both nodes.
# ON sblbgp01
LISTENER_BBMLPROD =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = sblbgp01)(PORT = 1520)(IP = FIRST))
    )
  )
SID_LIST_LISTENER_BBMLPROD =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = BBMLPROD)
      (ORACLE_HOME = /oradisk/app01/oracle/product/10gDB)
      (SID_NAME = bbmlprod)
    )
  )
# ON drsblbgp01
LISTENER_BBMLDR =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = drsblbgp01)(PORT = 1520)(IP = FIRST))
    )
  )
SID_LIST_LISTENER_BBMLDR =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = BBMLDR)
      (ORACLE_HOME = /oradisk/app01/oracle/product/10gDB)
      (SID_NAME = BBMLDR)
    )
  )
Configure tnsnames.ora on both servers to hold entries for both databases.
# ON sblbgp01
LISTENER_BBMLPROD =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = sblbgp01)(PORT = 1520)(IP = FIRST))
    )
  )
BBMLPROD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = sblbgp01)(PORT = 1520))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = BBMLPROD)
    )
  )
BBMLDR =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = drsblbgp01)(PORT = 1520))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = BBMLDR)
    )
  )
# ON drsblbgp01
LISTENER_BBMLDR =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = drsblbgp01)(PORT = 1520)(IP = FIRST))
    )
  )
BBMLDR =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = drsblbgp01)(PORT = 1520))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = BBMLDR)
    )
  )
BBMLPROD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = sblbgp01)(PORT = 1520))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = BBMLPROD)
    )
  )
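All the tnsnames entries above share the same shape, varying only in alias, host, and service name. A small generator sketch (my own illustration, not a tool from the post) makes that pattern explicit:

```python
def tns_entry(alias: str, host: str, port: int, service_name: str) -> str:
    """Render a basic dedicated-server tnsnames.ora entry."""
    return (
        f"{alias} =\n"
        f"  (DESCRIPTION =\n"
        f"    (ADDRESS = (PROTOCOL = TCP)(HOST = {host})(PORT = {port}))\n"
        f"    (CONNECT_DATA =\n"
        f"      (SERVER = DEDICATED)\n"
        f"      (SERVICE_NAME = {service_name})\n"
        f"    )\n"
        f"  )\n"
    )

print(tns_entry("BBMLDR", "drsblbgp01", 1520, "BBMLDR"))
```

Generating the entries this way guarantees the parentheses stay balanced, which is the most common hand-editing mistake in tnsnames.ora.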
9. Start the listeners and verify with tnsping from both nodes to both databases.
On sblbgp01:
[sblbgp01]> lsnrctl start listener_bbmlprod
[sblbgp01]> tnsping bbmlprod
[sblbgp01]> tnsping bbmldr
On drsblbgp01:
[drsblbgp01]> lsnrctl start listener_bbmldr
[drsblbgp01]> tnsping bbmlprod
[drsblbgp01]> tnsping bbmldr
10. Set up the environment to support the standby database on the standby node.
Copy the password file from the primary to the standby; the SYS password must be identical:
[sblbgp01]> scp orapwbbmlprod drsblbgp01:/oradisk/app01/oracle/product/10gDB/dbs/orapwbbmldr
Set the environment variables to point to the standby database:
ORACLE_HOME=/oradisk/app01/oracle/product/10gDB
ORACLE_SID=bbmldr
PATH=$ORACLE_HOME/bin:$PATH
Start the standby database in nomount mode and generate an spfile:
[drsblbgp01]> cd $ORACLE_HOME
[drsblbgp01]> sqlplus /nolog
SQL> conn / as sysdba
SQL> startup nomount pfile='/oradisk/app01/oracle/product/10gDB/dbs/initbbmldr.ora';
SQL> create spfile from pfile='/oradisk/app01/oracle/product/10gDB/dbs/initbbmldr.ora';
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database recover managed standby database disconnect from session;
Database altered.
Check the alert logs of both the primary and the standby; they give complete information about the configuration.
Verifying that the physical standby database is performing properly
Check the archived redo logs on the standby:
SQL> show parameter db_unique_name;
SQL> select thread#, sequence#, name, first_time, next_time, completion_time, applied from v$archived_log order by first_time, thread#;
Switch logfiles on the primary:
SQL> alter system switch logfile;
SQL> archive log list
SQL> alter system switch logfile;
SQL> archive log list
Check the archived redo logs on the standby again:
SQL> select thread#, sequence#, name, first_time, next_time, completion_time, applied from v$archived_log order by first_time, thread#;
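The v$archived_log check above boils down to looking for rows whose APPLIED column is not 'YES'. A sketch of that same logic (illustrative only, run against made-up rows, not a live database):

```python
def unapplied_sequences(rows):
    """Given (thread#, sequence#, applied) tuples as returned by
    v$archived_log, return the log sequences not yet applied."""
    return sorted(seq for thread, seq, applied in rows if applied != "YES")

# Made-up sample rows: sequence 103 has shipped but not yet applied.
rows = [
    (1, 101, "YES"),
    (1, 102, "YES"),
    (1, 103, "NO"),
]
print(unapplied_sequences(rows))  # -> [103]
```

If the list keeps growing after repeated log switches on the primary, managed recovery on the standby is not keeping up and the alert logs should be checked.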
Regards,
Sowmya Bandaru (OSR)
Friday, May 28, 2010
Queries Running Very Slow
Query Running very slow:
This is a question we get often, and the immediate answer is usually to take the query's execution plan and create some indexes if necessary. A lot of the time that will not help; there are many possible causes to look at.
If a query is running very slow, collect some basic information before doing anything else. This is the first step in resolving the issue.
Information to collect when a query is running slow:
1. Network Issue:
First execute the query and check how long it takes. Has the query been slow ever since it was developed, or did it only become slow today? If it was running fine yesterday but is slow today, check network traffic and network errors first. The easiest checks are to access a shared drive somewhere on the network, or to use tnsping, traceroute, or ping, or to ask the network admin to check the packet size. If you notice slow access, contact network support. Sometimes the problem is outside the scope of the Oracle database but still hurts the database; I recently faced exactly this kind of issue.
2. Statistics and Data Changes:
If the network is good and no problems are noticed, the next step is to check for any massive data loads done the previous night, recent code changes, and when the tables were last analyzed. Check the tables in the query for proper indexes; check tables, indexes, and other objects for recent statistics; avoid database functions and full table scans where possible; create partitions if necessary for faster access; and use MERGE statements for huge updates and deletes. Check whether statistics were gathered after the data load; a simple way to gather them is DBMS_STATS.GATHER_TABLE_STATS.
3. Upgrades:
If the statistics are fine and the query is still slow, check whether any database upgrades or patches were applied recently. The same query on the same data can run much slower on a newer version after an upgrade; you may be hitting a CBO bug. Contact Oracle Support for details.
4. I/O Issue:
Sometimes queries are slow because of excessive I/O between the database and the client.
To reduce disk I/O and make physical reads and writes more efficient, first find the statements with the largest numbers of physical and logical reads. A simple way to get them:
For physical reads:
select disk_reads, sql_text from v$sqlarea where disk_reads > 10000 order by disk_reads desc;
For logical reads:
select buffer_gets, sql_text from v$sqlarea where buffer_gets > 200000 order by buffer_gets desc;
Multiple passes may be needed to cover all types of queries.
5. Memory Issue:
Pin (cache) the most heavily used PL/SQL objects in memory:
Syntax: dbms_shared_pool.keep('object_name')
A few queries that help with memory-related issues:
select name, value from gv$parameter where name in ('db_block_buffers', 'db_block_size', 'shared_pool_size', 'sort_area_size');
Check the buffer cache read hit ratio:
select (1 - (sum(decode(name, 'physical reads', value, 0)) / (sum(decode(name, 'db block gets', value, 0)) + sum(decode(name, 'consistent gets', value, 0))))) * 100 "Read Hit Ratio" from gv$sysstat;
Read Hit Ratio
98.99
A hit ratio below 90-95% is a sign of poor indexing.
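The formula behind that query is (1 - physical reads / (db block gets + consistent gets)) * 100. A quick sketch of the arithmetic with made-up figures:

```python
def read_hit_ratio(physical_reads: int, db_block_gets: int, consistent_gets: int) -> float:
    """Buffer cache read hit ratio as a percentage:
    the fraction of logical reads satisfied without a disk read."""
    logical_reads = db_block_gets + consistent_gets
    return (1 - physical_reads / logical_reads) * 100

# Made-up figures: 101 physical reads against 10,000 logical reads
print(round(read_hit_ratio(101, 5000, 5000), 2))  # -> 98.99
```

The denominator is total logical reads, so the ratio answers: what fraction of block requests were served from the buffer cache rather than disk?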
Check the Shared Pool Size:
It is important to look at the shared_pool_size for proper sizing. With a greater amount of procedures, packages and triggers being utilized with Oracle, the shared_pool_size makes up a much greater portion of the Oracle SGA. This is the memory allocated for the library and data dictionary cache. If the shared_pool_size is set too low then you will not get the full advantage of your DB_BLOCK_BUFFERS.
How to find the data dictionary cache hit ratio:
select sum(gets) "Gets", sum(getmisses) "Misses", (1 - (sum(getmisses) / (sum(gets) + sum(getmisses)))) * 100 "HitRate" from gv$rowcache;
Gets    Misses   HitRate
11000   558      96.5
This is a good ratio and does not require any further action.
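The hit rate here is gets / (gets + misses), expressed as a percentage; sketched with simple made-up numbers:

```python
def dict_cache_hit_rate(gets: int, getmisses: int) -> float:
    """Data dictionary cache hit rate (%) from gv$rowcache sums:
    the fraction of lookups that did not miss the row cache."""
    return (1 - getmisses / (gets + getmisses)) * 100

# Made-up figures: 900 hits, 100 misses
print(round(dict_cache_hit_rate(900, 100), 1))  # -> 90.0
```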
How to find library Cache Hit Ratio?
select sum(pins) Executions, sum(pinhits) "Execution Hits", ((sum(pinhits)/sum(pins)) * 100)phitrat, sum(reloads)Misses, ((Sum(pins)/(sum(pins) + sum(reloads))) * 100) hitrat from gv$librarycache;
EXECUTIONS Execution Hits PHITRAT MISSES HITRAT
---------- -------------- ---------- ---------- ----------
1126456014 1121589121 99.5679465 102177 99.9909302
How much memory is left for the shared_pool_size?
select to_number(p.value) "Shared Pool Size", s.bytes "Free Bytes", (s.bytes / to_number(p.value)) * 100 "Percent Free" from gv$sgastat s, gv$parameter p where s.name = 'free memory' and p.name = 'shared_pool_size';
Shared Pool Size   Free Bytes   Percent Free
100,000,000        82,278,960   82.278
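The percent-free figure is simply free bytes divided by the configured pool size; sketched using the sample figures from the output above:

```python
def shared_pool_percent_free(pool_size_bytes: int, free_bytes: int) -> float:
    """Percentage of shared_pool_size currently reported free
    (free memory in gv$sgastat over the configured pool size)."""
    return free_bytes / pool_size_bytes * 100

print(round(shared_pool_percent_free(100_000_000, 82_278_960), 3))  # -> 82.279
```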
Regards,
Sowmya (OSR)
This is the one which we often get, and the immediate answer would be, just go and take plan of the query and create some indexes if necessary. Lot of times this will not help you. There are so many possible reasons that we need to look.
If the query is running very slow collect some basic information before doing anything, this will be the first step to resolve this issue.
Information needs to collect when query is running slow
1. First execute the query and check how long it took? Is this query running very slow since it developed or it happened today? I mean yesterday or sometime before it was running fine but today it is very slow. The first thing you have to do is check the network traffic/Network errors for something when wrong in the network side, the best way to check is access some share drive which is located somewhere in network or use tnsping, traceroute, try pinging or ask network admin to check the packet size. If you notice any slow access then contacts network support. Sometimes this has been a problem. Which is out of scope of oracle database but created problem to the database, recently I faced the same kind of issue.
2. Network Issue:
Network is good no problems noticed then next step needs to check for any massive data loads done during last past night, are there any code changes, when was the last time the tables got analyzed, check the tables in the queries for proper indexes creation, check for tables, indexes and other objects for recent statistics, avoid using database functions, avoid using full table scan as much as possible, create partitions if necessary for faster access , use merge statements for huge updates and deletes, and so on. Check whether database statistics ran after database load. Simple way to run statistics is using DBMS_STAT.GATHER_TABLE_STATS
3. Statistics are fine. Still the query is slow. Next step needs to check is whether any database upgrades are done recently or any patches applied to higher version. Same queries with same data will run very slow in newer versions when upgrade from lower version. Then you might run with database CBO Bug. Contact Oracle support for more details regarding this bug.
4. I/O Issue:
Sometimes queries are very slow because of excessive I/O between the database and the client.
Mechanisms to reduce disk I/O and make physical reads and writes more efficient:
First find the statements with the largest numbers of physical and logical reads; a simple way to get them is:
For Physical Reads:
select disk_reads, sql_text from v$sqlarea where disk_reads > 10000 order by disk_reads desc;
For Logical Reads:
select buffer_gets, sql_text from v$sqlarea where buffer_gets > 200000 order by buffer_gets desc;
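As a variation, the two checks can be combined and normalized by execution count, which highlights statements that are expensive per run rather than merely popular (the thresholds here are arbitrary):

```sql
-- Top statements by disk reads per execution (thresholds are arbitrary).
-- GREATEST(executions, 1) guards against division by zero for
-- statements that have been parsed but never executed.
SELECT disk_reads, buffer_gets, executions,
       ROUND(disk_reads / GREATEST(executions, 1)) reads_per_exec,
       sql_text
FROM   v$sqlarea
WHERE  disk_reads > 10000
ORDER  BY reads_per_exec DESC;
```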
Multiple approaches may be needed to cover all types of queries.
5. Memory Issue:
Pin (cache) the most heavily used PL/SQL objects in memory.
Syntax: EXEC DBMS_SHARED_POOL.KEEP('object_name');
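For example, a frequently executed package can be pinned like this (the package name APPS.PKG_BILLING is hypothetical; on some installations DBMS_SHARED_POOL must first be installed by running dbmspool.sql as SYS):

```sql
-- Pin a heavily used package in the shared pool so it is not aged out.
-- The second argument 'P' indicates a package/procedure/function.
EXEC DBMS_SHARED_POOL.KEEP('APPS.PKG_BILLING', 'P');
```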
A few queries which will help you resolve memory-related issues:
select name, value from gv$parameter where name in ('db_block_buffers', 'db_block_size', 'shared_pool_size', 'sort_area_size');
Check the following:
select (1 - (sum(decode(name, 'physical reads', value, 0)) / (sum(decode(name, 'db block gets', value, 0)) + sum(decode(name, 'consistent gets', value, 0))))) * 100 "Read Hit Ratio" from gv$sysstat;
Read Hit Ratio
98.99
A hit ratio below 90-95% is a sign of poor indexing or excessive physical reads.
Check the Shared Pool Size:
It is important to size shared_pool_size properly. With more procedures, packages and triggers being used in Oracle, the shared pool makes up a much greater portion of the SGA. This is the memory allocated for the library cache and the data dictionary cache. If shared_pool_size is set too low, you will not get the full advantage of your DB_BLOCK_BUFFERS.
How to find the data dictionary cache miss ratio?
select sum(gets) "Gets", sum(getmisses) "Misses", (1 - (sum(getmisses)/(sum(gets) + sum(getmisses)))) * 100 "HitRate" from gv$rowcache;
Gets Misses HitRate
11000 558 96.5
This is a good ratio and does not require any further action.
How to find library Cache Hit Ratio?
select sum(pins) Executions, sum(pinhits) "Execution Hits", ((sum(pinhits)/sum(pins)) * 100)phitrat, sum(reloads)Misses, ((Sum(pins)/(sum(pins) + sum(reloads))) * 100) hitrat from gv$librarycache;
EXECUTIONS Execution Hits PHITRAT MISSES HITRAT
---------- -------------- ---------- ---------- ----------
1126456014 1121589121 99.5679465 102177 99.9909302
How much memory is left for shared_pool_size?
select to_number(gv$parameter.value) value, gv$sgastat.bytes, (gv$sgastat.bytes/to_number(gv$parameter.value))*100 "Percent Free"
from gv$sgastat, gv$parameter where gv$sgastat.name = 'free memory' and gv$parameter.name = 'shared_pool_size';
Shared Pool Size Free Bytes Percent Free
100,000,000 82,278,960 82.278
Regards,
Sowmya (OSR)
Wednesday, May 12, 2010
ORA-00600: Internal Error Code
Explanation: An ORA-600 error indicates that Oracle has detected an internal inconsistency or a problem it does not know how best to address. These are not necessarily bugs; they can occur for reasons such as operating system issues, I/O problems, running out of some resource, and so on.
Cause: This is the generic internal error number for Oracle program exceptions. It indicates that a process has encountered an exceptional condition.
Note: This type of error is raised from the kernel code of the Oracle RDBMS software when an inconsistency or unexpected condition is detected. It need not be a bug; it might be caused by operating system problems, lack of resources, etc.
The ORA-600 error comes with a list of arguments in square brackets. The first argument tells us where in the code the error was caught, and this is the key information for identifying the problem. The arguments can be numbers or strings.
Whenever an ORA-600 error is raised, a trace file is generated in either udump or bdump, depending on whether the error was caught in a user or a background process. The information is also written to the alert log.
Remedy: Contact Oracle Support.
• Report it as a bug if the first argument is an internal error number.
The most useful items to check are:
• Check the alert log for the instance, and for any other instances in a parallel server environment. The alert log is located in BACKGROUND_DUMP_DEST.
• Check the trace file for the failing session and other related trace files.
• Check whether anything in the system environment changed recently, such as the OS being upgraded, patches applied, database files restored, etc.
• Determine whether the problem is reproducible or a one-time occurrence.
The following is a partial list of known ORA-00600 bugs and issues. You can find the complete list and details about these bugs on Oracle Metalink.
1. ORA-00600 [729]: indicates a memory leak in the UGA. Setting diagnostic event 10262 ignores this error; that is, no ORA-00600 is produced even if a memory leak is detected.
2. ORA-00600 [16365]: indicates a half-duplex violation. Shared servers and dispatchers cannot both write to the same virtual circuit at the same time because they communicate with a half-duplex protocol (the TTC protocol).
3. ORA-00600 [kghpih:ds] and ORA-00600 [17175]: may occur in Oracle 9.2.0.6 when two processes work on the same library cache object. Fixed in Oracle 9.2.0.7 (Note 300604.1).
4. ORA-00600 [3020]: Bug 2322620 - RECOVERY HALTED ON PHYSICAL STANDBY WITH ORA-600 AND ORA-10564. ORA-600 [3020] can occur on recovery of LOB data. Fixed in 8.1.7.5; a patch is available. This error occurred during our database recovery on the primary server.
5. ORA-00600 [12333]: most probably a network error that caused bad data to be received, or the client application sent the wrong data in the network buffer.
6. ORA-00600 [12700]: indicates corruption in an index, a table, or the mapping between them. Upon this discovery, Oracle marks the whole block as corrupt and users will not have normal access to the data in the corrupted block. This can also break referential integrity and should be referred to Oracle.
7. ORA-00600 [lxhcsn]: shutdown may dump ORA-7445. Known Bug 2311529.
Conclusion: Treat this as a serious issue; contact Oracle Support or raise an SR for unknown issues.
Regards,
Sowmya (OSR)
Wednesday, April 28, 2010
Oracle Grid- Creating User Accounts
Hello,
I am going to discuss more about Oracle Grid: creating administrator users, super administrator users, creating targets, generating reports, scheduling jobs, setting alerts, setting email notification, and so on.
In this post I am going to create two administrator user accounts (warning_admin, critical_admin) and a super administrator user account (super_admin) using Oracle Grid.
Administrator User Accounts: Administrators are database users defined in the management repository who can log into Enterprise Manager to perform management tasks.
Super Administrator User Account: The Super Administrator can grant certain privileges to certain administrators to access and manage certain targets based on the requirement. Thus, the Super Administrator can divide the work intelligently among the administrators.
Creating Super Administrator User Super_Admin using Oracle Grid
Super_Admin
• Go to Setup -> click on Administrators
• Click Create Like -> provide the following information
• Name: Super_Admin
• Password:
• Confirm password:
• Check on Super Administrator
• Click Finish
Creating Administrator User Accounts Warning_Admin and Critical_Admin
Warning_Admin
Login to Grid using the Super Administrator user account Super_Admin
Create 2 User Accounts
Note: These 2 users are created just to differentiate critical alerts from warning alerts and to send email notifications to groups based on priority. It is sometimes necessary to keep your boss from getting all the warning emails, so make sure that you have 2 email groups, with your boss included in critical alerts but not in warning alerts.
Warning_Admin: This account is created for all the alerts which come under warning messages. A warning email is sent to everyone in the team except your Boss.
• Go to setup -> Administrators
• Click on Create Like -> Provide the following information
• Name: warning_admin
• Password:
• Confirm Password:
• Email:
• Don’t check on super administrator
• Click next Roles Page
• Make the role as public
• Click next to system privileges
• Check on view any target
• Click next to Target
• Browse Type-> All target type and move all to selected targets
• Click next to target privileges
• Click next to Review and click Finish
Critical_Admin
Login into Oracle Grid using Super_Admin
Create administrator user Critical_Admin
Critical_Admin: This account is created for all the alerts which come under critical messages. A critical email is sent to everyone in the team including your Boss. The critical email is .......
• Go to setup -> Administrators
• Click on Create Like -> Provide the following information
• Name: critical_admin
• Password:
• Confirm Password:
• Email:
• Don’t check on super administrator
• Click next roles Page
• Make the role as Public
• Click Next to system privileges
• Check on view any target
• Click next to Target
• Browse Type -> All target type and move all to selected targets
• Click next to target privileges
• Click next to Review and click finish
Regards,
Sowmya (OSR)
Tuesday, April 20, 2010
RMAN- Block Change Tracking
Hello,
Block Change Tracking
As data blocks change, the change tracking writer (CTWR) background process tracks the changes in a private area of memory. When these changes are committed, the block change tracking information is copied to a shared area in the large pool called the CTWR buffer, and the CTWR process writes it from the buffer to the change tracking file. Without BCT enabled, or in cases when the change tracking information cannot be used, RMAN has to read every block in the datafile during an incremental backup. Each block contains the system change number (SCN) of its last modification, and the block is copied only if that SCN is higher than or equal to the base backup SCN (to be precise, a checkpoint SCN taken right before the backup). Since usually only a handful of blocks change between incremental backups, RMAN does a lot of useless work reading blocks that are not required for the backup.
Block change tracking provides a way to identify the blocks required for backup without scanning the whole datafile. With it, RMAN need only read the blocks that are really required for the incremental backup.
How to Enable/Disable Block change tracking?
Enable
Syntax: SQL> alter database enable block change tracking; (syntax when OMF is enabled)
Or
SQL> alter database enable block change tracking using file 'os_file_name';
The USING FILE syntax allows you to define the location of the change tracking file on the OS; you can omit it if you are using OMF (Oracle Managed Files).
By default the change tracking file is created as an Oracle Managed File (OMF) in the location pointed to by the DB_CREATE_FILE_DEST parameter.
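Putting the two options together, here is a sketch of enabling BCT with an explicitly named file (the path is hypothetical; adjust for your environment):

```sql
-- Enable block change tracking with an explicitly named file;
-- REUSE overwrites an existing file of the same name.
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u01/oradata/PROD/bct.chg' REUSE;

-- Verify that tracking is now enabled:
SELECT filename, status FROM v$block_change_tracking;
```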
Disable
SQL>alter database disable block change tracking;
How to monitor the status of block change tracking?
SQL> select filename, status, bytes from v$block_change_tracking;
How can we view the size of the CTWR buffer?
SQL> select * from v$sgastat where name like 'CTWR%';
Regards,
Sowmya (OSR)