Friday 31 August 2012

11g R2 - Reducing overheads in a Data Guard Environment



Hi,

My name is Mark Tiger, creator of this blog.  I am an Oracle Certified Professional (OCP DBA 11g).

Gathering information for some DBA tasks can be time-consuming, even though the commands that you eventually need to issue can be over quite quickly.  I have gone through this process over and over again, and have decided to help other Oracle DBAs in the community.

In this blog, I will give you the details of how to carry out those tasks that typically need a lot of research before you can do them.  I will try to present the information in an easy-to-understand way.  My hope is that this will save you lots of time on research, and help to make you more productive as an Oracle DBA.  The illustrations are primarily meant for Linux, since this is a platform that enjoys preference from Oracle.  However, they are easily adaptable for versions of UNIX/AIX, Windows, etc.

11g R2 - Reducing the Overhead of Fast Incremental Backups on the Primary Database in a Data Guard Environment

RMAN can perform backups with minimal effect on the primary database and quickly recover from the loss of individual datafiles, or the entire database.  RMAN and Data Guard can be used together to streamline the administration of a Data Guard setup.

You can’t use a logical standby database to back up the primary database, because the logical standby database is not a block-for-block copy of the primary database.

About RMAN File Management in a Data Guard Configuration

RMAN uses a recovery catalog in a Data Guard environment to track the filenames of all the datafiles, online redo logs, standby redo logs, tempfiles, archived redo logs, backup sets, and image copies.

Interchangeability of Backups in a Data Guard Environment

RMAN behaves transparently across the various physical databases in a Data Guard environment by using the recovery catalog metadata.  You can back up a tablespace on a physical standby database, and restore and recover it on the primary database.  You can also back up a tablespace on a primary database, and restore and recover it on a physical standby database.  You can’t do this with logical standby databases.

Backups of standby control files and nonstandby control files are interchangeable.  You can restore a standby control file on a primary database and a primary control file on a physical standby database.

This means you can offload control file backups to one database in a Data Guard environment.  RMAN automatically updates the filenames for database files during restore and recovery at the databases.

Association of Backups in a Data Guard Environment

In the Data Guard environment, the recovery catalog associates every database file or backup file with a DB_UNIQUE_NAME.  The database that creates a file is associated with that file.

When RMAN backs up the database called standby1, the DB_UNIQUE_NAME standby1 is associated with this backup.  The backup remains associated with the database that created it, unless you associate the backup with a different database by changing its DB_UNIQUE_NAME with: CHANGE … RESET DB_UNIQUE_NAME

Accessibility of Backups in a Data Guard Environment

The accessibility of a backup is different from its association.  In a Data Guard environment, the recovery catalog considers disk backups accessible only to the database with which they are associated, whereas tape backups created on one database are accessible to all databases.  If a backup file is not associated with any database, then the row describing it in the recovery catalog shows NULL for the SITE_KEY column.  By default, RMAN associates files whose SITE_KEY is NULL with the target database.

RMAN commands such as BACKUP, RESTORE, and CROSSCHECK work on any accessible backup.  For example, for a RECOVER COPY operation, RMAN considers only image copies that are associated with the database as eligible to be recovered.

RMAN considers the incremental backups on disk and tape as eligible to recover the image copies.  In a database recovery, RMAN considers only the disk backups associated with the database, and all files on tape, as eligible to be restored.

Let’s assume that the database PROD resides on a different host from the database STANDBY1.  RMAN backs up datafile 1 on PROD to /prod/backup/datafile1.dbf, as well as a copy of it to tape.  RMAN backs up datafile 1 on the standby host to /standby/backup/datafile1.dbf, as well as a copy to tape.

If you are connected with RMAN to PROD, then you cannot use RMAN operations to manipulate the backup located at /standby/backup/datafile1.dbf.  However, if you really needed this file, RMAN would consider the backup of the same file on tape eligible to be restored.

If you don’t have a tape backup, then you can FTP a backup from the standby host to the production host, connect as TARGET to the production database, and CATALOG the backup.  Once you have catalogued the backup in the target database, that backup is associated with the target database, in this case the PROD database.
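
As a minimal sketch of that last step (the connect strings are assumptions, and the path is taken from the example above; if the copied file is a backup set piece rather than an image copy, use CATALOG BACKUPPIECE instead):

% rman target sys@prod catalog rcat_owner@catdb
RMAN> CATALOG DATAFILECOPY '/prod/backup/datafile1.dbf';
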
About RMAN Configuration in a Data Guard Environment

In a Data Guard environment, you can offload the process of backing up control files, datafiles, and archived logs to a standby system.  Doing this minimizes the effect of backups on the production system.  The backups taken in this way can be used to recover either the primary or the standby database.

RMAN uses the DB_UNIQUE_NAME initialization parameter to distinguish one database site from another.  Therefore, it is essential that the uniqueness of DB_UNIQUE_NAME be maintained in a Data Guard configuration.

Only the primary database must be explicitly registered, using the RMAN> REGISTER DATABASE command.  Issue this command after connecting to the recovery catalog, and to the primary database as the target.
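A minimal sketch, assuming hypothetical net service names for the primary database and the recovery catalog:

% rman target sys@prod catalog rcat_owner@catdb
RMAN> REGISTER DATABASE;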

You can use the RMAN CONFIGURE command to set up the various configurations in RMAN.  When you use the CONFIGURE command with the FOR DB_UNIQUE_NAME option, RMAN applies the configuration to the database site identified by the specified DB_UNIQUE_NAME.

RMAN> set dbid 1625818167;

Use this if you are not connected to RMAN as TARGET.

RMAN> configure default device type to SBT for db_unique_name jhb;

Recommended RMAN and Oracle Database Configurations

These configurations can contribute towards simplifying backup and recovery operations.

Configuration Assumptions

·         The standby database is a physical standby database, and backups are taken only on the standby database.

·         An RMAN recovery catalog is required, so that backups taken on one database server can be restored on another database server.  It is not sufficient to use only the control file as the RMAN repository, because the control file on the primary database will have no knowledge of backups taken on the standby database.

The RMAN recovery catalog organizes backup histories and recovery-related metadata in a centralized location.  The recovery catalog is configured in a database, and maintains backup metadata.  A control file has space limitations, whereas the recovery catalog does not, and can store more historical data about the backups.

Ideally, you should have a catalog server that is physically separate from the primary and standby sites.  Oracle recommends this in a Data Guard configuration, so that a disaster at either site will not affect the ability to recover the latest backups.

·         Assume that all databases are running Oracle Database 11g Release 1 or higher.

·         Oracle Secure Backup software, or third-party media management software, is configured with RMAN to make backups to tape.

Oracle Database Configurations on Primary and Standby Databases

The following Oracle Database configurations are recommended on the primary database and all the standby databases in a Data Guard environment.

·         Configure a fast recovery area for each database.  The recovery area is local to the database.

The fast recovery area is a single storage area or location, on a file system or Oracle ASM disk group, where all the files needed for a recovery reside.  These files include the control file, online redo logs, archived logs, flashback logs, and RMAN backups.

As new backups and archived logs are created in the fast recovery area, older files are automatically deleted to make room for them.  The files are deleted according to the retention policy, or once they have been backed up to tertiary storage.  Notifications can also be set up to alert the administrator when space consumption in the fast recovery area is nearing its predefined limit.  The DBA can then take an action, such as increasing the recovery area space limit, adding disk hardware, or decreasing the retention period.

You can set the fast recovery area with these initialization parameters:

DB_RECOVERY_FILE_DEST = <mount point or Oracle ASM Disk Group>

DB_RECOVERY_FILE_DEST_SIZE =  <disk space quota>
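
For example, a sketch with assumed values (size the quota to your own backup and archiving volume):

DB_RECOVERY_FILE_DEST='/u02/fast_recovery_area'
DB_RECOVERY_FILE_DEST_SIZE=100G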

·         Use a server parameter file (SPFILE), so that it can be backed up; this saves the instance parameters in the backup.

·         Enable Flashback Database on the primary and standby databases.

When Flashback Database is enabled, Oracle Database maintains flashback logs in the fast recovery area.  These logs can be used to roll the database back to an earlier point in time, without requiring a complete restore.
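
A minimal sketch of enabling it, assuming the fast recovery area above is already configured:

SQL> alter database flashback on;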

RMAN configurations at the Primary Database

You can set a number of persistent RMAN configuration settings for each database in the Data Guard environment, in order to simplify the ongoing use of RMAN in the environment.  You can configure the backup retention policy, default destinations for backups to tape or disk, default backup device type, etc.  You can use the CONFIGURE command to set and change RMAN configurations.

These configurations are recommended at the Primary database site:

·         Connect RMAN to the primary database as target, and to the recovery catalog.

·         Configure the retention policy for the database in terms of n days.

RMAN> configure retention policy to recovery window of <n> days;

old RMAN configuration parameters:

CONFIGURE RETENTION POLICY TO REDUNDANCY 2;

new RMAN configuration parameters:

CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 5 DAYS;

new RMAN configuration parameters are successfully stored

This configuration lets you keep the backups necessary to perform database recovery to any point in time within the specified number of days.

Use the DELETE OBSOLETE command to delete any backups that are not required to perform recovery within the specified number of days.
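
For example:

RMAN> DELETE OBSOLETE;

Add the NOPROMPT keyword (DELETE NOPROMPT OBSOLETE) to skip the confirmation prompt.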

·         Use the CONFIGURE ARCHIVELOG DELETION POLICY command, to specify when the archived logs can be deleted.

If you want to delete logs after ensuring that they were shipped to all standby destinations:

RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO SHIPPED TO ALL STANDBY;

If you want to delete logs only after ensuring that they were applied to all standby destinations, then use the following configuration.

RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED TO ALL STANDBY;

·         Configure the connect string for the primary database and all the standby databases, so that RMAN can connect remotely and perform resynchronization when the RESYNC CATALOG FROM DB_UNIQUE_NAME command is used.

When you connect to the target database, you must provide a net service name.  This requirement applies even if the other database instance, from which the resynchronization is done, is on the local host.

The target and remote instances must use the same SYSDBA password, which means that both instances must already have password files.  You can create the password file with a single password, so you can start all the database instances with that password file.
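
A minimal sketch of creating such a password file with the orapwd utility (the file name follows the orapw<SID> convention; the SID and placeholder password here are assumptions):

% orapwd file=$ORACLE_HOME/dbs/orapwpretoria password=<sys_password>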

If the TNS alias used to connect to the standby database jhb is jhb_connect_str, you can use the following command to configure the connect identifier for the jhb database site:

RMAN> configure db_unique_name jhb connect identifier 'jhb_connect_str';

'jhb_connect_str' does not include a username and password.  It contains only the Oracle Net service name, which can be used from any database site to connect to the jhb database site.

After the connect identifiers are configured for all the standby databases, you can verify the list of standbys by using:

RMAN> LIST DB_UNIQUE_NAME OF DATABASE;

RMAN configurations at a Standby Database Where Backups are Performed

·         Connect RMAN to the standby database (where backups are done), as target, and connect to the recovery catalog.

·         Enable automatic backup of the control file and the server parameter file:

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;

·         Skip backing up datafiles for which a valid backup with the same checkpoint already exists:

RMAN> CONFIGURE BACKUP OPTIMIZATION ON;

·         Configure the tape channels to create backups as required by media management software:

RMAN> CONFIGURE CHANNEL DEVICE TYPE SBT PARMS '<channel parameters>';

·         Specify when the archived logs can be deleted with:

RMAN> CONFIGURE ARCHIVELOG DELETION POLICY ...

Because the logs are backed up at the standby site, it is recommended that you configure the BACKED UP option for the log deletion policy.
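
For example, a sketch requiring one tape backup before deletion (adjust the count and device type to your media management setup):

RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DEVICE TYPE SBT;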

RMAN configurations at a Standby where Backups are not Performed

·         Connect RMAN to the standby database as target, and to the recovery catalog.

·         Enable automatic deletion of archived logs once they are applied at the standby database:

RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;

MarkTiger01@gmail.com

P.S. I am busy preparing Nine books on preparing for the Oracle 11g R2 OCM DBA exam.  Watch this spot for details on the books, as they start becoming available.

Thursday 30 August 2012

11g R2 – Apply Services



Apply services provide transactionally consistent access to the data, by automatically applying redo to the standby database in order to maintain synchronization with the primary database.

By default, apply services will wait for a standby redo log file to be archived before applying the redo that it contains.  You can also enable real-time apply, which allows apply services to apply the redo in the current standby redo log file, as it is being filled.

Apply services use these methods to maintain physical and logical standby databases:
·         Redo Apply (physical standby databases only)
Uses media recovery to keep the primary and physical standby databases synchronized.
·         SQL Apply (logical standby databases only)
Reconstitutes SQL statements from the redo received from the primary database, and then executes these statements against the logical standby database.
·         Each method can run with real-time apply or delayed apply, which are important configuration considerations.

Apply Services Configuration Options

Using Real-Time Apply to Apply Redo Data Immediately

If the real-time apply feature is enabled, then apply services can apply redo data as it is received.  There is no need to wait for the current standby redo log file to be archived.  This results in faster switchover and failover times, because the standby redo log files will already have been applied by the time the failover or switchover is initiated.
Real-time apply requires the standby database to be configured with a standby redo log, and requires the standby database to be in ARCHIVELOG mode.
Enable the real-time apply feature like this:
·         On a physical standby database:
SQL> alter database recover managed standby database using current logfile...;
·         On a logical standby database:
SQL> alter database start logical standby apply immediate...;

Consider a Data Guard configuration with a local destination and a standby destination.  As the Remote File Server (RFS) process writes the redo data to standby redo log files on the standby database, apply services can recover redo from the standby redo log files as they are being filled.

Specifying a Time Delay for the Application of Archived Redo Log Files

If you define a delay for a destination that has real-time apply enabled, the delay is ignored.

There are cases where you may want to create a time lag, or delay, between the time when the redo data is received from the primary site and when it is applied to the standby database.  You specify this time delay in minutes, typically to protect the standby database from the application of corrupted or erroneous data.  The DELAY interval is measured from the time that the redo data is completely archived at the standby destination.

You set the delay using the DELAY=<minutes> attribute of the LOG_ARCHIVE_DEST_n initialization parameter.  This delays the applying of the archived redo log files to the standby database.  By default there is no time delay.  If you specify the DELAY attribute without specifying a value, then the default delay interval is 30 minutes.
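
For example, a sketch of a four-hour delay on a standby destination, reusing the jhb example from elsewhere in this blog:

LOG_ARCHIVE_DEST_2='SERVICE=jhb ASYNC DELAY=240
                    VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
                    DB_UNIQUE_NAME=jhb'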

You can also cancel a specified delay interval in the following ways; both result in apply services immediately beginning to apply the archived redo log files to the standby database:
·         Physical standby database: use the NODELAY keyword in the RECOVER MANAGED STANDBY DATABASE clause:
SQL> alter database recover managed standby database nodelay...;
·         Logical standby database:
SQL> alter database start logical standby apply nodelay...;

Using Flashback Database as an Alternative to Setting a Time Delay

Instead of setting up an apply delay, you can use Flashback Database to recover from the application of corrupted or erroneous data to the standby database.  Flashback Database can quickly flash back the standby database to an arbitrary point in time.
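
A minimal sketch on a physical standby, assuming Flashback Database is enabled there and the standby is mounted (stop Redo Apply first; the one-hour window is an assumption):

SQL> alter database recover managed standby database cancel;
SQL> flashback database to timestamp systimestamp - interval '1' hour;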

Applying Redo Data to Physical Standby Databases

Starting Redo Apply

To start apply services on a physical standby database, ensure that the physical standby database is started and mounted, and then start Redo Apply.
·         To start Redo Apply and run it in the foreground:
SQL> alter database recover managed standby database;
In this case, control is not returned to the command prompt until recovery is cancelled by another session.
·         To start the Redo Apply in the background, you must include the DISCONNECT keyword in the statement:
SQL> alter database recover managed standby database disconnect;
This will create a detached server process and immediately return control to the user.  While the managed recovery process continues in the background, the SQL*Plus session can disconnect or continue performing other tasks.
·         To start the real-time apply, include the USING CURRENT LOGFILE clause in the statement:
SQL> alter database recover managed standby database using current logfile;

Stopping Redo Apply

To stop the redo apply to the standby database:
SQL> alter database recover managed standby database cancel;

Monitoring Redo Apply on Physical Standby Databases

You can manually query the data dictionary views, or use Enterprise Manager, to monitor the progress of Redo Apply.

·         V$database
You can get the data protection mode, the data protection level, the database role, and the switchover status for a primary, physical standby or snapshot standby database.
SQL> select protection_mode, protection_level, database_role role, switchover_status
     from v$database;

The following displays the fast-start failover status:
SQL> SELECT FS_FAILOVER_STATUS "FSFO STATUS",
            FS_FAILOVER_CURRENT_TARGET TARGET,
            FS_FAILOVER_THRESHOLD THRESHOLD,
            FS_FAILOVER_OBSERVER_PRESENT "OBSERVER PRESENT"
     FROM V$DATABASE;

·         V$managed_standby:
To display the redo apply and redo transport status on a physical standby database.
SQL> select process, status, thread#, sequence#, block#, blocks from v$managed_standby;

·         V$archived_log
Information about archived redo log files that have been received by a physical or snapshot standby database from a primary database:
SQL> select thread#, sequence#, first_change#, next_change# from v$archived_log;

·         V$log_history
Archived log history information; the sequence# is a useful diagnostic tool.
SQL> select thread#, sequence#, first_change#, next_change# from v$log_history;

·         V$dataguard_status
Messages generated by Data Guard: events that caused a message to be written to the alert log or to a server process trace file.
SQL> select message from v$dataguard_status;

·         V$archive_dest
Shows the status of each redo transport destination and, for redo transport destinations that are standby databases, the SCN of the last primary database redo applied at that standby database.
SQL> select dest_id, status, applied_scn from v$archive_dest where target = 'STANDBY';

Applying Redo Data to Logical Standby Databases


Starting SQL Apply

The logical standby database needs to be open.
SQL> alter database start logical standby apply;

To start real-time apply on the logical standby database, in order to immediately apply redo data from the standby redo log files on the logical standby database, you need to include the IMMEDIATE keyword in the statement.
SQL> alter database start logical standby apply immediate;

Stopping SQL Apply on a Logical Standby Database

To stop SQL Apply, you can enter this statement:
SQL> alter database stop logical standby apply;

When you issue this statement, SQL Apply waits until it has committed all complete transactions that were in the process of being applied.  So the command may not stop the SQL Apply processes immediately; you may have to wait for it to finish.

Monitoring SQL Apply on Logical Standby Databases

You can monitor the logical standby database using Enterprise Manager, or by manual methods.
·         Dba_logstdby_events:  This view records interesting events that occurred during the operation of SQL Apply.  By default the view records the most recent 10,000 events.  You can change the number of events recorded with the PL/SQL procedure DBMS_LOGSTDBY.APPLY_SET().

Errors that cause SQL Apply to stop are recorded in this view.  Such events are also recorded in the alert log; you can search the alert log for the keyword "LOGSTDBY", because this keyword is recorded with any related entry.  When querying the view, order by EVENT_TIME, COMMIT_SCN, CURRENT_SCN.  This will ensure that the events are ordered in the proper way.

SQL> alter session set nls_date_format = 'DD-MON-YY HH24:MI:SS';
SQL> column status format a60
SQL> select event_time, status, event
     from dba_logstdby_events
     order by event_time, commit_scn, current_scn;

·         DBA_LOGSTDBY_LOG
This view provides dynamic information about the archived logs being processed by SQL Apply.
SQL> select file_name, sequence# as seq#, first_change# as f_scn#,
     next_change# as n_scn#, timestamp,
     dict_begin as beg, dict_end as "END",
     thread# as thr#, applied
     from dba_logstdby_log
     order by sequence#;

·         V$dataguard_stats
This view provides statistics related to the failover characteristics of the logical standby database:
o   The time to failover
o   How current the committed data in the logical standby database is
o   What the potential data loss will be in the event of a disaster
SQL> select name, value, unit from v$dataguard_stats;

·         V$logstdby_process
Information about the current state of the various processes involved with SQL Apply
o   Identifying information (sid | serial# | spid)
o   SQL Apply process: COORDINATOR, READER, BUILDER, PREPARER, ANALYZER, or APPLIER (type)
o   Status of the processes current activity (status_code | status)
o   Highest redo record processed by this process (high_scn)
SQL> select sid, serial#, spid, type, high_scn from v$logstdby_process;

·         V$logstdby_progress
Detailed information regarding the progress made by SQL Apply
o   SCN and time at which all transactions that have been committed on the primary database have been applied to the logical standby database (applied_scn, applied_time)
o   SCN and time at which SQL Apply would begin reading redo records (restart_scn, restart_time)
o   SCN and time of the latest redo record received on the logical standby database (latest_scn, latest_time)
o   SCN and time of the latest record processed by the BUILDER process (mining_scn, mining_time)
SQL> select applied_scn, latest_scn, mining_scn, restart_scn from v$logstdby_progress;

·         V$logstdby_state
This view provides a synopsis of the current state of SQL Apply, including:
o   The DBID of the primary database (primary_dbid)
o   The logminer session ID allocated to SQL Apply (session_id)
o   Whether or not SQL Apply is applying in real time (realtime_apply)
SQL> select * from v$logstdby_state;

·         V$logstdby_stats
This view displays statistics, current state, and status information related to SQL Apply.  No rows are returned from this view when SQL Apply is not running.  This view is only meaningful in terms of a logical standby database.
SQL> select substr(name, 1, 40) as name,
     substr(value, 1, 32) as value
     from v$logstdby_stats;



Wednesday 29 August 2012

11g R2 – Step by step for Creating a Physical Standby Database



Create a Backup Copy of the Primary Database Datafiles

Oracle recommends that you create the backup using RMAN.  However, any backup copy of the primary database can be used, provided that you have the archived redo logs needed to completely recover the database.
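
A minimal sketch of such a backup, with RMAN connected to the primary database as target (assumes a configured fast recovery area or other default backup destination):

RMAN> backup database plus archivelog;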

Create a Control File for the Standby Database

If you are using RMAN, then you can create a standby control file with the RMAN utility.  Otherwise, shut the database down cleanly and mount it:
SQL> shutdown immediate
SQL> startup mount

Then you can create the control file for the standby database, and open the database again.
SQL> alter database create standby controlfile as '/u03/jhb.ctl';
SQL> alter database open;

You need a standby control file, because you can’t use the same control file for both the primary and standby databases.

Create a Parameter File for the Standby Database

Firstly, you need to create a parameter file from the server parameter file.  Secondly, you need to modify that parameter file for the standby database.

SQL> create pfile from spfile;
Or, to keep it specific for the standby database:
SQL> create pfile='/u03/initjhb.ora' from spfile;

The pfile created this way will then be modified and used as the parameter file for the standby database.
Most of the parameters in the parameter file will be appropriate for the physical standby database, but some parameters need to be modified.  (The values shown below are the ones to modify.)

DB_NAME=pretoria
DB_UNIQUE_NAME=jhb
LOG_ARCHIVE_CONFIG='DG_CONFIG=(pretoria,jhb)'
CONTROL_FILES='/arch1/jhb/control1.ctl','/arch2/jhb/control2.ctl'
DB_FILE_NAME_CONVERT='pretoria','jhb'
LOG_FILE_NAME_CONVERT='/arch1/pretoria/','/arch1/jhb/','/arch2/pretoria/','/arch2/jhb/'
LOG_ARCHIVE_FORMAT=log%t_%s_%r.arc
LOG_ARCHIVE_DEST_1='LOCATION=/arch1/jhb/
                    VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
                    DB_UNIQUE_NAME=jhb'
LOG_ARCHIVE_DEST_2='SERVICE=pretoria ASYNC
                    VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
                    DB_UNIQUE_NAME=pretoria'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
STANDBY_FILE_MANAGEMENT=AUTO
FAL_SERVER=pretoria

Make sure that the COMPATIBLE initialization parameter is set to the same value on both the primary and standby databases.  If the values differ then the redo transport services may not be able to transmit redo data from the primary database to the standby database.

To examine and verify that all the parameters have the correct settings:
SQL> show parameters

The recommended settings for the standby database are summarized below, parameter by parameter.

DB_UNIQUE_NAME
Specify a unique name for this database.  This name stays with the database and does not change, even if the primary and standby databases reverse roles.

CONTROL_FILES
Specify the path names of the control files on the standby database.  It is recommended that a second copy of the control file is available, so an instance can be easily restarted after copying the good control file to the location of the bad control file.

DB_FILE_NAME_CONVERT
Specify the path name and filename location of the primary database datafiles, followed by the standby location.  This parameter converts the path names of the primary database datafiles to the standby datafile path names.  If the standby database is on the same system as the primary database, or if the directory structure where the datafiles are located on the standby site is different from the primary site, then this parameter is required.

LOG_FILE_NAME_CONVERT
Specify the location of the primary database online redo log files, followed by the standby location.  This parameter converts the path names of the primary database log files to the path names on the standby database.  If the standby database is on the same system as the primary database, or if the directory structure where the log files are located on the standby system is different from the primary system, then this parameter is required.

LOG_ARCHIVE_DEST_n
Specify where the redo data is to be archived.
·         LOG_ARCHIVE_DEST_1 archives redo data received from the primary database to archived redo log files in /arch1/jhb/.
·         LOG_ARCHIVE_DEST_2 is currently ignored, because this destination is valid only for the primary role.  If a switchover occurs and this instance becomes the primary database, then it will transmit redo data to the remote pretoria destination.
If a fast recovery area was configured with the DB_RECOVERY_FILE_DEST initialization parameter, and you have not explicitly configured a local archive destination with the LOCATION attribute, then Data Guard automatically uses the LOG_ARCHIVE_DEST_1 initialization parameter as the default destination for local archiving.

FAL_SERVER
Specify the Oracle Net service name of the FAL server (typically this is the database running in the primary role).  When the jhb database is running in the standby role, it uses the pretoria database as the FAL server from which to fetch (request) missing archived redo log files, if pretoria is unable to send the missing log files automatically.



Always review all the initialization parameters to check whether there are any additional parameters that need to be modified.  Check the dump destination parameters, etc.

Copy Files from the Primary System to the Standby System

Use operating system commands to copy all the needed files from the primary database system to the standby system:
·         Backup of the datafiles
·         Standby control file
·         Initialization parameter file

Set up the Environment to Support the Standby Database

You need to follow a number of steps to set up the Standby Database environment.
1.       If you are on a Windows server, then you need to create a Windows-based service, using the ORADIM utility:
> oradim -NEW -SID jhb -STARTMODE manual
2.       If the primary database has a remote login password file, copy it to the appropriate directory on the physical standby database system.  Note that the password file must be re-copied each time the SYSDBA or SYSOPER privilege is granted or revoked and whenever the login password of a user with these privileges is changed.  This step is optional if OS authentication is used for administrative users and if SSL is used for redo transport authentication.
3.       Configure the listeners for the primary and standby databases.  On the primary and standby sites, use Oracle Net Manager to configure a listener for the databases.  Then restart the listeners to pick up the new definitions:
% lsnrctl stop <LISTENER>
% lsnrctl start <LISTENER>
4.       Create an Oracle Net service name, on both the primary and standby sites, that will be used by redo transport services.  This Oracle Net service name must resolve to a connect descriptor that uses the same protocol, host address, port, and service that you specified when you configured the listeners for the primary and standby databases.  The connect descriptor must specify a dedicated server.  (A sketch of such a service name definition appears after this list.)
5.       The standby database needs to run with an SPFILE
SQL> create spfile from pfile='/u03/initjhb.ora';
6.       Copy the primary database encryption wallet to the standby database system.  The standby database must then also be configured to use this wallet.  Whenever the master encryption key is changed on the primary database, the database encryption wallet must be copied from the primary database system to the standby database system again.
Encrypted data in a standby database cannot be accessed unless the standby database is configured to point to a database encryption wallet or hardware security module containing the current master encryption key from the primary database.
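
A minimal sketch of the Oracle Net service name from step 4, as a tnsnames.ora entry (the host name and port are assumptions; adjust to your environment):

jhb_connect_str =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby-host)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = jhb)
    )
  )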


Start the Physical Standby Database

There are a few steps to carry out in order to start the physical standby database and begin applying redo data to it.
·         We need to start the physical standby database in mounted mode.  Remember that we used a standby control file to create the standby database, so the standby database should already be in standby mode.
SQL> startup mount
·         The second step is to prepare the standby database to receive and archive data from the primary database.
o   Creating and Managing a Standby Redo Log: The synchronous and asynchronous redo transport modes require that a redo transport destination have a standby redo log.  Standby redo logs store redo data from another database, such as the primary database.  Standby redo logs are managed in the same way as online redo logs.  Redo received from the primary database is written to the current standby redo log group by an RFS process.  When a log switch occurs on the source database, a corresponding switch occurs on the standby database, and incoming redo is then written to the next standby redo log group.  The previously used standby redo log group is then archived by an ARCn process.
The sequential writing to redo log groups, and the archiving of redo log groups, at the source or primary database is mirrored at all the standby databases.
There must always be at least one more standby redo log group than the number of redo log groups at the source or primary database.  You can determine the size of each log file and the number of log groups in the redo log:
SQL> select group#, bytes from v$log;
You can determine the size and number of standby redo log groups.
SQL> select group#, bytes from v$standby_log;

Oracle recommends that a standby redo log be created on the primary database in a Data Guard configuration.  This way the database is immediately ready to receive redo data following a switchover to the standby role.
SQL> alter database add standby logfile ('/orac/logfiles/standby_log1.rdo') size 500M;

If the source database is an Oracle RAC or Oracle RAC One Node database, then query the v$log view to determine how many threads exist.  You can then specify the thread numbers when adding the standby redo logs.
SQL> alter database add standby logfile thread 1 size 500m;
SQL> alter database add standby logfile thread 2 size 500m;

Remember that whenever you add an online redo log group to the primary database, you must also add a corresponding standby redo log group to the primary database and to each standby database.  If you don't do this, the standby database may become unsynchronized after a log switch.  If the primary database is operating in maximum protection mode, this could even cause the primary database to shut down.

o   Configuring Standby Redo Log Archival:
Firstly if archiving is not enabled, you need to put the database in ARCHIVELOG mode and enable archiving.  The database has to be in ARCHIVELOG mode for standby redo log archival to be performed.
SQL> shutdown immediate
SQL> startup mount;
SQL> alter database archivelog;
To verify that you are in archivelog mode:
SQL> archive log list

Secondly, you can set up standby redo log archival to a fast recovery area.
Set the LOCATION attribute of a LOG_ARCHIVE_DEST_n parameter to USE_DB_RECOVERY_FILE_DEST.
Set the VALID_FOR attribute of the same LOG_ARCHIVE_DEST_n parameter to a value that allows standby redo log archival.

LOG_ARCHIVE_DEST_2='LOCATION=USE_DB_RECOVERY_FILE_DEST
                    VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)'
LOG_ARCHIVE_DEST_STATE_2=ENABLE

Oracle recommends the use of the fast recovery area, because it simplifies the management of archived redo log files.
Thirdly, set up standby redo log archival to a local file system location.
Set the LOCATION attribute of a LOG_ARCHIVE_DEST_n parameter to a valid pathname.
Set the VALID_FOR attribute for the same LOG_ARCHIVE_DEST_n parameter to a value that allows standby redo log archival.

LOG_ARCHIVE_DEST_2='LOCATION=/U03/ARCHIVE
                    VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)'
LOG_ARCHIVE_DEST_STATE_2=ENABLE

o   Cases where Redo is written directly to an Archived Redo Log File:
This can occur if a standby redo log group is not available, or if the redo was sent to the standby database to resolve a redo gap.

When this situation occurs, the redo is written to the location specified by the LOCATION attribute of the LOG_ARCHIVE_DEST_n parameter that is valid for archiving redo received from the primary or source database.  The value of the LOG_ARCHIVE_DEST_n parameter is determined when the standby database is mounted, and also every time a modification is made to the LOG_ARCHIVE_DEST_n parameter.
·         Step 3 is to create online redo logs on the standby database.
This step is optional, because a standby database does not need online redo logs configured in order to operate.  However, when it transitions to the primary role, it will need the online redo logs in order to operate.  So it is best practice to create a set of online redo logs that match the primary database's redo logs, so that the standby database is ready to transition.

·         Step 4 is to start the redo apply on the standby database.
Once everything is in place you can start applying the redo data.  It is good practice to include a clause to disconnect from the session, so that the redo apply can run in a background session.  You can also include the USING CURRENT LOGFILE clause, so that the redo can start as soon as it is received.

SQL> alter database recover managed standby database using current logfile disconnect from session;

Verify the Physical Standby Database is Performing Properly

You have set up the physical standby database and configured the redo transport services.  Now you can verify that the database modifications are being successfully transmitted from the primary database to the standby database.

To check that redo data is being received on the standby database, first identify the existing archived redo log files on the standby database.  Then force a log switch on the primary, which will also cause the current online redo log to be archived, and check the standby database again to validate the results.  Here it is step by step:
Step 1: On the standby database, query the v$archived_log view.
SQL> select sequence#, first_time, next_time from v$archived_log order by sequence#;
For example, we may see sequence numbers 8, 9, and 10.

Step 2: On the primary database, we force a log switch, which will also result in the archival of the current online redo log file group.
SQL> alter system switch logfile;

Step 3: Now we can go across to the standby database again and verify that the redo data was received and archived there.
SQL> select sequence#, first_time, next_time from v$archived_log order by sequence#;
Now we see, for example, the sequence# numbers 8, 9, 10, 11.
This shows us that the archived redo log files are available to be applied to the physical standby database.

Step 4: Now we verify that the received redo has been applied to the standby database.
SQL> select sequence#, applied from v$archived_log order by sequence#;

The value of the "APPLIED" column will be "YES" if that log has already been applied, or "IN-MEMORY" if it has not yet been applied.

Post Creation Steps

Now that your Data Guard configuration is up and running you can look at some additional aspects.
·         By default, a Data Guard configuration runs in maximum performance mode.  You may want to change this to another mode; for example, you may prefer maximum protection mode.
·         Enable Flashback Database: Flashback Database enables you to return the database to its state at any time in the recent past.  This works much faster than a point-in-time recovery and does not need datafiles etc. to be restored from backup.  You can flash back the database to a time just before a failover occurred, and avoid having to rebuild the primary database.  In a Data Guard environment you can enable Flashback Database on the primary database, on the standby database, or on both.  (See the sketch after this list.)
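
A minimal sketch of both post-creation steps (the protection mode change is run on the primary and, for maximum availability or protection, requires a synchronous redo transport destination; Flashback Database requires a configured fast recovery area):

SQL> alter database set standby database to maximize availability;
SQL> alter database flashback on;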



