Oracle Solaris Cluster Geographic Edition Data Replication Guide for Oracle Data Guard (Oracle Solaris Cluster 4.0)
Initially Configuring Oracle Data Guard Software
This section describes the initial steps that you need to perform to configure Oracle Data Guard replication in the Geographic Edition product.
Note - The steps in this document that describe how to use Oracle Data Guard tools and commands, such as dgmgrl, are intended for illustration only. Consult your Oracle Data Guard documentation to determine the detailed procedures that you need to follow to satisfy the particular needs of your environment.
The example protection group, sales-pg, in this section has been configured in a partnership that consists of two (partner) clusters, cluster-paris and cluster-newyork. An Oracle RAC database, which is managed and monitored by an individual Oracle database-server resource group on each cluster, is shadowed by the shadow Oracle database-server resource group, mysales_com-rac-proxy-svr-shadow-rg. The application data is contained in the sales database and replicated by Oracle Data Guard as part of the mysales.com Oracle Data Guard Broker configuration.
The shadow Oracle database-server resource group, mysales_com-rac-proxy-svr-shadow-rg, and the Oracle Data Guard Broker configuration, mysales.com, are present on both the cluster-paris and the cluster-newyork clusters. However, the name of the Oracle database-server resource group that they shadow might differ between the cluster-paris and cluster-newyork clusters. The sales-pg protection group protects the application data by managing the replication of data between the cluster-paris and the cluster-newyork clusters.
This section provides the following information:
Oracle Data Guard Broker Configurations
How to Set Up Your Primary Database
How to Configure the Primary Database Listener and Naming Service
How to Prepare Your Standby Database
How to Configure the Standby Database Listener and Naming Service
How to Start and Recover Your Standby Database
How to Verify That Your Configuration Is Working Correctly
How to Complete Configuring and Integrating Your Standby Oracle RAC Database
How to Complete Configuring and Integrating Your Standby HA for Oracle Database
How to Create and Enable an Oracle Data Guard Broker Configuration
To define Oracle Data Guard Broker configurations, you need to determine the following information:
The name of the Oracle Data Guard Broker configuration, such as mysales.com, being replicated between the cluster-paris and cluster-newyork clusters.
The unique database names that are taking part in the replication, such as sales on the cluster-paris cluster, and salesdr on the cluster-newyork cluster.
The Oracle service names for these databases, such as sales-svc on the cluster-paris cluster and salesdr-svc on the cluster-newyork cluster. These names are held in the tnsnames.ora files in the ${ORACLE_HOME}/network/admin directory of the nodes that are hosting the Oracle database that is being replicated, or in the Oracle naming service directory.
The database standby type for the Oracle Data Guard Broker configuration, which you set to logical, physical, or snapshot.
The replication mode for the Oracle Data Guard Broker configuration, which you set to MaxPerformance, MaxAvailability, or MaxProtection.
After you configure Oracle Data Guard between a pair of primary and standby databases, you create an Oracle Data Guard Broker configuration by using the ${ORACLE_HOME}/bin/dgmgrl command to define the properties of the named replication. You can use this command to set and to retrieve the previously listed Oracle Data Guard Broker properties.
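For example, once the Broker configuration exists, you might retrieve its properties and change the protection mode interactively, as in the following sketch. The sketch assumes the example names used in this section (the mysales.com configuration, the sales database, and the sales-svc net service name) and an existing Broker configuration; adapt it to your environment.

oracle (phys-paris-1)$ dgmgrl sys/sysdba_password@sales-svc
DGMGRL> show configuration;
DGMGRL> show database verbose sales;
DGMGRL> edit configuration set protection mode as maxavailability;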
You also need to determine the names of the Oracle database-server resource groups that manage the Oracle databases on each cluster. You configure these names by using the data service configuration wizard through the clsetup command. Alternatively, follow the instructions in Registering and Configuring HA for Oracle in Oracle Solaris Cluster Data Service for Oracle Guide or Appendix D, Command-Line Alternatives, in Oracle Solaris Cluster Data Service for Oracle Real Application Clusters Guide.
Of the Oracle Data Guard Broker configuration properties, you can change only the Protection Mode property with the Geographic Edition software.
You cannot use the Geographic Edition software to modify other Oracle Data Guard Broker properties in the configuration, such as the DelayMins, MaxFailure, MaxConnections, and NetTimeout properties. You must adjust these properties manually, either by using the Oracle Data Guard Broker command-line interface (dgmgrl) or by modifying the appropriate database parameters that are held in the spfile server parameter file or the init${SID}.ora file through SQL*Plus. Changing the standby_type property with the Geographic Edition software does not convert the configuration between the physical and snapshot standby states. Therefore, always set the standby_type property to a value that matches the standby state that the database currently has configured. Otherwise, the configuration experiences probe and validate errors.
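For illustration only, the following sketch shows how such properties might be adjusted manually through dgmgrl, using the example salesdr standby database from this section. The property values shown are arbitrary examples, not recommendations; consult the Oracle Data Guard Broker documentation for values that suit your environment.

oracle (phys-paris-1)$ dgmgrl sys/sysdba_password@sales-svc
DGMGRL> show database verbose salesdr;
DGMGRL> edit database salesdr set property NetTimeout=30;
DGMGRL> edit database salesdr set property DelayMins=0;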
The Geographic Edition software manages Oracle Data Guard Broker configuration role changes during switchover and takeover operations.
For more information about the Oracle Data Guard Broker configuration, refer to the Oracle Data Guard Broker documentation.
In the following steps, the primary cluster is called cluster-paris (nodes phys-paris-1 and phys-paris-2), and the standby cluster is called cluster-newyork (phys-newyork-1 and phys-newyork-2). The suffix -crs is appended to the Oracle Clusterware virtual IP host names.
The primary database on cluster-paris is called sales and has instances sales1 and sales2. The standby database on cluster-newyork is called salesdr and has instances salesdr1 and salesdr2. The suffix -svc is appended to each net naming service name for each of the databases and individual instances, for example, sales-svc or sales1-svc.
Note - Although the following example procedure uses an Oracle 10g RAC database, the principles are the same for an Oracle 11g RAC database. This release of Geographic Edition software supports Oracle version 11.2.0.3 or later.
Before You Begin
Ensure that you have edited your Oracle user .profile or .cshrc file to set the correct Oracle SID, ORACLE_HOME, and PATH environment variables for the local Oracle RAC database instance. Unless otherwise stated, you only need to run the commands from a node in the primary cluster that hosts a protected database instance.
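For example, a Bourne shell .profile fragment on phys-paris-1 might contain entries similar to the following. The ORACLE_HOME path shown is an assumption based on the examples later in this section; substitute the values for your installation and for the local instance on each node.

# Example settings only; adjust ORACLE_SID, ORACLE_HOME, and PATH for this node
ORACLE_SID=sales1
ORACLE_HOME=/oracle/oracle/product/10.2.0/db_1
PATH=${ORACLE_HOME}/bin:${PATH}
export ORACLE_SID ORACLE_HOME PATH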
phys-paris-1# getent hosts phys-paris-1-crs
10.11.112.41    phys-paris-1-crs
…
Use either the Oracle Database Configuration Assistant (dbca) or the SQL*Plus utility.
oracle (phys-paris-1)$ cd ${ORACLE_HOME}/dbs
oracle (phys-paris-1)$ ls -l orapwsales1
lrwxrwxrwx   1 oracle   oinstall   25 November  2 02:06 orapwsales1 -> /oradata/SALES/orapwsales
Oracle Data Guard needs a consistent Oracle password file on all participating nodes in the primary and standby clusters.
If a password file does not exist, create one as follows:
oracle (phys-paris-1)$ orapwd file=${ORACLE_HOME}/dbs/orapwsales1 \
password=sysdba_password
You can then move this file to a location on shared storage and create a symbolic link to that file from each node. Change the file name to reflect the local SID on each node. Later, you will copy this file to the standby cluster (cluster-newyork).
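The following sketch illustrates one way to perform the move and create the links, assuming /oradata/SALES is the shared-storage location shown earlier; adjust the paths and SIDs for your nodes.

Move the password file to shared storage (example paths)
oracle (phys-paris-1)$ mv ${ORACLE_HOME}/dbs/orapwsales1 /oradata/SALES/orapwsales
Create a link on each node, named for the local SID
oracle (phys-paris-1)$ ln -s /oradata/SALES/orapwsales ${ORACLE_HOME}/dbs/orapwsales1
oracle (phys-paris-2)$ ln -s /oradata/SALES/orapwsales ${ORACLE_HOME}/dbs/orapwsales2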
oracle (phys-paris-1)$ sqlplus '/ as sysdba'
SQL> alter database force logging;
Database altered.
Run the sqlplus command as follows, substituting the two file names with ones that suit your configuration. Ensure that these files are located on shared storage that is visible to all cluster-paris nodes.
oracle (phys-paris-1)$ sqlplus '/ as sysdba'
SQL> alter system set dg_broker_config_file1='/oradata/SALES/dr1sales.dat'
  2  scope=both sid='*';
System altered.
SQL> alter system set dg_broker_config_file2='/oradata/SALES/dr2sales.dat'
  2  scope=both sid='*';
System altered.
oracle (phys-paris-1)$ sqlplus '/ as sysdba'
SQL> startup mount;
ORACLE instance started.
Total System Global Area  532676608 bytes
Fixed Size                  2031416 bytes
Variable Size             276824264 bytes
Database Buffers          247463936 bytes
Redo Buffers                6356992 bytes
Database mounted.
SQL> alter database archivelog;
Database altered.
SQL> alter database flashback on;
Database altered.
SQL> alter database open;
Database altered.
Depending on your configuration, you might need to add a number of standby redo logs. The name, number, and size of these logs depend on a number of factors, including whether you use the Optimal Flexible Architecture (OFA), how many online redo log files you have, and the size of those log files.
The following example shows how to configure a single 50-Mbyte standby redo log file, where the OFA naming scheme is being used. A default, two-node Oracle database normally requires that you add six log files.
oracle (phys-paris-1)$ sqlplus '/ as sysdba'
SQL> alter database add standby logfile size 50m;
Database altered.
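For example, if your configuration calls for the six standby redo log files mentioned above, you might add three log files for each of the two redo threads. The sizes and counts shown here are illustrative only; match them to your online redo log configuration.

oracle (phys-paris-1)$ sqlplus '/ as sysdba'
SQL> alter database add standby logfile thread 1 size 50m;
SQL> alter database add standby logfile thread 1 size 50m;
SQL> alter database add standby logfile thread 1 size 50m;
SQL> alter database add standby logfile thread 2 size 50m;
SQL> alter database add standby logfile thread 2 size 50m;
SQL> alter database add standby logfile thread 2 size 50m;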
Depending on your configuration, you might need to alter or add one or more of the Oracle log archive destination parameters. These parameters have a number of tunable properties. Consult the Oracle documentation for details.
The following example shows two log archive destinations being set, one for the local cluster and one for the standby cluster, where OFA naming is used.
oracle (phys-paris-1)$ sqlplus '/ as sysdba'
SQL> alter system set log_archive_dest_1='location=use_db_recovery_file_dest
  2  arch mandatory valid_for=(all_logfiles,all_roles)
  3  db_unique_name=sales' scope=both sid='*';
System altered.
SQL> alter system set log_archive_dest_2='service=salesdr-svc
  2  lgwr sync affirm valid_for=(online_logfiles,primary_role)
  3  db_unique_name=salesdr' scope=both sid='*';
System altered.
SQL> alter system set log_archive_dest_10='location=use_db_recovery_file_dest'
  2  scope=both sid='*';
System altered.
SQL> alter system set standby_file_management='AUTO' scope=both sid='*';
System altered.
For the database to know where to get missing archive redo logs on the server and where to send them on the client, you need to set the FAL system properties. These properties use the net service names of the source and destination databases. You run the following sqlplus command to set the parameters to the correct values for your configuration.
oracle (phys-paris-1)$ sqlplus '/ as sysdba'
SQL> alter system set fal_server='salesdr-svc' scope=both sid='*';
System altered.
SQL> alter system set fal_client='sales-svc' scope=both sid='*';
System altered.
Note - Perform this step on all cluster-paris nodes.
Oracle Data Guard requires that you configure a static listener. The following example uses ${ORACLE_HOME}=/oracle/oracle/product/10.2.0/db_1 and shows where to add the entry for the static listener in the ${ORACLE_HOME}/network/admin/listener.ora file. The SID_LIST_LISTENER_PHYS-PARIS-1 and (SID_NAME = sales1) lines vary from node to node, while the (GLOBAL_DBNAME=sales_DGMGRL) differs on cluster-newyork. Later, you will add these entries on the cluster-newyork nodes.
oracle (phys-paris-1)$ cat ${ORACLE_HOME}/network/admin/listener.ora
SID_LIST_LISTENER_PHYS-PARIS-1 =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /oracle/oracle/product/10.2.0/db_1)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (SID_NAME = sales1)
      (GLOBAL_DBNAME=sales_DGMGRL)
      (ORACLE_HOME = /oracle/oracle/product/10.2.0/db_1)
    )
  )
oracle (phys-paris-1)$
To enable the static entries, restart the Oracle listener processes on each of the nodes on cluster-paris.
oracle (phys-paris-1)$ lsnrctl stop LISTENER_PHYS_PHYS-PARIS-1
LSNRCTL for Solaris: Version 10.2.0.4.0 - Production on 29-OCT-2008 02:04:56
Copyright (c) 1991, 2006, Oracle.  All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
The command completed successfully
oracle$ lsnrctl start LISTENER_PHYS_PHYS-PARIS-1
LSNRCTL for Solaris: Version 10.2.0.4.0 - Production on 29-OCT-2008 02:05:04
…
Services Summary...
Service "PLSExtProc" has 1 instance(s).
  Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "sales_DGMGRL" has 1 instance(s).
  Instance "sales1", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
Wait while databases register with listener
oracle (phys-paris-1)$ lsnrctl status LISTENER_PHYS_PHYS-PARIS-1
LSNRCTL for Solaris: Version 10.2.0.4.0 - Production on 29-OCT-2008 02:04:56
Copyright (c) 1991, 2006, Oracle.  All rights reserved.
…
Services Summary...
Service "PLSExtProc" has 1 instance(s).
  Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "sales" has 2 instance(s).
  Instance "sales1", status READY, has 2 handler(s) for this service...
  Instance "sales2", status READY, has 1 handler(s) for this service...
Service "salesXDB" has 2 instance(s).
  Instance "sales1", status READY, has 1 handler(s) for this service...
  Instance "sales2", status READY, has 1 handler(s) for this service...
Service "sales_DGB" has 2 instance(s).
  Instance "sales1", status READY, has 2 handler(s) for this service...
  Instance "sales2", status READY, has 1 handler(s) for this service...
Service "sales_DGMGRL" has 1 instance(s).
  Instance "sales1", status UNKNOWN, has 1 handler(s) for this service...
Service "sales_XPT" has 2 instance(s).
  Instance "sales1", status READY, has 2 handler(s) for this service...
  Instance "sales2", status READY, has 1 handler(s) for this service...
The command completed successfully
Ensure that the naming service method that you are using, either tnsnames.ora or the directory service, has entries defined for all the Oracle database instances in both clusters.
The following example shows the type of entries that you include for the cluster-paris cluster only. Entries for the cluster-newyork cluster are added in How to Configure the Standby Database Listener and Naming Service. Also, add entries for the standby (salesdr) database instances that you create later when you modify the pfile parameter file. In the example, the sales database dynamically registers a service name of sales with the listeners (see the database service_names initialization parameter).
oracle (phys-paris-1)$ cat ${ORACLE_HOME}/network/admin/tnsnames.ora
SALES1-SVC =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = phys-paris-1-crs)(PORT = 1521)
        (SEND_BUF_SIZE = 65535)(RECV_BUF_SIZE = 65535))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = sales)
      (INSTANCE_NAME = sales1)
    )
  )
SALES2-SVC =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = phys-paris-2-crs)(PORT = 1521)
        (SEND_BUF_SIZE = 65535)(RECV_BUF_SIZE = 65535))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = sales)
      (INSTANCE_NAME = sales2)
    )
  )
SALES-SVC =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = phys-paris-1-crs)(PORT = 1521)
        (SEND_BUF_SIZE = 65535)(RECV_BUF_SIZE = 65535))
      (ADDRESS = (PROTOCOL = TCP)(HOST = phys-paris-2-crs)(PORT = 1521)
        (SEND_BUF_SIZE = 65535)(RECV_BUF_SIZE = 65535))
      (LOAD_BALANCE = yes)
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = sales)
    )
  )
LISTENERS_SALES =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = phys-paris-1-crs)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = phys-paris-2-crs)(PORT = 1521))
  )
The following example shows how to use the Oracle Recovery Manager (RMAN) utility to create a copy of the primary database that you can restore on the standby cluster-newyork cluster. The example also shows how to avoid performing a separate step to create a control file for the standby database. For more information about the options for completing this step, see your Oracle Database documentation.
oracle (phys-paris-1)$ rman
RMAN> connect target sys/DBA_password@sales-svc;
RMAN> connect auxiliary /;
RMAN> backup device type disk tag 'mybkup' database include current
2> controlfile for standby;
RMAN> backup device type disk tag 'mybkup' archivelog all not backed up;
Create the appropriate directory hierarchies on the cluster-newyork cluster and copy the database backup to this cluster. The actual locations that you specify for the files that are shown in the example depend on the specific choices that you made when you configured the database.
oracle (phys-newyork-1)$ mkdir -p $ORACLE_BASE/admin/salesdr
oracle (phys-newyork-1)$ cd $ORACLE_BASE/admin/salesdr
oracle (phys-newyork-1)$ mkdir adump bdump cdump dpdump hdump pfile udump
Make the directory for the database backup
oracle (phys-newyork-1)$ mkdir -p /oradata/flash_recovery_area/SALES/backupset/date
Copy over the files
oracle (phys-newyork-1)$ cd /oradata/flash_recovery_area/SALES/backupset/date
oracle (phys-newyork-1)$ scp oracle@phys-paris-1:`pwd`/\* .
Make the base directory for new database files
oracle (phys-newyork-1)$ mkdir -p /oradata/SALESDR
Create a suitable server initialization file for the standby (salesdr) database. The easiest way to create this file is to copy the parameters for the primary database and modify them. The following example shows how to create a pfile parameter file:
oracle (phys-paris-1)$ sqlplus '/ as sysdba'
SQL> CREATE PFILE='/tmp/initpfile_for_salesdr.ora' FROM SPFILE;
File created.
SQL> quit
Change all entries that are particular to the primary cluster to entries that are suitable for the standby cluster, as shown in the following example. Modify entries that are prefixed by an Oracle SID, that is, sales1 or sales2, to use standby database instance SID names, that is, salesdr1 and salesdr2. Depending on your configuration, you might need to make additional changes.
Note - Do not change the db_name parameter, as it must remain sales on both clusters.
You created these directories previously
*.audit_file_dest='/oracle/oracle/product/10.2.0/db_1/admin/salesdr/adump'
*.background_dump_dest='/oracle/oracle/product/10.2.0/db_1/admin/salesdr/bdump'
*.user_dump_dest='/oracle/oracle/product/10.2.0/db_1/admin/salesdr/udump'
*.core_dump_dest='/oracle/oracle/product/10.2.0/db_1/admin/salesdr/cdump'
Remove the following entry
*.control_files='...list primary control files...'
Add this entry
*.db_unique_name='salesdr'
*.dg_broker_config_file1='/oradata/SALESDR/dr1salesdr.dat'
*.dg_broker_config_file2='/oradata/SALESDR/dr2salesdr.dat'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=salesdrXDB)'
Switch the client and server entries around, as shown in the following entries
*.fal_client='salesdr-svc'
*.fal_server='sales-svc'
*.remote_listener='LISTENERS_SALESDR'
Switch the log archive destinations
*.log_archive_dest_1='location=use_db_recovery_file_dest arch mandatory
  valid_for=(all_logfiles,all_roles) db_unique_name=salesdr'
*.log_archive_dest_2='service=sales-svc lgwr sync affirm
  valid_for=(online_logfiles,primary_role) db_unique_name=sales'
oracle (phys-newyork-1)$ ORACLE_SID=salesdr1 export ORACLE_SID
oracle (phys-newyork-1)$ sqlplus '/ as sysdba'
SQL> startup nomount pfile='/tmp/initpfile_for_salesdr.ora';
SQL> create spfile='/oradata/SALESDR/spfilesalesdr.ora'
2> from pfile='/tmp/initpfile_for_salesdr.ora';
SQL> shutdown
oracle (phys-newyork-1)$ cat ${ORACLE_HOME}/dbs/initsalesdr1.ora
SPFILE='/oradata/SALESDR/spfilesalesdr.ora'
oracle (phys-newyork-1)$ sqlplus '/ as sysdba'
You are now starting from the spfile
SQL> startup nomount
ORACLE instance started.
Total System Global Area  532676608 bytes
Fixed Size                  2031416 bytes
Variable Size             289407176 bytes
Database Buffers          234881024 bytes
Redo Buffers                6356992 bytes
Place the file on shared storage on the cluster-newyork cluster.
Again change the name of the symbolic link to reflect the Oracle SID on the local standby node.
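For example, assuming that the password file was placed in /oradata/SALES on the primary cluster and that /oradata/SALESDR is the shared-storage location on cluster-newyork, the copy and the links might look like the following sketch. The file names are illustrative only; name each link for the local standby SID.

Copy the primary password file to shared storage on cluster-newyork (example paths)
oracle (phys-newyork-1)$ scp oracle@phys-paris-1:/oradata/SALES/orapwsales /oradata/SALESDR/orapwsalesdr
Create a link on each node, named for the local SID
oracle (phys-newyork-1)$ ln -s /oradata/SALESDR/orapwsalesdr ${ORACLE_HOME}/dbs/orapwsalesdr1
oracle (phys-newyork-2)$ ln -s /oradata/SALESDR/orapwsalesdr ${ORACLE_HOME}/dbs/orapwsalesdr2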
Note - Perform this step on all cluster-newyork nodes.
Oracle Data Guard requires that you configure a static listener.
The following example uses ${ORACLE_HOME}=/oracle/oracle/product/10.2.0/db_1 and shows where to add the entry for the static listener in the ${ORACLE_HOME}/network/admin/listener.ora file. The SID_LIST_LISTENER_PHYS-NEWYORK-1 and (SID_NAME = salesdr1) lines vary from node to node, while the (GLOBAL_DBNAME=salesdr_DGMGRL) differs on cluster-paris.
oracle (phys-newyork-1)$ cat ${ORACLE_HOME}/network/admin/listener.ora
SID_LIST_LISTENER_PHYS-NEWYORK-1 =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /oracle/oracle/product/10.2.0/db_1)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (SID_NAME = salesdr1)
      (GLOBAL_DBNAME=salesdr_DGMGRL)
      (ORACLE_HOME = /oracle/oracle/product/10.2.0/db_1)
    )
  )
oracle (phys-newyork-1)$
To enable the static entries, restart the Oracle listener processes on each of the nodes on cluster-newyork.
oracle (phys-newyork-1)$ lsnrctl stop LISTENER_PHYS_PHYS-NEWYORK-1
LSNRCTL for Solaris: Version 10.2.0.4.0 - Production on 29-OCT-2008 02:04:56
Copyright (c) 1991, 2006, Oracle.  All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
The command completed successfully
oracle$ lsnrctl start LISTENER_PHYS_PHYS-NEWYORK-1
LSNRCTL for Solaris: Version 10.2.0.4.0 - Production on 29-OCT-2008 02:05:04
Copyright (c) 1991, 2006, Oracle.  All rights reserved.
Starting /oracle/oracle/product/10.2.0/db_1/bin/tnslsnr: please wait...
TNSLSNR for Solaris: Version 10.2.0.4.0 - Production
Services Summary...
Service "PLSExtProc" has 1 instance(s).
  Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "salesdr_DGMGRL" has 1 instance(s).
  Instance "salesdr1", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
Wait while databases register with listener
oracle (phys-newyork-1)$ lsnrctl status LISTENER_PHYS_PHYS-NEWYORK-1
LSNRCTL for Solaris: Version 10.2.0.4.0 - Production on 29-OCT-2008 02:04:56
Copyright (c) 1991, 2006, Oracle.  All rights reserved.
…
Services Summary...
Service "PLSExtProc" has 1 instance(s).
  Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "salesdr" has 2 instance(s).
  Instance "salesdr1", status READY, has 2 handler(s) for this service...
  Instance "salesdr2", status READY, has 1 handler(s) for this service...
Service "salesdrXDB" has 2 instance(s).
  Instance "salesdr1", status READY, has 1 handler(s) for this service...
  Instance "salesdr2", status READY, has 1 handler(s) for this service...
Service "salesdr_DGB" has 2 instance(s).
  Instance "salesdr1", status READY, has 2 handler(s) for this service...
  Instance "salesdr2", status READY, has 1 handler(s) for this service...
Service "salesdr_DGMGRL" has 1 instance(s).
  Instance "salesdr1", status UNKNOWN, has 1 handler(s) for this service...
Service "salesdr_XPT" has 2 instance(s).
  Instance "salesdr1", status READY, has 2 handler(s) for this service...
  Instance "salesdr2", status READY, has 1 handler(s) for this service...
The command completed successfully
Ensure that the naming service method that you are using, either tnsnames.ora or the directory service, has entries defined for all the Oracle database instances in both clusters.
The following example shows the type of entries that you include for the cluster-newyork cluster only. Entries for the cluster-paris cluster are added in How to Configure the Primary Database Listener and Naming Service. In the example, the salesdr database dynamically registers a service name of salesdr with the listeners (see the database service_names initialization parameter).
oracle (phys-newyork-1)$ cat ${ORACLE_HOME}/network/admin/tnsnames.ora
SALESDR1-SVC =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = phys-newyork-1-crs)(PORT = 1521)
        (SEND_BUF_SIZE = 65535)(RECV_BUF_SIZE = 65535))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = salesdr)
      (INSTANCE_NAME = salesdr1)
    )
  )
SALESDR2-SVC =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = phys-newyork-2-crs)(PORT = 1521)
        (SEND_BUF_SIZE = 65535)(RECV_BUF_SIZE = 65535))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = salesdr)
      (INSTANCE_NAME = salesdr2)
    )
  )
SALESDR-SVC =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = phys-newyork-1-crs)(PORT = 1521)
        (SEND_BUF_SIZE = 65535)(RECV_BUF_SIZE = 65535))
      (ADDRESS = (PROTOCOL = TCP)(HOST = phys-newyork-2-crs)(PORT = 1521)
        (SEND_BUF_SIZE = 65535)(RECV_BUF_SIZE = 65535))
      (LOAD_BALANCE = yes)
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = salesdr)
    )
  )
LISTENERS_SALESDR =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = phys-newyork-1-crs)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = phys-newyork-2-crs)(PORT = 1521))
  )
Ensure that these files include the static Oracle Data Guard listener entry and the naming service entries for the primary and standby cluster database service. If you are not using the Oracle directory naming service lookup, you need to include the entries in tnsnames.ora.
oracle (phys-newyork-1)$ lsnrctl stop LISTENER_PHYS-NEWYORK-1
LSNRCTL for Solaris: Version 10.2.0.4.0 - Production on 29-OCT-2008 02:04:56
Copyright (c) 1991, 2006, Oracle.  All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
The command completed successfully
oracle$ lsnrctl start LISTENER_PHYS-NEWYORK-1
LSNRCTL for Solaris: Version 10.2.0.4.0 - Production on 29-OCT-2008 02:05:04
Copyright (c) 1991, 2006, Oracle.  All rights reserved.
Starting /oracle/oracle/product/10.2.0/db_1/bin/tnslsnr: please wait...
TNSLSNR for Solaris: Version 10.2.0.4.0 - Production
…
Services Summary...
Service "PLSExtProc" has 1 instance(s).
  Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "salesdr_DGMGRL" has 1 instance(s).
  Instance "salesdr1", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
Continuing to work on the cluster-newyork cluster, you can now restore the data from the backup of the primary database to the standby database.
The following example shows how to use the Oracle Recovery Manager (RMAN) utility.
oracle (phys-newyork-1)$ rman
RMAN> connect target sys/oracle@sales-svc;
RMAN> connect auxiliary /;
RMAN> duplicate target database for standby nofilenamecheck;
…
The exact requirements that you must meet depend on your configuration. The steps you follow are identical to those that you followed for the primary cluster.
oracle (phys-newyork-1)$ sqlplus '/ as sysdba'
SQL> alter database flashback on;
Database altered.
SQL> shutdown immediate;
SQL> startup mount;
ORACLE instance started.
…
oracle (phys-newyork-1)$ sqlplus '/ as sysdba'
SQL> alter database recover managed standby database using current logfile disconnect;
When the SQL> prompt is displayed, log in to one of the database instances on the cluster-paris cluster and perform a couple of log switches.
oracle (phys-paris-1)$ sqlplus '/ as sysdba'
SQL> alter system switch logfile;
SQL> alter system switch logfile;
If there are errors, correct them. This process might take some time. You can check that the network connectivity is correct by using the following commands:
oracle (phys-paris-1)$ tnsping salesdr-svc
oracle (phys-newyork-1)$ tnsping sales-svc
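You can also confirm that redo is being received and applied by querying the standby database, for example with the following queries against standard Oracle dynamic performance views. This is shown only as a quick sanity check; the exact output depends on your configuration.

oracle (phys-newyork-1)$ sqlplus '/ as sysdba'
SQL> select process, status, sequence# from v$managed_standby;
SQL> select max(sequence#) from v$archived_log where applied = 'YES';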
Perform this procedure if you are using an Oracle RAC database. If you are using an HA for Oracle database, instead go to How to Complete Configuring and Integrating Your Standby HA for Oracle Database.
Place the standby database under Oracle Clusterware control and configure it to open when Oracle Clusterware starts.
oracle (phys-newyork-1)$ srvctl add database -d salesdr \
-r PHYSICAL_STANDBY -o $ORACLE_HOME -s open;
oracle (phys-newyork-1)$ srvctl add instance -d salesdr \
-i salesdr1 -n phys-newyork-1;
oracle (phys-newyork-1)$ srvctl add instance -d salesdr \
-i salesdr2 -n phys-newyork-2;
Integrate the standby Oracle RAC database with Oracle Solaris Cluster. You can use either the data service configuration wizard that is available through the clsetup utility or Oracle Solaris Cluster maintenance commands. Follow procedures in How to Enable Oracle Solaris Cluster and Oracle Grid Infrastructure to Interoperate in Oracle Solaris Cluster Data Service for Oracle Real Application Clusters Guide or Creating Resources for Interoperation With Oracle Grid Infrastructure by Using Oracle Solaris Cluster Maintenance Commands in Oracle Solaris Cluster Data Service for Oracle Real Application Clusters Guide.
By integrating the standby database, you allow the standby to be managed as the primary database is, should a failover or takeover be necessary.
Note - The resource and resource group that you create are used by the Geographic Edition Oracle Data Guard integration.
Perform the following commands on only one node in each cluster (cluster-paris and cluster-newyork).
oracle (phys-newyork-1)$ sqlplus '/ as sysdba'
SQL> alter system set dg_broker_start=true scope=both sid='*';
SQL> quit
oracle (phys-paris-1)$ sqlplus '/ as sysdba'
SQL> alter system set dg_broker_start=true scope=both sid='*';
SQL> quit
Perform this procedure if you are using an HA for Oracle database. If you are using an Oracle RAC database, instead go to How to Complete Configuring and Integrating Your Standby Oracle RAC Database.
Place the standby Oracle database under Oracle Clusterware control and configure it to open when Oracle Clusterware starts.
oracle (phys-newyork-1)$ srvctl add database -d salesdr \
-r PHYSICAL_STANDBY -o $ORACLE_HOME -s open;
Integrate the standby HA for Oracle database by using the data service configuration wizard that is available through the clsetup utility. Follow procedures in Registering and Configuring HA for Oracle in Oracle Solaris Cluster Data Service for Oracle Guide.
By integrating the standby database, you allow the standby to be managed as the primary database is, should a failover or takeover be necessary.
Note - The resource and resource group that you create are used by the Geographic Edition Oracle Data Guard integration.
Perform the following commands on only one node in each cluster (cluster-paris and cluster-newyork).
oracle (phys-newyork-1)$ sqlplus '/ as sysdba'
SQL> alter system set dg_broker_start=true scope=both sid='*';
SQL> quit
oracle (phys-paris-1)$ sqlplus '/ as sysdba'
SQL> alter system set dg_broker_start=true scope=both sid='*';
SQL> quit
To use Oracle Data Guard with Geographic Edition, you need to create an Oracle Data Guard Broker configuration.
In the following example procedure, the Oracle Data Guard Broker configuration is called mysales.com. The salesdr database is a physical copy of the sales database.
You use the dgmgrl command to create the Oracle Data Guard Broker configuration. You need to know the name of the Oracle Data Guard Broker configuration that you want to create, the name of the primary database, and the net service name through which to connect. You will need these properties again when you specify the configuration to Geographic Edition.
oracle (phys-paris-1)$ dgmgrl sys/sysdba_password@sales-svc
DGMGRL> create configuration mysales.com as primary
DGMGRL> database is sales connect identifier is sales-svc;
If you find errors when you connect to the Oracle Data Guard Broker, check the ${ORACLE_HOME}/admin/sales/bdump/alert_prim_sid.log file. You can check that the configuration has been created by using the following command:
oracle (phys-paris-1)$ dgmgrl sys/sysdba_password@sales-svc
DGMGRL> show configuration;
Configuration
  Name:                mysales.com
  Enabled:             NO
  Protection Mode:     MaxPerformance
  Fast-Start Failover: DISABLED
  Databases:
    sales - Primary database
Current status for "mysales.com":
DISABLED
You need to know the name of the standby database, the net service name through which to connect, and the type of standby (physical or logical).
oracle (phys-paris-1)$ dgmgrl sys/sysdba_password@sales-svc
DGMGRL> add database salesdr as connect identifier is salesdr-svc maintained as physical;
If the standby database is also a multi-instance Oracle RAC database, you can specify the instance on which you would prefer the transmitted archive redo logs to be applied. Before you enable the configuration, issue the following command:
oracle$ dgmgrl sys/sysdba_password@sales-svc
DGMGRL> edit database salesdr set property PreferredApplyInstance='salesdr1';
oracle (phys-paris-1)$ dgmgrl sys/sysdba_password@sales-svc
DGMGRL> enable configuration;
If you have successfully performed all steps, you can check the status of the configuration by using the following command:
oracle$ dgmgrl sys/sysdba_password@sales-svc
DGMGRL> show configuration;
Configuration
  Name:                mysales.com
  Enabled:             YES
  Protection Mode:     MaxPerformance
  Fast-Start Failover: DISABLED
  Databases:
    sales   - Primary database
    salesdr - Physical standby database
Current status for "mysales.com":
SUCCESS
Before you add the Oracle Data Guard Broker configuration to Geographic Edition, you need to verify that you can perform a switchover of the database from the primary to the standby and back again. If this switchover does not work, Geographic Edition will not be able to perform this operation either.
oracle (phys-paris-1)$ dgmgrl sys/sysdba_password@sales-svc
DGMGRL> switchover to salesdr
Performing switchover NOW, please wait...
Operation requires shutdown of instance "sales1" on database "sales"
Shutting down instance "sales1"...
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
Operation requires shutdown of instance "salesdr1" on database "salesdr"
Shutting down instance "salesdr1"...
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "sales1" on database "sales"
Starting instance "sales1"...
ORACLE instance started.
Database mounted.
Operation requires startup of instance "salesdr1" on database "salesdr"
Starting instance "salesdr1"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "salesdr"
DGMGRL> switchover to sales;
Performing switchover NOW, please wait...
Operation requires shutdown of instance "salesdr1" on database "salesdr"
Shutting down instance "salesdr1"...
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
Operation requires shutdown of instance "sales1" on database "sales"
Shutting down instance "sales1"...
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "salesdr1" on database "salesdr"
Starting instance "salesdr1"...
ORACLE instance started.
Database mounted.
Operation requires startup of instance "sales1" on database "sales"
Starting instance "sales1"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "sales"