This chapter uses the Oracle SOA Suite enterprise deployment and the Oracle Identity Management enterprise deployment topologies as examples to illustrate the steps required to set up the production site and standby site.
It includes the following topics:
This section provides the steps to create the production site. The Oracle SOA enterprise deployment topology and the Oracle Identity Management Enterprise deployment topology are used as examples.
Ensure that you have performed the following prerequisites before you start creating the production site:
Set up the host name aliases for the middle tier hosts, as described in Section 3.1.1, "Planning Host Names."
Create the required volumes on the shared storage on the production site, as described in Section 4.1.1, "Directory Structure and Volume Design."
Create the mount points and the symbolic links (if required). Refer to Section 3.2.3, "Storage Replication" to determine whether you must create symbolic links for the production site.
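For illustration only, the following minimal sketch shows how mount points and a symbolic link might be created on SOAHOST1. The filer name (nas01) and export paths are assumptions, not values from this guide; substitute the values provided by your storage vendor and the mount points from Table 4-2, and create the symbolic link only if Section 3.2.3 indicates that your storage requires one:

$ mkdir -p /u01/app/oracle/product/fmw
$ mount nas01:/vol/volfmw1 /u01/app/oracle/product/fmw
$ mkdir -p /u01/app/oracle/admin/soaDomain/admin
$ mount nas01:/vol/voladmin /u01/app/oracle/admin/soaDomain/admin
$ # Hypothetical symbolic link, used only when consistent multi-volume replication is not guaranteed
$ ln -s /u01/app/oracle/admin/soaDomain/admin /u01/app/oracle/admin/adminLink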
Choose the Oracle Data Guard configuration based on the data loss requirements of the database and on network considerations such as the available bandwidth and latency relative to the redo generation rate. Ensure that this is determined correctly before setting up the Oracle Data Guard configuration.
For more information, refer to Oracle Data Guard Concepts and Administration and the related Maximum Availability Architecture collateral at the following URL:
http://www.oracle.com/technology/deploy/availability/htdocs/maa.htm
The following section details the directory structure recommended by Oracle. The end user is free to choose other directory layouts, but the model adopted here enables maximum availability, providing the best isolation of components and symmetry in the configuration, and facilitating backup and disaster recovery.
This list describes directories and directory environment variables:
ORACLE_BASE: This environment variable and related directory path refers to the base directory under which Oracle products are installed.
MW_HOME: This environment variable and related directory path refers to the location where Oracle Fusion Middleware resides.
WL_HOME: This environment variable and related directory path contains installed files necessary to host a WebLogic Server.
ORACLE_HOME: This environment variable and related directory path refers to the location where a product suite (such as Oracle Fusion Middleware SOA Suite, Oracle WebCenter, or Oracle Identity Management) is installed.
DOMAIN directory: This directory path refers to the location where the Oracle WebLogic Domain information (configuration artifacts) is stored. Different WebLogic Servers can use different domain directories even when they are on the same node, as described below.
ORACLE_INSTANCE: An Oracle instance contains one or more system components, such as Oracle Web Cache, Oracle HTTP Server, or Oracle Internet Directory. An Oracle instance directory contains updatable files, such as configuration files, log files, and temporary files.
Oracle Fusion Middleware 11g allows creating multiple SOA Managed Servers from a single binary installation. This allows the installation of binaries in a single location on shared storage and the reuse of this installation by the servers in different nodes. However, for maximum availability, Oracle recommends using redundant binary installations. In this model, two MW_HOMEs (each of which has a WL_HOME and an ORACLE_HOME for each product suite) are installed on shared storage. Additional servers (when scaling out or up) of the same type can use either one of these two locations without requiring more installations. Ideally, users should use two different volumes for the redundant binary locations, thus isolating as much as possible the failures in each volume. For additional protection, Oracle recommends using storage replication for these volumes. If multiple volumes are not available, Oracle recommends using mount points to simulate the same mount location in a different directory in the shared storage. Although this does not guarantee the protection that multiple volumes provide, it does allow protection from user deletions and individual file corruption.
Oracle also recommends separating the domain directory used by the Administration Server from the domain directory used by Managed Servers. This allows a symmetric configuration for the domain directories used by Managed Servers, and isolates the failover of the Administration Server. The domain directory for the Administration Server must reside on shared storage to allow failover to another node with the same configuration. It is also recommended to keep the Managed Servers' domain directories on shared storage, even though having them on the local file system is also supported. This is especially true when designing a production site with the disaster recovery site in mind. Figure 4-1 shows the directory structure layout for Oracle SOA Suite.
Detailed information about setting up this directory structure is included in the Oracle Fusion Middleware Enterprise Deployment Guide for Oracle SOA Suite and in the Oracle Fusion Middleware Enterprise Deployment Guide for Oracle WebCenter.
Table 4-1 explains what the color-coded elements in Figure 4-1 mean. The directory structure in Figure 4-1 does not show other required internal directories such as oracle_common and jrockit.
Table 4-1 Directory Structure Elements
Element | Explanation
---|---
Color-coded element (see Figure 4-1) | The Administration Server domain directories, applications, deployment plans, file adapter control directory, JMS and TX logs, and the entire MW_HOME are on shared storage.
Color-coded element (see Figure 4-1) | The Managed Server domain directories can be on a local disk or shared storage. Further, if you want to share the Managed Server domain directories on multiple nodes, then you must mount the same shared storage location across the nodes.
Color-coded element (see Figure 4-1) | Fixed name.
Color-coded element (see Figure 4-1) | Installation-dependent name.
Figure 4-2 shows an Oracle SOA Suite topology diagram. The volume design described in this section is for this Oracle SOA Suite topology. Detailed instructions for installing and configuring this topology are provided in Oracle Fusion Middleware Enterprise Deployment Guide for Oracle SOA Suite.
Figure 4-2 MySOACompany Topology with Oracle Access Manager
For disaster recovery of this Oracle SOA Suite topology, Oracle recommends the following volume design:
Provision two volumes for two Middleware Homes that contain redundant product binaries (VOLFMW1 and VOLFMW2 in Table 4-2)
Provision one volume for the Administration Server domain directory (VOLADMIN in Table 4-2)
Provision one volume on each node for the Managed Server domain directory (VOLSOA1 and VOLSOA2 in Table 4-2). This directory is shared between all the Managed Servers on that node.
Provision one volume for the JMS file-store and JTA transaction logs (VOLDATA in Table 4-2). There will be one volume for the entire domain that is mounted on all the nodes in the domain.
Provision one volume on each node for the Oracle HTTP Server Oracle home (VOLWEB1 and VOLWEB2 in Table 4-2).
Provision one volume on each node for the Oracle HTTP Server Oracle instance (VOLWEBINST1 and VOLWEBINST2 in Table 4-2).
Table 4-2 provides a summary of Oracle recommendations for volume design for the Oracle SOA Suite topology shown in Figure 4-2:
Table 4-2 Volume Design Recommendations for Oracle SOA Suite
Tier | Volume Name | Mounted on Host | Mount Point | Comments
---|---|---|---|---
Web | VOLWEB1 | WEBHOST1 | /u01/app/oracle/product/fmw/web | Volume for Oracle HTTP Server installation
Web | VOLWEB2 | WEBHOST2 | /u01/app/oracle/product/fmw/web | Volume for Oracle HTTP Server installation
Web | VOLWEBINST1 | WEBHOST1 | /u01/app/oracle/admin/ohs_instance | Volume for Oracle HTTP Server instance
Web | VOLWEBINST2 | WEBHOST2 | /u01/app/oracle/admin/ohs_instance | Volume for Oracle HTTP Server instance
Web | VOLSTATIC1 (footnote 1) | WEBHOST1 | /u01/app/oracle/admin/ohs_instance/config/static | Volume for static HTML content
Web | VOLSTATIC2 (footnote 2) | WEBHOST2 | /u01/app/oracle/admin/ohs_instance/config/static | Volume for static HTML content
Application | VOLFMW1 | SOAHOST1 | /u01/app/oracle/product/fmw | Volume for the WebLogic Server and Oracle SOA Suite binaries
Application | VOLFMW2 | SOAHOST2 | /u01/app/oracle/product/fmw | Volume for the WebLogic Server and Oracle SOA Suite binaries
Application | VOLADMIN | SOAHOST1 | /u01/app/oracle/admin/soaDomain/admin | Volume for Administration Server domain directory
Application | VOLSOA1 | SOAHOST1 | /u01/app/oracle/admin/soaDomain/mng1 | Volume for Managed Server domain directory
Application | VOLSOA2 | SOAHOST2 | /u01/app/oracle/admin/soaDomain/mng2 | Volume for Managed Server domain directory
Application | VOLDATA | SOAHOST1, SOAHOST2 | /u01/app/oracle/admin/soaDomain/soaCluster/jms and /u01/app/oracle/admin/soaDomain/soaCluster/tlogs | Volume for transaction logs and JMS data
Footnote 1 This volume for static HTML data is optional. Oracle Fusion Middleware will operate normally without it.
Footnote 2 This volume for static HTML data is optional. Oracle Fusion Middleware will operate normally without it.
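As an illustration of how the volumes in Table 4-2 map to mount points, the following /etc/fstab entries show what SOAHOST1 might look like when the volumes are exported over NFS. This is a sketch only: the filer name (nas01), the export paths, and the idea of mounting VOLDATA at the soaCluster directory are assumptions, and the mount options should come from your storage vendor's recommendations:

nas01:/vol/volfmw1   /u01/app/oracle/product/fmw                 nfs  rw,bg,hard,nointr,tcp  0 0
nas01:/vol/voladmin  /u01/app/oracle/admin/soaDomain/admin       nfs  rw,bg,hard,nointr,tcp  0 0
nas01:/vol/volsoa1   /u01/app/oracle/admin/soaDomain/mng1        nfs  rw,bg,hard,nointr,tcp  0 0
nas01:/vol/voldata   /u01/app/oracle/admin/soaDomain/soaCluster  nfs  rw,bg,hard,nointr,tcp  0 0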
Oracle recommends the following consistency groups for the Oracle SOA Suite topology:
Create one consistency group with the volumes containing the domain directories for the Administration Server and Managed Servers as members (DOMAINGROUP in Table 4-3).
Create one consistency group with the volume containing the JMS file store and transaction log data as members (DATAGROUP in Table 4-3).
Create one consistency group with the volume containing the Middleware Homes as members (FMWHOMEGROUP in Table 4-3).
Create one consistency group with the volumes containing the Oracle HTTP Server Oracle homes as members (WEBHOMEGROUP in Table 4-3).
Create one consistency group with the volumes containing the Oracle HTTP Server Oracle instances as members (WEBINSTANCEGROUP in Table 4-3).
Table 4-3 provides a summary of Oracle recommendations for consistency groups for the Oracle SOA Suite topology shown in Figure 4-2.
Table 4-3 Consistency Groups for Oracle SOA Suite
Tier | Group Name | Members | Comments
---|---|---|---
Application | DOMAINGROUP | VOLADMIN, VOLSOA1, VOLSOA2 | Consistency group for the Administration Server and Managed Server domain directories
Application | DATAGROUP | VOLDATA | Consistency group for the JMS file store and transaction log data
Application | FMWHOMEGROUP | VOLFMW1, VOLFMW2 | Consistency group for the Middleware homes
Web | WEBHOMEGROUP | VOLWEB1, VOLWEB2 | Consistency group for the Oracle HTTP Server Oracle homes
Web | WEBINSTANCEGROUP | VOLWEBINST1, VOLWEBINST2, VOLSTATIC1 (footnote 1), VOLSTATIC2 (footnote 2) | Consistency group for the Oracle HTTP Server Oracle instances
Footnote 1 This volume for static HTML data is optional. Oracle Fusion Middleware will operate normally without it.
Footnote 2 This volume for static HTML data is optional. Oracle Fusion Middleware will operate normally without it.
Oracle Fusion Middleware 11g allows creating multiple WebCenter Managed Servers from a single binary installation. This allows the installation of binaries in a single location on shared storage and the reuse of this installation by the servers in different nodes. However, for maximum availability, Oracle recommends using redundant binary installations. In this model, two MW_HOMEs (each of which has a WL_HOME and an ORACLE_HOME for each product suite) are installed on shared storage. Additional servers (when scaling out or up) of the same type can use either one of these two locations without requiring more installations. Ideally, users should use two different volumes for the redundant binary locations, thus isolating as much as possible the failures in each volume. For additional protection, Oracle recommends using storage replication for these volumes. If multiple volumes are not available, Oracle recommends using mount points to simulate the same mount location in a different directory in the shared storage. Although this does not guarantee the protection that multiple volumes provide, it does allow protection from user deletions and individual file corruption.
Oracle also recommends separating the domain directory used by the Administration Server from the domain directory used by Managed Servers. This allows a symmetric configuration for the domain directories used by Managed Servers, and isolates the failover of the Administration Server. The domain directory for the Administration Server must reside on shared storage to allow failover to another node with the same configuration. It is also recommended to keep the Managed Servers' domain directories on shared storage, even though having them on the local file system is also supported. This is especially true when designing a production site with the disaster recovery site in mind. Figure 4-1 shows the directory structure layout for Oracle WebCenter (the same directory structure layout is used for both Oracle SOA Suite and Oracle WebCenter).
Figure 4-3 shows an Oracle WebCenter topology diagram. The volume design described in this section is for this Oracle WebCenter topology. Instructions for installing and configuring this topology are provided in Oracle Fusion Middleware Enterprise Deployment Guide for Oracle WebCenter.
Figure 4-3 MyWCCompany Topology with Oracle Access Manager
For disaster recovery of this Oracle WebCenter topology, Oracle recommends the following volume design:
Provision two volumes for two Middleware Homes that contain redundant product binaries (VOLFMW1 and VOLFMW2 in Table 4-4)
Provision one volume for the Administration Server domain directory (VOLADMIN in Table 4-4)
Provision one volume on each node for the Managed Server domain directory for SOA (VOLSOA1 and VOLSOA2 in Table 4-4). This directory is shared between all the Managed Servers on that node.
Provision one volume on each node for the Managed Server domain directory for WebCenter (VOLWC1 and VOLWC2 in Table 4-4). This directory is shared between all the Managed Servers on that node.
Provision one volume for the JMS file-store and JTA transaction logs (VOLDATA in Table 4-4). There will be one volume for the entire domain that is mounted on all the nodes in the domain.
Provision one volume on each node for the Oracle HTTP Server Oracle home (VOLWEB1 and VOLWEB2 in Table 4-4).
Provision one volume on each node for the Oracle HTTP Server Oracle instance (VOLWEBINST1 and VOLWEBINST2 in Table 4-4).
Table 4-4 provides a summary of Oracle recommendations for volume design for the Oracle WebCenter topology shown in Figure 4-3:
Table 4-4 Volume Design Recommendations for Oracle WebCenter
Tier | Volume Name | Mounted on Host | Mount Point | Comments
---|---|---|---|---
Web | VOLWEB1 | WEBHOST1 | /u01/app/oracle/product/fmw/web | Volume for Oracle HTTP Server installation
Web | VOLWEB2 | WEBHOST2 | /u01/app/oracle/product/fmw/web | Volume for Oracle HTTP Server installation
Web | VOLWEBINST1 | WEBHOST1 | /u01/app/oracle/admin/ohs_instance | Volume for Oracle HTTP Server instance
Web | VOLWEBINST2 | WEBHOST2 | /u01/app/oracle/admin/ohs_instance | Volume for Oracle HTTP Server instance
Web | VOLSTATIC1 (footnote 1) | WEBHOST1 | /u01/app/oracle/admin/ohs_instance/config/static | Volume for static HTML content
Web | VOLSTATIC2 (footnote 2) | WEBHOST2 | /u01/app/oracle/admin/ohs_instance/config/static | Volume for static HTML content
Application | VOLFMW1 | SOAHOST1 | /u01/app/oracle/product/fmw | Volume for the WebLogic Server and Oracle SOA Suite binaries
Application | VOLFMW2 | SOAHOST2 | /u01/app/oracle/product/fmw | Volume for the WebLogic Server and Oracle SOA Suite binaries
Application | VOLADMIN | SOAHOST1 | /u01/app/oracle/admin/soaDomain/admin | Volume for Administration Server domain directory
Application | VOLSOA1 | SOAHOST1 | /u01/app/oracle/admin/soaDomain/mng1 | Volume for Managed Server domain directory for SOA
Application | VOLSOA2 | SOAHOST2 | /u01/app/oracle/admin/soaDomain/mng2 | Volume for Managed Server domain directory for SOA
Application | VOLWC1 | WCHOST1 | /u01/app/oracle/admin/wcDomain/mng1 | Volume for Managed Server domain directory for WebCenter
Application | VOLWC2 | WCHOST2 | /u01/app/oracle/admin/wcDomain/mng2 | Volume for Managed Server domain directory for WebCenter
Application | VOLDATA | SOAHOST1, SOAHOST2, WCHOST1, WCHOST2 | /u01/app/oracle/admin/soaDomain/soaCluster/jms and /u01/app/oracle/admin/soaDomain/soaCluster/tlogs | Volume for transaction logs and JMS data
Footnote 1 This volume for static HTML data is optional. Oracle Fusion Middleware will operate normally without it.
Footnote 2 This volume for static HTML data is optional. Oracle Fusion Middleware will operate normally without it.
Oracle recommends the following consistency groups for the Oracle WebCenter topology:
Create one consistency group with the volumes containing the domain directories for the Administration Server and Managed Servers as members (DOMAINGROUP in Table 4-5).
Create one consistency group with the volume containing the JMS file store and transaction log data as members (DATAGROUP in Table 4-5).
Create one consistency group with the volume containing the Middleware Homes as members (FMWHOMEGROUP in Table 4-5).
Create one consistency group with the volumes containing the Oracle HTTP Server Oracle homes as members (WEBHOMEGROUP in Table 4-5).
Create one consistency group with the volumes containing the Oracle HTTP Server Oracle instances as members (WEBINSTANCEGROUP in Table 4-5).
Table 4-5 provides a summary of Oracle recommendations for consistency groups for the Oracle WebCenter topology shown in Figure 4-3.
Table 4-5 Consistency Groups for Oracle WebCenter
Tier | Group Name | Members | Comments
---|---|---|---
Application | DOMAINGROUP | VOLADMIN, VOLSOA1, VOLSOA2 | Consistency group for the Administration Server and Managed Server domain directories
Application | DATAGROUP | VOLDATA | Consistency group for the JMS file store and transaction log data
Application | FMWHOMEGROUP | VOLFMW1, VOLFMW2 | Consistency group for the Middleware homes
Web | WEBHOMEGROUP | VOLWEB1, VOLWEB2 | Consistency group for the Oracle HTTP Server Oracle homes
Web | WEBINSTANCEGROUP | VOLWEBINST1, VOLWEBINST2, VOLSTATIC1 (footnote 1), VOLSTATIC2 (footnote 2) | Consistency group for the Oracle HTTP Server Oracle instances
Footnote 1 This volume for static HTML data is optional. Oracle Fusion Middleware will operate normally without it.
Footnote 2 This volume for static HTML data is optional. Oracle Fusion Middleware will operate normally without it.
Oracle Fusion Middleware 11g allows the separation of the product binaries and the run-time artifacts for Oracle Identity Management components. The product binaries are under the ORACLE_HOME directory and the run-time artifacts are located under the ORACLE_INSTANCE directory.
In this model, for the web tier and the data tier, it is recommended to have one ORACLE_HOME (for product binaries) per host and one ORACLE_INSTANCE per instance, installed on the shared storage. The ORACLE_HOME is shared among all the instances running on the host, whereas each instance has its own ORACLE_INSTANCE location. Additional servers (when scaling out or up) of the same type can use the same ORACLE_HOME location without requiring more installations.
For the application tier, it is recommended to have one Middleware Home (MW_HOME) per host (each of which has a WL_HOME and an ORACLE_HOME for each product suite) installed on the shared storage. Additional servers (when scaling out or up) of the same type can use the same location without requiring more installations.
Separation of the domain directory and the MW_HOME is not supported. The domain directory is under the MW_HOME and is shared between all the Administration Servers and Managed Servers running on the host. Section 4.1.1.3, "Directory Structure for Oracle Identity Management" shows the directory structure layout for Oracle Identity Management:
Figure 4-4 Directory Structure for Oracle Identity Management
The Oracle Fusion Middleware Enterprise Deployment Guide for Oracle Identity Management describes how to set up the Oracle Identity Management enterprise deployment shown in Figure 4-4. The directory structure in Figure 4-4 does not show other required internal directories such as oracle_common and jrockit.
Figure 4-5 shows an Oracle Identity Management topology diagram. The volume design described in this section is for this Oracle Identity Management topology. Instructions for installing and configuring this topology are provided in Oracle Fusion Middleware Enterprise Deployment Guide for Oracle Identity Management.
Figure 4-5 MyIMCompany Topology with Oracle Access Manager
The Oracle Fusion Middleware Enterprise Deployment Guide for Oracle Identity Management manual describes how to set up the Oracle Identity Management enterprise deployment shown in Figure 4-5.
Oracle recommends the following volume design for Oracle Identity Management:
Provision one volume on each of the Identity Management nodes for the Middleware Homes. This volume will also contain the WebLogic Server Home, Identity Management Oracle home, domain directory for the Administration Server and Managed Server running on that host. These are VOLIDM1 and VOLIDM2 in Table 4-6.
Provision one volume on each node for the Oracle homes in the directory tier and web tier. These are VOLWEB1, VOLWEB2, VOLOID1, VOLOID2, VOLOVD1, and VOLOVD2 in Table 4-6.
Provision one volume on each node for the Oracle instance home in the directory tier and web tier. These are VOLWEBINST1, VOLWEBINST2, VOLOIDINST1, VOLOIDINST2, VOLOVDINST1, and VOLOVDINST2 in Table 4-6.
Provision one volume on each node for the Identity Management Oracle instances in the application tier. This volume is shared by the Administration Server and Managed Server instances. These are VOLIDMINST1 and VOLIDMINST2 in Table 4-6.
Provision one volume on each Oracle Access Manager node for the Oracle Access Manager homes. This volume contains the Identity Server and Access Server homes. These are VOLOAM1 and VOLOAM2 in Table 4-6.
Provision one volume on the OAMADMINHOST for the Oracle HTTP Server Oracle home, Oracle HTTP Server Oracle instance, WebGate, WebPass and Policy Manager homes. This is VOLOAMADMIN in Table 4-6.
Table 4-6 provides a summary of Oracle recommendations for volume design for the Oracle Identity Management topology shown in Figure 4-5:
Table 4-6 Volume Design Recommendations for Oracle Identity Management
Tier | Volume Names | Mounted on Nodes | Mount Point | Comments
---|---|---|---|---
Web | VOLWEB1 | WEBHOST1 | /u01/app/oracle/product/fmw/web | Volume for Oracle HTTP Server installations
Web | VOLWEB2 | WEBHOST2 | /u01/app/oracle/product/fmw/web | Volume for Oracle HTTP Server installations
Web | VOLWEBINST1 | WEBHOST1 | /u01/app/oracle/admin/ohs_instance | Volume for Oracle HTTP Server instances
Web | VOLWEBINST2 | WEBHOST2 | /u01/app/oracle/admin/ohs_instance | Volume for Oracle HTTP Server instances
Web | VOLSTATIC1 (footnote 1) | WEBHOST1 | /u01/app/oracle/admin/ohs_instance/config/static | Volume for static HTML content
Web | VOLSTATIC2 (footnote 2) | WEBHOST2 | /u01/app/oracle/admin/ohs_instance/config/static | Volume for static HTML content
Application | VOLIDM1 | IDMHOST1 | /u01/app/oracle/product/fmw | Volume for Identity Management Middleware homes
Application | VOLIDM2 | IDMHOST2 | /u01/app/oracle/product/fmw | Volume for Identity Management Middleware homes
Application | VOLIDMINST1 | IDMHOST1 | /u01/app/oracle/admin | Volume for Oracle instances
Application | VOLIDMINST2 | IDMHOST2 | /u01/app/oracle/admin | Volume for Oracle instances
Application | VOLOAM1 | OAMHOST1 | /u01/app/oracle/product/fmw/oam | Volume for Oracle Access Manager Identity Server and Access Server homes
Application | VOLOAM2 | OAMHOST2 | /u01/app/oracle/product/fmw/oam | Volume for Oracle Access Manager Identity Server and Access Server homes
Application | VOLOAMADMIN | OAMADMINHOST | /u01/app/oracle | Volume for Oracle Access Manager administration components
Directory | VOLOID1 | OIDHOST1 | /u01/app/oracle/product/fmw/idm | Volume for Oracle Internet Directory Oracle homes
Directory | VOLOID2 | OIDHOST2 | /u01/app/oracle/product/fmw/idm | Volume for Oracle Internet Directory Oracle homes
Directory | VOLOIDINST1 | OIDHOST1 | /u01/app/oracle/admin | Volume for Oracle Internet Directory Oracle instances
Directory | VOLOIDINST2 | OIDHOST2 | /u01/app/oracle/admin | Volume for Oracle Internet Directory Oracle instances
Directory | VOLOVD1 | OVDHOST1 | /u01/app/oracle/product/fmw/idm | Volume for Oracle Virtual Directory Oracle homes
Directory | VOLOVD2 | OVDHOST2 | /u01/app/oracle/product/fmw/idm | Volume for Oracle Virtual Directory Oracle homes
Directory | VOLOVDINST1 | OVDHOST1 | /u01/app/oracle/admin | Volume for Oracle Virtual Directory Oracle instances
Directory | VOLOVDINST2 | OVDHOST2 | /u01/app/oracle/admin | Volume for Oracle Virtual Directory Oracle instances
Footnote 1 This volume for static HTML data is optional. Oracle Fusion Middleware will operate normally without it.
Footnote 2 This volume for static HTML data is optional. Oracle Fusion Middleware will operate normally without it.
Oracle recommends the following consistency groups for the Oracle Identity Management topology:
Create one consistency group with the volumes containing the application tier Middleware home directories as members. This is the IDMMWGROUP group in Table 4-7.
Create one consistency group with the volumes containing the application tier Oracle instances directories as members. This is the IDMINSTGROUP group in Table 4-7.
Create one consistency group with the volumes containing the Oracle Internet Directory Oracle homes as members. This is the OIDHOMEGROUP group in Table 4-7.
Create one consistency group with the volumes containing the Oracle Internet Directory Oracle instances as members. This is the OIDINSTGROUP group in Table 4-7.
Create one consistency group with the volumes containing the Oracle Virtual Directory Oracle homes as members. This is the OVDHOMEGROUP group in Table 4-7.
Create one consistency group with the volumes containing the Oracle Virtual Directory Oracle instances as members. This is the OVDINSTGROUP group in Table 4-7.
Create one consistency group with the volume containing the Oracle Access Manager Oracle homes for Oracle Access Manager administration components as members. This is the OAMADMINGROUP group in Table 4-7.
Create one consistency group with the volume containing the Oracle Access Manager Oracle homes for Oracle Access Manager Identity and Access Server components as members. This is the OAMGROUP group in Table 4-7.
Create one consistency group with the volumes containing the Oracle HTTP Server Oracle homes as members. This is the WEBHOMEGROUP in Table 4-7.
Create one consistency group with the volumes containing the Oracle HTTP Server Oracle instances as members. This is the WEBINSTGROUP group in Table 4-7.
Table 4-7 provides a summary of Oracle recommendations for consistency groups for the Oracle Identity Management topology shown in Figure 4-5:
Table 4-7 Consistency Groups for Oracle Identity Management
Tier | Group Name | Members | Comments
---|---|---|---
Directory | OIDHOMEGROUP | VOLOID1, VOLOID2 | Consistency group for Oracle Internet Directory Oracle homes
Directory | OIDINSTGROUP | VOLOIDINST1, VOLOIDINST2 | Consistency group for Oracle Internet Directory Oracle instances
Directory | OVDHOMEGROUP | VOLOVD1, VOLOVD2 | Consistency group for Oracle Virtual Directory Oracle homes
Directory | OVDINSTGROUP | VOLOVDINST1, VOLOVDINST2 | Consistency group for Oracle Virtual Directory Oracle instances
Application | IDMMWGROUP | VOLIDM1, VOLIDM2 | Consistency group for the Middleware homes
Application | IDMINSTGROUP | VOLIDMINST1, VOLIDMINST2 | Consistency group for the Identity Management instances
Application | OAMGROUP | VOLOAM1, VOLOAM2 | Consistency group for the Oracle Access Manager Identity Server and Access Server homes
Application | OAMADMINGROUP | VOLOAMADMIN | Consistency group for the Oracle Access Manager administration host components
Web | WEBHOMEGROUP | VOLWEB1, VOLWEB2 | Consistency group for the Oracle HTTP Server Oracle homes
Web | WEBINSTGROUP | VOLWEBINST1, VOLWEBINST2, VOLSTATIC1 (footnote 1), VOLSTATIC2 (footnote 2) | Consistency group for the Oracle HTTP Server Oracle instances
Footnote 1 This volume for static HTML data is optional. Oracle Fusion Middleware will operate normally without it.
Footnote 2 This volume for static HTML data is optional. Oracle Fusion Middleware will operate normally without it.
Figure 4-6 shows the Oracle Portal enterprise deployment topology diagram. The volume design and consistency groups described in Section 4.1.1.4.1, "Volume Design for Oracle Portal, Forms, Reports, and Discoverer" and Section 4.1.1.4.2, "Consistency Group Recommendations for Oracle Portal, Forms, Reports, and Discoverer" can be used for a Disaster Recovery site that includes this Oracle Portal topology.
Detailed information about the Oracle Portal enterprise topology in Figure 4-6 is available in the 11.1.1.2 Oracle Portal Enterprise Deployment Guide. See Article ID 952068.1 "Oracle Fusion Middleware 11g (11.1.1.2) Enterprise Deployment Guides for Portal, Forms, Reports, and Discover" at My Oracle Support (formerly Oracle MetaLink) for information on obtaining the manual. The URL for My Oracle Support is:
https://support.oracle.com/
Figure 4-6 Oracle Portal Topology Diagram
Figure 4-7 shows the Oracle Forms, Reports, and Discoverer enterprise topology diagram. The volume design and consistency groups described in Section 4.1.1.4.1, "Volume Design for Oracle Portal, Forms, Reports, and Discoverer" and Section 4.1.1.4.2, "Consistency Group Recommendations for Oracle Portal, Forms, Reports, and Discoverer" can be used for a Disaster Recovery site that includes this topology.
Detailed information about the Oracle Forms, Reports, and Discoverer enterprise topology in Figure 4-7 is available in the 11.1.1.2 Oracle Forms, Reports, and Discoverer Enterprise Deployment Guide. See Article ID 952068.1 "Oracle Fusion Middleware 11g (11.1.1.2) Enterprise Deployment Guides for Portal, Forms, Reports, and Discover" at My Oracle Support (formerly Oracle MetaLink) for information on obtaining the manual. The URL for My Oracle Support is:
https://support.oracle.com/
Figure 4-7 Oracle Forms, Reports, and Discoverer Topology
Oracle recommends the following volume design for a Disaster Recovery site that includes both the Oracle Portal topology shown in Figure 4-6 and the Oracle Forms, Reports, and Discoverer topology shown in Figure 4-7:
Provision one volume on each of the application tier hosts for the Middleware Homes. This volume will also contain the WebLogic Server Home, Oracle home for the Oracle Portal, Reports, Forms, and Discoverer components, and the domain directory for the Administration Server and Managed Server running on that host. These are VOLPFRD1 and VOLPFRD2 in Table 4-8.
Provision one volume on each node for the Oracle homes in the web tier. These are VOLWEB1 and VOLWEB2 in Table 4-8.
Provision one volume on each node for the Oracle instance homes in the web tier. These are VOLWEBINST1 and VOLWEBINST2 in Table 4-8.
Provision one volume on each node for the Oracle Instance homes in the application tier. This volume is shared by the Administration Server and Managed Server instances. These are VOLPFRDINST1 and VOLPFRDINST2 in Table 4-8.
Provision one volume for the Oracle Reports output directory in the application tier. This volume is mounted on all the nodes running the Oracle Reports server. This is VOLREPOUT in Table 4-8.
Table 4-8 provides a summary of Oracle recommendations for volume design for a Disaster Recovery site that includes both the Oracle Portal topology shown in Figure 4-6 and the Oracle Forms, Reports, and Discoverer topology shown in Figure 4-7:
Table 4-8 Volume Design Recommendations for Oracle Portal, Reports, Forms, and Discoverer
Tier | Volume Name | Mounted on Host | Mount Point | Comments
---|---|---|---|---
Web | VOLWEB1 | WEBHOST1 | /u01/app/oracle/product/fmw/web | Volume for Oracle HTTP Server installation
Web | VOLWEB2 | WEBHOST2 | /u01/app/oracle/product/fmw/web | Volume for Oracle HTTP Server installation
Web | VOLWEBINST1 | WEBHOST1 | /u01/app/oracle/admin/ohs_instance | Volume for Oracle HTTP Server instance
Web | VOLWEBINST2 | WEBHOST2 | /u01/app/oracle/admin/ohs_instance | Volume for Oracle HTTP Server instance
Web | VOLSTATIC1 (footnote 1) | WEBHOST1 | /u01/app/oracle/admin/ohs_instance/config/static | Volume for static HTML content
Web | VOLSTATIC2 (footnote 2) | WEBHOST2 | /u01/app/oracle/admin/ohs_instance/config/static | Volume for static HTML content
Application | VOLPFRD1 | APPHOST1 | /u01/app/oracle/product/fmw | Volume for the WebLogic Server and Oracle Portal, Forms, Reports, and Discoverer binaries
Application | VOLPFRD2 | APPHOST2 | /u01/app/oracle/product/fmw | Volume for the WebLogic Server and Oracle Portal, Forms, Reports, and Discoverer binaries
Application | VOLPFRDINST1 | APPHOST1 | /u01/app/oracle/admin | Volume for Oracle instances
Application | VOLPFRDINST2 | APPHOST2 | /u01/app/oracle/admin | Volume for Oracle instances
Application | VOLREPOUT | APPHOST1, APPHOST2 | /u01/app/oracle/admin | Volume for report output
Footnote 1 This volume for static HTML data is optional. Oracle Fusion Middleware will operate normally without it.
Footnote 2 This volume for static HTML data is optional. Oracle Fusion Middleware will operate normally without it.
Oracle recommends the following consistency groups for a Disaster Recovery site that includes both the Oracle Portal topology shown in Figure 4-6 and the Oracle Forms, Reports, and Discoverer topology shown in Figure 4-7:
Create one consistency group with the volumes containing the web tier Oracle homes as members. This is WEBHOMEGROUP in Table 4-9.
Create one consistency group with the volumes containing the web tier Oracle Instances as members. This is WEBINSTGROUP in Table 4-9.
Create one consistency group with the volumes containing the application tier Middleware homes. This is PFRDMWGROUP in Table 4-9.
Create one consistency group with the volumes containing the application tier Oracle instance homes. This is PFRDINSTGROUP in Table 4-9.
Create one consistency group with the volume containing the Oracle Reports output directory as a member. This is REPOUTGROUP in Table 4-9.
Table 4-9 summarizes the consistency group recommendations for a Disaster Recovery site that includes both the Oracle Portal topology shown in Figure 4-6 and the Oracle Forms, Reports, and Discoverer topology shown in Figure 4-7.
Table 4-9 Consistency Groups for Oracle Portal, Forms, Reports, and Discoverer
Tier | Group Name | Members | Comments
---|---|---|---
Application | PFRDMWGROUP | VOLPFRD1, VOLPFRD2 | Consistency group for Middleware homes
Application | PFRDINSTGROUP | VOLPFRDINST1, VOLPFRDINST2 | Consistency group for the instance homes
Application | REPOUTGROUP | VOLREPOUT | Consistency group for the Reports output directory
Web | WEBHOMEGROUP | VOLWEB1, VOLWEB2 | Consistency group for the Oracle HTTP Server Oracle homes
Web | WEBINSTGROUP | VOLWEBINST1, VOLWEBINST2, VOLSTATIC1 (footnote 1), VOLSTATIC2 (footnote 2) | Consistency group for the Oracle HTTP Server Oracle instances
Footnote 1 This volume for static HTML data is optional. Oracle Fusion Middleware will operate normally without it.
Footnote 2 This volume for static HTML data is optional. Oracle Fusion Middleware will operate normally without it.
Follow these steps to set up storage replication for the Oracle Fusion Middleware Disaster Recovery topology:
On the standby site, ensure that alias host names are created that are the same as the physical host names used for the peer hosts at the production site.
On the shared storage at the standby site, create the same volumes as were created on the shared storage at the production site.
On the standby site, create the same mount points and symbolic links that you created at the production site. Symbolic links need to be set up on the standby site only if you set them up at the production site; they are required only in cases where the storage system does not guarantee consistent replication across multiple volumes. See Section 3.2.3, "Storage Replication" for more details about symbolic links.
It is not necessary to install the same Oracle Fusion Middleware instances at the standby site as were installed at the production site. When the production site storage is replicated to the standby site storage, the Oracle software installed on the production site volumes will be replicated at the standby site volumes.
Perform any other necessary configuration required by the shared storage vendor to enable storage replication between the production site shared storage and the standby site shared storage.
Create the baseline snapshot copy of the production site shared storage that sets up the replication between the production site and standby site shared storage. Create the initial baseline copy and subsequent snapshot copies using asynchronous replication mode. After the baseline snapshot copy is performed, validate that all the directories inside the standby site volumes have the same contents as the directories inside the production site volumes.
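One simple, vendor-neutral way to validate the baseline copy is to compare checksums of the replicated directories between the two sites. This is a sketch only (not a storage vendor tool), assuming the standby volumes can be mounted read-only on a standby host at the same mount points used in Table 4-2:

$ # Run on a production host and on the corresponding standby host
$ cd /u01/app/oracle/product/fmw
$ find . -type f -exec md5sum {} \; | sort -k 2 > /tmp/fmw_checksums.txt
$ # Copy one of the files to the other site, then compare; no output means the contents match
$ diff /tmp/fmw_checksums.txt /tmp/fmw_checksums_standby.txt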
Set up the frequency of subsequent copies of the production site shared storage, which will be replicated at the standby site. When asynchronous replication mode is used, then at the requested frequency the changed data blocks at the production site shared storage (based on comparison to the previous snapshot copy) become the new snapshot copy, and the snapshot copy is transferred to the standby site shared storage.
Ensure that disaster protection for any database that is included in the Oracle Fusion Middleware Disaster Recovery production site is provided by Oracle Data Guard. Do not use storage replication technology to provide disaster protection for Oracle databases.
The standby site shared storage receives snapshots transferred on a periodic basis from the production site shared storage. After the snapshots are applied, the standby site shared storage will include all the data up to and including the data contained in the last snapshot transferred from the production site before the failover or switchover.
It is strongly recommended to manually force a synchronization operation whenever a change is made to the middle tier at the production site (for example, when a new application is deployed at the production site). Follow the vendor-specific instructions for forcing a synchronization using storage replication technology.
See Section 3.3, "Database Considerations" for recommendations and considerations for setting up Oracle databases that will be used in the Oracle Fusion Middleware Disaster Recovery topology.
Oracle Data Guard should be set up between the Oracle Fusion Middleware Repository databases on the primary site and standby site. The databases on the standby site should be set up as Physical Standby Databases. This section describes the setup and configuration of the data tier on the standby site.
For more information regarding Oracle Data Guard, refer to Oracle Data Guard Concepts and Administration in the Oracle Database documentation set.
The Oracle Data Guard setup and configuration steps below assume that the following conditions are met:
The RAC cluster and ASM instances on the standby site have been created.
The RAC databases on the standby site and the production site are using a Flash Recovery Area.
The database hosts on the standby site already have Oracle software installed.
The physical path for the DB_HOME on the standby site matches that of the production site.
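To confirm the Flash Recovery Area prerequisite listed above, you can check the recovery file destination settings on both the production and standby database hosts (a quick check, assuming SYSDBA access on each host):

$ sqlplus '/as sysdba'
SQL> show parameter db_recovery_file_dest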
The Oracle Data Guard steps use the environment variables shown in Table 4-10 for the SOA database at the production site.
Table 4-10 Environment Variables Used for SOA Databases at the Production Site
Variable | Value
---|---
SOA Database Host Names | soadbhost1.mycompany.com, soadbhost2.mycompany.com
ORACLE_HOME | /u01/app/oracle/product/db_1
SOA_DBNAME | PSOA
SOA_DB_UNIQUE_NAME | PSOA
SOA_DB_INSTANCE_NAMES | PSOA1, PSOA2
The Oracle Data Guard steps use the environment variables shown in Table 4-11 for the SOA database at the standby site.
Table 4-11 Environment Variables Used for SOA Databases at the Standby Site
Variable | Value
---|---
SOA Database Host Names | soadbhost1.mycompany.com, soadbhost2.mycompany.com
ORACLE_HOME | /u01/app/oracle/product/db_1
SOA_DBNAME | SSOA
SOA_DB_UNIQUE_NAME | SSOA
SOA_DB_INSTANCE_NAMES | SSOA1, SSOA2
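Before running the commands in the following sections, set the Oracle environment on the database host you are working on. This sketch uses the ORACLE_HOME value from Table 4-10 for SOADBHOST1 of the production site; adjust ORACLE_SID to match the instance you are connecting to in each step:

$ export ORACLE_HOME=/u01/app/oracle/product/db_1
$ export PATH=$ORACLE_HOME/bin:$PATH
$ export ORACLE_SID=psoa1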
These high level steps for setting up Oracle Data Guard are described in detail in the following sections:
Follow these steps to gather files and perform the database backup:
On the SOADBHOST1 of the primary site, create a directory for staging purposes. For example:
$ mkdir -p /u01/app/stage/psoa
Create the exact path on SOADBHOST1 of the standby site. Follow the example shown in step 1.
On the SOADBHOST1 of the primary site, connect to the database instance psoa1 and create a pfile from the spfile. For example:
SQL > create pfile='/u01/app/stage/psoa/initpsoa.ora' from spfile;
On the SOADBHOST1 of the primary site, connect to RMAN, perform a backup of the database, and place the backup files in the stage directory. For example:
$ $ORACLE_HOME/bin/rman target /
RMAN> backup device type disk format '/u01/app/stage/psoa/%U' database plus archivelog;
RMAN> backup device type disk format '/u01/app/stage/psoa/%U' current controlfile for standby;
Follow the steps below to validate the backups created by RMAN.
Connect to RMAN on SOADBHOST1 of the primary site and then list the backup summary.
Validate the backup sets created by RMAN in step 4:
RMAN> list backup summary;

using target database control file instead of recovery catalog

List of Backups
===============
Key     TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
93      B  A  A DISK        14-MAY-07       1       1       NO         TAG20070514T122312
94      B  F  A DISK        14-MAY-07       1       1       NO         TAG20070514T122315
95      B  F  A DISK        14-MAY-07       1       1       NO         TAG20070514T122315
96      B  A  A DISK        14-MAY-07       1       1       NO         TAG20070514T122629
97      B  F  A DISK        14-MAY-07       1       1       NO         TAG20070514T123220

RMAN> validate backupset 93;

allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=451 instance=psoa1 devtype=DISK
channel ORA_DISK_1: starting validation of archive log backupset
channel ORA_DISK_1: reading from backup piece /u01/app/stage/psoa/34ihmtdg_1_1
channel ORA_DISK_1: restored backup piece 1
piece handle=/u01/app/stage/psoa/34ihmtdg_1_1 tag=TAG20070514T122312
channel ORA_DISK_1: validation complete, elapsed time: 00:00:02
On SOADBHOST1 of the primary site, copy the listener.ora, sqlnet.ora, and tnsnames.ora files from the $ORACLE_HOME/network/admin directory to the staging directory.
Using operating system utilities, copy the contents of the staging directory on SOADBHOST1 of the primary site to the staging directory on SOADBHOST1 of the standby site.
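For example, if the standby SOADBHOST1 is reachable from the production site as standby-soadbhost1 (a hypothetical address; in this guide both sites use the same physical host names), the staging directory can be copied with scp:

$ scp -rp /u01/app/stage/psoa oracle@standby-soadbhost1:/u01/app/stage/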
Follow these steps to configure Oracle Net Services on the standby site:
Copy the listener.ora, sqlnet.ora, and tnsnames.ora files from the staging directory on SOADBHOST1 on the primary site to the $ORACLE_HOME/network/admin directory on all the nodes of the standby site.
Modify the listener.ora file on each of the standby hosts to contain the virtual IP of that host.
Modify the tnsnames.ora file on each node, including the primary RAC nodes and standby RAC nodes, to contain all primary and standby net service names.
Modify the Oracle Net aliases that are used for the local_listener and remote_listener parameters to point to the listener on each standby host. The example below shows excerpts from the tnsnames.ora file:
#local_listener
PSOA =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP) (HOST = soadbhost1-vip) (HOST = soadbhost2-vip) (PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = psoa)
    )
  )

#remote_listener
SSOA =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP) (HOST = soadbhost1-vip) (HOST = soadbhost2-vip) (PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ssoa)
    )
  )
Start the listeners on the standby database hosts.
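For example, assuming the default listener name is used on each standby database host:

$ lsnrctl start
$ lsnrctl status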
Follow these steps to create instances and the database on the standby site:
To enable secure transmission of redo data, make sure the databases on the primary and standby sites use a password file, and make sure the password for the SYS user is identical on every system. Create a password file on both the nodes of the standby databases. For example:
On SOADBHOST1 of the standby site
$ cd $ORACLE_HOME/dbs
$ orapwd file=orapwpsoa1 password=welcome1
On SOADBHOST2 of the standby site
$ cd $ORACLE_HOME/dbs
$ orapwd file=orapwpsoa2 password=welcome1
Copy and rename the pfile from the staging area to the $ORACLE_HOME/dbs directory on SOADBHOST1 of the standby site. For example:
$ cp /u01/app/stage/psoa/initpsoa.ora $ORACLE_HOME/dbs/initpsoa1.ora
Modify the standby initialization parameter file copied from the primary node to include the parameters shown in Table 4-12:
Table 4-12 Parameters to Specify in the Standby Initialization Parameter File
Parameter | Value
---|---
RAC Parameters | *.cluster_database=true PSOA1.instance_name=PSOA1 PSOA2.instance_name=PSOA2 PSOA1.instance_number=1 PSOA2.instance_number=2 PSOA1.thread=1 PSOA2.thread=2 PSOA1.undo_tablespace=UNDOTBS1 PSOA2.undo_tablespace=UNDOTBS2 *.remote_listener=LISTENERS_PSOA
Data Guard Parameters | *.db_unique_name=SSOA *.log_archive_config='dg_config=(SSOA,PSOA)' *.log_archive_dest_2='service=PSOA valid_for=(online_logfiles,primary_role) db_unique_name=PSOA' *.db_file_name_convert='+DATA/PSOA/','+DATA/SSOA/','+RECO/PSOA','+RECO/SSOA' *.log_file_name_convert='+DATA/PSOA/','+DATA/SSOA/','+RECO/PSOA','+RECO/SSOA' *.standby_file_management=auto *.fal_server='PSOA' *.fal_client='SSOA'
Miscellaneous Parameters | *.background_dump_dest=/u01/app/admin/PSOA/bdump *.core_dump_dest=/u01/app/admin/PSOA/cdump *.user_dump_dest=/u01/app/admin/PSOA/udump *.audit_file_dest=/u01/app/admin/PSOA/adump *.db_recovery_file_dest='+RECO' *.log_archive_dest_3='LOCATION=USE_DB_RECOVERY_FILE_DEST' *.dispatchers=PSOAXDB
Connect to the ASM instance on SOADBHOST1 of the standby site, and create a directory within the DATA disk group that has the same name as the DB_UNIQUE_NAME of the standby database. For example:
SQL> alter diskgroup data add directory '+DATA/SSOA';
Connect to the standby database on SOADBHOST1 of the standby site, with the standby database in the IDLE state, and create an SPFILE in the standby DATA disk group. For example:
SQL> CREATE SPFILE='+DATA/SSOA/spfilepsoa.ora' FROM PFILE='?/dbs/initpsoa1.ora';
In the $ORACLE_HOME/dbs directory on SOADBHOST1 and SOADBHOST2 of the standby site, create a PFILE that contains a pointer to the SPFILE. The PFILE should follow the naming convention init<OracleSID>.ora. For example:
On SOADBHOST1:
$ cd $ORACLE_HOME/dbs
$ echo "SPFILE='+DATA/SSOA/spfilepsoa.ora'" > initpsoa1.ora
On SOADBHOST2:
$ cd $ORACLE_HOME/dbs
$ echo "SPFILE='+DATA/SSOA/spfilepsoa.ora'" > initpsoa2.ora
Create the dump directories on all standby hosts as referenced in the standby initialization parameter file. For example:
$ mkdir -p $ORACLE_BASE/admin/psoa/bdump
$ mkdir -p $ORACLE_BASE/admin/psoa/cdump
$ mkdir -p $ORACLE_BASE/admin/psoa/udump
$ mkdir -p $ORACLE_BASE/admin/psoa/adump
On SOADBHOST1 of the standby site, set the ORACLE_HOME, PATH, and ORACLE_SID environment variables, and start up the standby database without mounting the control file. This host should have the staging directory. For example:
SQL > startup nomount
On SOADBHOST1 of the standby site, where the standby instance was just started, duplicate the primary database as a standby into the ASM disk group by using RMAN. For example:
$ rman target sys/oracle@psoa auxiliary /
RMAN> duplicate target database for standby;
Use SQL*Plus to log in to the newly created database to validate that it was created correctly. For example:
$ sqlplus '/as sysdba'
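For example, the following query (a quick check, not part of the original procedure) shows the unique name and role of the newly created database; it should report the physical standby role:

SQL> select db_unique_name, database_role from v$database;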
Connect to the standby database on SOADBHOST1 of the standby site, and create the standby redo logs to support the standby role. For example:
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 5 SIZE 300M, GROUP 6 SIZE 300M, GROUP 7 SIZE 300M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 8 SIZE 300M, GROUP 9 SIZE 300M, GROUP 10 SIZE 300M;
On SOADBHOST1 of the standby site, start managed recovery and real-time apply on the standby database. For example:
SQL> ALTER DATABASE recover managed standby database using current logfile disconnect;
On SOADBHOST1 and SOADBHOST2 of the standby site, register the standby database and the database instances with the Oracle Cluster Registry (OCR) using the Server Control (SRVCTL) utility. For example:
$ srvctl add database -d psoa -o /u01/app/oracle/product/10.2.0/db_1
$ srvctl add instance -d psoa -i psoa1 -n soadbhost1
$ srvctl add instance -d psoa -i psoa2 -n soadbhost2
Establish a dependency between the database and the ASM instance. For example:
$ srvctl modify instance -d psoa -i psoa1 -s +ASM1
$ srvctl modify instance -d psoa -i psoa2 -s +ASM2
$ srvctl enable asm -n soadbhost1 -i +ASM1
$ srvctl enable asm -n soadbhost2 -i +ASM2
Configure the primary database for Oracle Data Guard by modifying/adding the Data Guard parameters in the primary initialization file to the values shown below:
*.log_archive_config='dg_config=(SSOA,PSOA)'
*.log_archive_dest_2='service=SSOA valid_for=(online_logfiles,primary_role) db_unique_name=SSOA'
*.db_file_name_convert='+DATA/SSOA/','+DATA/PSOA/','+RECO/SSOA','+RECO/PSOA'
*.log_file_name_convert='+DATA/SSOA/','+DATA/PSOA/','+RECO/SSOA','+RECO/PSOA'
*.standby_file_management=auto
*.fal_server='SSOA'
*.fal_client='PSOA'
Restart the primary database after modifying the parameters.
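For example, the primary RAC database can be restarted with srvctl, using the database resource name psoa used elsewhere in this section:

$ srvctl stop database -d psoa
$ srvctl start database -d psoa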
Create the standby redo logs on the primary database to support the standby role. For example:
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 5 SIZE 300M, GROUP 6 SIZE 300M, GROUP 7 SIZE 300M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 8 SIZE 300M, GROUP 9 SIZE 300M, GROUP 10 SIZE 300M;
Verify the Oracle Data Guard configuration by querying the V$ARCHIVED_LOG view to identify existing files in the archived redo log. For example:
SQL> select sequence#, first_time, next_time from v$archived_log order by sequence#;
On the primary database, issue the following SQL statement to force a log switch and archive the current online redo log file group:
SQL> alter system archive log current;
On the standby database, query the V$ARCHIVED_LOG view to verify that the redo data was received and archived on the standby database:
SQL> select sequence#, first_time, next_time from v$archived_log order by sequence#;
Follow these steps to test that the database switchover and switchback operation works correctly between the newly-created physical standby database and the primary RAC databases:
Shut down all but one instance of the RAC database (PSOA) on the primary site. For example, run the command below on SOADBHOST1 of the production site:
$ srvctl stop instance -d psoa -i psoa2
Initiate the role transition to the physical standby on the current primary database. For example, run the command below on SOADBHOST1 of the production site:
SQL > ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
Shut down the primary instance and then mount it. For example, run the commands below on SOADBHOST1 of the production site:
SQL> shutdown immediate
SQL> startup mount
At this point, both the databases are in Physical Standby mode. To verify that both the databases are in Physical Standby mode, run this SQL query on both the databases:
SQL> select database_role from v$database;

DATABASE_ROLE
----------------
PHYSICAL_STANDBY
Switch the physical standby database role to the primary role. For example, run the command below on SOADBHOST1 of the standby site:
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;
Now the physical standby database is the new primary.
Shut down the new primary database and start up both of its RAC instances using srvctl. For example, run the following command on SOADBHOST1 of the standby site:
srvctl start database -d psoa
On the new physical standby database (the old primary), start managed recovery of the database. For example, run the command below on SOADBHOST1 of the primary site:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
Start sending the redo data to the new physical standby database. For example, run the command below on SOADBHOST1 of the standby site:
SQL> ALTER SYSTEM SWITCH LOGFILE;
Check the new physical standby database to see if it is receiving the archive log files by querying the V$ARCHIVED_LOG view.
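For example, run the same query used earlier on the new physical standby database and verify that new log sequences continue to appear:

SQL> select sequence#, first_time, next_time from v$archived_log order by sequence#;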
The Node Manager communicates with the Administration Server over SSL. For this communication to work correctly on the standby site, you must create SSL certificates using the physical host names. This section includes these topics:
The examples in these sections show how to perform these tasks for the Oracle SOA Suite enterprise topology shown in Figure 4-2.
Note:
Remember that when you are setting up the Oracle SOA Suite enterprise topology shown in Figure 4-2 as the production site for a Disaster Recovery topology, you must use the physical host names shown in Table 3-1 for the production site hosts instead of the host names shown in Figure 4-2. The steps in this section must be performed on the application tier hosts on which WebLogic Server is installed.
Follow these steps to generate self-signed certificates:
Set your environment using the setWLSEnv script located under the $WL_HOME/server/bin directory.
Create a user-defined directory for the certificates. For example, create the certs directory under the $MW_HOME/user_projects/domains/SOADomain directory.
Run the utils.CertGen tool from the user-defined directory to create the certificates for the application tier hosts on which WebLogic Server is installed. The syntax is:
Syntax: java utils.CertGen <key_passphrase> <cert_file_name> <key_file_name> [export|domestic] [hostname]
For example, enter these commands:
java utils.CertGen welcome1 soahost1_cert soahost1_key domestic soahost1 java utils.CertGen welcome1 soahost2_cert soahost2_key domestic soahost2
Follow these steps to create an identity keystore using the utils.ImportPrivateKey utility:
Create a new identity keystore called appIdentityKeyStore using the utils.ImportPrivateKey utility.
Create this keystore under the same directory as the certificates, for example:
$MW_HOME/user_projects/domains/SOADomain/certs
The Identity Store is created (if none exists) when you import a certificate and the corresponding key into the Identity Store using the utils.ImportPrivateKey utility.
Import the certificate and private key for the application tier hosts on which WebLogic Server is installed into the Identity Store. Make sure to use a different alias for each certificate/key pair imported. The syntax is:
Syntax: java utils.ImportPrivateKey <keystore_file> <keystore_password> <certificate_alias_to_use> <private_key_passphrase> <certificate_file> <private_key_file> [<keystore_type>]
For example, enter these commands:
java utils.ImportPrivateKey appIdentityKeyStore.jks welcome1 appIdentity1 welcome1 $MW_HOME/user_projects/domains/SOADomain/certs/soahost1_cert.pem $MW_HOME/user_projects/domains/SOADomain/certs/soahost1_key.pem
java utils.ImportPrivateKey appIdentityKeyStore.jks welcome1 appIdentity2 welcome1 $MW_HOME/user_projects/domains/SOADomain/certs/soahost2_cert.pem $MW_HOME/user_projects/domains/SOADomain/certs/soahost2_key.pem
Follow these steps to create a trust keystore:
Create a new trust keystore called appTrustKeyStore using the keytool utility.
Use the standard Java keystore to create the new trust keystore, since it already contains most of the root CA certificates needed. It is recommended not to modify the standard Java trust keystore directly.
Copy the standard Java keystore cacerts located under the $WL_HOME/server/lib directory to the same directory as the certificates. For example:
cp $WL_HOME/server/lib/cacerts $MW_HOME/user_projects/domains/SOADomain/certs/appTrustKeyStore.jks
The default password for the standard Java keystore is changeit, and it is always recommended to change the default password. Use the keytool utility to do this. The syntax is:
keytool -storepasswd -new <NewPassword> -keystore <TrustKeyStore> -storepass <Original Password>
For example, enter this command:
keytool -storepasswd -new welcome1 -keystore appTrustKeyStore.jks -storepass changeit
The CA certificate CertGenCA.der is used to sign all certificates generated by the utils.CertGen tool and is located in the $WL_HOME/server/lib directory. This CA certificate must be imported into the appTrustKeyStore using the keytool utility. The syntax is:
keytool -import -v -noprompt -trustcacerts -alias <AliasName> -file <CAFileLocation> -keystore <KeyStoreLocation> -storepass <KeyStore Password>
For example, enter this command:
keytool -import -v -noprompt -trustcacerts -alias clientCACert -file $WL_HOME/server/lib/CertGenCA.der -keystore appTrustKeyStore.jks -storepass welcome1
Configure Node Manager on each of the nodes to use the newly created custom keystores by editing the following lines at the end of the nodemanager.properties file located under the $WL_HOME/common/nodemanager directory. These lines and their meanings are shown below:
KeyStores=CustomIdentityAndCustomTrust
CustomIdentityKeyStoreFileName=<Identity KeyStore>
CustomIdentityKeyStorePassPhrase=<Identity KeyStore Password>
CustomIdentityAlias=<Identity KeyStore Alias>
CustomIdentityPrivateKeyPassPhrase=<Private Key used when creating Certificate>
CustomTrustKeyStoreFileName=<Trust KeyStore>
CustomTrustKeyStorePassPhrase=<Trust KeyStore Password>
For example, make these edits in the nodemanager.properties
file on SOAHOST1:
KeyStores=CustomIdentityAndCustomTrust
CustomIdentityKeyStoreFileName=$MW_HOME/user_projects/domains/SOADomain/certs/appIdentityKeyStore.jks
CustomIdentityKeyStorePassPhrase=welcome1
CustomIdentityAlias=appIdentity1
CustomIdentityPrivateKeyPassPhrase=welcome1
CustomTrustKeyStoreFileName=$MW_HOME/user_projects/domains/SOADomain/certs/appTrustKeyStore.jks
CustomTrustKeyStorePassPhrase=welcome1
For example, make these edits in the nodemanager.properties
file on SOAHOST2:
KeyStores=CustomIdentityAndCustomTrust
CustomIdentityKeyStoreFileName=$MW_HOME/user_projects/domains/SOADomain/certs/appIdentityKeyStore.jks
CustomIdentityKeyStorePassPhrase=welcome1
CustomIdentityAlias=appIdentity2
CustomIdentityPrivateKeyPassPhrase=welcome1
CustomTrustKeyStoreFileName=$MW_HOME/user_projects/domains/SOADomain/certs/appTrustKeyStore.jks
CustomTrustKeyStorePassPhrase=welcome1
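After you save the nodemanager.properties file on each host, the Node Manager process must be restarted before the new keystore settings take effect. As a minimal sketch, assuming the default script location, Node Manager can be started again on each host with:
$WL_HOME/server/bin/startNodeManager.sh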
This section provides the steps to create the production site. The Oracle SOA enterprise deployment topology and the Oracle Identity Management Enterprise deployment topology are used as examples.
Ensure that you have performed the following prerequisites before you start creating the production site:
Set up the host name aliases for the middle tier hosts, which was described in Section 3.1.1, "Planning Host Names."
Create the required volumes on the shared storage on the production site, which was described in Section 4.1.1, "Directory Structure and Volume Design."
Create the mount points and the symbolic links (if required). Refer to Section 3.2.3, "Storage Replication" to determine whether you must create symbolic links for the production site.
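For example, on a Linux middle tier host a shared storage volume might be mounted and a symbolic link created as shown below; the filer export, mount point, and link target names here are illustrative only, and the actual layout should follow Section 4.1.1:
mkdir -p /u01/app/oracle
mount prodfiler:/vol/fmw_vol1 /u01/app/oracle
ln -s /u01/app/oracle/product/fmw /fmw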
The production site should be installed and configured as described in the Oracle Fusion Middleware Enterprise Deployment Guide for Oracle SOA Suite, with the following variations. The steps to install and configure the production site are listed below and should be followed in the order shown.
Create volumes and consistency groups on the shared storage device, as described in Section 4.1.1.1.1, "Volume Design for Oracle SOA Suite."
Set up physical host names on the production site and physical host names and alias host names for the standby site. See Section 3.1.1, "Planning Host Names" for information on planning host names for the production and standby sites.
Install and configure Oracle SOA Suite as described in the Oracle Fusion Middleware Enterprise Deployment Guide for Oracle SOA Suite with the following modifications:
Install the Oracle SOA Suite components into the volumes created on the shared storage device.
Use the physical host names when installing and configuring the WebLogic domain.
Create a separate volume on each site for the JMS stores and transaction logs.
After the installation and configuration of the production site, turn off host name verification. See the "Disabling Host Name Verification for the Oracle WebLogic Administration Server and the WLS_WSM1 Managed Server" section in Oracle Fusion Middleware Enterprise Deployment Guide for Oracle SOA Suite for detailed instructions about turning off host name verification for an Administration Server and Managed Server.
If you do not plan on turning host name verification off, follow the steps in Section 4.1.4, "Node Manager" to configure Node Manager communication.
Create SSL certificates using the host name aliases on all of the Oracle Fusion Middleware hosts for proper Node Manager communication.
The production site should be installed and configured as described in the Oracle Fusion Middleware Enterprise Deployment Guide for Oracle Identity Management, with the following variations. The steps to install and configure the production site are listed below and should be followed in the order shown.
Create volumes and consistency groups on the shared storage device, as described in Section 4.1.1.3.1, "Volume Design for Oracle Identity Management."
Set up physical host names on the production site and physical host names and alias host names for the standby site. See Section 3.1.1, "Planning Host Names" for information on planning host names for the production and standby sites.
Install and configure Oracle Identity Management as described in the Oracle Fusion Middleware Enterprise Deployment Guide for Oracle Identity Management with the following modifications:
Install the Oracle Identity Management components into the volumes created on the shared storage device.
Use the physical host names when installing and configuring the WebLogic domain.
Create a separate volume on each site for the JMS stores and transaction logs.
After the installation and configuration of the production site, turn off host name verification. See the "Disabling Host Name Verification for the Oracle WebLogic Administration Server and the WLS_WSM1 Managed Server" section in Oracle Fusion Middleware Enterprise Deployment Guide for Oracle SOA Suite for detailed instructions about turning off host name verification for an Administration Server and Managed Server.
If you do not plan on turning host name verification off, follow the steps in Section 4.1.4, "Node Manager" to configure Node Manager communication.
Create SSL certificates using the host name aliases on all of the Oracle Fusion Middleware hosts for proper Node Manager communication.
The production site should be installed and configured as described in the Oracle Fusion Middleware Enterprise Deployment Guide for Oracle WebCenter, with the following variations. The steps to install and configure the production site are listed below and should be followed in the order shown.
Create volumes and consistency groups on the shared storage device, as described in Section 4.1.1.2.1, "Volume Design for Oracle WebCenter."
Set up physical host names on the production site and physical host names and alias host names for the standby site. See Section 3.1.1, "Planning Host Names" for information on planning host names for the production and standby sites.
Install and configure Oracle WebCenter as described in the Oracle Fusion Middleware Enterprise Deployment Guide for Oracle WebCenter with the following modifications:
Install the Oracle WebCenter components into the volumes created on the shared storage device.
Use the physical host names when installing and configuring the WebLogic domain.
After the installation and configuration of the production site, turn off host name verification. See the "Disabling Host Name Verification for the Oracle WebLogic Administration Server and the WLS_WSM1 Managed Server" section in Oracle Fusion Middleware Enterprise Deployment Guide for Oracle WebCenter for detailed instructions about turning off host name verification for an Administration Server and Managed Server.
If you do not plan on turning host name verification off, follow the steps in Section 4.1.4, "Node Manager" to configure Node Manager communication.
Create SSL certificates using the host name aliases on all of the Oracle Fusion Middleware hosts for proper Node Manager communication.
The production site should be installed and configured as described in the enterprise deployment manuals for the following products:
Oracle Portal:
Detailed instructions for setting up and configuring a production site that uses the Oracle Portal enterprise topology shown in Figure 4-6 are provided in the 11.1.1.2 Oracle Portal Enterprise Deployment Guide. See Article ID 952068.1, "Oracle Fusion Middleware 11g (11.1.1.2) Enterprise Deployment Guides for Portal, Forms, Reports, and Discoverer," at My Oracle Support (formerly Oracle MetaLink) for information on obtaining the manual. The URL for My Oracle Support is:
https://support.oracle.com
Oracle Forms, Reports, and Discoverer
Detailed instructions for setting up and configuring a production site that uses the Oracle Forms, Reports, and Discoverer enterprise topology shown in Figure 4-7 are provided in the 11.1.1.2 Oracle Forms, Reports, and Discoverer Enterprise Deployment Guide. See Article ID 952068.1, "Oracle Fusion Middleware 11g (11.1.1.2) Enterprise Deployment Guides for Portal, Forms, Reports, and Discoverer," at My Oracle Support (formerly Oracle MetaLink) for information on obtaining the manual. The URL for My Oracle Support is:
https://support.oracle.com
Follow the installation and configuration instructions in the manuals above, except for the following variations. The following steps should be performed in the sequence listed:
Create volumes and consistency groups on the shared storage device, as described in Section 4.1.1.4.1, "Volume Design for Oracle Portal, Forms, Reports, and Discoverer."
Set up physical host names on the production site and physical host names and alias host names for the standby site. See Section 3.1.1, "Planning Host Names" for information on planning host names for the production and standby sites.
Install and configure Oracle Portal, Forms, Reports, and Discoverer as described in the enterprise deployment guides referenced above, with the following modifications:
Install the Oracle Portal, Forms, Reports, and Discoverer components into the volumes created on the shared storage device.
Use the physical host names when installing and configuring the WebLogic domain.
After the installation and configuration of the production site, turn off host name verification. See the "Disabling Host Name Verification for the Oracle WebLogic Administration Server and the WLS_WSM1 Managed Server" section in Oracle Fusion Middleware Enterprise Deployment Guide for Oracle WebCenter for detailed instructions about turning off host name verification for an Administration Server and Managed Server.
If you do not plan on turning host name verification off, follow the steps in Section 4.1.4, "Node Manager" to configure Node Manager communication.
Create SSL certificates using the host name aliases on all of the Oracle Fusion Middleware hosts for proper Node Manager communication.
To validate the production site setup for the Oracle SOA Suite enterprise topology, follow the validation steps in these sections of the Oracle Fusion Middleware Enterprise Deployment Guide for Oracle SOA Suite:
In the "Installing Oracle HTTP Server" chapter, follow the validation steps in this section:
In the "Creating a Domain" chapter, follow the validation steps in these sections:
In the "Extending the Domain for SOA Components" chapter, follow the validation steps in these sections:
In the "Extending the Domain to Include BAM" chapter, follow the validation steps in this section:
This section provides the steps to create the standby site. The Oracle SOA enterprise deployment topology and the Oracle Identity Management Enterprise deployment topology are used as examples.
Ensure that you have performed the following prerequisites before you start creating the standby site:
On the standby site, ensure that you set up the correct alias host names and physical host names by following the instructions in Section 3.1.1, "Planning Host Names."
Ensure that each standby site host has an alias host name that is the same as the physical host name of its peer host at the production site.
On the shared storage on the standby site, create the same volumes that were created on the shared storage at the production site.
On the standby site, create the same mount points and symbolic links (if required) that you created at the production site. Note that symbolic links are required only in cases where the storage system does not guarantee consistent replication across multiple volumes; see Section 3.2.3, "Storage Replication" for more details about symbolic links.
Oracle Data Guard should be set up between the Oracle Fusion Middleware Repository databases on the primary site and standby site. The databases on the standby site should be set up as physical standby databases. Refer to Section 4.1.3.1, "Setting Up Oracle Data Guard" for instructions on setting up Oracle Data Guard between databases running the metadata repositories on the primary and standby sites.
Also, ensure that the databases running the metadata repositories on the standby site are in managed recovery mode. To place a standby database in managed recovery mode, run the following SQL command (the DISCONNECT option ends the SQL session after the command completes successfully):
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
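To confirm that managed recovery is active, you can query the standby database; in a typical configuration the managed recovery process (MRP0) reports a status such as APPLYING_LOG or WAIT_FOR_LOG:
SQL> SELECT PROCESS, STATUS FROM V$MANAGED_STANDBY;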
The middle tier hosts on the standby site do not require the installation or configuration of any Oracle Fusion Middleware or Oracle WebLogic Server software. When the production site storage is replicated to the standby site storage, the software installed on the production site volumes is replicated to the standby site volumes.
Follow the steps below to set up the middle tier hosts on the standby site:
Create a baseline snapshot copy of shared storage on the production site, which sets up the replication between the storage devices. Create the initial baseline copy and subsequent snapshot copies using asynchronous replication mode.
Synchronize the shared storage at the production site with the shared storage at the standby site. This will transfer the initial baseline snapshot from the production site to the standby site.
Set up the frequency at which subsequent copies of the production site shared storage are replicated to the standby site. When asynchronous replication mode is used, the data blocks that have changed on the production site shared storage since the previous snapshot copy become the new snapshot copy at the requested frequency, and that snapshot copy is transferred to the standby site shared storage.
After the baseline snapshot copy is performed, validate that all the directories inside the standby site volumes have the same contents as the directories inside the production site volumes.
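As one simple spot check, assuming the replicated volumes are mounted read-only on a standby host with the same directory layout as production, you might compare file counts for a given directory on both sites (the path is illustrative only):
find /u01/app/oracle -type f | wc -l
Run the same command against the corresponding production directory and compare the output.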
Validate the standby site by following the steps below:
Shut down any processes still running on the production site. This includes the database instances in the data tier, Oracle Fusion Middleware instances and any other processes in the application tier and web tier.
Stop the replication between the production site shared storage and the standby site shared storage.
Use Oracle Data Guard to fail over the databases.
On the standby site hosts, manually start all the processes. This includes the database instances in the data tier, the Oracle Fusion Middleware instances, and any other processes in the application tier and web tier.
Use a browser client to perform post-failover testing to confirm that requests are being resolved and redirected to the standby site.
The steps in this section describe how to set up an asymmetric Oracle Fusion Middleware Disaster Recovery topology.
An asymmetric topology is a disaster recovery configuration that is different across tiers at the production site and standby site. In most asymmetric Oracle Fusion Middleware Disaster Recovery topologies, the standby site has fewer resources than the production site.
Before you read this section, be sure to read and understand the concepts and information on setting up a symmetric topology presented earlier in this manual. Many of the concepts for setting up a symmetric topology are also valid for setting up an asymmetric topology.
Section 4.4.1, "Creating the Asymmetric Standby Site" describes the basic steps for creating an asymmetric topology. It does not describe in detail applicable concepts for setting up an asymmetric topology that were previously described for symmetric topologies earlier in this chapter.
This section describes the high level steps for creating any type of asymmetric Oracle Fusion Middleware Disaster Recovery topology. The production site is the Oracle SOA Suite enterprise deployment shown in Figure 4-2. The standby site will be different from the production site.
To create an asymmetric topology:
Design the production site and the standby site. Determine the resources that will be necessary at the standby site to ensure acceptable performance when the standby site assumes the production role.
Note:
The ports for the standby site instances must use the same port numbers as the peer instances at the production site. Therefore, ensure that all the port numbers that will be required at the standby site are available (not in use at the standby site).
Create the Oracle Fusion Middleware Disaster Recovery production site by performing these operations:
Create volumes on the production site's shared storage system for the Oracle Fusion Middleware instances that will be installed for the production site. For more information, see Section 4.1.1, "Directory Structure and Volume Design."
Create mount points and symbolic links on the production site hosts to the Oracle home directories for the Oracle Fusion Middleware instances on the production site's shared storage system volumes. Note that symbolic links are required only in cases where the storage system does not guarantee consistent replication across multiple volumes; see Section 3.2.3, "Storage Replication" for more details about symbolic links. For more information about volume design, see Section 4.1.1.1.1, "Volume Design for Oracle SOA Suite."
Create mount points and symbolic links on the production site hosts to the Oracle Central Inventory directories for the Oracle Fusion Middleware instances on the production site's shared storage system volumes. Note that symbolic links are required only in cases where the storage system does not guarantee consistent replication across multiple volumes; see Section 3.2.3, "Storage Replication" for more details about symbolic links. For more information about the Oracle Central Inventory directories, see Section 3.2.2, "Oracle Home and Oracle Inventory."
Create mount points and symbolic links on the production site hosts to the static HTML pages directories for the Oracle HTTP Server instances on the production site's shared storage system volumes, if applicable. Note that symbolic links are required only in cases where the storage system does not guarantee consistent replication across multiple volumes; see Section 3.2.3, "Storage Replication" for more details about symbolic links.
Install the Oracle Fusion Middleware instances for the production site on the volumes in the production site's shared storage system. For more information, see Section 4.2.1, "Creating the Production Site for the Oracle SOA Suite Topology."
Create the same volumes with the same file and directory privileges on the standby site's shared storage system as you created for the Oracle Fusion Middleware instances on the production site's shared storage system. This step is critical because it enables you to use storage replication later to create the peer Oracle Fusion Middleware instance installations for the standby site instead of installing them using Oracle Universal Installer.
Note:
When you configure storage replication, ensure that all the volumes you set up on the production site's shared storage system are replicated to the same volumes on the standby site's shared storage system.
Even though some of the instances and hosts at the production site may not exist at the standby site, you must configure storage replication for all the volumes set up for the production site's Oracle Fusion Middleware instances.
Perform any other necessary configuration required by the shared storage vendor to enable storage replication between the production site's shared storage system and the standby site's shared storage system. Configure storage replication to asynchronously copy the volumes in the production site's shared storage system to the standby site's shared storage system.
Create the initial baseline snapshot copy of the production site shared storage system to set up the replication between the production site and standby site shared storage systems. Create the initial baseline snapshot and subsequent snapshot copies using asynchronous replication mode. After the baseline snapshot copy is performed, validate that all the directories for the standby site volumes have the same contents as the directories for the production site volumes. Refer to the documentation for your shared storage vendor for information on creating the initial snapshot and enabling storage replication between the production site and standby site shared storage systems.
After the baseline snapshot has been taken, perform these steps for the Oracle Fusion Middleware instances for the standby site hosts:
Set up a mount point directory on the standby site host to the Oracle home directory for the Oracle Fusion Middleware instance on the standby site's shared storage system. The mount point directory you set up for the peer instance on the standby site host must be the same as the mount point directory you set up for the instance on the production site host.
Set up a symbolic link on the standby site host to the Oracle home directory for the Oracle Fusion Middleware instance on the standby site's shared storage system. Note that symbolic links are required only in cases where the storage system does not guarantee consistent replication across multiple volumes; see Section 3.2.3, "Storage Replication" for more details about symbolic links. The symbolic link you set up for the peer instance on the standby site host must be the same as the symbolic link you set up for the instance on the production site host.
Set up a mount point directory on the standby site host to the Oracle Central Inventory directory for the Oracle Fusion Middleware instance on the standby site's shared storage system. The mount point directory you set up for the peer instance on the standby site host must be the same as the mount point directory you set up for the instance on the production site host.
Set up a symbolic link on the standby site host to the Oracle Central Inventory directory for the Oracle Fusion Middleware instance on the standby site's shared storage system. Note that symbolic links are required only in cases where the storage system does not guarantee consistent replication across multiple volumes; see Section 3.2.3, "Storage Replication" for more details about symbolic links. The symbolic link you set up for the peer instance on the standby site host must be the same as the symbolic link you set up for the instance on the production site host.
Set up a mount point directory on the standby site host to the Oracle HTTP Server static HTML pages directory for the Oracle HTTP Server instance on the standby site's shared storage system. The mount point directory you set up for the peer instance on the standby site host must be the same as the mount point directory you set up for the instance on the production site host.
Set up a symbolic link on the standby site host to the Oracle HTTP Server static HTML pages directory for the Oracle HTTP Server instance on the standby site's shared storage system. Note that symbolic links are required only in cases where the storage system does not guarantee consistent replication across multiple volumes; see Section 3.2.3, "Storage Replication" for more details about symbolic links. The symbolic link you set up for the peer instance on the standby site host must be the same as the symbolic link you set up for the instance on the production site host.
After completing these steps, the Oracle Fusion Middleware instance installations for the production site have been replicated to the standby site. At the standby site, all of the following are true:
The Oracle Fusion Middleware instances are installed into the same Oracle home directories on the same volumes as at the production site, and the hosts use the same mount point directories and symbolic links for the Oracle home directories as at the production site. Note that symbolic links are required only in cases where the storage system does not guarantee consistent replication across multiple volumes; see Section 3.2.3, "Storage Replication" for more details about symbolic links.
The Oracle Central Inventory directories are located in same directories on the same volumes as at the production site, and the hosts use the same mount point directories and symbolic links for the Oracle Central Inventory directories as at the production site. Note that symbolic links are required only in cases where the storage system does not guarantee consistent replication across multiple volumes; see Section 3.2.3, "Storage Replication" for more details about symbolic links.
The Oracle HTTP Server static HTML pages directories are located in same directories on the same volumes as at the production site, and the hosts use the same mount point directories and symbolic links for the Oracle HTTP Server static HTML pages directories as at the production site. Note that symbolic links are required only in cases where the storage system does not guarantee consistent replication across multiple volumes; see Section 3.2.3, "Storage Replication" for more details about symbolic links.
The same ports are used for the standby site Oracle Fusion Middleware instances as were used for the same instances at the production site.
This section describes how to create an asymmetric standby site that has fewer hosts and Oracle Fusion Middleware instances than the production site.
The production site for this Oracle Fusion Middleware Disaster Recovery topology is the Oracle SOA Suite enterprise deployment shown in Figure 4-2. Section 4.1, "Setting Up the Site" through Section 4.1.1, "Directory Structure and Volume Design" describe how to set up this production site and the volumes for its shared storage system, and how to create the necessary mount points.
Figure 4-8 shows the asymmetric standby site for the production site shown in Figure 4-2.
Figure 4-8 An Asymmetric Standby Site with Fewer Hosts and Instances
The Oracle SOA Suite asymmetric standby site shown in Figure 4-8 has fewer hosts and instances than the Oracle SOA Suite production site shown in Figure 4-2.
The hosts WEBHOST2 and SOAHOST2 and the instances on those hosts exist at the production site in Figure 4-2, but these hosts and their instances do not exist at the asymmetric standby site in Figure 4-8. The standby site therefore has fewer hosts and fewer instances than the production site.
It is important to ensure that this asymmetric standby site will have sufficient resources to provide adequate performance when it assumes the production role.
When you follow the steps in Section 4.4.1, "Creating the Asymmetric Standby Site" to set up this asymmetric standby site, the standby site should be properly configured to assume the production role.
To set up the asymmetric standby site correctly, create the same volumes and consistency groups on the standby site shared storage as you did on the production site shared storage. For example, for the Oracle SOA Suite deployment, the volume design recommendations in Table 4-4 and the consistency group recommendations in Table 4-5 were used to set up the production site shared storage. Use these same volume design and consistency group recommendations to set up the asymmetric standby site's shared storage.
Note that at an asymmetric standby site, some hosts that exist at the production site do not exist at the standby site. For example, in the case of the asymmetric standby site for Oracle SOA Suite shown in Figure 4-8, WEBHOST2 and SOAHOST2 do not exist at the standby site; therefore, it is not possible or necessary for you to create mount points on these hosts to the standby site shared storage volumes.
Validate the standby site by following the steps below:
Shut down any processes still running on the production site. This includes the database instances in the data tier, Oracle Fusion Middleware instances and any other processes in the application tier and web tier.
Stop the replication between the production site shared storage and the standby site shared storage.
Use Oracle Data Guard to fail over the databases.
On the standby site hosts, manually start all the processes. This includes the database instances in the data tier, the Oracle Fusion Middleware instances, and any other processes in the application tier and web tier.
Use a browser client to perform post-failover testing to confirm that requests are being resolved and redirected to the standby site.
This section describes operations and administration to perform on your Oracle Fusion Middleware Disaster Recovery topology.
The standby site shared storage receives snapshots transferred on a periodic basis from the production site shared storage. After the snapshots are applied, the standby site shared storage will include all the data up to and including the data contained in the last snapshot transferred from the production site before the failover or switchover.
You should manually force a synchronization operation whenever a change is made to the middle tier at the production site (for example, when a new application is deployed at the production site). Follow the vendor-specific instructions for forcing a synchronization using storage replication technology.
The synchronization of the databases in the Oracle Fusion Middleware Disaster Recovery topology is managed by Oracle Data Guard.
When you plan to take down the production site (for example, to perform maintenance) and make the current standby site the new production site, you must perform a switchover operation so that the standby site takes over the production role.
Follow these steps to perform a switchover operation:
Shut down any processes still running on the production site. This includes the database instances in the data tier, Oracle Fusion Middleware instances and any other processes in the application tier and web tier.
Stop the replication between the production site shared storage and the standby site shared storage.
Use Oracle Data Guard to switch over the databases.
On the standby site hosts, manually start all the processes. This includes the database instances in the data tier, the Oracle Fusion Middleware instances, and any other processes in the application tier and web tier.
Ensure that all user requests are routed to the standby site by performing a global DNS push or something similar, such as updating the global load balancer.
Use a browser client to perform post-switchover testing to confirm that requests are being resolved and redirected to the standby site.
At this point, the former standby site is the new production site and the former production site is the new standby site.
Reestablish the replication between the two sites, but configure the replication so that the snapshot copies go in the opposite direction (from the current production site to the current standby site). Refer to the documentation for your shared storage to learn how to configure the replication so that snapshot copies are transferred in the opposite direction.
After these steps have been performed, the former standby site is the new production site. At this point, you can perform maintenance at the original production site. After performing the planned tasks on the original production site, you can use it again at some point in the future as either the production site or standby site.
To use the original production site as the new production site, perform the switchback steps described in Section 4.5.3, "Performing a Switchback."
After a switchover operation has been performed, a switchback operation can be performed to revert the current production site and the current standby site to the roles they had prior to the switchover operation.
Follow these steps to perform a switchback operation:
Shut down any processes running on the current production site. This includes the database instances in the data tier, Oracle Fusion Middleware instances and any other processes in the application tier and web tier.
Stop the replication between the current production site shared storage and standby site shared storage.
Use Oracle Data Guard to switch back the databases.
On the new production site hosts, manually start all the processes. This includes the database instances in the data tier, Oracle Fusion Middleware instances and any other processes in the application tier and web tier.
Ensure that all user requests are routed to the new production site by performing a global DNS push or something similar, such as updating the global load balancer.
Use a browser client to perform post-switchback testing to confirm that requests are being resolved and redirected to the new production site.
At this point, the former standby site is the new production site and the former production site is the new standby site.
Reestablish the replication between the two sites, but configure the replication so that the snapshot copies go in the opposite direction (from the new production site to the new standby site). Refer to the documentation for your shared storage to learn how to configure the replication so that snapshot copies are transferred in the opposite direction.
When the production site becomes unavailable unexpectedly, you must perform a failover operation so that the standby site takes over the production role.
Follow these steps to perform a failover operation:
Stop the replication between the production site shared storage and the standby site shared storage.
From the standby site, use Oracle Data Guard to fail over the databases.
On the standby site hosts, manually start all the processes. This includes the database instances in the data tier, Oracle Fusion Middleware instances and any other processes in the application tier and web tier.
Ensure that all user requests are routed to the standby site by performing a global DNS push or something similar, such as updating the global load balancer.
Use a browser client to perform post-failover testing to confirm that requests are being resolved and redirected to the standby site (the new production site).
At this point, the standby site is the new production site. You can examine the issues that caused the former production site to become unavailable.
To use the original production site as the current standby site, you must reestablish the replication between the two sites, but configure the replication so that the snapshot copies go in the opposite direction (from the current production site to the current standby site). Refer to the documentation for your shared storage system to learn how to configure the replication so that snapshot copies are transferred in the opposite direction.
To use the original production site as the new production site, perform the switchback steps in Section 4.5.3, "Performing a Switchback."
This manual describes how to set up Disaster Recovery for an Oracle Fusion Middleware production site and standby site. In a normal Oracle Fusion Middleware Disaster Recovery configuration, the following are true:
Storage replication is used to copy Oracle Fusion Middleware middle tier file systems and data from the production site shared storage to the standby site shared storage. During normal operation, the production site is active and the standby site is passive. When the production site is active, the standby site is passive and the standby site shared storage is in read-only mode; the only write operations made to the standby site shared storage are the storage replication operations from the production site shared storage to the standby site shared storage.
Oracle Data Guard is used to copy database data for the production site Oracle databases to the standby databases at standby site. By default, the production site databases are active and the standby databases at the standby site are passive. The standby databases at the standby site are in managed recovery mode while the standby site is in the standby role (is passive). When the production site is active, the only write operations made to the standby databases are the database synchronization operations performed by Oracle Data Guard.
When the production site becomes unavailable, the standby site is enabled to take over the production role. If the current production site becomes unavailable unexpectedly, then a failover operation (described in Section 4.5.4, "Performing a Failover") is performed to enable the standby site to assume the production role. Or, if the current production site is taken down intentionally (for example, for planned maintenance), then a switchover operation (described in Section 4.5.2, "Performing a Switchover") is performed to enable the standby site to assume the production role.
The usual method of testing a standby site is to shut down the current production site and perform a switchover operation to enable the standby site to assume the production role. However, some enterprises may want to perform periodic testing of their Disaster Recovery standby site without shutting down the current production site and performing a switchover operation.
An alternate method of testing the standby site without shutting down the current production site is to create a clone of the read-only standby site shared storage and then use the cloned standby site shared storage in testing. To use this alternate testing method, perform these steps:
Use the cloning technology provided by the shared storage vendor to create a clone of the standby site's read-only volumes on the shared storage at the standby site. Ensure that the cloned standby site volumes are writable. If you want to test the standby site just once, then this can be a one-time clone operation, but if you want to test the standby site regularly, you can set up periodic cloning of the standby site read-only volumes to the standby site's cloned read/write volumes.
Perform a backup of the standby site databases, then modify the Oracle Data Guard replication between the production site and standby site databases.
For 10.1 databases, break the replication by following the instructions in the 10.1 Oracle Data Guard documentation.
For 10.2 and later databases, follow these steps to establish a snapshot standby database:
If you do not have a Flash Recovery Area, set one up.
Cancel Redo Apply:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
Create a guaranteed restore point:
SQL> CREATE RESTORE POINT standbytest GUARANTEE FLASHBACK DATABASE;
Archive the current logs at the primary (production) site:
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
Defer the standby site destination that you will activate:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=DEFER;
Activate the target standby database:
SQL> ALTER DATABASE ACTIVATE STANDBY DATABASE;
Mount the database with the Force option if the database was opened read-only:
SQL> STARTUP MOUNT FORCE;
Lower the protection mode and open the database:
SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE;
SQL> ALTER DATABASE OPEN;
For 11g databases, use the procedure to establish a snapshot standby database in the "Managing a Snapshot Standby Database" section in Oracle Data Guard Concepts and Administration.
Use Oracle Data Guard database recovery procedures to bring the standby databases online.
On the standby site computers, modify the mount commands to point to the volumes on the standby site's cloned read/write shared storage by following these steps:
Unmount the read-only shared storage volumes.
Mount the cloned read/write volumes at the same mount point.
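For example, on a Linux standby host the remount might look like the following; the filer export name and mount point shown here are illustrative only:
umount /u01/app/oracle
mount standbyfiler:/vol/fmw_vol1_clone /u01/app/oracle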
Before doing the standby site testing, modify the host name resolution method for the computers that will be used to perform the testing to ensure that the host names point to the standby site computers and not the production site computers. For example, on a Linux computer, change the /etc/hosts file to point to the virtual IP address of the load balancer for the standby site.
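A hypothetical /etc/hosts entry on a test client might look like the following, where the IP address is the virtual IP of the standby site load balancer and the host name is the site's virtual server name (both values are illustrative only):
10.20.30.40   soa.mycompany.com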
Perform the standby site testing.
After you complete the standby site testing, follow these steps to begin using the original production site as the production site again:
Modify the mount commands on the standby site computers to point to the volumes on the standby site's read-only shared storage; in other words, reset the mount commands back to what they were before the testing was performed:
Unmount the cloned read/write shared storage volume.
Mount the read-only shared storage volumes.
At this point, the mount commands are reset to what they were before the standby site testing was performed.
Configure Oracle Data Guard to perform replication between the production site databases and standby databases at the standby site. Performing this configuration puts the standby database into managed recovery mode again:
For 10.1 databases, reinstantiate the databases by following the instructions in the 10.1 Oracle Data Guard documentation.
For 10.2 and later databases, follow these steps:
Revert the activated database back to a physical standby database:
SQL> STARTUP MOUNT FORCE;
SQL> FLASHBACK DATABASE TO RESTORE POINT standbytest;
SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
SQL> STARTUP MOUNT FORCE;
Restart managed recovery:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
Reenable the standby destination and switch logs:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
SQL> ALTER SYSTEM SWITCH LOGFILE;
For 11g databases, set up the replication again by following the steps in the "Managing a Snapshot Standby Database" section in Oracle Data Guard Concepts and Administration.
Before using the original production site again, modify the host name resolution method for the computers that will be used to access the production site to ensure that the host names point to the production site computers and not the standby site computers. For example, on a Linux computer, change the /etc/hosts file to point to the virtual IP address of the load balancer for the production site.
As an alternative to using storage replication technology for disaster protection and disaster recovery of Oracle Fusion Middleware middle tier components, you can use peer-to-peer file copy mechanisms in test environments to replicate middle tier file system data from a production site host to a standby site peer host in an Oracle Fusion Middleware Disaster Recovery topology. An example of a peer-to-peer file copy mechanism is rsync (an open source utility for UNIX systems).
This section describes how to use rsync instead of storage replication in your Oracle Fusion Middleware Disaster Recovery topology. The discussion covers rsync in the context of symmetric topologies; for information about asymmetric topologies, refer to Section 4.4, "Creating an Asymmetric Standby Site." The information provided for rsync in this section also applies to other peer-to-peer file copy mechanisms.
Before you read this section, read the rest of this manual to ensure that you are familiar with how to use storage replication and Oracle Data Guard in an Oracle Fusion Middleware Disaster Recovery topology. There are many similarities between using storage replication and rsync for disaster protection and disaster recovery of your Oracle Fusion Middleware components.
Note:
You can use rsync instead of storage replication technology to replicate middle tier file system data from the production site to the standby site. However, be aware that the following beneficial storage replication features are not available when you use rsync:
With storage replication, you can roll changes back to the point in time when any previous snapshot was taken at the production site.
With rsync, replicated production site data overwrites the standby site data, and you cannot roll back a replication.
With storage replication, the volume you set up for each host cluster in the shared storage systems ensures data consistency for that host cluster across the production site's shared storage system and the standby site's shared storage system.
With rsync, data consistency is not guaranteed.
Because of these deficiencies in comparison to storage replication, rsync is not supported for disaster recovery use in actual production environments.
These two basic principles apply when you use rsync and Oracle Data Guard to provide disaster protection and disaster recovery for your Oracle Fusion Middleware Disaster Recovery topology:
Use rsync for disaster protection of your Oracle Fusion Middleware middle tier components.
Use Oracle Data Guard for disaster protection of the Oracle databases that are used in your Oracle Fusion Middleware topology. Section 3.3, "Database Considerations" describes how to set up Oracle Data Guard to provide disaster recovery for Oracle databases.
Follow these steps to use rsync to provide disaster protection and disaster recovery for your Oracle Fusion Middleware middle tier components:
Set up rsync to enable replication of files from a production site host to its standby site peer host. See the rsync man page for instructions on installing and setting up rsync, and for syntax and usage information. Information about rsync is also available at http://rsync.samba.org.
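For example, a command of the following general form copies a Middleware home from a production site host to its standby site peer over SSH; the host name and directory paths shown here are illustrative only:
rsync -avz --delete -e ssh /u01/app/oracle/product/fmw/ standbyhost1:/u01/app/oracle/product/fmw/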
For each production site host on which one or more Oracle Fusion Middleware components has been installed, set up rsync to copy the following directories and files to the same directories and files on the standby site peer host:
The Oracle Fusion Middleware home directory and subdirectories, and all the files in them.
The Oracle Central Inventory directory and files for the host, which includes the Oracle Universal Installer entries for the Oracle Fusion Middleware installations.
If applicable, the Oracle Fusion Middleware static HTML pages directory for the Oracle HTTP Server installations on the host.
If applicable, the .fmb and .fmx deployment artifact files created by Oracle Forms on the host, and the .rdf deployment artifact files created by Oracle Reports on the host.
Note:
Run rsync as root. If you want rsync to work without prompting users for a password, set up SSH keys between the production site host and standby site host so that SSH does not prompt for a password.
Set up scheduled jobs, for example, cron jobs, for the production site hosts for which you set up rsync in the previous step. These scheduled jobs enable rsync to automatically perform replication of these files from the production site hosts to the standby site hosts on a regular interval. An interval of once a day is recommended for a production site where the Oracle Fusion Middleware configuration does not change very often.
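For example, a root crontab entry of the following form would run a (hypothetical) wrapper script that invokes the rsync commands once a day at 1:00 a.m.; the script name and log path are illustrative only:
0 1 * * * /u01/scripts/sync_fmw_to_standby.sh >> /var/log/fmw_dr_rsync.log 2>&1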
Whenever a change is made to the configuration of an Oracle Fusion Middleware middle tier configuration on a production site host (for example, when a new application is deployed), you should perform a manual synchronization of that host with its standby site peer host using rsync.
Whenever you perform a manual rsync synchronization of an Oracle Fusion Middleware middle tier instance on a production site host to the peer standby site host, you should also manually force a synchronization of any associated database repository for the production site's Oracle Fusion Middleware instance to the standby site using Oracle Data Guard. See Section 3.3.2, "Manually Forcing Database Synchronization with Oracle Data Guard" for more information on manually forcing a synchronization of an Oracle database using Oracle Data Guard.
Follow these steps to perform a failover or switchover from the production site to the standby site when you are using rsync:
Shut down any processes still running on the production site (if applicable).
Stop the rsync jobs between the production site hosts and their standby site peer hosts.
Use Oracle Data Guard to fail over the production site databases to the standby site.
On the standby site, manually start the processes for the Oracle Fusion Middleware Server instances.
Route all user requests to the standby site by performing a global DNS push or something similar, such as updating the global load balancer.
Use a browser client to perform post-failover or post-switchover testing to confirm that requests are being resolved at the standby site (current production site).
At this point, the standby site is the new production site and the production site is the new standby site.
Reestablish the rsync replications between the two sites, but configure the replications so that they go in the opposite direction (from the current production site to the current standby site).
To use the original production site as the new production site, you perform the steps above again, but configure the rsync replications to go in the original direction (from the original production site to the original standby site).
This section describes how to apply an 11g Oracle Fusion Middleware patch set to upgrade the Oracle homes that participate in an Oracle Fusion Middleware Disaster Recovery site.
The list in this section describes the steps for applying a patch set to upgrade the 11g Oracle Fusion Middleware homes in an Oracle Fusion Middleware Disaster Recovery production site.
The following steps assume that the Oracle Central Inventory for any Oracle Fusion Middleware instance that you are patching is located on the production site shared storage, so that the Oracle Central Inventory for the patched instance can be replicated to the standby site.
Use the following procedure to upgrade 11g Oracle Fusion Middleware patch versions:
Perform a backup of the production site to ensure that the starting state is secured.
Apply the patch set to upgrade the production site instances.
After applying the patch set, manually force a synchronization of the production site shared storage and standby site shared storage. This replicates the production site's patched instance and Oracle Central Inventory in the standby site's shared storage.
After applying the patch set, use Oracle Data Guard to manually force a synchronization of the Oracle databases at the production site and standby sites. Some Oracle Fusion Middleware patch sets may make updates to repositories, so this step ensures that any changes made to production site databases are synchronized to the standby site databases.
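For example, performing a log switch on each production site database forces the most recent redo to be shipped to its standby database, where managed recovery applies it; a minimal sketch:
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;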
The upgrade is now complete. Your Disaster Recovery topology is ready to resume processing.
Note:
Patches must be applied only at the production site for an 11g Oracle Fusion Middleware Disaster Recovery topology. If a patch is for an Oracle Fusion Middleware instance or for the Oracle Central Inventory, the patch will be copied when the production site shared storage is replicated to the standby site shared storage. A synchronization operation should be performed when a patch is installed at the production site.
Similarly, if a patch is installed for a production site database, Oracle Data Guard will copy the patch to the standby database at the standby site when a synchronization is performed.