This section contains procedures for the following tasks:
Ensuring Data Consistency for Hitachi Universal Replicator in Asynchronous Mode
Requirements to Support Oracle Real Application Clusters With Data Replication Software
How to Create a Protection Group for Oracle Real Application Clusters
How the Data Replication Subsystem Validates the Data Replication Component
For more information, see Creating a Protection Group That Does Not Require Data Replication in Oracle Solaris Cluster 4.3 Geographic Edition Installation and Configuration Guide.
Use the steps in this task to create and configure a Hitachi TrueCopy or Universal Replicator protection group. If you want to use Oracle Real Application Clusters, see How to Create a Protection Group for Oracle Real Application Clusters.
Before You Begin
Before you create a protection group, ensure that the following conditions are met:
The local cluster is a member of a partnership.
The protection group you are creating does not already exist.
You can also replicate the existing configuration of a protection group from a remote cluster to the local cluster. For more information, see Replicating the Hitachi TrueCopy or Universal Replicator Protection Group Configuration to a Secondary Cluster.
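If you choose to replicate an existing configuration from the partner cluster rather than define the protection group again, you can retrieve it with the geopg get subcommand. The following is a minimal sketch, assuming the paris-newyork-ps partnership and the hdspg protection group names that are used in the examples later in this section; run it on the cluster that should receive the configuration:
# geopg get -s paris-newyork-ps hdspg
See the referenced procedure for the complete replication steps.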
You must be assigned the Geo Management rights profile to complete this procedure. For more information, see Chapter 4, Administering RBAC in Oracle Solaris Cluster 4.3 Geographic Edition System Administration Guide.
Create a new protection group by using the geopg create command. This command creates a protection group on all nodes of the local cluster.
# geopg create -s partnership -o local-role -d truecopy \
[-p property [-p…]] protection-group
-s partnership
Specifies the name of the partnership.
-o local-role
Specifies the role of this protection group on the local cluster as either primary or secondary.
-d truecopy
Specifies that the protection group data is replicated by the Hitachi TrueCopy or Universal Replicator software.
-p property
Specifies the properties of the protection group.
You can specify the following properties:
Description – Describes the protection group.
Timeout – Specifies the timeout period for the protection group in seconds.
Nodelist – Lists the host names of the machines that can be primary for the replication subsystem.
Ctgid – Specifies the consistency group ID (CTGID) of the protection group.
Cluster_dgs – Optional. Specifies the Oracle Solaris Cluster device group where the data is written. The LUNs in this device group must correspond to the LUNs that are replicated in the Hitachi TrueCopy or Universal Replicator data replication component that you added to the protection group. Setting this property ensures that the Geographic Edition framework takes offline the specified device group during switchovers and takeovers.
For more information about the properties you can set, see Appendix A, Standard Geographic Edition Properties, in Oracle Solaris Cluster 4.3 Geographic Edition System Administration Guide.
protection-group
Specifies the name of the protection group.
For information about the names and values that are supported by the Geographic Edition framework, see Appendix B, Legal Names and Values of Geographic Edition Entities, in Oracle Solaris Cluster 4.3 Geographic Edition System Administration Guide.
For more information about the geopg command, refer to the geopg(1M) man page.
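If you need to adjust a property after the protection group exists, you can change it in place with the geopg set-prop subcommand. The following is a minimal sketch, assuming the hdspg protection group from the examples that follow and an illustrative Timeout value; whether a given property can be changed, and when, depends on its tunability, as described in the property references above:
# geopg set-prop -p Timeout=300 hdspg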
This example creates a Hitachi TrueCopy or Universal Replicator protection group on cluster-paris, which is set as the primary cluster.
# geopg create -s paris-newyork-ps -o primary -d truecopy \
-p Ctgid=5 -p Nodelist=phys-paris-1,phys-paris-2 hdspg

Example 6 Creating a Hitachi TrueCopy or Universal Replicator Protection Group for Application Resource Groups That Are Online
This example creates a Hitachi TrueCopy or Universal Replicator protection group, hdspg, for an application resource group, resourcegroup1, that is currently online on cluster-newyork.
Create the protection group without the application resource group.
# geopg create -s paris-newyork-ps -o primary -d truecopy \
-p Ctgid=5 -p Nodelist=phys-paris-1,phys-paris-2 hdspg
Activate the protection group.
# geopg start -e local hdspg
Add the application resource group.
# geopg add-resource-group resourcegroup1 hdspg
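To confirm the resulting configuration, you can display the protection group. This is a hedged sketch; the exact output depends on your software release:
# geopg list hdspg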
This section describes the protection group configuration that is required in Geographic Edition software to guarantee data consistency in asynchronous mode replication. Asynchronous mode replication is implemented by using the async fence level of Hitachi Universal Replicator. The following discussion therefore applies only to the async fence level and to Hitachi Universal Replicator as implemented in the Geographic Edition module.
The Geographic Edition module supports Hitachi TrueCopy and Universal Replicator data replication components in asynchronous mode replication. Routine operations for both Hitachi TrueCopy and Universal Replicator provide data consistency in asynchronous mode. However, in the event of a temporary loss of communications or of a “rolling disaster” where different parts of the system fail at different times, only Hitachi Universal Replicator software can prevent loss of consistency of replicated data for asynchronous mode. In addition, Hitachi Universal Replicator software can only ensure data consistency with the configuration described in this section and in Configuring the /etc/horcm.conf File on the Nodes of the Primary Cluster and Configuring the /etc/horcm.conf File on the Nodes of the Secondary Cluster.
In Hitachi Universal Replicator software, the Hitachi storage arrays replicate data from primary storage to secondary storage. The application that produced the data is not involved. Even so, to guarantee data consistency, replication must preserve the application's I/O write ordering, regardless of how many disk devices the application writes.
During routine operations, Hitachi Universal Replicator software on the secondary storage array pulls data from cache on the primary storage array. If data is produced faster than it can be transferred, Hitachi Universal Replicator commits the backlogged I/O, along with a sequence number for each write, to a journal volume on the primary storage array. The secondary storage array pulls that data from primary storage and commits it to its own journal volumes, from which it is transferred to application storage. If communications fail and are later restored, the secondary storage array begins to resynchronize the two sites by continuing to pull backlogged data and sequence numbers from the journal volume. Sequence numbers control the order in which data blocks are committed to disk, so write ordering is maintained at the secondary site despite the interruption. As long as the journal volumes have enough disk space to record all of the data that is generated by the application on the primary cluster during the period of failure, consistency is guaranteed.
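Journal volumes are defined on the storage arrays and are referenced when an asynchronous pair is created. As an illustration only, a Universal Replicator pair creation that names the journal on each side might look like the following sketch; the -jp and -js journal IDs shown here (0) are placeholders, and the exact option set depends on your Command Control Interface version and array configuration:
phys-paris-1# paircreate -g devgroup1 -vl -f async 5 -jp 0 -js 0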
In the event of a rolling disaster, where only some of the backlogged data and sequence numbers reach the secondary storage array after failures begin, sequence numbers determine which data should be committed to data LUNs to preserve consistency.
Along with journal volumes, consistency group IDs (CTGIDs) ensure data consistency even if the storage for an application data service includes devices in multiple Hitachi data replication components. A CTGID is an integer that is assigned to one or more Hitachi Universal Replicator data replication components. It designates those devices that must be maintained in a state of replication consistent with each other. Consistency is maintained among all devices with the same CTGID whether the devices are members of a single Hitachi Universal Replicator data replication component or several Hitachi Universal Replicator data replication components. For example, if Hitachi Universal Replicator stops replication on the devices of one data replication component that is assigned the CTGID of 5, it stops replication on all other devices in data replication components with the CTGID of 5.
To ensure data consistency, an exact correspondence must therefore exist between the data replication components that are used by a single application data service and a CTGID. All data replication components that are used by a single data service must have the same unique CTGID. No data replication component can have that CTGID unless it is used by the data service.
To ensure this correspondence, the Geographic Edition software allows the administrator to set a CTGID property on each protection group. The data replication components that are added to the protection group must all have the same CTGID as the protection group. If other data replication components are assigned the same CTGID as the data replication components in the protection group, the Geographic Edition framework generates an error. For example, if the protection group app1-pg has been assigned the CTGID of 5, all data replication components included in app1-pg must have the CTGID of 5, and all data replication components with the CTGID of 5 must be included in app1-pg.
You are not required to set a CTGID on a protection group. The Hitachi Universal Replicator storage software will automatically assign a unique CTGID to an asynchronously replicated data replication component when it is initialized. Thereafter, the pairs in that data replication component will be maintained in a state of consistency with each other. Thus, if an application data service in a protection group uses storage in just one asynchronously replicated Hitachi Universal Replicator data replication component, you can let the Hitachi Universal Replicator storage array assign the data replication component's CTGID. You do not have to also set the CTGID of the protection group.
Similarly, if you do not need data consistency, or if your application does not write asynchronously to your Hitachi Universal Replicator data replication components, then setting the CTGID on the protection group has little use. However, if you do not assign a CTGID to a protection group, any later configuration changes to the data replication component or to the protection group might lead to conflicts. Assignment of a CTGID to a protection group provides the most flexibility for later changes and the most assurance of data replication component consistency.
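If you want to confirm which CTGID an asynchronously replicated data replication component was given, the Hitachi Command Control Interface utilities can report it. The following is a hedged sketch, assuming the devgroup1 component that is used later in this section; the output fields vary by software version:
phys-paris-1# pairvolchk -g devgroup1 -s
The CTGID that is reported must match the Ctgid property that you assign to the protection group.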
You can assign a consistency group ID (CTGID) to a protection group by setting the property Ctgid=consistency-group-ID as an option to the geopg create command. You can assign CTGID values to data replication components in one of two ways:
You can add uninitialized data replication components to the protection group. They are initialized and acquire the CTGID of the protection group when the protection group is started with the geopg start command.
You can initialize a data replication component with the CTGID that you plan to use for the protection group that will hold that data replication component. After you create the protection group with that CTGID, you must assign the data replication component to it.
The following procedure demonstrates these two methods of setting the CTGID for the devices that are used by an application data service. The procedure configures a protection group named app1-pg with a CTGID of 5. This protection group contains the app1-rg resource group and the Hitachi Universal Replicator devgroup1 data replication component, which uses the async fence level.
Before You Begin
Configure a Hitachi Universal Replicator data replication component with journal volumes in the /etc/horcm.conf file as described in Configuring the /etc/horcm.conf File on the Nodes of the Primary Cluster and Configuring the /etc/horcm.conf File on the Nodes of the Secondary Cluster.
Configure the devices in each device group as raw-disk devices as described in How to Set Up Raw-Disk Device Groups for Geographic Edition Systems.
Configure an Oracle Solaris Cluster resource group that includes a resource of type HAStoragePlus in addition to any other resources that are required for its application data service. This HAStoragePlus resource must use the disk devices of a previously configured Hitachi Universal Replicator data replication component as described in How to Configure a Highly Available Local File System for Hitachi TrueCopy or Universal Replicator Replication.
On the primary cluster, create the protection group app1-pg with the CTGID of 5, and add the app1-rg resource group to it.
phys-paris-1# geopg create -s paris-newyork-ps -o primary -d truecopy -p Ctgid=5 \
-p Nodelist=phys-paris-1,phys-paris-2 app1-pg
phys-paris-1# geopg add-resource-group app1-rg app1-pg
Add data replication components that have been configured in the /etc/horcm.conf file but have not yet been initialized with the paircreate command.
phys-paris-1# geopg add-replication-component -p Fence_level=async devgroup1 app1-pg
Assign CTGIDs to data replication components when they are initialized by using the Hitachi paircreate command, and add the data replication components to the protection group that has the same value for the CTGID property.
In the following example, a data replication component is initialized with the CTGID of 5 and then added to the app1-pg protection group:
phys-paris-1# paircreate -g devgroup1 -vl -f async 5
phys-paris-1# geopg add-replication-component -p Fence_level=async devgroup1 app1-pg
Start the protection group.
phys-paris-1# geopg start -e local app1-pg
Uninitialized data replication components, if any, are initialized and assigned the CTGID of 5.
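After the protection group is started, you can check the overall partnership and protection group states from either cluster. This is a minimal sketch; the output is configuration-dependent:
phys-paris-1# geoadm status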
The Geographic Edition framework supports Oracle Real Application Clusters with Hitachi TrueCopy and Universal Replicator software. Observe the following requirements when you configure Oracle Real Application Clusters:
Each Oracle Clusterware OCR and Voting Disk Location must be in its own device group on each cluster and cannot be replicated.
Static data, such as Oracle Clusterware and database binaries, is not required to be replicated, but this data must be accessible from all nodes of both clusters.
You must create the storage resources for replicated dynamic database files in their own resource groups. These storage resources must be separate from the resource group that holds the storage resource for Oracle Clusterware.
To be able to leave Oracle RAC infrastructure resource groups outside of Geographic Edition control, you must run Geographic Edition binaries on both cluster partners and set the Oracle RAC protection group External_Dependency_Allowed property to true.
Do not add the Oracle Clusterware OCR and Voting Disk device group to the protection group's Cluster_dgs property.
Do not add Oracle RAC infrastructure resource groups to the protection group. Add only the rac_server_proxy resource group and the resource groups for replicated device groups to the protection group. Also, you must set the Auto_start_on_new_cluster resource group property to False for the rac_server_proxy resource group and for the resource groups of replicated device groups.
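The following is a hedged sketch of setting the Auto_start_on_new_cluster property, assuming the rac_server_proxy-rg resource group name that is used in the example later in this section; repeat the command for each replicated device-group resource group:
# clresourcegroup set -p Auto_start_on_new_cluster=False rac_server_proxy-rg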
Before You Begin
Before you create a protection group for Oracle Real Application Clusters (Oracle RAC), ensure that the following conditions are met:
Read Requirements to Support Oracle Real Application Clusters With Data Replication Software.
The node list of the protection group must be the same as the node list of the Oracle RAC framework resource group.
If one cluster is running Oracle RAC on a different number of nodes than another cluster, ensure that all nodes on both clusters have the same resource groups defined.
You must be assigned the Geo Management RBAC rights profile to complete this procedure. For more information about RBAC, see Chapter 4, Administering RBAC in Oracle Solaris Cluster 4.3 Geographic Edition System Administration Guide.
Create a new protection group by using the geopg create command. This command creates a protection group on all nodes of the local cluster.
# geopg create -s partnership -o local-role -d truecopy \
-p External_Dependency_Allowed=true [-p property [-p…]] protection-group
-s partnership
Specifies the name of the partnership.
-o local-role
Specifies the role of this protection group on the local cluster as primary.
-d truecopy
Specifies that the protection group data is replicated by the Hitachi TrueCopy or Universal Replicator software.
-p property
Specifies the properties of the protection group.
You can specify the following properties:
Description – Describes the protection group.
External_Dependency_Allowed – Specifies whether to allow any dependencies between resource groups and resources that belong to this protection group and resource groups and resources that do not belong to this protection group. For RAC, set this property to true.
Timeout – Specifies the timeout period for the protection group in seconds.
Nodelist – Lists the host names of the machines that can be primary for the replication subsystem.
Ctgid – Specifies the consistency group ID (CTGID) of the protection group.
Cluster_dgs – Optional. Specifies the Oracle Solaris Cluster device group where the replicated data is written. Specify this property if you want Geographic Edition to unmount the file systems on this device group and take the device group offline. Do not specify the OCR or Voting Disk device groups in this property.
For more information about the properties you can set, see Appendix A, Standard Geographic Edition Properties, in Oracle Solaris Cluster 4.3 Geographic Edition System Administration Guide.
protection-group
Specifies the name of the protection group.
For information about the names and values that are supported by the Geographic Edition framework, see Appendix B, Legal Names and Values of Geographic Edition Entities, in Oracle Solaris Cluster 4.3 Geographic Edition System Administration Guide.
For more information about the geopg command, refer to the geopg(1M) man page.
Add a Hitachi TrueCopy or Universal Replicator data replication component to the protection group.
# geopg add-replication-component [-p property [-p…]] data-replication-component protection-group
-p property
Specifies the properties of the protection group.
You can specify the Fence_level property, which defines the fence level that is used by the disk device group. The fence level determines the level of consistency between the primary and secondary volumes for that disk device group. You must set this property to never.
Caution - To avoid application failure on the primary cluster, specify a Fence_level of never or async. If the Fence_level parameter is not set to never or async, data replication might not function properly when the secondary site goes down. If you specify a Fence_level of never, the data replication roles do not change after you perform a takeover. Do not use programs that would prevent the Fence_level parameter from being set to data or status because these values might be required in special circumstances. If you have special requirements to use a Fence_level of data or status, consult your Oracle representative.
For more information about the properties you can set, see Appendix A, Standard Geographic Edition Properties, in Oracle Solaris Cluster 4.3 Geographic Edition System Administration Guide.
protection-group
Specifies the name of the protection group.
# geopg add-resource-group resource-group protection-group
resource-group
Specifies a comma-separated list of resource groups to add to or remove from the protection group. The specified resource groups must already be defined.
The protection group must be online before you add a resource group; see the sketch after these operand descriptions. The geopg add-resource-group command fails when a protection group is offline and the resource group that is being added is online.
protection-group
Specifies the name of the protection group.
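If the protection group is not yet active on the local cluster, you can start it before you add online resource groups. The following is a minimal sketch that uses the local scope, as shown elsewhere in this section:
# geopg start -e local protection-group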
This example creates the protection group pg1 with the External_Dependency_Allowed property set to true. The example adds a data replication component to the protection group, and then adds the resource group that contains the Oracle RAC server proxy and the resource groups that contain resources for the Oracle ASM device groups for Oracle Database files that use replicated LUNs. The node list of the Oracle RAC framework resource group is set to all nodes of the cluster.
Create the protection group on the primary cluster.
# geopg create -s pts1 -o PRIMARY -d truecopy \
-p External_Dependency_Allowed=true pg1
Protection group "pg1" successfully created.
Add the Hitachi TrueCopy or Universal Replicator data replication component VG01 to protection group pg1.
# geopg add-replication-component --property Fence_level=never VG01 pg1
Device group "VG01" successfully added to the protection group "pg1".
Add the rac_server_proxy-rg resource group and the replicated device-group resource groups asm-dg-dbfiles-rg, hasp4rac-rg, and scaldbdg-rg to the protection group.
# geopg add-resource-group rac_server_proxy-rg,asm-dg-dbfiles-rg,hasp4rac-rg,scaldbdg-rg pg1
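To verify that the resource groups are now under the control of the protection group, you can display its configuration. This is a hedged sketch; the output format varies by release:
# geopg list pg1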
Before creating the protection group, the data replication layer validates that the horcmd daemon is running on at least one node that is specified in the Nodelist property of the protection group.
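You can perform the same check manually before you run the geopg create command. The following is a minimal sketch that simply looks for the daemon process on a node from the Nodelist property; the HORCM instance naming on your systems might differ:
phys-paris-1# ps -ef | grep horcmd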
If the Cluster_dgs property is specified, then the data replication layer verifies that the device group specified is a valid Oracle Solaris Cluster device group. The data replication layer also verifies that the device group is of a valid type.
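You can inspect the device group yourself before creating the protection group. The following is a hedged sketch, assuming a device group named devgroup1 as in the earlier examples:
phys-paris-1# cldevicegroup show devgroup1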
An Oracle Solaris Cluster resource group is automatically created when the protection group is created.
A resource in this resource group monitors data replication. The name of the Hitachi TrueCopy or Universal Replicator data replication resource group is rg-tc-protection-group.
Caution - These automatically created replication resource groups are for Geographic Edition internal implementation purposes only. Use caution when you modify these resource groups by using Oracle Solaris Cluster commands.
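To observe, without modifying, the replication resource group that the framework creates, you can check its status. The following is a hedged sketch that assumes a protection group named app1-pg, which yields the resource group name rg-tc-app1-pg:
phys-paris-1# clresourcegroup status rg-tc-app1-pg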