Oracle Solaris Cluster Geographic Edition Data Replication Guide for Oracle Solaris Availability Suite, Oracle Solaris Cluster 4.0
1. Replicating Data With the Availability Suite Feature of Oracle Solaris
Task Summary of Replicating Data in an Availability Suite Protection Group
Overview of Availability Suite Data Replication
Availability Suite Lightweight Resource Groups
Availability Suite Replication Resource Groups
Protecting Data on Replicated Volumes From Resynchronization Failure
Initial Configuration of Availability Suite Software
Availability Suite Volume Sets
Resources Required For A Volume Set
Automatic Configuration of Volume Sets
Automatically Enabling Fallback Snapshots
How to Set Up Raw-Disk Device Groups for Geographic Edition Systems
How to Configure an Availability Suite Volume in Oracle Solaris Cluster
Enabling an Availability Suite Volume Set
Automatically Enabling a Solaris Volume Manager Volume Set
Automatically Enabling a Raw Device Volume Set
Managing Fallback Snapshots Manually
Manually Enabling Fallback Snapshots
Manually Disabling Fallback Snapshots
Manually Modifying Fallback Snapshots
How to Configure the Oracle Solaris Cluster Device Group That Is Controlled by Availability Suite
How to Configure a Highly Available File System for Use With Availability Suite
2. Administering Availability Suite Protection Groups
3. Migrating Services That Use Availability Suite Data Replication
This section describes the initial steps you must perform before you can configure Availability Suite replication in the Geographic Edition product.
The example protection group, avspg, in this section has been configured on a partnership that consists of two clusters, cluster-paris and cluster-newyork. An application, which is encapsulated in the apprg1 resource group, is protected by the avspg protection group. The application data is contained in the avsdg device group. The volumes in the avsdg device group can be Solaris Volume Manager volumes or raw device volumes.
The resource group, apprg1, and the device group, avsdg, are present on both the cluster-paris cluster and the cluster-newyork cluster. The avspg protection group protects the application data by replicating data between the cluster-paris cluster and the cluster-newyork cluster.
Note - Replication of each device group requires a logical host on the local cluster and a logical host on the partner cluster.
You cannot use the slash character (/) in a cluster tag in the Geographic Edition software. If you are using raw DID devices, you cannot use predefined DID device group names such as dsk/d3.
To use DIDs with raw device groups, see How to Set Up Raw-Disk Device Groups for Geographic Edition Systems.
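Because a slash in a device group name is rejected, it can be convenient to screen names before using them. The following sketch is a hypothetical validation helper (not part of the product), assuming only the no-slash rule stated above:

```shell
# Hypothetical helper: reject device group names (cluster tags) that
# contain a slash, which the Geographic Edition software does not allow.
valid_dg_name() {
    case "$1" in
        */* | "") return 1 ;;   # slashes and empty names are invalid
        *)        return 0 ;;
    esac
}

valid_dg_name rawdg  && echo "rawdg: ok"
valid_dg_name dsk/d3 || echo "dsk/d3: invalid"
```

A name such as rawdg passes, while a predefined DID name such as dsk/d3 is rejected.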
This section provides the following information:
How to Set Up Raw-Disk Device Groups for Geographic Edition Systems
How to Configure an Availability Suite Volume in Oracle Solaris Cluster
How to Configure the Oracle Solaris Cluster Device Group That Is Controlled by Availability Suite
How to Configure a Highly Available File System for Use With Availability Suite
This section describes the storage resources and the files required to configure a volume set by using the Availability Suite feature.
Before you can define an Availability Suite volume set, you must determine the following:
The data volumes to replicate, such as vol-data-paris in avsdg on cluster-paris and vol-data-newyork in avsdg on cluster-newyork.
The bitmap volume that is needed for replication, such as vol-bitmap-paris in avsdg on cluster-paris and vol-bitmap-newyork in avsdg on cluster-newyork.
One shadow volume and one bitmap shadow volume on each cluster to use for a fallback snapshot, if you choose to configure one. A fallback snapshot is a compact dependent shadow volume created on the secondary cluster immediately prior to the resynchronization of a secondary volume, from which the secondary volume can be reconstructed if resynchronization fails. One fallback snapshot can be configured for each replicated volume on each cluster.
Because the fallback snapshot is a compact dependent shadow volume, as described in the Sun StorageTek Availability Suite 4.0 Point-in-Time Copy Software Administration Guide, the shadow volume need only be large enough to contain changes to the secondary volume. For most installations a volume that is 10% the size of the secondary volume is sufficient. The bitmap shadow volume is sized according to the rules described in the Sun StorageTek Availability Suite 4.0 Point-in-Time Copy Software Administration Guide. On each cluster the shadow volume and bitmap shadow volume must be in the same device group as the replicated volume that the fallback snapshot will protect.
The logical host to use exclusively for replication of the device group avsdg, such as the logical host logicalhost-paris-1 on cluster-paris and the logical host logicalhost-newyork-1 on cluster-newyork.
Note - The logical host that is used for Availability Suite replication must be different from the Geographic Edition infrastructure logical host. For more information about configuring logical hostnames, see Configuring Logical Hostnames in Oracle Solaris Cluster Geographic Edition System Administration Guide.
One devicegroupname-volset.ini file is required for each device group that will be replicated. The volset file is located at /var/cluster/geo/avs/devicegroupname-volset.ini on all nodes of the primary and secondary clusters of the protection group. For example, the volset file for the device group avsdg is located at /var/cluster/geo/avs/avsdg-volset.ini.
The fields in the volume set file that are handled by the Geographic Edition software are described in the following table. The Geographic Edition software does not handle other parameters of the volume set, such as the size of the memory queue and the number of asynchronous threads. You must adjust those parameters manually by using Availability Suite commands.
phost – Logical hostname of the cluster on which the primary volume resides
pdev – Full path name of the primary data volume
pbitmap – Full path name of the partition that stores the bitmap of the primary volume
shost – Logical hostname of the cluster on which the secondary volume resides
sdev – Full path name of the secondary data volume
sbitmap – Full path name of the partition that stores the bitmap of the secondary volume
ip – Network protocol that is used for replication
sync | async – Operating mode of the replication
C tag – The letter C followed by the name of the device group (disk set) that contains the local data and bitmap volumes
Details on sizing the disk queue volume can be found in the Availability Suite Remote Mirror Software Administration and Operations Guide and the sndradm(1M) man page.
The Geographic Edition software does not modify the value of the Availability Suite parameters. The software controls only the role of the volume set during switchover and takeover operations.
For more information about the format of the volume set files, refer to the Availability Suite documentation and the iiadm(1M) man page.
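Because each volume set must occupy a single line with a fixed field layout, a malformed file can be caught before the device group is added to a protection group. The following sketch is a hypothetical helper, not part of the product; the ten-field count follows the sample entries shown later in this chapter:

```shell
# Hypothetical helper: write a sample devicegroupname-volset.ini and check
# that every non-empty line holds the ten expected fields:
# phost pdev pbitmap shost sdev sbitmap protocol mode C tag
cat > /tmp/avsdg-volset.ini <<'EOF'
logicalhost-paris-1 /dev/md/avsset/rdsk/d100 /dev/md/avsset/rdsk/d101 logicalhost-newyork-1 /dev/md/avsset/rdsk/d100 /dev/md/avsset/rdsk/d101 ip async C avsset
EOF

awk 'NF > 0 && NF != 10 { printf "line %d: %d fields (expected 10)\n", NR, NF; bad = 1 }
     END { exit bad }' /tmp/avsdg-volset.ini && echo "volset.ini: ok"
```

The awk exit status is nonzero when any line has the wrong number of fields, so the check can gate an automated setup script.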
You can automatically enable fallback snapshots to protect your replicated secondary volumes from corruption by an incomplete resynchronization as described in Protecting Data on Replicated Volumes From Resynchronization Failure. To do so, on each cluster you will configure one /var/cluster/geo/avs/devicegroupname-snapshot.ini file for each device group whose volumes you want to protect. The devicegroupname-snapshot.ini files are read when the device group is added to a protection group, at the same time that the /var/cluster/geo/avs/devicegroupname-volset.ini files of the device group are read. You can also add fallback snapshots to the volumes of a device group after the device group is added to a protection group, as described in Manually Enabling Fallback Snapshots, but automatic configuration is simpler.
A fallback snapshot for one volume in a device group is enabled by using a single line in the devicegroupname-snapshot.ini file in the following format:
master_vol shadow_vol bitmap_shadow_vol
The volumes used by the fallback snapshot are described in Availability Suite Volume Sets. The variable master_vol is the path name of the replicated volume, shadow_vol is the path name of the compact dependent shadow volume that acts as a fallback for the secondary volume, and bitmap_shadow_vol is the path name of the bitmap volume for the compact dependent shadow volume. Full path names for each volume are required, and all three volumes must be in the same device group. For a single replicated volume it is easiest to use the same volume names on each cluster, but it is not required that you do so. For example, the shadow volume on cluster-paris might be /dev/md/avsset/rdsk/d102, while the shadow volume on cluster-newyork might be /dev/md/avsset/rdsk/d108.
The following example shows one line from the /var/cluster/geo/avs/avsset-snapshot.ini file that enables a fallback snapshot on one cluster for the secondary volume /dev/md/avsset/rdsk/d100 in the device group avsset. The device group avsset was created by using Solaris Volume Manager software, but any type of device group supported by the Geographic Edition software can be used with fallback snapshots.
/dev/md/avsset/rdsk/d100 /dev/md/avsset/rdsk/d102 /dev/md/avsset/rdsk/d103
This example line contains the following types of entries:
/dev/md/avsset/rdsk/d100 – Secondary volume
/dev/md/avsset/rdsk/d102 – Fallback snapshot volume
/dev/md/avsset/rdsk/d103 – Fallback snapshot bitmap volume
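A snapshot.ini line is easy to malform, so a simple structural check helps before the file is read. The following sketch is a hypothetical helper (not part of the product), assuming only the three-field, absolute-path format described above:

```shell
# Hypothetical helper: write a sample devicegroupname-snapshot.ini and
# verify that every non-empty line holds exactly three absolute paths:
# master_vol shadow_vol bitmap_shadow_vol
cat > /tmp/avsset-snapshot.ini <<'EOF'
/dev/md/avsset/rdsk/d100 /dev/md/avsset/rdsk/d102 /dev/md/avsset/rdsk/d103
EOF

awk 'NF > 0 {
         if (NF != 3) { print "line " NR ": expected 3 fields"; bad = 1 }
         for (i = 1; i <= NF; i++)
             if ($i !~ /^\//) { print "line " NR ": field " i " is not absolute"; bad = 1 }
     }
     END { exit bad }' /tmp/avsset-snapshot.ini && echo "snapshot.ini: ok"
```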
Geographic Edition supports the use of raw-disk device groups in addition to various volume managers. When you initially configure Oracle Solaris Cluster, device groups are automatically configured for each raw device in the cluster. Use this procedure to reconfigure these automatically created device groups for use with Geographic Edition.
The following commands remove the predefined device groups for d7 and d8.
phys-paris-1# cldevicegroup disable dsk/d7 dsk/d8
phys-paris-1# cldevicegroup offline dsk/d7 dsk/d8
phys-paris-1# cldevicegroup delete dsk/d7 dsk/d8
Ensure that the new device group name does not contain any slashes. The following command creates a global device group, rawdg, which contains d7 and d8.
phys-paris-1# cldevicegroup create -n phys-paris-1,phys-paris-2 \
-t rawdisk -d d7,d8 rawdg
phys-paris-1# cldevicegroup show rawdg
The DIDs that you use on the partner cluster do not need to match those on the primary cluster. In the following commands, the newyork cluster is the partner of the paris cluster.
phys-newyork-1# cldevicegroup disable dsk/d5 dsk/d6
phys-newyork-1# cldevicegroup offline dsk/d5 dsk/d6
phys-newyork-1# cldevicegroup delete dsk/d5 dsk/d6
Use the same device group name that you used on the primary cluster.
phys-newyork-1# cldevicegroup create -n phys-newyork-1,phys-newyork-2 \
-t rawdisk -d d5,d6 rawdg
The following command adds rawdg to the Availability Suite protection group rawpg. The device group to be added must exist and must have the same name, in this case rawdg, on both clusters.
phys-paris-1# geopg add-device-group -p local_logical_host=paris-1h \
-p remote_logical_host=newyork-1h rawdg rawpg
This procedure configures Availability Suite volumes in an Oracle Solaris Cluster environment. These volumes can be Solaris Volume Manager volumes or raw device volumes.
The volumes are encapsulated at the Oracle Solaris Cluster device-group level. The Availability Suite feature interacts with the Solaris Volume Manager disk sets or raw device through this device group interface. The path to the volumes depends on the volume type, as described in the following table.
Solaris Volume Manager volume – /dev/md/diskset/rdsk/d#
Raw device volume – /dev/did/rdsk/d#s#
For example, if you configure the volume by using a raw device, choose a raw device group, dsk/d3, on cluster-paris and cluster-newyork.
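Scripts that handle both volume types can distinguish them by path. The following sketch is a hypothetical helper (not part of the product), assuming the path conventions listed above:

```shell
# Hypothetical helper: classify a volume path by its device type.
volume_type() {
    case "$1" in
        /dev/md/*/rdsk/*) echo "svm" ;;   # Solaris Volume Manager volume
        /dev/did/rdsk/*)  echo "raw" ;;   # raw DID device partition
        *)                echo "unknown" ;;
    esac
}

volume_type /dev/md/avsset/rdsk/d100   # prints svm
volume_type /dev/did/rdsk/d3s3         # prints raw
```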
The Availability Suite feature requires a dedicated bitmap volume for each data volume to track modifications to the data volume when the system is in logging mode.
If you use a raw device to configure the volumes, create two partitions, /dev/did/rdsk/d3s3 and /dev/did/rdsk/d3s4, on the /dev/did/rdsk/d3 device on cluster-paris.
If you use a raw device to configure the volumes, create two partitions, /dev/did/rdsk/d3s5 and /dev/did/rdsk/d3s6, on the /dev/did/rdsk/d3 device on cluster-newyork.
You can optionally create two additional volumes on each cluster for each data volume for which a fallback snapshot will be created, as described in Availability Suite Volume Sets. The compact dependent shadow volume can normally be 10% of the size of the volume it will protect. The bitmap shadow volume is sized according to the rules described in the Availability Suite Point-in-Time Copy Software Administration Guide and the iiadm(1M) man page. The volumes used by the fallback snapshot must be in the same device group as the replicated volume they protect.
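The 10% guideline above translates into simple arithmetic when you plan volume sizes. The following sketch is a hypothetical sizing helper (not part of the product), which rounds up so the shadow volume is never undersized:

```shell
# Hypothetical helper: estimate a compact dependent shadow volume size
# from the 10% guideline, rounding up to the next whole megabyte.
shadow_size_mb() {
    master_mb=$1
    echo $(( (master_mb + 9) / 10 ))
}

shadow_size_mb 20480   # 10% of a 20480 MB secondary volume -> 2048
```

The bitmap shadow volume is sized separately, by the rules in the Point-in-Time Copy documentation, so this helper covers only the shadow volume itself.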
You can enable the Availability Suite volume sets and fallback snapshots in one of two ways:
Automatically, when the device group is added to the protection group, avspg.
Prepare one devicegroupname-volset.ini file for each device group that will be replicated when you are setting up the Availability Suite feature for the first time. If you want to automatically enable fallback snapshots, also prepare one devicegroupname-snapshot.ini file for each device group. You must set the device group's Enable_volume_set property to True. The Availability Suite feature reads the information in the devicegroupname-volset.ini file to automatically enable the device group. If you have configured the optional devicegroupname-snapshot.ini file, it is also read when the device group is added to a protection group.
Manually, after the device group is added to the protection group.
Use the manual procedures to enable volume sets and fallback snapshots when you create volumes on a system that has already been configured.
In this example, the cluster-paris cluster is the primary and avsset is a device group that contains a Solaris Volume Manager disk set.
Example 1-1 Automatically Enabling a Solaris Volume Manager Volume Set
This example has the following entries in the /var/cluster/geo/avs/avsset-volset.ini file. Each volume must be defined on a single line in the file:
logicalhost-paris-1 /dev/md/avsset/rdsk/d100 /dev/md/avsset/rdsk/d101 logicalhost-newyork-1 /dev/md/avsset/rdsk/d100 /dev/md/avsset/rdsk/d101 ip async C avsset
The avsset-volset.ini file contains the following entries:
logicalhost-paris-1 – Primary host
/dev/md/avsset/rdsk/d100 – Primary data
/dev/md/avsset/rdsk/d101 – Primary bitmap
logicalhost-newyork-1 – Secondary host
/dev/md/avsset/rdsk/d100 – Secondary data
/dev/md/avsset/rdsk/d101 – Secondary bitmap
ip – Protocol
async – Mode
C – C tag
avsset – Disk set
The sample configuration file defines a volume set that replicates d100 from cluster-paris to d100 on cluster-newyork by using the bitmap volumes and logical hostnames that are specified in the file.
In this example, the cluster-paris cluster is the primary and rawdg is the name of the device group that contains a raw device disk group, /dev/did/rdsk/d3.
Example 1-2 Automatically Enabling a Raw Device Volume Set
This example has the following entries in /var/cluster/geo/avs/avsdg-volset.ini file. Each volume must be defined on a single line in the file:
logicalhost-paris-1 /dev/did/rdsk/d3s3 /dev/did/rdsk/d3s4 logicalhost-newyork-1 /dev/did/rdsk/d3s5 /dev/did/rdsk/d3s6 ip async C rawdg
The rawdg-volset.ini file contains the following entries:
logicalhost-paris-1 – Primary host
/dev/did/rdsk/d3s3 – Primary data
/dev/did/rdsk/d3s4 – Primary bitmap
logicalhost-newyork-1 – Secondary host
/dev/did/rdsk/d3s5 – Secondary data
/dev/did/rdsk/d3s6 – Secondary bitmap
ip – Protocol
async – Mode
C – C tag
rawdg – Device group
The sample configuration file defines a volume set that replicates d3s3 from cluster-paris to d3s5 on cluster-newyork. The volume set uses the bitmap volumes and logical hostnames that are specified in the file.
After you have added the device group to the protection group, avspg, you can manually enable the Availability Suite volume sets and fallback snapshots. Because the Sun Availability Suite commands are installed in different locations in the supported software versions, the following examples illustrate how to enable volume sets for each software version.
Example 1-3 Manually Enabling an Availability Suite Volume Set
This example manually enables a Solaris Volume Manager volume set when using Availability Suite.
phys-paris-1# /usr/sbin/sndradm -e logicalhost-paris-1 \
/dev/md/avsset/rdsk/d100 /dev/md/avsset/rdsk/d101 \
logicalhost-newyork-1 /dev/md/avsset/rdsk/d100 \
/dev/md/avsset/rdsk/d101 ip async C avsset
Example 1-4 Manually Enabling a Raw Device Volume Set
This example manually enables a raw device volume set when using Availability Suite.
phys-paris-1# /usr/sbin/sndradm -e logicalhost-paris-1 \
/dev/did/rdsk/d3s3 /dev/did/rdsk/d3s4 logicalhost-newyork-1 \
/dev/did/rdsk/d3s5 /dev/did/rdsk/d3s6 ip async C dsk/d3
Information about the sndradm command execution is written to the Availability Suite log file at /var/adm/ds.log. Refer to this file if errors occur while manually enabling the volume set.
Fallback snapshots are described in Protecting Data on Replicated Volumes From Resynchronization Failure. The easiest way to enable a fallback snapshot for a volume is to use the automatic configuration procedures described in Automatically Enabling Fallback Snapshots. However, if a device group is added to a protection group without configuring automatic fallback snapshots for its volumes, they can still be configured manually. This section describes the procedures for manually enabling, disabling and modifying a fallback snapshot for a volume in such a device group.
One replication resource group, containing one replication resource, is automatically created for a device group on each cluster when it is added to a protection group, as described in Availability Suite Replication Resource Groups. The Snapshot_volume property of the replication resource can be used to configure fallback snapshots for its device group. The Snapshot_volume property is a string array, so it can be set to as many fallback snapshot configurations as you have volumes in the device group.
You can enable a fallback snapshot on any of the volumes configured on the device group by appending an entry to those already assigned to the Snapshot_volume property. Each entry is a string of the format:
master_vol:shadow_vol:bitmap_shadow_vol
The variable master_vol is set to the full path name of the secondary volume, shadow_vol is set to the full path name of the compact dependent shadow volume that serves as a fallback snapshot for the secondary volume, and bitmap_shadow_vol is set to the full path name of the bitmap volume for the shadow volume. The three fields are separated by colons, and no spaces are permitted anywhere in the entry.
Note - The Snapshot_volume property is set on the replication resource associated with a device group, not on the device group itself. To view the value of the Snapshot_volume property, you must therefore use the clresource show command on the replication resource devicegroupname-rep-rs.
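Because the entry format is strict about colons and spaces, building the string programmatically avoids typos. The following sketch is a hypothetical helper (not part of the product) that joins the three paths and refuses any component that contains a space:

```shell
# Hypothetical helper: build a Snapshot_volume entry of the form
# master_vol:shadow_vol:bitmap_shadow_vol, rejecting embedded spaces.
snapshot_entry() {
    for v in "$1" "$2" "$3"; do
        case "$v" in
            *" "* | "") echo "invalid component: '$v'" >&2; return 1 ;;
        esac
    done
    printf '%s:%s:%s\n' "$1" "$2" "$3"
}

snapshot_entry /dev/md/avsset/rdsk/d100 \
               /dev/md/avsset/rdsk/d102 \
               /dev/md/avsset/rdsk/d103
```

The printed string can then be passed to clresource set with the += operator, as shown in the procedure below.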
To manually enable a fallback snapshot, the replicated volume must already be configured and added to a protection group as described in How to Add a Data Replication Device Group to an Availability Suite Protection Group. You must also prepare two volumes on each cluster to use for the fallback snapshot as described in Availability Suite Volume Sets.
Because the Snapshot_volume property can contain multiple values in the format master_vol:shadow_vol:bitmap_shadow_vol, you append a new entry to those already assigned to the property by using the += (plus-equal) operator, as shown in this example:
-p Snapshot_volume+=/dev/md/avsset/rdsk/d100:/dev/md/avsset/rdsk/d102:/dev/md/avsset/rdsk/d103
In this entry the replicated volume is /dev/md/avsset/rdsk/d100, in the device group avsset. The fallback snapshot uses the shadow volume /dev/md/avsset/rdsk/d102. Its bitmap shadow volume is /dev/md/avsset/rdsk/d103.
Example 1-5 Manually Enabling a Fallback Snapshot
This example configures fallback snapshots on both clusters for a replicated volume /dev/md/avsset/rdsk/d100 in the Availability Suite device group avsset. For simplicity, this example assumes that you are enabling fallback snapshots for the replicated volume on both clusters. It also assumes the same path names for the replicated volume, the shadow volume and the bitmap shadow volume on both clusters. In practice you can use different volume names on each cluster in a partnership as long as the volumes on any one cluster are in the same device group, and the device group to which they belong has the same name on both clusters.
In this example a fallback snapshot on each cluster is configured by using the compact dependent shadow volume /dev/md/avsset/rdsk/d102 and the bitmap shadow volume /dev/md/avsset/rdsk/d103. The protection group of the replicated volume is avspg. The device group avsset is created by using Solaris Volume Manager software, but any type of device group supported by the Geographic Edition software can be used with fallback snapshots.
Perform Steps 1 and 2 of the following procedure on one node of either cluster. Perform Step 3 on one node of both clusters. Perform Step 4 on one node of the cluster that is currently secondary for the device group.
Perform this step on one node of either cluster.
Verify which cluster is the current primary and which is the current secondary for the device group containing the volume for which you are enabling a fallback snapshot:
phys-newyork-1# /usr/sbin/sndradm -P
Perform this step on one node of either cluster.
Identify the resource group used for the replication of the device group avsset. It will have a name of the form protectiongroupname-rep-rg and it will contain a resource named devicegroupname-rep-rs, as described in Availability Suite Replication Resource Groups. In this example the replication resource group is called avspg-rep-rg, and the replication resource is called avsset-rep-rs.
phys-newyork-1# geopg list
Perform this step on one node of each cluster on which you want to configure fallback snapshots.
Append the entry /dev/md/avsset/rdsk/d100:/dev/md/avsset/rdsk/d102:/dev/md/avsset/rdsk/d103 to the Snapshot_volume property on the resource avsset-rep-rs. Do not put spaces adjacent to the colons, and ensure that you include the + sign in the operator:
phys-newyork-1# clresource set -g avspg-rep-rg \
-p Snapshot_volume+=/dev/md/avsset/rdsk/d100:/dev/md/avsset/rdsk/d102:/dev/md/avsset/rdsk/d103 \
avsset-rep-rs
To enable the fallback snapshot, perform this step on one node of the cluster that is currently secondary for the device group.
Attach the snapshot volume to the secondary replicated volume. In this command you will again specify the master volume, shadow volume, and bitmap shadow volume, separated by spaces:
phys-newyork-1# /usr/sbin/sndradm -C avsset -I a /dev/md/avsset/rdsk/d100 /dev/md/avsset/rdsk/d102 /dev/md/avsset/rdsk/d103
A Snapshot_volume property can contain multiple entries, one for each replicated volume in its associated device group. If you want to disable the fallback snapshot for just one of the replicated volumes in a device group, you must identify the exact entry for that volume and explicitly remove it by using the -= (minus-equal) operator as shown in this example:
-p Snapshot_volume-=/dev/md/avsset/rdsk/d100:/dev/md/avsset/rdsk/d102:/dev/md/avsset/rdsk/d103
You can locate the specific entry for the fallback snapshot you want to disable by using the clresource show command on the devicegroupname-rep-rs resource.
Example 1-6 Manually Disabling a Fallback Snapshot
This example disables the fallback snapshot for the secondary replicated volume /dev/md/avsset/rdsk/d100. This fallback snapshot was enabled in Example 1-5. Perform Steps 1 and 2 of the following procedure on one node of either cluster. Perform Steps 3 and 4 on one node of both clusters. Perform Step 5 on one node of the cluster that is currently secondary for the device group.
Perform this step on one node of either cluster.
Verify which cluster is the current primary and which is the current secondary for the device group containing the volume for which you are disabling a fallback snapshot:
phys-newyork-1# /usr/sbin/sndradm -P
Perform this step on one node of either cluster.
Identify the resource group used for the replication of the device group avsset. It will have a name of the form protectiongroupname-rep-rg and it will contain a resource named devicegroupname-rep-rs, as described in Availability Suite Replication Resource Groups. In this example the replication resource group is called avspg-rep-rg, and the replication resource is called avsset-rep-rs.
phys-newyork-1# geopg list
Perform this step on one node of each cluster.
Locate the entry you want to delete from those configured on the Snapshot_volume property of the replication resource:
phys-newyork-1# clresource show -p Snapshot_volume avsset-rep-rs
Perform this step on one node of each cluster.
Unconfigure the Snapshot_volume property. The operator -= removes the specified value from the property. Ensure that you include the - sign in the operator, and that you specify the Snapshot_volume entry exactly as it appears in the output of the clresource show command:
phys-newyork-1# clresource set \
-p Snapshot_volume-=/dev/md/avsset/rdsk/d100:/dev/md/avsset/rdsk/d102:/dev/md/avsset/rdsk/d103 \
avsset-rep-rs
Perform this step on one node of the cluster that is currently secondary for the device group.
Detach the snapshot volume from the replicated data volume. In this command you will again specify the master volume, shadow volume and bitmap shadow volume, separated by spaces:
phys-newyork-1# /usr/sbin/sndradm -C avsset -I d /dev/md/avsset/rdsk/d100 /dev/md/avsset/rdsk/d102 /dev/md/avsset/rdsk/d103
To manually modify a fallback snapshot, delete the entry you want to change from the Snapshot_volume property, then add the new entry. Follow the procedures that are described in Manually Disabling Fallback Snapshots and in Manually Enabling Fallback Snapshots.
The Availability Suite feature supports Solaris Volume Manager and raw device volumes.
Verify that the device group is registered with the Oracle Solaris Cluster software on one node of the primary cluster:
# cldevicegroup show -v dg1
For more information about this command, see the cldevicegroup(1CL) man page.
Repeat the verification on one node of the partner cluster:
# cldevicegroup show -v dg1
The application writes to this file system.
You can use either a cluster file system (PxFS) or a highly available local file system. A cluster file system can be accessed simultaneously by all nodes of the cluster, whereas a highly available local file system is mounted on one node at a time.
Note - You must set the mount at boot field in the /etc/vfstab file to no. This value prevents the file system from being mounted on the secondary cluster at cluster startup. Instead, the Oracle Solaris Cluster software and the Geographic Edition framework mount the file system by using the HAStoragePlus resource when the application is brought online on the primary cluster. Do not mount the file system on the secondary cluster: its volumes are the targets of replication, and local writes there are not replicated and can conflict with the data arriving from the primary cluster.
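The mount-at-boot requirement can be verified mechanically. The following sketch is a hypothetical check (not part of the product) run against a sample vfstab entry; it asserts that field 6, the mount-at-boot field, is no for the replicated mount point:

```shell
# Hypothetical helper: confirm that the mount-at-boot field (field 6)
# of the replicated file system's vfstab entry is "no".
cat > /tmp/vfstab.sample <<'EOF'
/dev/md/avsset/dsk/d100 /dev/md/avsset/rdsk/d100 /global/sample ufs 2 no global,logging
EOF

awk '$3 == "/global/sample" && $6 != "no" { bad = 1 }
     END { exit bad }' /tmp/vfstab.sample && echo "mount at boot: no (ok)"
```

On a live system you would point the awk command at /etc/vfstab on each node rather than at a sample file.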
Adding this resource ensures that the necessary file systems are remounted before the application is started.
For more information about the HAStoragePlus resource type, refer to the Oracle Solaris Cluster Data Services Planning and Administration Guide.
Example 1-7 Configuring a Highly Available File System for Solaris Volume Manager Volumes
This example configures a highly available file system for Solaris Volume Manager volumes. This example assumes that the resource group apprg1 already exists.
Create a UNIX file system (UFS).
# newfs /dev/md/avsset/rdsk/d100
Update the /etc/vfstab file on each node of the cluster.
/dev/md/avsset/dsk/d100 /dev/md/avsset/rdsk/d100 /global/sample ufs 2 no global,logging
Add the HAStoragePlus resource.
# clresource create -g apprg1 -t SUNW.HAStoragePlus \
-p FilesystemMountPoints=/global/sample rs-hasp
Example 1-8 Configuring a Highly Available File System for Raw Device Volumes
This example assumes that the apprg1 resource group already exists.
Create a UNIX file system (UFS).
# newfs /dev/did/rdsk/d3s3
Update the /etc/vfstab file on each node of the cluster.
/dev/did/dsk/d3s3 /dev/did/rdsk/d3s3 /global/sample ufs 2 no global,logging
Add the HAStoragePlus resource.
# clresource create -g apprg1 -t SUNW.HAStoragePlus \
-p FilesystemMountPoints=/global/sample rs-hasp