This section describes the steps you must perform before you can configure Oracle ZFS Storage Appliance remote replication with Geographic Edition software. The following procedures are in this section:
How to Create a Role and Associated User for the Primary and Secondary Appliances
How to Create a Project and Enable Replication for the Project
How to Configure Oracle Solaris Cluster Resources on the Primary Cluster
How to Configure Oracle Solaris Cluster Resources on the Secondary Cluster
If a role and associated user do not yet exist on the source and target appliances, perform this procedure to create them.
Configure the role with the following permissions:
Object nas.*.*.* with permissions clone, destroy, rrsource, rrtarget, createShare, and createProject.
Object workflow.*.* with permission read.
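As an illustrative sketch only, the role might be created from the appliance CLI along the following lines. The role name repl-role is a placeholder, and the exact prompt contexts and authorization property names can vary by appliance software release; confirm them against your appliance's CLI help or the BUI.

appliance:> configuration roles
appliance:configuration roles> role repl-role
appliance:configuration roles repl-role (uncommitted)> set description="Geographic Edition replication"
appliance:configuration roles repl-role (uncommitted)> commit
appliance:configuration roles> select repl-role authorizations create
appliance:configuration roles repl-role auth (uncommitted)> set scope=nas

After setting the scope, grant the individual permissions listed above (clone, destroy, rrsource, rrtarget, createShare, createProject for the nas scope, and read for the workflow scope), then commit and assign the role to the user that Geographic Edition will use.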
Ensure that NFS exceptions and LUN settings are identical on the primary and secondary storage appliances. For more information, see Copying and Editing Actions in Oracle ZFS Storage 7000 System Administration Guide (http://docs.oracle.com/cd/E26765_01/html/E26397/).
The target groups and initiator groups must use the same names in the replication target as in the source appliance.
For more information about replicating projects with iSCSI LUNs, see iSCSI Configurations and Replication in Oracle ZFS Storage Appliance Administration Guide (http://docs.oracle.com/cd/E71909_01/html/E71919/goksk.html).
Troubleshooting
If you need to stop Oracle ZFS Storage Appliance replication directly from the Oracle ZFS Storage appliance, you must perform the following tasks in the order shown:
Set continuous=false.
Wait for the update to complete.
Set enabled=false to stop replication.
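From the appliance CLI, this sequence might look like the following sketch. The project name and replication action ID are placeholders, and the exact context paths depend on the appliance software release.

appliance:> shares select project-name replication select action-000
appliance:shares project-name action-000> set continuous=false
appliance:shares project-name action-000> commit
(wait for any in-progress update to complete)
appliance:shares project-name action-000> set enabled=false
appliance:shares project-name action-000> commit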
Geographic Edition requires that the last_result value of the replication action be success. Otherwise, adding the project to a Geographic Edition protection group fails, as does protection group validation.
This procedure creates Oracle Solaris Cluster resources on the primary cluster for the application to be protected.
Before You Begin
Ensure that the following tasks are completed on the storage appliance:
Replication peers are configured by the storage administrator.
Projects are configured by the storage administrator.
Replication is enabled for the project.
For iSCSI LUNs, if you use nondefault target groups, the target groups and initiator groups used by LUNs within the project also exist on the replication target. In addition, these groups must use the same names in the replication target as in the source appliance.
If you use file systems, NFS Exceptions exist for all nodes of both clusters. This ensures that either cluster can access the file systems when that cluster has the primary role.
Specify the LUNs or file systems in the Oracle ZFS Storage appliance to be replicated.
For information about creating device groups, file systems, and ZFS storage pools in a cluster configuration, see Oracle Solaris Cluster System Administration Guide.
This resource manages bringing online the Oracle ZFS Storage Appliance storage on both the primary and secondary clusters.
For information about creating an HAStoragePlus or scalable mount-point resource, see Oracle Solaris Cluster Data Services Planning and Administration Guide.
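As an illustration, an HAStoragePlus resource for a replicated file system might be created as follows. The resource group, mount point, and resource names are placeholders; adjust them to your configuration.

phys-paris-1# clresourcetype register SUNW.HAStoragePlus
phys-paris-1# clresource create -g application-resource-group \
-t SUNW.HAStoragePlus -p FileSystemMountPoints=/mounts/file-system \
hasp-resource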
This procedure creates Oracle Solaris Cluster resources on the secondary cluster for the application to be protected.
Before You Begin
Ensure that the following tasks are completed on the storage appliance:
Replication peers are configured by the storage administrator.
Projects are configured by the storage administrator.
Replication is enabled for the project.
For iSCSI LUNs, if you use nondefault target groups, the target groups and initiator groups used by LUNs within the project also exist on the replication target. In addition, these groups must use the same names in the replication target as in the source appliance.
If you use file systems, NFS Exceptions exist for all nodes of both clusters. This ensures that either cluster can access the file systems when that cluster has the primary role.
The Auto_start_on_new_cluster property must be set to False.
phys-newyork-1# clresourcegroup create -p Auto_start_on_new_cluster=False \
application-resource-group
If the project contains any LUNs, skip to Step 4.
This performs a manual replication update to synchronize the two sites.
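As a sketch, the manual update can be started from the appliance CLI with the sendupdate command. The project name and replication action ID below are placeholders.

appliance:> shares select project-name replication select action-000
appliance:shares project-name action-000> sendupdate

Monitor the action until the update completes and last_result reports success before continuing.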
Enter the same project name as on the primary appliance.
See How to Create and Configure an Oracle ZFS Storage Appliance Protection Group.
# geopg start -e global protection-group
# geopg switchover -f -m cluster-newyork protection-group
The project is made local on the secondary storage.
Specify the LUNs or file systems in the project that is now local on the secondary appliance.
For information about creating device groups and file systems and adding ZFS storage pools in a cluster configuration, see Oracle Solaris Cluster System Administration Guide.
This resource manages bringing online the Oracle ZFS Storage Appliance storage on the secondary cluster.
For information about creating an HAStoragePlus or scalable mount-point resource, see Oracle Solaris Cluster Data Services Planning and Administration Guide.
phys-newyork-1# clresourcegroup online -emM application-resource-group
If the project contains any LUNs, skip to Step 9.
phys-newyork-1# clresource disable -g application-resource-group +
phys-newyork-1# clresourcegroup offline application-resource-group
phys-newyork-1# clresourcegroup unmanage application-resource-group
phys-newyork-1# umount /mounts/file-system
phys-newyork-1# cldevicegroup offline raw-disk-group
This step takes the configuration offline on the secondary cluster and brings it online on the primary cluster.
# geopg switchover -f -m cluster-paris protection-group
Next Steps
If the replicated project contains any LUNs, initial configuration on the primary and secondary clusters is now complete.
If the replicated project contains only file systems, go to How to Create and Configure an Oracle ZFS Storage Appliance Protection Group.