Oracle Solaris Cluster Software Installation Guide     Oracle Solaris Cluster 4.0
This section provides procedures to configure a cluster of Oracle Solaris non-global zones, called a zone cluster. It describes the following topics:

Overview of the clzonecluster Utility
Adding File Systems to a Zone Cluster
How to Add a Local File System to a Zone Cluster
How to Add a ZFS Storage Pool to a Zone Cluster
How to Add a Cluster File System to a Zone Cluster
Adding Storage Devices to a Zone Cluster
How to Add an Individual Metadevice to a Zone Cluster (Solaris Volume Manager)
How to Add a Disk Set to a Zone Cluster (Solaris Volume Manager)
How to Add a DID Device to a Zone Cluster
The clzonecluster utility creates, modifies, and removes a zone cluster. The clzonecluster utility actively manages a zone cluster. For example, the clzonecluster utility both boots and halts a zone cluster. Progress messages for the clzonecluster utility are output to the console, but are not saved in a log file.
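For example, the following commands, run from a global-cluster node, report the status of a zone cluster and then halt and boot it. The zone-cluster name sczone is only an illustration; substitute your own name.

phys-schost# clzonecluster status sczone
phys-schost# clzonecluster halt sczone
phys-schost# clzonecluster boot sczone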
The utility operates in the following levels of scope, similar to the zonecfg utility:
The cluster scope affects the entire zone cluster.
The node scope affects only the one zone cluster node that is specified.
The resource scope affects either a specific node or the entire zone cluster, depending on which scope you enter the resource scope from. Most resources can only be entered from the node scope. The scope is identified by the following prompts:
clzc:zone-cluster-name:resource>             cluster-wide setting
clzc:zone-cluster-name:node:resource>        node-specific setting
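The following abbreviated session, adapted from the full configuration example later in this chapter, illustrates how the prompt changes as you move between the cluster, node, and resource scopes:

phys-schost# clzonecluster configure zone-cluster-name
clzc:zone-cluster-name> add node
Node scope
clzc:zone-cluster-name:node> add net
Resource scope, entered from the node scope
clzc:zone-cluster-name:node:net> end
clzc:zone-cluster-name:node> end
Back in the cluster scope
clzc:zone-cluster-name> exit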
You can use the clzonecluster utility to specify any Oracle Solaris zones resource parameter as well as the parameters that are specific to zone clusters. For information about parameters that you can set in a zone cluster, see the clzonecluster(1CL) man page. Additional information about Oracle Solaris zones resource parameters is in the zonecfg(1M) man page.
This section describes how to configure a cluster of non-global zones.
Perform this procedure to create a cluster of non-global zones.
Before You Begin
Create a global cluster. See Chapter 3, Establishing the Global Cluster.
Read the guidelines and requirements for creating a zone cluster. See Zone Clusters.
Have available the following information:
The unique name to assign to the zone cluster.
The zone path that the nodes of the zone cluster will use. For more information, see the description of the zonepath property in Resource Types and Properties in Oracle Solaris Administration: Oracle Solaris Zones, Oracle Solaris 10 Zones, and Resource Management.
The name of each node in the global cluster on which to create a zone-cluster node.
The zone public hostname, or host alias, that you assign to each zone-cluster node.
If applicable, the public-network IP address that each zone-cluster node uses.
If applicable, the name of the public-network adapter that each zone-cluster node uses to connect to the public network.
Note - If you do not configure an IP address for each zone cluster node, two things will occur:
That specific zone cluster will not be able to configure NAS devices for use in the zone cluster. The cluster uses the IP address of the zone cluster node when communicating with the NAS device, so not having an IP address prevents cluster support for fencing NAS devices.
The cluster software will activate any Logical Host IP address on any NIC.
You perform all steps of this procedure from a node of the global cluster.
If any node is in noncluster mode, changes that you make are propagated when the node returns to cluster mode. Therefore, you can create a zone cluster even if some global-cluster nodes are in noncluster mode. When those nodes return to cluster mode, the system performs zone-cluster creation tasks on those nodes.
phys-schost# clnode status

=== Cluster Nodes ===

--- Node Status ---

Node Name                                       Status
---------                                       ------
phys-schost-2                                   Online
phys-schost-1                                   Online
Observe the following special instructions:
By default, whole root zones are created. To begin with a blank zone configuration instead, add the -b option to the create command.
Specifying an IP address and NIC for each zone cluster node is optional.
phys-schost-1# clzonecluster configure zone-cluster-name
clzc:zone-cluster-name> create
Set the zone path for the entire zone cluster
clzc:zone-cluster-name> set zonepath=/zones/zone-cluster-name
Add the first node and specify node-specific settings
clzc:zone-cluster-name> add node
clzc:zone-cluster-name:node> set physical-host=base-cluster-node1
clzc:zone-cluster-name:node> set hostname=hostname1
clzc:zone-cluster-name:node> add net
clzc:zone-cluster-name:node:net> set address=public-netaddr
clzc:zone-cluster-name:node:net> set physical=adapter
clzc:zone-cluster-name:node:net> end
clzc:zone-cluster-name:node> end
Add authorization for the public-network addresses that the zone cluster is allowed to use
clzc:zone-cluster-name> add net
clzc:zone-cluster-name:net> set address=IP-address1
clzc:zone-cluster-name:net> end
Save the configuration and exit the utility
clzc:zone-cluster-name> commit
clzc:zone-cluster-name> exit
phys-schost-1# clzonecluster configure zone-cluster-name
clzc:zone-cluster-name> add node
clzc:zone-cluster-name:node> set physical-host=base-cluster-node2
clzc:zone-cluster-name:node> set hostname=hostname2
clzc:zone-cluster-name:node> add net
clzc:zone-cluster-name:node:net> set address=public-netaddr
clzc:zone-cluster-name:node:net> set physical=adapter
clzc:zone-cluster-name:node:net> end
clzc:zone-cluster-name:node> end
clzc:zone-cluster-name> commit
clzc:zone-cluster-name> exit
The verify subcommand checks for the availability of the specified resources. If the clzonecluster verify command succeeds, no output is displayed.
phys-schost-1# clzonecluster verify zone-cluster-name
phys-schost-1# clzonecluster status zone-cluster-name

=== Zone Clusters ===

--- Zone Cluster Status ---

Name      Node Name   Zone HostName   Status    Zone Status
----      ---------   -------------   ------    -----------
zone      basenode1   zone-1          Offline   Configured
          basenode2   zone-2          Offline   Configured
phys-schost-1# clzonecluster install options zone-cluster-name
Waiting for zone install commands to complete on all the nodes of the zone cluster "zone-cluster-name"...
If needed, include the following options in the clzonecluster install command.
To include system configuration information, add the following option:
-c config-profile.xml
The -c config-profile.xml option provides a configuration profile for all non-global zones of the zone cluster. Using this option changes only the hostname of the zone, which is unique for each zone in the zone cluster. All profiles must have a .xml extension.
If the base global-cluster nodes for the zone-cluster are not all installed with the same Oracle Solaris Cluster packages but you do not want to change which packages are on the base nodes, add the following option:
-M manifest.xml
The -M manifest.xml option specifies a custom Automated Installer manifest that you configure to install the necessary packages on all zone-cluster nodes. If the clzonecluster install command is run without the -M option, zone-cluster installation fails on a base node if it is missing a package that is installed on the issuing base node.
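For illustration, the following hedged example combines both options on one command line. The file paths shown are hypothetical placeholders for a configuration profile (which can be generated with the Oracle Solaris sysconfig utility) and a custom Automated Installer manifest; substitute the locations of your own files.

phys-schost-1# clzonecluster install -c /var/tmp/sc_profile.xml -M /var/tmp/zc_manifest.xml zone-cluster-name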
Installation of the zone cluster might take several minutes.

phys-schost-1# clzonecluster boot zone-cluster-name
Waiting for zone boot commands to complete on all the nodes of the zone cluster "zone-cluster-name"...
On each zone-cluster node, issue the following command and progress through the interactive screens.
phys-schost-1# zlogin -C zone-cluster-name
phys-schost# init -g0 -y -i6
Perform the following commands on each node of the zone cluster.
phys-schost# zlogin zcnode
zcnode# svcadm enable svc:/network/dns/client:default
zcnode# svcadm enable svc:/network/login:rlogin
zcnode# reboot
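If you want to confirm that the services are enabled, the following hedged check could be run before the final reboot in the preceding sequence; it is an optional verification, not a required step.

zcnode# svcs svc:/network/dns/client:default svc:/network/login:rlogin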
Example 6-1 Configuration File to Create a Zone Cluster
The following example shows the contents of a command file that you can use with the clzonecluster utility to create a zone cluster. The file contains the series of clzonecluster commands that you would otherwise enter manually.
In the following configuration, the zone cluster sczone is created on the global-cluster node phys-schost-1. The zone cluster uses /zones/sczone as the zone path and the public IP address 172.16.2.2. The first node of the zone cluster is assigned the hostname zc-host-1 and uses the network address 172.16.0.1 and the net0 adapter. The second node of the zone cluster is created on the global-cluster node phys-schost-2. This second zone-cluster node is assigned the hostname zc-host-2 and uses the network address 172.16.0.2 and the net1 adapter.
create
set zonepath=/zones/sczone
add net
set address=172.16.2.2
end
add node
set physical-host=phys-schost-1
set hostname=zc-host-1
add net
set address=172.16.0.1
set physical=net0
end
end
add node
set physical-host=phys-schost-2
set hostname=zc-host-2
add net
set address=172.16.0.2
set physical=net1
end
end
commit
exit
Example 6-2 Creating a Zone Cluster by Using a Configuration File
The following example shows the commands to create the new zone cluster sczone on the global-cluster node phys-schost-1 by using the configuration file sczone-config. The hostnames of the zone-cluster nodes are zc-host-1 and zc-host-2.
phys-schost-1# clzonecluster configure -f sczone-config sczone
phys-schost-1# clzonecluster verify sczone
phys-schost-1# clzonecluster install sczone
Waiting for zone install commands to complete on all the nodes of the zone cluster "sczone"...
phys-schost-1# clzonecluster boot sczone
Waiting for zone boot commands to complete on all the nodes of the zone cluster "sczone"...
phys-schost-1# clzonecluster status sczone

=== Zone Clusters ===

--- Zone Cluster Status ---

Name      Node Name       Zone HostName   Status    Zone Status
----      ---------       -------------   ------    -----------
sczone    phys-schost-1   zc-host-1       Offline   Running
          phys-schost-2   zc-host-2       Offline   Running
Next Steps
If you want to add the use of a file system to the zone cluster, go to Adding File Systems to a Zone Cluster.
If you want to add the use of global storage devices to the zone cluster, go to Adding Storage Devices to a Zone Cluster.
See Also
If you want to update a zone cluster, follow procedures in Chapter 11, Updating Your Software, in Oracle Solaris Cluster System Administration Guide. These procedures include special instructions for zone clusters, where needed.
After a file system is added to a zone cluster and brought online, the file system is authorized for use from within that zone cluster. To mount the file system for use, configure the file system by using cluster resources such as SUNW.HAStoragePlus or SUNW.ScalMountPoint.
This section provides the following procedures to add file systems for use by the zone cluster:
In addition, if you want to configure a ZFS storage pool to be highly available in a zone cluster, see How to Set Up the HAStoragePlus Resource Type to Make a Local Solaris ZFS File System Highly Available in Oracle Solaris Cluster Data Services Planning and Administration Guide.
Perform this procedure to add a local file system on the global cluster for use by the zone cluster.
Note - To add a ZFS pool to a zone cluster, instead perform procedures in How to Add a ZFS Storage Pool to a Zone Cluster.
Alternatively, to configure a ZFS storage pool to be highly available in a zone cluster, see How to Set Up the HAStoragePlus Resource Type to Make a Local Solaris ZFS File System Highly Available in Oracle Solaris Cluster Data Services Planning and Administration Guide.
You perform all steps of the procedure from a node of the global cluster.
Ensure that the file system is created on shared disks.
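For example, a hedged sketch of creating a UFS file system on a Solaris Volume Manager metadevice in a shared disk set follows. The disk set oracle and metadevice d1 match the example later in this procedure; substitute your own names.

phys-schost# newfs /dev/md/oracle/rdsk/d1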
phys-schost# clzonecluster configure zone-cluster-name
clzc:zone-cluster-name> add fs
clzc:zone-cluster-name:fs> set dir=mount-point
clzc:zone-cluster-name:fs> set special=disk-device-name
clzc:zone-cluster-name:fs> set raw=raw-disk-device-name
clzc:zone-cluster-name:fs> set type=FS-type
clzc:zone-cluster-name:fs> end
clzc:zone-cluster-name> verify
clzc:zone-cluster-name> commit
clzc:zone-cluster-name> exit
dir=mount-point
Specifies the file system mount point
special=disk-device-name
Specifies the name of the disk device
raw=raw-disk-device-name
Specifies the name of the raw disk device
type=FS-type
Specifies the type of file system
Note - Enable logging for UFS file systems.
phys-schost# clzonecluster show -v zone-cluster-name
Example 6-3 Adding a Local File System to a Zone Cluster
This example adds the local file system /global/oracle/d1 for use by the sczone zone cluster.
phys-schost-1# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir=/global/oracle/d1
clzc:sczone:fs> set special=/dev/md/oracle/dsk/d1
clzc:sczone:fs> set raw=/dev/md/oracle/rdsk/d1
clzc:sczone:fs> set type=ufs
clzc:sczone:fs> add options [logging]
clzc:sczone:fs> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:                fs
    dir:                        /global/oracle/d1
    special:                    /dev/md/oracle/dsk/d1
    raw:                        /dev/md/oracle/rdsk/d1
    type:                       ufs
    options:                    [logging]
    cluster-control:            [true]
…
Next Steps
Configure the file system to be highly available by using an HAStoragePlus resource. The HAStoragePlus resource manages the mounting of the file system on the zone-cluster node that currently hosts the applications that are configured to use the file system. See Enabling Highly Available Local File Systems in Oracle Solaris Cluster Data Services Planning and Administration Guide.
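The following hedged sketch, run from a global-cluster node, shows one way such a configuration might look for the file system added in Example 6-3. The resource-group and resource names are hypothetical, and the -Z option directs each command at the zone cluster; refer to the guide cited above for the authoritative procedure.

phys-schost# clresourcetype register -Z sczone SUNW.HAStoragePlus
phys-schost# clresourcegroup create -Z sczone oracle-rg
phys-schost# clresource create -Z sczone -g oracle-rg -t SUNW.HAStoragePlus \
-p FilesystemMountPoints=/global/oracle/d1 hasp-rs
phys-schost# clresourcegroup online -eM -Z sczone oracle-rg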
Note - To configure a ZFS storage pool to be highly available in a zone cluster, see How to Set Up the HAStoragePlus Resource Type to Make a Local Solaris ZFS File System Highly Available in Oracle Solaris Cluster Data Services Planning and Administration Guide.
You perform all steps of this procedure from a node of the global cluster.
Ensure that the pool is created on shared disks that are connected to all nodes of the zone cluster.
See Oracle Solaris Administration: ZFS File Systems for procedures to create a ZFS pool.
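For illustration, a hedged sketch of creating a mirrored pool on two shared disks follows. The pool name zpool1 matches the example later in this procedure, and the device names are hypothetical placeholders.

phys-schost# zpool create zpool1 mirror c2t1d0 c3t1d0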
phys-schost# clzonecluster configure zone-cluster-name
clzc:zone-cluster-name> add dataset
clzc:zone-cluster-name:dataset> set name=ZFSpoolname
clzc:zone-cluster-name:dataset> end
clzc:zone-cluster-name> verify
clzc:zone-cluster-name> commit
clzc:zone-cluster-name> exit
phys-schost# clzonecluster show -v zone-cluster-name
Example 6-4 Adding a ZFS Storage Pool to a Zone Cluster
The following example shows the ZFS storage pool zpool1 added to the zone cluster sczone.
phys-schost-1# clzonecluster configure sczone
clzc:sczone> add dataset
clzc:sczone:dataset> set name=zpool1
clzc:sczone:dataset> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:                dataset
    name:                       zpool1
…
Next Steps
Configure the ZFS storage pool to be highly available by using an HAStoragePlus resource. The HAStoragePlus resource manages the mounting of file systems in the pool on the zone-cluster node that currently hosts the applications that are configured to use the file system. See Enabling Highly Available Local File Systems in Oracle Solaris Cluster Data Services Planning and Administration Guide.
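A hedged sketch of such a configuration for the pool added in Example 6-4 follows. The resource-group and resource names are hypothetical, the Zpools extension property names the pool to import, and the sketch assumes that the SUNW.HAStoragePlus resource type is already registered in the zone cluster. See the cited guide for the authoritative procedure.

phys-schost# clresourcegroup create -Z sczone zpool-rg
phys-schost# clresource create -Z sczone -g zpool-rg -t SUNW.HAStoragePlus \
-p Zpools=zpool1 hasp-zpool-rs
phys-schost# clresourcegroup online -eM -Z sczone zpool-rg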
You perform all steps of this procedure from a voting node of the global cluster.
phys-schost# vi /etc/vfstab
…
/dev/global/dsk/d12s0 /dev/global/rdsk/d12s0 /global/fs ufs 2 no global,logging
phys-schost# clzonecluster configure zone-cluster-name
clzc:zone-cluster-name> add fs
clzc:zone-cluster-name:fs> set dir=zone-cluster-lofs-mountpoint
clzc:zone-cluster-name:fs> set special=global-cluster-mount-point
clzc:zone-cluster-name:fs> set type=lofs
clzc:zone-cluster-name:fs> end
clzc:zone-cluster-name> verify
clzc:zone-cluster-name> commit
clzc:zone-cluster-name> exit
dir=zone-cluster-lofs-mountpoint
Specifies the file system mount point for LOFS to make the cluster file system available to the zone cluster.
special=global-cluster-mount-point
Specifies the file system mount point of the original cluster file system in the global cluster.
For more information about creating loopback file systems, see How to Create and Mount an LOFS File System in Oracle Solaris Administration: Devices and File Systems.
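As a standalone illustration of the loopback mechanism itself, outside of any cluster configuration, a global file system could be loopback-mounted manually as shown in the following hedged sketch; the source and target mount points are hypothetical.

phys-schost# mount -F lofs /global/apache /mnt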
phys-schost# clzonecluster show -v zone-cluster-name
Example 6-5 Adding a Cluster File System to a Zone Cluster
The following example shows how to add a cluster file system with the mount point /global/apache to a zone cluster. The file system is made available to the zone cluster through the loopback mount mechanism at the mount point /zone/apache.
phys-schost-1# vi /etc/vfstab
#device                 device                  mount           FS      fsck    mount   mount
#to mount               to fsck                 point           type    pass    at boot options
#
/dev/md/oracle/dsk/d1   /dev/md/oracle/rdsk/d1  /global/apache  ufs     2       yes     global,logging

phys-schost-1# clzonecluster configure zone-cluster-name
clzc:zone-cluster-name> add fs
clzc:zone-cluster-name:fs> set dir=/zone/apache
clzc:zone-cluster-name:fs> set special=/global/apache
clzc:zone-cluster-name:fs> set type=lofs
clzc:zone-cluster-name:fs> end
clzc:zone-cluster-name> verify
clzc:zone-cluster-name> commit
clzc:zone-cluster-name> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:                fs
    dir:                        /zone/apache
    special:                    /global/apache
    raw:
    type:                       lofs
    options:                    []
    cluster-control:            true
…
Next Steps
Configure the cluster file system to be available in the zone cluster by using an HAStoragePlus resource. The HAStoragePlus resource manages the mounting of the file systems in the global cluster, and later performs a loopback mount on the zone-cluster nodes that currently host the applications that are configured to use the file system. For more information, see Configuring an HAStoragePlus Resource for Cluster File Systems in Oracle Solaris Cluster Data Services Planning and Administration Guide.
This section describes how to add the direct use of global storage devices by a zone cluster. Global devices are devices that can be accessed by more than one node in the cluster, either one node at a time or multiple nodes concurrently.
Note - To import raw-disk devices (cNtXdYsZ) into a zone cluster node, use the zonecfg command as you normally would for other brands of non-global zones.
Such devices would not be under the control of the clzonecluster command, but would be treated as local devices of the node. See Mounting File Systems in Running Non-Global Zones in Oracle Solaris Administration: Oracle Solaris Zones, Oracle Solaris 10 Zones, and Resource Management for more information about importing raw-disk devices into a non-global zone.
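For illustration only, the following hedged sketch imports a raw-disk device into one zone-cluster node with zonecfg. The zone name sczone and the device path are hypothetical placeholders; the zone name of a zone-cluster node is typically the zone-cluster name.

phys-schost# zonecfg -z sczone
zonecfg:sczone> add device
zonecfg:sczone:device> set match=/dev/rdsk/c1t1d0s*
zonecfg:sczone:device> end
zonecfg:sczone> commit
zonecfg:sczone> exit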
After a device is added to a zone cluster, the device is visible only from within that zone cluster.
This section contains the following procedures:

How to Add an Individual Metadevice to a Zone Cluster (Solaris Volume Manager)
How to Add a Disk Set to a Zone Cluster (Solaris Volume Manager)
How to Add a DID Device to a Zone Cluster
Perform this procedure to add an individual metadevice of a Solaris Volume Manager disk set to a zone cluster.
You perform all steps of this procedure from a node of the global cluster.
phys-schost# cldevicegroup status
phys-schost# cldevicegroup online diskset
phys-schost# ls -l /dev/md/diskset
lrwxrwxrwx  1 root  root  8 Jul 22 23:11 /dev/md/diskset -> shared/set-number
You must use a separate add device session for each set match= entry.
Note - An asterisk (*) is used as a wildcard character in the path name.
phys-schost# clzonecluster configure zone-cluster-name
clzc:zone-cluster-name> add device
clzc:zone-cluster-name:device> set match=/dev/md/diskset/*dsk/metadevice
clzc:zone-cluster-name:device> end
clzc:zone-cluster-name> add device
clzc:zone-cluster-name:device> set match=/dev/md/shared/set-number/*dsk/metadevice
clzc:zone-cluster-name:device> end
clzc:zone-cluster-name> verify
clzc:zone-cluster-name> commit
clzc:zone-cluster-name> exit
match=/dev/md/diskset/*dsk/metadevice
Specifies the full logical device path of the metadevice
match=/dev/md/shared/set-number/*dsk/metadevice
Specifies the full physical device path of the disk set number
The change becomes effective after the zone cluster reboots.
phys-schost# clzonecluster reboot zone-cluster-name
Example 6-6 Adding a Metadevice to a Zone Cluster
The following example adds the metadevice d1 in the disk set oraset to the sczone zone cluster. The set number of the disk set is 3.
phys-schost-1# clzonecluster configure sczone
clzc:sczone> add device
clzc:sczone:device> set match=/dev/md/oraset/*dsk/d1
clzc:sczone:device> end
clzc:sczone> add device
clzc:sczone:device> set match=/dev/md/shared/3/*dsk/d1
clzc:sczone:device> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster reboot sczone
Perform this procedure to add an entire Solaris Volume Manager disk set to a zone cluster.
You perform all steps of this procedure from a node of the global cluster.
phys-schost# cldevicegroup status
phys-schost# cldevicegroup online diskset
phys-schost# ls -l /dev/md/diskset
lrwxrwxrwx  1 root  root  8 Jul 22 23:11 /dev/md/diskset -> shared/set-number
You must use a separate add device session for each set match= entry.
Note - An asterisk (*) is used as a wildcard character in the path name.
phys-schost# clzonecluster configure zone-cluster-name
clzc:zone-cluster-name> add device
clzc:zone-cluster-name:device> set match=/dev/md/diskset/*dsk/*
clzc:zone-cluster-name:device> end
clzc:zone-cluster-name> add device
clzc:zone-cluster-name:device> set match=/dev/md/shared/set-number/*dsk/*
clzc:zone-cluster-name:device> end
clzc:zone-cluster-name> verify
clzc:zone-cluster-name> commit
clzc:zone-cluster-name> exit
match=/dev/md/diskset/*dsk/*
Specifies the full logical device path of the disk set
match=/dev/md/shared/set-number/*dsk/*
Specifies the full physical device path of the disk set number
The change becomes effective after the zone cluster reboots.
phys-schost# clzonecluster reboot zone-cluster-name
Example 6-7 Adding a Disk Set to a Zone Cluster
The following example adds the disk set oraset to the sczone zone cluster. The set number of the disk set is 3.
phys-schost-1# clzonecluster configure sczone
clzc:sczone> add device
clzc:sczone:device> set match=/dev/md/oraset/*dsk/*
clzc:sczone:device> end
clzc:sczone> add device
clzc:sczone:device> set match=/dev/md/shared/3/*dsk/*
clzc:sczone:device> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster reboot sczone
Perform this procedure to add a DID device to a zone cluster.
You perform all steps of this procedure from a node of the global cluster.
The device you add must be connected to all nodes of the zone cluster.
phys-schost# cldevice list -v
Note - An asterisk (*) is used as a wildcard character in the path name.
phys-schost# clzonecluster configure zone-cluster-name
clzc:zone-cluster-name> add device
clzc:zone-cluster-name:device> set match=/dev/did/*dsk/dNs*
clzc:zone-cluster-name:device> end
clzc:zone-cluster-name> verify
clzc:zone-cluster-name> commit
clzc:zone-cluster-name> exit
match=/dev/did/*dsk/dNs*
Specifies the full device path of the DID device
The change becomes effective after the zone cluster reboots.
phys-schost# clzonecluster reboot zone-cluster-name
Example 6-8 Adding a DID Device to a Zone Cluster
The following example adds the DID device d10 to the sczone zone cluster.
phys-schost-1# clzonecluster configure sczone
clzc:sczone> add device
clzc:sczone:device> set match=/dev/did/*dsk/d10s*
clzc:sczone:device> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster reboot sczone