Oracle Solaris Cluster Software Installation Guide (Oracle Solaris Cluster 4.1)
Overview of Creating and Configuring a Zone Cluster
Creating and Configuring a Zone Cluster
How to Install and Configure Trusted Extensions
How to Configure a Zone Cluster to Use Trusted Extensions
Adding File Systems to a Zone Cluster
How to Add a Highly Available Local File System to a Zone Cluster
How to Add a ZFS Storage Pool to a Zone Cluster
How to Add a Cluster File System to a Zone Cluster
Adding Local File Systems to a Specific Zone-Cluster Node
How to Add a Local File System to a Specific Zone-Cluster Node
How to Add a Local ZFS Storage Pool to a Specific Zone-Cluster Node
Adding Storage Devices to a Zone Cluster
How to Add a Global Storage Device to a Zone Cluster
How to Add a Raw-Disk Device to a Specific Zone-Cluster Node
This section provides the following information and procedures to create and configure a zone cluster.
This section provides procedures on how to use the clsetup utility to create a zone cluster, and add a network address, file system, ZFS storage pool, and storage device to the new zone cluster.
If any node is in noncluster mode, changes that you make are propagated when the node returns to cluster mode. Therefore, you can create a zone cluster even if some global-cluster nodes are in noncluster mode. When those nodes return to cluster mode, the system performs zone-cluster creation tasks on those nodes.
You can alternatively use the clzonecluster utility to create and configure a zone cluster. See the clzonecluster(1CL) man page for more information.
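For example, a minimal clzonecluster session that defines, installs, and boots a single-node zone cluster might look like the following sketch. It reuses the example values (sczone, /zones/sczone, phys-schost-1, zc-host-1, 172.1.1.1, net0) that appear in the clsetup output later in this section; adjust them to your configuration.

phys-schost# clzonecluster configure sczone
clzc:sczone> create
clzc:sczone> set zonepath=/zones/sczone
clzc:sczone> add node
clzc:sczone:node> set physical-host=phys-schost-1
clzc:sczone:node> set hostname=zc-host-1
clzc:sczone:node> add net
clzc:sczone:node:net> set address=172.1.1.1
clzc:sczone:node:net> set physical=net0
clzc:sczone:node:net> end
clzc:sczone:node> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit
phys-schost# clzonecluster install sczone
phys-schost# clzonecluster boot sczone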
This section contains the following procedures:
This procedure prepares the global cluster to use the Trusted Extensions feature of Oracle Solaris with zone clusters. If you do not plan to enable Trusted Extensions, proceed to Creating a Zone Cluster.
Perform this procedure on each node in the global cluster.
Before You Begin
Perform the following tasks:
Ensure that the Oracle Solaris OS is installed to support Oracle Solaris Cluster and Trusted Extensions software. See How to Install Oracle Solaris Software for more information about installing Oracle Solaris software to meet Oracle Solaris Cluster software requirements.
If an external name service is used, ensure that an LDAP naming service is configured for use by Trusted Extensions. See Chapter 5, Configuring LDAP for Trusted Extensions (Tasks), in Trusted Extensions Configuration and Administration.
Review requirements and guidelines for Trusted Extensions in a zone cluster. See Guidelines for Trusted Extensions in a Zone Cluster.
Follow procedures in Chapter 3, Adding the Trusted Extensions Feature to Oracle Solaris (Tasks), in Trusted Extensions Configuration and Administration.
The Trusted Extensions zoneshare and zoneunshare scripts support the ability to export home directories on the system. An Oracle Solaris Cluster configuration does not support this feature.
Disable this feature by replacing each script with a symbolic link to the /bin/true utility.
phys-schost# ln -sf /bin/true /usr/lib/zones/zoneshare
phys-schost# ln -sf /bin/true /usr/lib/zones/zoneunshare
phys-schost# svcadm enable rlogin
Modify the account management entries by appending a Tab and typing allow_remote or allow_unlabeled respectively, as shown below.
other   account requisite       pam_roles.so.1          Tab  allow_remote
other   account required        pam_unix_account.so.1   Tab  allow_unlabeled
# tncfg -t admin_low
tncfg:admin_low> add host=ip-address1
tncfg:admin_low> add host=ip-address2
…
tncfg:admin_low> exit
# tncfg -t admin_low remove host=0.0.0.0
# tncfg -t cipso
tncfg:cipso> add host=ip-address1
tncfg:cipso> add host=ip-address2
…
tncfg:cipso> exit
When all steps are completed on all global-cluster nodes, perform the remaining steps of this procedure on each node of the global cluster.
The LDAP server is used by the global zone and by the nodes of the zone cluster.
Next Steps
Create the zone cluster. Go to Creating a Zone Cluster.
Perform this procedure to create a zone cluster.
To modify the zone cluster after it is installed, see Performing Zone Cluster Administrative Tasks in Oracle Solaris Cluster System Administration Guide and the clzonecluster(1CL) man page.
Before You Begin
Create a global cluster. See Chapter 3, Establishing the Global Cluster.
Read the guidelines and requirements for creating a zone cluster. See Zone Clusters.
If the zone cluster will use Trusted Extensions, ensure that you have installed, configured, and enabled Trusted Extensions as described in How to Install and Configure Trusted Extensions.
Have available the following information:
The unique name to assign to the zone cluster.
Note - If Trusted Extensions is enabled, the zone cluster name must be the same name as a Trusted Extensions security label that has the security levels that you want to assign to the zone cluster. Create a separate zone cluster for each Trusted Extensions security label that you want to use.
The zone path that the nodes of the zone cluster will use. For more information, see the description of the zonepath property in Resource Types and Properties in Oracle Solaris 11.1 Administration: Oracle Solaris Zones, Oracle Solaris 10 Zones, and Resource Management. By default, whole-root zones are created.
The name of each node in the global cluster on which to create a zone-cluster node.
The zone public hostname, or host alias, that you assign to each zone-cluster node.
If applicable, the public-network IP address that each zone-cluster node uses. Specifying an IP address and NIC for each zone cluster node is optional.
If applicable, the name of the public-network IPMP group that each zone-cluster node uses to connect to the public network.
Note - If you do not configure an IP address for each zone cluster node, two things will occur:
That specific zone cluster will not be able to configure NAS devices for use in the zone cluster. The cluster uses the IP address of the zone cluster node when communicating with the NAS device, so not having an IP address prevents cluster support for fencing NAS devices.
The cluster software will activate any Logical Host IP address on any NIC.
You perform all steps of this procedure from a node of the global cluster.
phys-schost# clnode status

=== Cluster Nodes ===

--- Node Status ---

Node Name                                       Status
---------                                       ------
phys-schost-2                                   Online
phys-schost-1                                   Online
phys-schost# clsetup
The Main Menu is displayed.
A zone cluster name can contain ASCII letters (a-z and A-Z), numbers, a dash, or an underscore. The maximum length of the name is 20 characters.
You can set the following properties:
A root account password is required for a solaris10 brand zone.
You can set the following properties:
You can set the following properties:
You can set the following properties:
You can select one or all of the available physical nodes (or hosts), and then configure one zone-cluster node at a time.
You can set the following properties:
The network addresses can be used to configure a logical hostname or shared IP cluster resources in the zone cluster. The network address is in the zone cluster global scope.
The results of your configuration change are displayed, similar to the following:
>>> Result of the Creation for the Zone Cluster(sczone) <<<

    The zone cluster is being created with the following configuration

        /usr/cluster/bin/clzonecluster configure sczone
        create
        set brand=solaris
        set zonepath=/zones/sczone
        set ip-type=shared
        set enable_priv_net=true
        add capped-memory
        set physical=2G
        end
        add node
        set physical-host=phys-schost-1
        set hostname=zc-host-1
        add net
        set address=172.1.1.1
        set physical=net0
        end
        end
        add net
        set address=172.1.1.2
        end

    Zone cluster, sczone has been created and configured successfully.

    Continue to install the zone cluster(yes/no) ?
The clsetup utility performs a standard installation of a zone cluster and you cannot specify any options.
The verify subcommand checks for the availability of the specified resources. If the clzonecluster verify command succeeds, no output is displayed.
phys-schost-1# clzonecluster verify zone-cluster-name
phys-schost-1# clzonecluster status zone-cluster-name

=== Zone Clusters ===

--- Zone Cluster Status ---

Name      Node Name   Zone HostName   Status    Zone Status
----      ---------   -------------   ------    -----------
zone      basenode1   zone-1          Offline   Configured
          basenode2   zone-2          Offline   Configured
From the global zone, launch the txzonemgr GUI.
phys-schost# txzonemgr
Select the global zone, then select the item, Configure per-zone name service.
phys-schost-1# clzonecluster install options zone-cluster-name
Waiting for zone install commands to complete on all the nodes of the
zone cluster "zone-cluster-name"...
Use either the -a or -d option to install Geographic Edition software, core packages, and agents that are supported in the zone cluster:
Note - For a list of agents that are currently supported in a solaris10 brand zone cluster, see Oracle Solaris Cluster 4 Compatibility Guide.
For more information, see the clzonecluster(1CL) man page.
Otherwise, skip to Step 21.
Note - In the following steps, the non-global zone zcnode and zone-cluster-name share the same name.
Configure only one zone-cluster node at a time.
phys-schost# zoneadm -z zcnode boot
phys-schost# zlogin zcnode
zcnode# sysconfig unconfigure
zcnode# reboot
The zlogin session terminates during the reboot.
phys-schost# zlogin -C zcnode
For information about methods to exit from a non-global zone, see How to Exit a Non-Global Zone in Oracle Solaris 11.1 Administration: Oracle Solaris Zones, Oracle Solaris 10 Zones, and Resource Management.
phys-schost# zoneadm -z zcnode halt
phys-schost# clzonecluster boot zone-cluster-name
phys-schost# zlogin zcnode
zcnode# sysconfig unconfigure
zcnode# reboot
The zlogin session terminates during the reboot.
phys-schost# zlogin -C zcnode
For information about methods to exit from a non-global zone, see How to Exit a Non-Global Zone in Oracle Solaris 11.1 Administration: Oracle Solaris Zones, Oracle Solaris 10 Zones, and Resource Management.
phys-schost# clzonecluster boot zone-cluster-name
phys-schost# zlogin -C zcnode
For information about methods to exit from a non-global zone, see How to Exit a Non-Global Zone in Oracle Solaris 11.1 Administration: Oracle Solaris Zones, Oracle Solaris 10 Zones, and Resource Management.
Installation of the zone cluster might take several minutes.
phys-schost# clzonecluster boot zone-cluster-name
The clsetup utility does not automatically configure IPMP groups for exclusive-IP zone clusters. You must create an IPMP group manually before you create a logical-hostname or shared-address resource.
phys-schost# ipadm create-ipmp -i interface sc_ipmp0
phys-schost# ipadm delete-addr interface/name
phys-schost# ipadm create-addr -T static -a IPaddress/prefix sc_ipmp0/name
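For example, assuming a hypothetical data link named net1 and the placeholder address 192.168.10.25, the sequence might look like the following sketch:

phys-schost# ipadm create-ipmp -i net1 sc_ipmp0
phys-schost# ipadm delete-addr net1/v4
phys-schost# ipadm create-addr -T static -a 192.168.10.25/24 sc_ipmp0/v4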
Next Steps
To configure Oracle Solaris Cluster 3.3 data services that you installed in a solaris10 brand zone cluster, follow procedures for zone clusters in the applicable data-service manual. See Oracle Solaris Cluster 3.3 Documentation.
To complete Trusted Extensions configuration, go to How to Configure a Zone Cluster to Use Trusted Extensions.
Otherwise, add file systems or storage devices to the zone cluster. See the following sections:
After you create a labeled brand zone cluster, perform the following steps to finish configuration to use Trusted Extensions.
Perform this step on each node of the zone cluster.
phys-schost# cat /etc/cluster/nodeid
N
Ensure that the SMF service has been imported and all services are up before you log in.
The cluster software automatically assigns these IP addresses when the cluster software configures a zone cluster.
In the ifconfig -a output, locate the clprivnet0 logical interface that belongs to the zone cluster. The value for inet is the IP address that was assigned to support the use of the cluster private interconnect by this zone cluster.
zc1# ifconfig -a
lo0:3: flags=20010008c9<UP,LOOPBACK,RUNNING,NOARP,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone zc1
        inet 127.0.0.1 netmask ff000000
net0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 10.11.166.105 netmask ffffff00 broadcast 10.11.166.255
        groupname sc_ipmp0
        ether 0:3:ba:19:fa:b7
ce0: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 4
        inet 10.11.166.109 netmask ffffff00 broadcast 10.11.166.255
        groupname sc_ipmp0
        ether 0:14:4f:24:74:d8
ce0:3: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
        zone zc1
        inet 10.11.166.160 netmask ffffff00 broadcast 10.11.166.255
clprivnet0: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 7
        inet 172.16.0.18 netmask fffffff8 broadcast 172.16.0.23
        ether 0:0:0:0:0:2
clprivnet0:3: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 7
        zone zc1
        inet 172.16.0.22 netmask fffffffc broadcast 172.16.0.23
The hostname for the private interconnect, which is clusternodeN-priv, where N is the global-cluster node ID
172.16.0.22 clusternodeN-priv
Each net resource that was specified to the clzonecluster command when you created the zone cluster
Create new entries for the IP addresses used by zone-cluster components and assign each entry a CIPSO template. These IP addresses, which exist in the zone-cluster node's /etc/inet/hosts file, are as follows:
Each zone-cluster node private IP address
All cl_privnet IP addresses in the zone cluster
Each logical-hostname public IP address for the zone cluster
Each shared-address public IP address for the zone cluster
phys-schost# tncfg -t cipso
tncfg:cipso> add host=ipaddress1
tncfg:cipso> add host=ipaddress2
…
tncfg:cipso> exit
For more information about CIPSO templates, see How to Configure a Different Domain of Interpretation in Trusted Extensions Configuration and Administration.
Perform the following commands on each node of the zone cluster.
phys-schost# ipadm set-prop -p hostmodel=weak ipv4
phys-schost# ipadm set-prop -p hostmodel=weak ipv6
For more information about the hostmodel property, see hostmodel (ipv4 or ipv6) in Oracle Solaris 11.1 Tunable Parameters Reference Manual.
Next Steps
To add file systems or storage devices to the zone cluster, see the following sections:
See Also
If you want to update the software on a zone cluster, follow procedures in Chapter 11, Updating Your Software, in Oracle Solaris Cluster System Administration Guide. These procedures include special instructions for zone clusters, where needed.
After a file system is added to a zone cluster and brought online, the file system is authorized for use from within that zone cluster. To mount the file system for use, configure the file system by using cluster resources such as SUNW.HAStoragePlus or SUNW.ScalMountPoint.
Note - To add a file system whose use is limited to a single zone-cluster node, see instead Adding Local File Systems to a Specific Zone-Cluster Node.
This section provides the following procedures to add file systems for use by the zone cluster:
Perform this procedure to configure a highly available local file system on the global cluster for use by a zone cluster. The file system is added to the zone cluster and is configured with an HAStoragePlus resource to make the local file system highly available.
Perform all steps of the procedure from a node of the global cluster.
Ensure that the file system is created on shared disks.
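For example, assuming the Solaris Volume Manager metadevice d1 in the disk set oracle that is used in Example 6-1, a UFS file system might be created on the shared disks as follows before you run the clsetup utility:

phys-schost# newfs /dev/md/oracle/rdsk/d1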
phys-schost# clsetup
The Main Menu is displayed.
Tip - To return to a previous screen, type the < key and press Return.
The Zone Cluster Tasks Menu is displayed.
The Select Zone Cluster menu is displayed.
The Storage Type Selection menu is displayed.
The File System Selection for the Zone Cluster menu is displayed.
The file systems in the list are those that are configured on the shared disks and can be accessed by the nodes where the zone cluster is configured. You can also type e to manually specify all properties for a file system.
The Mount Type Selection menu is displayed.
The File System Properties for the Zone Cluster menu is displayed.
When finished, type d and press Return.
The results of your configuration change are displayed.
phys-schost# clzonecluster show -v zone-cluster-name
Example 6-1 Adding a Highly Available Local File System to a Zone Cluster
This example adds the local file system /global/oracle/d1 for use by the sczone zone cluster.
phys-schost-1# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir=/global/oracle/d1
clzc:sczone:fs> set special=/dev/md/oracle/dsk/d1
clzc:sczone:fs> set raw=/dev/md/oracle/rdsk/d1
clzc:sczone:fs> set type=ufs
clzc:sczone:fs> add options [logging]
clzc:sczone:fs> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:                            fs
    dir:                                       /global/oracle/d1
    special:                                   /dev/md/oracle/dsk/d1
    raw:                                       /dev/md/oracle/rdsk/d1
    type:                                      ufs
    options:                                   [logging]
    cluster-control:                           [true]
…
Next Steps
Configure the file system to be highly available by using an HAStoragePlus resource. The HAStoragePlus resource manages the mounting of the file system on the zone-cluster node that currently hosts the applications that are configured to use the file system. See Enabling Highly Available Local File Systems in Oracle Solaris Cluster Data Services Planning and Administration Guide.
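The following is a minimal sketch of such a configuration, run from the global zone with the -Z option and using the /global/oracle/d1 file system from Example 6-1. The resource-group name oracle-rg and resource name hasp-rs are placeholders.

phys-schost# clresourcegroup create -Z sczone oracle-rg
phys-schost# clresourcetype register -Z sczone SUNW.HAStoragePlus
phys-schost# clresource create -Z sczone -g oracle-rg -t SUNW.HAStoragePlus \
-p FileSystemMountPoints=/global/oracle/d1 hasp-rs
phys-schost# clresourcegroup online -eM -Z sczone oracle-rg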
Perform this procedure to add a ZFS storage pool to a zone cluster. The pool can be local to a single zone-cluster node or configured with HAStoragePlus to be highly available.
The clsetup utility discovers and displays all configured ZFS pools on the shared disks that can be accessed by the nodes where the selected zone cluster is configured. After you use the clsetup utility to add a ZFS storage pool in cluster scope to an existing zone cluster, you can use the clzonecluster command to modify the configuration or to add a ZFS storage pool in node-scope.
Before You Begin
Ensure that the ZFS pool is created on shared disks that are connected to all nodes of the zone cluster. See Oracle Solaris 11.1 Administration: ZFS File Systems for procedures to create a ZFS pool.
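For example, a mirrored pool might be created on two shared disks as follows. The pool name myzpool5 matches the later clsetup output in this procedure; the disk names are placeholders.

phys-schost# zpool create myzpool5 mirror c2t1d0 c3t1d0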
You perform all steps of this procedure from a node of the global cluster.
phys-schost# clsetup
The Main Menu is displayed.
Tip - To return to a previous screen, type the < key and press Return.
The Zone Cluster Tasks Menu is displayed.
The Select Zone Cluster menu is displayed.
The Storage Type Selection menu is displayed.
The ZFS Pool Selection for the Zone Cluster menu is displayed.
The ZFS pools in the list are those that are configured on the shared disks and can be accessed by the nodes where the zone cluster is configured. You can also type e to manually specify properties for a ZFS pool.
The ZFS Pool Dataset Property for the Zone Cluster menu is displayed. The selected ZFS pool is assigned to the name property.
The Review File Systems/Storage Devices for the Zone Cluster menu is displayed.
The results of your configuration change are displayed. For example:
>>> Result of Configuration Change to the Zone Cluster(sczone) <<<

Adding file systems or storage devices to sczone zone cluster...

The zone cluster is being created with the following configuration

    /usr/cluster/bin/clzonecluster configure sczone
    add dataset
    set name=myzpool5
    end

Configuration change to sczone zone cluster succeeded.
phys-schost# clzonecluster show -v zoneclustername
To make the ZFS storage pool highly available, configure it with an HAStoragePlus resource. The HAStoragePlus resource manages the mounting of the file systems in the pool on the zone-cluster node that currently hosts the applications that are configured to use the file system. See Enabling Highly Available Local File Systems in Oracle Solaris Cluster Data Services Planning and Administration Guide.
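A minimal sketch of such a resource, assuming the myzpool5 pool from the previous output and placeholder resource-group and resource names; the surrounding resource-group setup parallels the earlier sketch for a highly available local file system:

phys-schost# clresource create -Z sczone -g pool-rg -t SUNW.HAStoragePlus \
-p Zpools=myzpool5 hasp-pool-rs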
The clsetup utility discovers and displays the available file systems that are configured on the cluster nodes where the selected zone cluster is configured. When you use the clsetup utility to add a file system, the file system is added in cluster scope.
You can add the following types of cluster file systems to a zone cluster:
UFS cluster file system - You specify the file system type in the /etc/vfstab file, using the global mount option. This file system can be located on the shared disk or on a Solaris Volume Manager device.
Before You Begin
Ensure that the cluster file system you want to add to the zone cluster is configured. See Planning Cluster File Systems and Chapter 5, Creating a Cluster File System.
You perform all steps of this procedure from a node of the global cluster.
phys-schost# vi /etc/vfstab
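An entry for a global UFS cluster file system might look like the following sketch; the metadevice and mount point are placeholders carried over from the earlier examples.

#device                 device to fsck          mount point        FS type  fsck pass  mount at boot  mount options
/dev/md/oracle/dsk/d1   /dev/md/oracle/rdsk/d1  /global/oracle/d1  ufs      2          no             global,logging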
phys-schost# clsetup
The Main Menu is displayed.
Tip - To return to a previous screen, type the < key and press Return.
The Zone Cluster Tasks Menu is displayed.
The Select Zone Cluster menu is displayed.
The Storage Type Selection menu is displayed.
The File System Selection for the Zone Cluster menu is displayed.
You can also type e to manually specify all properties for a file system.
The Mount Type Selection menu is displayed.
For more information about creating loopback file systems, see How to Create and Mount an LOFS File System in Oracle Solaris 11.1 Administration: Devices and File Systems.
The File System Properties for the Zone Cluster menu is displayed.
Type the number for the dir property and press Return. Then type the LOFS mount point directory name in the New Value field and press Return.
When finished, type d and press Return. The Review File Systems/Storage Devices for the Zone Cluster menu is displayed.
The results of your configuration change are displayed. For example:
>>> Result of Configuration Change to the Zone Cluster(sczone) <<<

Adding file systems or storage devices to sczone zone cluster...

The zone cluster is being created with the following configuration

    /usr/cluster/bin/clzonecluster configure sczone
    add fs
    set dir=/dev/md/ddg/dsk/d9
    set special=/dev/md/ddg/dsk/d10
    set raw=/dev/md/ddg/rdsk/d10
    set type=lofs
    end

Configuration change to sczone zone cluster succeeded.
phys-schost# clzonecluster show -v zone-cluster-name
Next Steps
(Optional) Configure the cluster file system to be managed by an HAStoragePlus resource. The HAStoragePlus resource manages the mounting of the file systems in the global cluster, and later performs a loopback mount on the zone-cluster nodes that currently host the applications that are configured to use the file system. For more information, see Configuring an HAStoragePlus Resource for Cluster File Systems in Oracle Solaris Cluster Data Services Planning and Administration Guide.
This section describes how to add file systems that are dedicated to a single zone-cluster node. To instead configure file systems for use by the entire zone cluster, go to Adding File Systems to a Zone Cluster.
This section contains the following procedures:
How to Add a Local File System to a Specific Zone-Cluster Node
How to Add a Local ZFS Storage Pool to a Specific Zone-Cluster Node
Perform this procedure to add a local file system to a single, specific zone-cluster node of a specific zone cluster. The file system is not managed by Oracle Solaris Cluster software but is instead passed to the underlying Oracle Solaris zone.
Note - To add a highly available local file system to a zone cluster, perform procedures in How to Add a Highly Available Local File System to a Zone Cluster.
Note - Perform all steps of this procedure from a node of the global cluster.
Use local disks of the global-cluster node that hosts the intended zone-cluster node.
phys-schost# clzonecluster configure zoneclustername
clzc:zoneclustername> select node physical-host=baseclusternode
clzc:zoneclustername:node> add fs
clzc:zoneclustername:node:fs> set dir=mountpoint
clzc:zoneclustername:node:fs> set special=disk-device-name
clzc:zoneclustername:node:fs> set raw=raw-disk-device-name
clzc:zoneclustername:node:fs> set type=FS-type
clzc:zoneclustername:node:fs> end
clzc:zoneclustername:node> end
clzc:zoneclustername> verify
clzc:zoneclustername> commit
clzc:zoneclustername> exit
dir=mountpoint - Specifies the file-system mount point
special=disk-device-name - Specifies the name of the disk device
raw=raw-disk-device-name - Specifies the name of the raw-disk device
type=FS-type - Specifies the type of file system
Note - Enable logging for UFS file systems.
phys-schost# clzonecluster show -v zoneclustername
Example 6-2 Adding a Local File System to a Zone-Cluster Node
This example adds a local UFS file system, /local/data, for use by a node of the sczone zone cluster. This zone-cluster node is hosted on global-cluster node phys-schost-1.
phys-schost-1# clzonecluster configure sczone
clzc:sczone> select node physical-host=phys-schost-1
clzc:sczone:node> add fs
clzc:sczone:node:fs> set dir=/local/data
clzc:sczone:node:fs> set special=/dev/md/localdg/dsk/d1
clzc:sczone:node:fs> set raw=/dev/md/localdg/rdsk/d1
clzc:sczone:node:fs> set type=ufs
clzc:sczone:node:fs> add options [logging]
clzc:sczone:node:fs> end
clzc:sczone:node> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
--- Solaris Resources for phys-schost-1 ---
…
  Resource Name:                            fs
    dir:                                       /local/data
    special:                                   /dev/md/localdg/dsk/d1
    raw:                                       /dev/md/localdg/rdsk/d1
    type:                                      ufs
    options:                                   [logging]
    cluster-control:                           false
...
Perform this procedure to add a local ZFS storage pool to a specific zone-cluster node. The local ZFS pool is not managed by Oracle Solaris Cluster software but is instead passed to the underlying Oracle Solaris zone.
Note - To add a highly available local ZFS pool to a zone cluster, see How to Add a Highly Available Local File System to a Zone Cluster.
Perform all steps of the procedure from a node of the global cluster.
Use local disks of the global-cluster node that hosts the intended zone-cluster node.
phys-schost# clzonecluster configure zoneclustername
clzc:zoneclustername> select node physical-host=baseclusternode
clzc:zoneclustername:node> add dataset
clzc:zoneclustername:node:dataset> set name=localZFSpoolname
clzc:zoneclustername:node:dataset> end
clzc:zoneclustername:node> end
clzc:zoneclustername> verify
clzc:zoneclustername> commit
clzc:zoneclustername> exit
name=localZFSpoolname - Specifies the name of the local ZFS pool
phys-schost# clzonecluster show -v zoneclustername
Example 6-3 Adding a Local ZFS Pool to a Zone-Cluster Node
This example adds the local ZFS pool local_pool for use by a node of the sczone zone cluster. This zone-cluster node is hosted on global-cluster node phys-schost-1.
phys-schost-1# clzonecluster configure sczone
clzc:sczone> select node physical-host=phys-schost-1
clzc:sczone:node> add dataset
clzc:sczone:node:dataset> set name=local_pool
clzc:sczone:node:dataset> end
clzc:sczone:node> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
--- Solaris Resources for phys-schost-1 ---
…
  Resource Name:                            dataset
    name:                                      local_pool
This section describes how to add the direct use of global storage devices by a zone cluster or add storage devices that are dedicated to a single zone-cluster node. Global devices are devices that can be accessed by more than one node in the cluster, either one node at a time or multiple nodes concurrently.
After a device is added to a zone cluster, the device is visible only from within that zone cluster.
This section contains the following procedures:
Perform this procedure to add one of the following types of storage devices in cluster scope:
Raw-disk devices
Solaris Volume Manager disk sets (not including multi-owner)
Note - To add a raw-disk device to a specific zone-cluster node, go instead to How to Add a Raw-Disk Device to a Specific Zone-Cluster Node.
The clsetup utility discovers and displays the available storage devices that are configured on the cluster nodes where the selected zone cluster is configured. After you use the clsetup utility to add a storage device to an existing zone cluster, use the clzonecluster command to modify the configuration. For instructions on using the clzonecluster command to remove a storage device from a zone cluster, see How to Remove a Storage Device From a Zone Cluster in Oracle Solaris Cluster System Administration Guide.
You perform all steps of this procedure from a node of the global cluster.
phys-schost# cldevicegroup status
phys-schost# cldevicegroup online device
phys-schost# clsetup
The Main Menu is displayed.
Tip - To return to a previous screen, type the < key and press Return.
The Zone Cluster Tasks Menu is displayed.
The Select Zone Cluster menu is displayed.
The Storage Type Selection menu is displayed.
A list of the available devices is displayed.
You can also type e to manually specify properties for a storage device.
The Storage Device Property for the Zone Cluster menu is displayed.
Note - An asterisk (*) is used as a wildcard character in the path name.
When finished, type d and press Return. The Review File Systems/Storage Devices for the Zone Cluster menu is displayed.
The results of your configuration change are displayed. For example:
>>> Result of Configuration Change to the Zone Cluster(sczone) <<<

Adding file systems or storage devices to sczone zone cluster...

The zone cluster is being created with the following configuration

    /usr/cluster/bin/clzonecluster configure sczone
    add device
    set match=/dev/md/ddg/*dsk/*
    end
    add device
    set match=/dev/md/shared/1/*dsk/*
    end

Configuration change to sczone zone cluster succeeded.
The change will become effective after the zone cluster reboots.
phys-schost# clzonecluster show -v zoneclustername
Perform this procedure to add a raw-disk device to a specific zone-cluster node. This device is not placed under Oracle Solaris Cluster control.
Note - To add a raw-disk device for use by the full zone cluster, go instead to How to Add a Global Storage Device to a Zone Cluster.
You perform all steps of this procedure from a node of the global cluster.
Note - An asterisk (*) is used as a wildcard character in the path name.
phys-schost# clzonecluster configure zone-cluster-name
clzc:zone-cluster-name> select node physical-host=baseclusternode
clzc:zone-cluster-name:node> add device
clzc:zone-cluster-name:node:device> set match=/dev/*dsk/cNtXdYs*
clzc:zone-cluster-name:node:device> end
clzc:zone-cluster-name:node> end
clzc:zone-cluster-name> verify
clzc:zone-cluster-name> commit
clzc:zone-cluster-name> exit
match=/dev/*dsk/cNtXdYs* - Specifies the full device path of the raw-disk device
phys-schost# clzonecluster show -v zoneclustername
Example 6-4 Adding a Raw-Disk Device to a Specific Zone-Cluster Node
The following example adds the raw-disk device c1t1d0s0 for use by a node of the sczone zone cluster. This zone-cluster node is hosted on global-cluster node phys-schost-1.
phys-schost-1# clzonecluster configure sczone
clzc:sczone> select node physical-host=phys-schost-1
clzc:sczone:node> add device
clzc:sczone:node:device> set match=/dev/*dsk/c1t1d0s0
clzc:sczone:node:device> end
clzc:sczone:node> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
--- Solaris Resources for phys-schost-1 ---
…
  Resource Name:                            device
    name:                                      /dev/*dsk/c1t1d0s0