Oracle Solaris Cluster Data Service for Oracle Solaris Zones Guide (Oracle Solaris Cluster 4.0)
1. Installing and Configuring HA for Solaris Zones
Overview of Installing and Configuring HA for Solaris Zones
Planning the HA for Solaris Zones Installation and Configuration
Restrictions for Zone Network Addresses
Restrictions for a Multiple-Masters Zone
Restrictions for the Zone Path of a Zone
Restrictions on Major Device Numbers in /etc/name_to_major
Dependencies Between HA for Solaris Zones Components
Parameter File Directory for HA for Solaris Zones
Installing and Configuring Zones
How to Enable a Zone to Run in a Failover Configuration
How to Enable a Zone to Run in a Multiple-Masters Configuration
How to Install a Zone and Perform the Initial Internal Zone Configuration
Verifying the Installation and Configuration of a Zone
How to Verify the Installation and Configuration of a Zone
Installing the HA for Solaris Zones Package
How to Install the HA for Solaris Zones Package
Registering and Configuring HA for Solaris Zones
Specifying Configuration Parameters for the Zone Boot Resource
Writing Scripts for the Zone Script Resource
Specifying Configuration Parameters for the Zone Script Resource
Writing a Service Probe for the Zone SMF Resource
Specifying Configuration Parameters for the Zone SMF Resource
How to Create and Enable Resources for the Zone Boot Component
How to Create and Enable Resources for the Zone Script Component
How to Create and Enable Resources for the Zone SMF Component
Verifying the HA for Solaris Zones Installation and Configuration
How to Verify the HA for Solaris Zones Installation and Configuration
Upgrading Non-Global Zones Managed by HA for Oracle Solaris Zones
Tuning the HA for Solaris Zones Fault Monitors
Operation of the HA for Solaris Zones Parameter File
Operation of the Fault Monitor for the Zone Boot Component
Operation of the Fault Monitor for the Zone Script Component
Operation of the Fault Monitor for the Zone SMF Component
Tuning the HA for Solaris Zones Stop_timeout property
Choosing the Stop_timeout value for the Zone Boot Component
Choosing the Stop_timeout value for the Zone Script Component
Choosing the Stop_timeout value for the Zone SMF Component
Denying Cluster Services for a Non-Global Zone
Debugging HA for Solaris Zones
How to Activate Debugging for HA for Solaris Zones
Installing and configuring Solaris Zones involves the following tasks:
Enabling a zone to run in your chosen data service configuration, as explained in How to Enable a Zone to Run in a Failover Configuration and How to Enable a Zone to Run in a Multiple-Masters Configuration.
Installing and configuring a zone, as explained in How to Install a Zone and Perform the Initial Internal Zone Configuration.
Perform this task for each zone that you are installing and configuring. This section explains only the special requirements for installing Solaris Zones for use with HA for Solaris Zones. For complete information about installing and configuring Solaris Zones, see Oracle Solaris Administration: Oracle Solaris Zones, Oracle Solaris 10 Zones, and Resource Management.
# clresourcetype register SUNW.HAStoragePlus
# clresourcegroup create solaris-zone-resource-group
This HAStoragePlus resource is for the zonepath. The file system must be a failover file system.
# clresource create \
-g solaris-zone-resource-group \
-t SUNW.HAStoragePlus \
-p Zpools=solaris-zone-instance-zpool \
solaris-zone-has-resource-name
# clreslogicalhostname create \
-g solaris-zone-resource-group \
-h solaris-zone-logical-hostname \
solaris-zone-logical-hostname-resource-name
# clresourcegroup online -M solaris-zone-resource-group
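As an illustrative sketch only, with hypothetical names (resource group sol-zone-rg, zpool hazpool, logical hostname sol-zone-lh, and resource names sol-zone-hasp-rs and sol-zone-lh-rs), the complete failover preparation might look like the following:

# clresourcetype register SUNW.HAStoragePlus
# clresourcegroup create sol-zone-rg
# clresource create \
-g sol-zone-rg \
-t SUNW.HAStoragePlus \
-p Zpools=hazpool \
sol-zone-hasp-rs
# clreslogicalhostname create \
-g sol-zone-rg \
-h sol-zone-lh \
sol-zone-lh-rs
# clresourcegroup online -M sol-zone-rg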
# clresourcegroup create \
-p Maximum_primaries=max-number \
-p Desired_primaries=desired-number \
solaris-zone-resource-group
# clresourcegroup online -M solaris-zone-resource-group
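For example, on a two-node cluster where the zone is to be mastered on both nodes at the same time, you might set both properties to 2 (the resource group name shown is hypothetical):

# clresourcegroup create \
-p Maximum_primaries=2 \
-p Desired_primaries=2 \
sol-zone-mm-rg
# clresourcegroup online -M sol-zone-mm-rg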
Perform this task on each node that is to host the zone.
Note - For complete information about installing a zone, see Oracle Solaris Administration: Oracle Solaris Zones, Oracle Solaris 10 Zones, and Resource Management.
Before You Begin
Consult Configuration Restrictions and then determine the following requirements for the deployment of the zone with Oracle Solaris Cluster:
The number of Solaris Zone instances that are to be deployed.
The zpool containing the file system that is to be used by each Solaris Zone instance.
Ensure that the zone is configured.
If the zone that you are installing is to run in a failover configuration, configure the zone's zone path to specify a file system on a zpool. The zpool must be managed by the SUNW.HAStoragePlus resource that you created in How to Enable a Zone to Run in a Failover Configuration.
For detailed information about configuring a zone before installation of the zone, see the following documentation:
Note - This procedure assumes that you are performing it on a two-node cluster. If you perform this procedure on a cluster with more than two nodes, perform any steps that are directed at both nodes on all nodes instead.
Alternatively, if your user account is assigned the System Administrator profile, issue commands as non-root through a profile shell, or prefix the command with the pfexec command.
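For example, a user who is assigned the System Administrator profile might bring a resource group online without becoming root (the command shown is illustrative only):

$ pfexec clresourcegroup online -M solaris-zone-resource-group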
Follow procedures in Creating the Image for Directly Migrating Oracle Solaris 10 Systems Into Zones in Oracle Solaris Administration: Oracle Solaris Zones, Oracle Solaris 10 Zones, and Resource Management.
Specify the ZFS storage pool and the resource group that you created.
phys-schost-1# clresource create -t SUNW.HAStoragePlus \
-g resourcegroup -p Zpools=pool hasp-resource
phys-schost-1# clresourcegroup online -eM resourcegroup
You will use this file system as the zone root path for the solaris brand zone that you create later in this procedure.
phys-schost-1# zfs create pool/filesystem
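For example, with a hypothetical zpool named hazpool and a file system named zones, the command would be the following; by default the file system is mounted at /hazpool/zones:

phys-schost-1# zfs create hazpool/zones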
Output is similar to the following.
phys-schost-1# beadm list -H
…
b101b-SC;8fe53702-16c3-eb21-ed85-d19af92c6bbd;NR;/;756…
In this example output, the UUID is 8fe53702-16c3-eb21-ed85-d19af92c6bbd and the BE is b101b-SC.
phys-schost-2# zfs set org.opensolaris.libbe:uuid=uuid rpool/ROOT/BE
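Using the example values shown above, where the BE is b101b-SC and the UUID is 8fe53702-16c3-eb21-ed85-d19af92c6bbd, the command would be:

phys-schost-2# zfs set org.opensolaris.libbe:uuid=8fe53702-16c3-eb21-ed85-d19af92c6bbd rpool/ROOT/b101b-SC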
Note - If you use a multiple-masters configuration, you do not need to set the UUID as described in this step.
Set the zone root path to the file system that you created on the ZFS storage pool.
Note - You must define the osc-ha-zone attribute in the zone configuration, setting type to boolean and value to true.
phys-schost# zonecfg -z zonename \
'create ; add attr; set name=osc-ha-zone; set type=boolean; set value=true; end; set zonepath=/pool/filesystem/zonename ; set autoboot=false'
phys-schost# zonecfg -z zonename \
'create ; set brand=solaris10; set zonepath=/pool/filesystem/zonename ; add attr; set name=osc-ha-zone; set type=boolean; set value=true; end; set autoboot=false'
phys-schost# zoneadm list -cv
  ID NAME       STATUS       PATH                        BRAND     IP
   0 global     running      /                           solaris   shared
   - zonename   configured   /pool/filesystem/zonename   brand     shared
Note - For a multiple-masters configuration, you do not need an HAStoragePlus resource as described in Step a and you do not need to perform the switchover described in Step 9.
Output is similar to the following:
phys-schost# clresource status

=== Cluster Resources ===

Resource Name    Node Name        Status     Message
-------------    ---------        ------     -------
hasp-resource    phys-schost-1    Online     Online
                 phys-schost-2    Offline    Offline
Perform the remaining tasks in this step from the node that masters the HAStoragePlus resource.
phys-schost-1# zoneadm -z zonename install
phys-schost-1# zoneadm -z zonename install -a flarimage -u
phys-schost-1# zoneadm list -cv
  ID NAME       STATUS       PATH                        BRAND     IP
   0 global     running      /                           solaris   shared
   - zonename   installed    /pool/filesystem/zonename   brand     shared
phys-schost-1# zoneadm -z zonename boot
phys-schost-1# zoneadm list -cv
  ID NAME       STATUS       PATH                        BRAND     IP
   0 global     running      /                           solaris   shared
   - zonename   running      /pool/filesystem/zonename   brand     shared
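If you are not already connected to the zone console, you can connect to it with zlogin to complete the interactive configuration, for example:

phys-schost-1# zlogin -C zonename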
Follow the interactive steps to finish the zone configuration.
The zone's status should return to installed.
phys-schost-1# zoneadm -z zonename halt
phys-schost-1# zoneadm -z zonename detach -F
The zone state changes from installed to configured.
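You can confirm the state change by listing the zones again, for example:

phys-schost-1# zoneadm list -cv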
The command is similar to the following, where phys-schost-1 is the node that currently masters the resource group and phys-schost-2 is the node to which you switch the resource group.
phys-schost-1# clresourcegroup switch -n phys-schost-2 resourcegroup
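To confirm that the resource group is now mastered by phys-schost-2, you can check its status, for example:

phys-schost-2# clresourcegroup status resourcegroup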
Perform the remaining tasks in this step from the node to which you switch the resource group.
phys-schost-2# zoneadm -z zonename attach -F
Output is similar to the following:
phys-schost-2# zoneadm list -cv
  ID NAME       STATUS       PATH                        BRAND     IP
   0 global     running      /                           solaris   shared
   - zonename   installed    /pool/filesystem/zonename   brand     shared
phys-schost-2# zoneadm -z zonename boot
Perform this step to verify that the zone is functional.
phys-schost-2# zlogin -C zonename
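When the verification is complete, disconnect from the zone console by typing ~. (a tilde followed by a period) at the beginning of a line.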
phys-schost-2# zoneadm -z zonename halt
phys-schost-2# zoneadm -z zonename detach -F
The zone state changes from installed to configured.