Oracle Solaris Cluster 4.0 Release Notes
This section discusses errors or omissions for documentation in the Oracle Solaris Cluster and Geographic Edition 4.0 release.
The initial version of these Release Notes contained the procedure How to Configure the HA for Zones Zone Boot Component for solaris or solaris10 Brand Zones. That procedure was removed in an update of these Release Notes and can now be found as How to Create and Enable Resources for the Zone Boot Component in Oracle Solaris Cluster Data Service for Oracle Solaris Zones Guide.
Oracle Solaris Cluster 4.0 software supports Solaris Volume Manager software. The Oracle Solaris 11 documentation set does not include a manual for Solaris Volume Manager software. However, you can still use the Solaris Volume Manager Administration Guide from the Oracle Solaris 10 9/10 release, which is valid with the Oracle Solaris Cluster 4.0 release.
This section discusses errors, omissions, and additions in the Oracle Solaris Cluster man pages.
If you are developing an agent for services that will run in a zone cluster, and your agent might need to execute some of its methods in the global zone, refer to the Oracle Solaris Cluster 3.3 5/11 version of the section 3HA man pages for information that was inadvertently omitted or altered in the 4.0 version of the section 3HA man pages.
The globaldevfs property is no longer valid and should be ignored.
At the time of initial release, no NAS devices of type sun or netapp_nas are available. Information about the sun and netapp_nas NAS device types should be ignored.
The description for the remove subcommand includes the following statement:
This subcommand also removes the cluster software from the node.
This statement is incorrect and should be ignored. You must use the pkg remove command to remove the cluster software packages from a node.
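For example, a removal of the cluster software from a node might look like the following sketch. The ha-cluster-full group package named elsewhere in these notes is used for illustration; substitute the package FMRIs that are actually installed on your node.

```shell
# Remove the Oracle Solaris Cluster group package from this node.
# ha-cluster-full is the full group package; adjust the name to match
# the cluster packages installed on your system.
pkg remove ha-cluster-full
```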
At the time of initial release, no Sun Microsystems, Inc. or Network Appliance (NetApp) NAS devices are available. Information about these NAS devices should be ignored.
Example output has entries that mention the Pkglist property. This property is not used in the 4.0 release and the example content should be ignored.
The -c config_profile.xml option is added to the install subcommand. The following is the command syntax for this option.
clzonecluster install -c config_profile.xml zone-cluster-name
Specifies a configuration profile template. After installation from the repository, the template applies the system configuration information to all nodes of the zone cluster. If config_profile.xml is not specified, you must manually configure each zone-cluster node by running the zlogin -C zone-cluster-name command from the global zone on each node. All profiles must have a .xml extension.
The -c option replaces the hostname of the zone-cluster node in the configuration profile template. The profile is applied to the zone-cluster node after booting the zone-cluster node.
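A hypothetical invocation might look like the following; the profile path and zone-cluster name are placeholders, not values from this release.

```shell
# Install the zone cluster sczone from the configured package repository,
# applying the system configuration profile to every node of the cluster.
# /var/tmp/sc_profile.xml and sczone are example names only.
clzonecluster install -c /var/tmp/sc_profile.xml sczone
```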
In the description of the install subcommand, the man page incorrectly states that, if you do not specify the -M option, the Automated Installer installs the ha-cluster-full group package by default. Instead, when -M is not specified, all of the ha-cluster/* packages that are installed in the global zone of the issuing node are installed in all nodes of the zone cluster.
The following syntax and description for the export subcommand are missing from the man page:
/usr/cluster/bin/clzonecluster export [-f commandfile] zoneclustername
Exports the zone cluster configuration into a command file.
The exported commandfile can be used as the input for the configure subcommand. You can use the export subcommand only from a global-cluster node.
The RBAC authorization for the export subcommand is solaris.cluster.admin.
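For example, the exported command file can be replayed through the configure subcommand; the file path and zone-cluster names below are placeholders.

```shell
# Export the configuration of zone cluster sczone to a command file.
clzonecluster export -f /var/tmp/sczone.cfg sczone

# Later, use the command file as input to configure a zone cluster.
clzonecluster configure -f /var/tmp/sczone.cfg sczone
```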
The following information applies to the r_properties(5) man page.
Multiple instances of Global_zone_override were changed to _override.
The Resource_project_name property description was omitted. Refer to the Oracle Solaris Cluster 3.3 5/11 version of the r_properties(5) man page for information about the Resource_project_name property.
If you are developing an agent for services that will run in a zone cluster, and your agent might need to execute some of its methods in the global zone, refer to the Oracle Solaris Cluster 3.3 5/11 version of the r_properties(5) man page for information that was inadvertently omitted or altered in the 4.0 version of the r_properties(5) man page.
The -L option is omitted from the scinstall(1M) man page. This option is used with the scinstall -u update command. The following is the syntax for specifying the -L option:
scinstall -u update [-b bename] [-L {accept | licenses | accept,licenses | licenses,accept}]
The argument accept corresponds to the --accept option of the pkg command and the argument licenses corresponds to the --licenses option.
Specifying -L accept indicates that you agree to and accept the licenses of the packages that are updated. If you do not provide this option, and any package licenses require acceptance, the update operation fails.
Specifying -L licenses displays all of the licenses for the packages that are updated.
When both -L accept and -L licenses are used, the licenses of the packages that are updated are displayed as well as accepted. The order in which you specify the accept and licenses arguments does not affect the behavior of the command.
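For example, an update that both displays and accepts package licenses might be invoked as follows; the boot-environment name is a placeholder.

```shell
# Update the cluster software into a new boot environment, displaying
# and accepting the licenses of all updated packages.
# sc4-be is an example boot environment name.
scinstall -u update -b sc4-be -L accept,licenses
```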
If you are developing an agent for services that will run in a zone cluster, and your agent might need to execute some of its methods in the global zone, refer to the Oracle Solaris Cluster 3.3 5/11 version of the rt_properties(5) man page for information that was inadvertently omitted or altered in the 4.0 version of the rt_properties(5) man page.
The following extension properties are missing from the SUNW.gds(5) man page.
Monitor_retry_count

The number of times that the process monitor facility (PMF) restarts the fault monitor during the time window that the Monitor_retry_interval property specifies. This property refers to restarts of the fault monitor itself rather than to the resource. The system-defined properties Retry_interval and Retry_count control restarting of the resource.

Category: Optional
Data type: Integer
Default: 4
Range: 0 - 2147483647 (-1 indicates an infinite number of retry attempts)
Tunable: At any time

Monitor_retry_interval

The time (in minutes) over which failures of the fault monitor are counted. If the number of times that the fault monitor fails exceeds the value that is specified in the extension property Monitor_retry_count within this period, the PMF does not restart the fault monitor.

Category: Optional
Data type: Integer
Default: 2
Range: 0 - 2147483647 (-1 indicates an infinite retry interval)
Tunable: At any time
The following value for the Standby_mode extension property is missing from the man page:
snapshot - Beginning with Oracle 11g, specifies a snapshot standby database.