Oracle Solaris Cluster 4.0 Release Notes
Compatibility Issues

This section contains information about Oracle Solaris Cluster compatibility issues with other products, as of initial release. Contact Oracle support services to see if a code fix becomes available.
Oracle Clusterware Fails to Create All SIDs for ora.asm Resource (12680224)

Problem Summary: When you create an Oracle Solaris Cluster resource for an Oracle ASM instance, the clsetup utility reports one of the following error messages: ORACLE_SID (+ASM2) does not match the Oracle ASM configuration ORACLE_SID () within CRS or ERROR: Oracle ASM is either not installed or the installation is invalid!. This occurs because, after Oracle Grid Infrastructure 11.2.0.3 is installed, the value of GEN_USR_ORA_INST_NAME@SERVERNAME for the ora.asm resource does not contain all the Oracle ASM SIDs that are running on the cluster.
Workaround: Use the crsctl command to add the missing SIDs to the ora.asm resource, where hostname is the name of the cluster node and ASM_SID is the Oracle ASM SID that runs on that node (for example, +ASM2):

# crsctl modify res ora.asm \
    -attr "GEN_USR_ORA_INST_NAME@SERVERNAME(hostname)"=ASM_SID
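To confirm the change, you can print the full attribute list of the ora.asm resource and check that GEN_USR_ORA_INST_NAME@SERVERNAME(hostname) now lists the expected SIDs (a quick check only; the exact output format varies by Oracle Grid Infrastructure version):

# crsctl status resource ora.asm -f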
IP Addresses on a Failed IP Interface Can No Longer Be Used Locally (7099852)

Problem Summary: This problem affects data services that use the connect() call to probe the health of the application through its logical-hostname IP address. During a cluster-wide network outage, the connect() call behaves differently on Oracle Solaris 11 than it did on Oracle Solaris 10: the call now fails if the IPMP interface on which the logical-hostname IP address is plumbed is down. As a result, the agent probe fails if the network outage lasts longer than probe_timeout, which eventually takes the resource and its associated resource group offline.
Workaround: Configure the application to listen on localhost:port to ensure that the monitoring program does not fail the resource in a public-network outage scenario.
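As an illustration only (the details depend on the application, and the port shown here is hypothetical), you can verify from the node that the application is bound to the loopback address, or to all addresses, so that a probe of localhost succeeds even when the public IPMP group is down:

# netstat -an -f inet | grep LISTEN

An entry such as *.8080 or 127.0.0.1.8080 indicates that a connection to localhost on that port is still accepted during a public-network outage.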
Zone Fails to Boot if pkg:/system/resource-mgmt/resource-cap Is Not Installed and capped-memory Is Configured

Problem Summary: If the pkg:/system/resource-mgmt/resource-cap package is not installed and a zone is configured with the capped-memory resource control, the zone fails to boot. Output similar to the following is displayed:
zone 'zone-1': enabling system/rcap service failed: entity not found
zoneadm: zone 'zone-1': call to zoneadmd failed
Workaround: Install pkg:/system/resource-mgmt/resource-cap into the global zone. Once the resource-cap package is installed, the zone can boot.
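For example, assuming the system has access to a configured IPS package repository, you might install the package from the global zone and then boot the zone again:

# pkg install system/resource-mgmt/resource-cap
# zoneadm -z zone-1 boot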
DID Disk Add to Solaris Zone Is Not Accepting Wild Card for *dsk (7081090)

Problem Summary: When you use the zonecfg utility to add a DID disk to a non-global zone and specify the device with a wild card (*) instead of explicit paths, the addition fails.
Workaround: Specify the raw device paths and block device paths explicitly. The following example adds the d5 DID device:
root@phys-cluster-1:~# zonecfg -z foo
zonecfg:foo> add device
zonecfg:foo:device> set match=/dev/did/dsk/d5s*
zonecfg:foo:device> end
zonecfg:foo> add device
zonecfg:foo:device> set match=/dev/did/rdsk/d5s*
zonecfg:foo:device> end
zonecfg:foo> exit
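After the zone is rebooted, a quick check (assuming the zone foo from the example above is installed and running) is to list the DID device nodes inside the zone from the global zone through zlogin:

root@phys-cluster-1:~# zoneadm -z foo reboot
root@phys-cluster-1:~# zlogin foo ls /dev/did/dsk /dev/did/rdsk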