1. Oracle Solaris Cluster 4.1 Release Notes
Support for Oracle Solaris 11.2 OS
New clsetup Wizards to Create a Zone Cluster
Support for solaris10 Brand Zone Clusters
Support for Exclusive-IP Zone Clusters
Support for Trusted Extensions With Zone Clusters
Resource Dependencies Can Be Defined on a Per-Node Basis
Support for Kernel Cage Dynamic Reconfiguration (DR)
Cluster Security Framework Is Enhanced
Support for Socket Direct Protocol Over the Cluster Interconnect
Faster Failure Detection and Response by Storage Monitors
New clsetup Wizard to Configure the Oracle PeopleSoft Application Server Data Service
New clsetup Wizard to Configure the Oracle WebLogic Server Data Service
Support for MySQL and MySQL Cluster Data Services
New Data Service for PostgreSQL
New Data Service for SAP liveCache
New Data Service for SAP MaxDB
New Data Service for Siebel 8.2.2
New Data Service for Sybase ASE
New Data Service for Oracle Traffic Director
New Data Service for Oracle TimesTen
New Manual for SAP NetWeaver Data Service
New Data Service for Oracle External Proxy
New Data Service for Oracle PeopleSoft Enterprise Process Scheduler
New Data Service for Oracle Web Tier
Support for Oracle E-Business 12.1.1 Data Service
Support for Sun ZFS Storage Appliance Data Replication With Geographic Edition
Support for EMC Symmetrix Remote Data Facility With Geographic Edition
Support for MySQL Replication With Geographic Edition
New Man Pages for the ccradm and dcs_config Advanced Maintenance Commands
Selected Support for Non-Global Zones
What's Not Included in the Oracle Solaris Cluster 4.1 Software
Solaris Volume Manager Disk Sets in a Zone Cluster
Commands Modified in This Release
Logical Host Does Not Fail Over with Public Net Fault (16979921)
Oracle ASM With Solaris Volume Manager Mirrored Logical Volumes
osysmond Core Dumps in S10 Brand Zone During GI root.sh and Starting of CRS (14456069)
Oracle Clusterware Fails to Create All SIDs for ora.asm Resource (12680224)
Oracle Solaris 11 SRU Installation Might Fail Due to Out-of-Date pkg Command
Adding Main Adapter to IPMP Group Removes DNS Configuration (7198718)
SAP JAVA Issue Affects HA for SAP NetWeaver Ability to Fail Over in Unplanned Outage (7191360)
Geographic Edition Software Requirements
Oracle Solaris Operating System
Cannot Set the Jumbo Frame MTU Size for the clprivnet Interface (16618736)
Public Net Failure Does Not Fail Over DB Server Resource with SCAN Listener (16231523)
Removing a Node From an Exclusive-IP Zone Cluster Panics Cluster Nodes (7199744)
Nonexisting privnet Stops Zone Clusters From Booting Despite Good privnet (7199431)
Cluster File System Does Not Support Extended Attributes (7167470)
Cannot Create a Resource From a Configuration File With Non-Tunable Extension Properties (6971632)
Disabling Device Fencing While Cluster Is Under Load Results in Reservation Conflict (6908466)
Removing Nodes From the Cluster Configuration Can Result in Node Panics (6735924)
More Validation Checks Needed When Combining DIDs (6605101)
Active-Standby Configuration Not Supported for HA for TimesTen (16861602)
SUNW.Proxy_SMF_failover sc_delegated_restarter File Descriptor Leak (7189211)
When set Debug_level=1, pas-rg Fails Over to Node 2 And Cannot Start on Node 1 Anymore (7184102)
Scalable Applications Are Not Isolated Between Zone Clusters (6911363)
clresource show -p Command Returns Wrong Information (7200960)
Cluster Node Does Not Have Access to Sun ZFS Storage Appliance Projects or iSCSI LUNs (15924240)
DR State Stays Reporting unknown on One Partner (7189050)
Takeover to the Secondary Is Failing Because fs umount Failed On the Primary (7182720)
Multiple Notification Emails Sent From Global Cluster When Zone Clusters Are in Use (7098290)
ASM Instance Proxy Resource Creation Errored When a Hostname Has Uppercase Letters (7190067)
Wizard Won't Discover the ASM SID (7190064)
RAC Proxy Resource Creation Fails When the Cluster Node's Hostname Has Uppercase Letters (7189565)
cacao Cannot Communicate on Machines Running Trusted Extensions (7183625)
Autodiscovery Should Find Only One Interconnect Path for Each Adapter (6299097)
Logical Hostname Failover Could Create Duplicate Addresses, Lead To Outage (7201091)
sc_delegated_restarter Does Not Take Into Account Environment Variable Set in Manifest (7173159)
Unable to Re-enable Transport Interface After Disabling With ipadm disable-if -t interface (7141828)
Failure of Logical Hostname to Fail Over Caused by getnetmaskbyaddr() (7075347)
x86: scinstall -u update Sometimes Fails to Upgrade the Cluster Packages on an x86 Node (7201491)
Oracle Solaris Cluster 4.1 Documentation Set
HA for Oracle Solaris Zones Guide
Geographic Edition Data Replication Guide for Oracle Solaris Availability Suite
This section contains information about Oracle Solaris Cluster compatibility issues with other products, as of the initial release. Contact your Oracle support representative to learn whether a fix is available.
Problem Summary: IPMP groups in exclusive-IP zone clusters fail to recognize link failures. As a result, dependent logical hostname resources remain online even when the base network interface link is broken, and the logical host does not fail over.
Workaround: Enable transitive probing for the IPMP network service or create probe-based IPMP groups in exclusive-IP zone clusters.
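For example, the following is a minimal sketch of enabling transitive probing on Oracle Solaris 11, assuming the standard svc:/network/ipmp service and its config/transitive-probing property:

# svccfg -s svc:/network/ipmp setprop config/transitive-probing=true
# svcadm refresh svc:/network/ipmp:default

To make a group probe based instead, a test address can be assigned to each underlying interface; the interface name net0 and the test address below are placeholders:

# ipadm create-addr -T static -a local=test-address/24 net0/test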
Problem Summary: If your Oracle Solaris Cluster HA for Oracle Database or Support for Oracle RAC configuration requires using Oracle ASM with Solaris Volume Manager mirrored logical volumes, you might experience failures of the SUNW.ScalDeviceGroup probe. These failures result in a loss of availability of any service that is dependent on the SUNW.ScalDeviceGroup resource.
Workaround: You can mitigate the failures by increasing the IOTimeout property setting for the SUNW.ScalDeviceGroup resource type. See Article 603825.1 at My Oracle Support for additional information.
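As a sketch only, the IOTimeout extension property can typically be raised on the existing resource with the clresource command; the resource name scal-dg-rs and the value 300 are placeholders, and the appropriate value depends on your storage configuration:

# clresource set -p IOTimeout=300 scal-dg-rs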
Problem Summary: This problem involves Oracle RAC 11g release 2 configured in a solaris10 brand zone cluster. When the Grid Infrastructure root.sh script is run or when Cluster Ready Services (CRS) is started, the osysmond process might dump core one or more times.
Workaround: Contact Oracle Support to learn whether a patch or workaround is available.
Problem Summary: When creating an Oracle Solaris Cluster resource for an Oracle ASM instance, one of the following error messages might be reported by the clsetup utility:
ORACLE_SID (+ASM2) does not match the Oracle ASM configuration ORACLE_SID () within CRS
ERROR: Oracle ASM is either not installed or the installation is invalid!
This situation occurs because, after Oracle Grid Infrastructure 11g release 2 is installed, the value for GEN_USR_ORA_INST_NAME@SERVERNAME of the ora.asm resource does not contain all the Oracle ASM SIDs that are running on the cluster.
Workaround: Use the crsctl command to add the missing SIDs to the ora.asm resource.
# crsctl modify res ora.asm \
  -attr "GEN_USR_ORA_INST_NAME@SERVERNAME(hostname)"=ASM_SID
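To verify the result, the generated attribute can be inspected afterward; the crsctl status resource -f form shown here is assumed to be available in your Oracle Grid Infrastructure release:

# crsctl status resource ora.asm -f | grep GEN_USR_ORA_INST_NAME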
Problem Summary: When you install an Oracle Solaris 11 SRU on your cluster before you upgrade to Oracle Solaris 11.1, you might receive an error message similar to the following:
WARNING: pkg(5) appears to be out of date, and should be updated before running update. Please update pkg(5) by executing 'pkg install pkg:/package/pkg' as a privileged user and then retry the update.
Workaround: Follow the instructions in the error message.
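For example, the sequence suggested by the message, run as a privileged user, would look similar to the following:

# pkg install pkg:/package/pkg
# pkg update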
Problem Summary: The clzonecluster install-cluster command might fail to install a patch on a solaris10 brand zone if Oracle Solaris Cluster patch 145333-15 (SPARC) or 145334-15 (x86) is installed in the zone. For example:
# clzonecluster install-cluster -p patchdir=/var/tmp/patchdir,patchlistfile=plist S10ZC
Installing the patches ...
clzc: (C287410) Failed to execute command on node "zcnode1":
scpatchadm: Logging reports to "/var/cluster/logs/install/scpatchadm.log.123"

The scpatchadm.log.123 file would show the message:

scpatchadm: Failed to install the following patches: 123456-01
clzc: (C287410) Failed to execute command on node "zcnode1"
Workaround: Log in to the zone and install the patch by using the patchadd command.
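For example, using the zone cluster name and patch ID from the preceding output, and assuming the patch has been copied to /var/tmp/patchdir inside the zone, the sequence run from the global zone of the affected node might look like the following:

# zlogin S10ZC
# patchadd /var/tmp/patchdir/123456-01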
Contact your Oracle support representative to learn whether an Oracle Solaris Cluster 3.3 patch becomes available.
Problem Summary: A problem occurs if you delete a network adapter then recreate it for an IPMP group, such as in the following example commands:
# ipadm delete-ip adapter
# ipadm create-ip adapter
# ipadm create-ipmp -i adapter sc_ipmp0
# ipadm create-addr -T static -a local=hostname/24 sc_ipmp0/v4
Soon after the IPMP address is created, the /etc/resolv.conf file disappears and the LDAP service is disabled. Even a service that is enabled stays in the offline state.
Workaround: Before you delete the network adapter with the ipadm delete-ip command, run the svcadm refresh network/location:default command.
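For example, a sketch of the complete sequence with the refresh performed first, using the placeholder adapter and hostname values shown above:

# svcadm refresh svc:/network/location:default
# ipadm delete-ip adapter
# ipadm create-ip adapter
# ipadm create-ipmp -i adapter sc_ipmp0
# ipadm create-addr -T static -a local=hostname/24 sc_ipmp0/v4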
The SAP JAVA stack has a severe problem that affects the failover of dialogue instances in an HA for SAP NetWeaver configuration. On an unplanned node outage, such as a panic or power outage, the SAP message server does not accept the connection of a dialogue instance on a different node until a timeout expires. This leads to the following behavior:
When a node that is hosting a failover dialogue instance panics or experiences an outage, the dialogue instance does not start on the target node on the first try. The dialogue instance then does one of the following:
Come online after one or more retries.
Fail back to the original node if that node comes back up early enough.
This behavior occurs only on unplanned outages; an orderly shutdown of a node does not trigger the problem. Also, ABAP and dual-stack configurations are not affected.
Problem Summary: If the package pkg:/system/resource-mgmt/resource-cap is not installed and a zone is configured with capped-memory resource control as part of the configuration, the zone boot fails. Output is similar to the following:
zone 'zone-1': enabling system/rcap service failed: entity not found
zoneadm: zone 'zone-1': call to zoneadmd failed
Workaround: Install pkg:/system/resource-mgmt/resource-cap into the global zone. Once the resource-cap package is installed, the zone can boot.
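For example, a minimal sketch of the workaround, using the zone name from the error output above:

# pkg install pkg:/system/resource-mgmt/resource-cap
# zoneadm -z zone-1 boot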
At the initial release of Oracle Solaris Cluster 4.1 software, the Sun ZFS Storage Appliance does not support active-active remote replication in a clustered configuration, where both heads replicate data. Contact your Oracle support representative to learn whether a patch or workaround is available.
However, active-passive replication is currently supported in a clustered configuration.