Oracle® Solaris Cluster 4 Compatibility Guide

Updated: February 2020

InfiniBand Support

    InfiniBand is supported with both IP over InfiniBand and Ethernet over InfiniBand.

  • InfiniBand is supported with Internet Protocol over InfiniBand (IPoIB) for both cluster interconnect and public networking.

  • InfiniBand is also supported with Ethernet over InfiniBand (EoIB) for public networking. EoIB uses VNICs based on the Sun Network QDR InfiniBand Gateway Switch (X2826A-Z).


Note -  You must create separate InfiniBand partitions on the InfiniBand switches for the public network and for each private cluster network. That is, you need two separate partitions for the two cluster interconnects, plus one partition for the public network if you use a public network over InfiniBand.

InfiniBand host channel adapters (HCAs) are cabled to InfiniBand switches. Directly cabling HCAs between two cluster nodes is not supported.
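On each cluster node, the partitions defined on the InfiniBand switches are matched by creating IPoIB partition datalinks over the HCA ports with the corresponding partition keys. The following is a minimal sketch using the standard Oracle Solaris `dladm` and `ipadm` commands; the datalink names (net5, net6) and PKeys (0x8001, 0x8002) are illustrative assumptions, not values from this guide.

```shell
# List the physical InfiniBand datalinks and the PKeys they have joined.
dladm show-ib

# Create one IPoIB partition datalink per InfiniBand partition, using the
# PKey of the partition defined on the IB switch.
# (net5, net6, 0x8001, and 0x8002 are assumed example values.)
dladm create-part -l net5 -P 0x8001 p8001.net5
dladm create-part -l net6 -P 0x8002 p8002.net6

# Verify the partition datalinks.
dladm show-part

# For a public-network partition, plumb IP on the partition datalink
# (a static IPv4 address is shown as an example; interconnect addressing
# is normally handled during cluster configuration).
ipadm create-ip p8001.net5
ipadm create-addr -T static -a 192.168.10.11/24 p8001.net5/v4
```

Partition datalinks for the private interconnects are then supplied as the transport adapters during cluster configuration, rather than being plumbed manually.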

Table 93  PCIe InfiniBand Interfaces for SPARC Servers

Server                                        7104074 (PTO), 7104073 (ATO)  (X)4242A Sun IB Dual Port
                                              Oracle Dual Port QDR          4x QDR PCIe LP HCA M2
                                              InfiniBand Adapter M3 [a]
--------------------------------------------  ----------------------------  -------------------------
Fujitsu M10-1, Fujitsu M10-4, Fujitsu M10-4S  Y                             Y
M8-8                                          Y                             -
T8-1, T8-2, T8-4                              Y                             -
Netra SPARC T4-2                              -                             Y
Netra SPARC S7-2                              -                             -
SPARC Enterprise M3000                        -                             Y
SPARC Enterprise M4000                        -                             Y
SPARC Enterprise M5000                        -                             Y
SPARC Enterprise M9000                        -                             Y
SPARC M5-32, SPARC M6-32                      Y                             Y
SPARC M7-8, SPARC M7-16                       Y                             -
SPARC S7-2 server, SPARC S7-2L server         Y                             -
SPARC T3-1, SPARC T3-2                        -                             Y
SPARC T4-1, SPARC T4-2                        Y                             Y
SPARC T5-2, SPARC T5-4, SPARC T5-8            Y                             Y
SPARC T7-1, SPARC T7-2, SPARC T7-4            Y                             -

  • a – Support starts with Oracle Solaris Cluster 4.1 SRU 3, Oracle Solaris 11.1 SRU 9

The PCIe ExpressModule InfiniBand HCAs that can be used for Oracle Solaris Cluster networking are listed in the following table.

Table 94  PCIe ExpressModule InfiniBand Interfaces for SPARC Servers

Server                                        (X)4243A-Z Dual Port 4x QDR IB PCIe ExpressModule HCA M2
--------------------------------------------  --------------------------------------------------------
SPARC T3-1B, SPARC T3-4                       Y
SPARC T4-1B, SPARC T4-4, SPARC T5-1B          Y