Supported Platforms
The Oracle Communications Session Border Controller (SBC) can run on a variety of physical and virtual platforms. You can also run the SBC in public cloud environments. The following topics list the supported platforms and high-level requirements.
Supported Physical Platforms
You can run the Oracle Communications Session Border Controller on the following hardware platforms.
- Acme Packet 3900
- Acme Packet 3950
- Acme Packet 4600
- Acme Packet 4900 (S-Cz9.3.0p2 and newer)
- Acme Packet 6100
- Acme Packet 6300
- Acme Packet 6350 (Quad 10GbE NIU only)
- Oracle Server X7-2
- Oracle Server X8-2
- Oracle Server X9-2
Supported Private Virtual Infrastructures and Public Clouds
You can run the SBC on the following private virtual infrastructures, which include individual hypervisors as well as private clouds based on architectures such as VMware or OpenStack.
Note:
The SBC does not support automatic, dynamic disk resizing.
Note:
Virtual SBCs do not support media interfaces when media interfaces of different NIC models are attached. Media interfaces are supported only when all media interfaces are of the same model, belong to the same Ethernet Controller, and have the same PCI Vendor ID and Device ID.
Supported Hypervisors for Private Virtual Infrastructures
Oracle supports installation of the SBC on the following hypervisors:
- KVM (the following versions or later)
- Linux kernel version: 3.10.0-1127
- Library: libvirt 4.5.0
- API: QEMU 4.5.0
- Hypervisor: QEMU 1.5.3
- VMware: vSphere ESXi (Version 7.0)
As of S-Cz9.3.0p2, the vSBC also supports VMware vSphere ESXi (Version 8.0)
- Microsoft Hyper-V: Microsoft Server (2012 R2 or later)
Compatibility with OpenStack Private Virtual Infrastructures
Oracle distributes Heat templates for the Newton and Pike versions of OpenStack. Download the source, nnSCZ930_HOT.tar.gz, and follow the OpenStack Heat Template instructions.
The nnSCZ930_HOT.tar.gz file contains two files:
- nnSCZ930_HOT_pike.tar
- nnSCZ930_HOT_newton.tar
Use the Newton template when running either the Newton or Ocata versions of OpenStack. Use the Pike template when running Pike or a later version of OpenStack.
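The following Python sketch is illustrative only and is not part of the Oracle distribution. It shows one way to unpack the archive and select the template set for a given OpenStack release, using the file names and the Newton/Pike rule described above; the list of release names is an assumption used purely for ordering.

```python
# Illustrative helper (not part of the Oracle distribution): unpack the Heat
# template archive and choose the template set for a given OpenStack release.
import tarfile

# OpenStack releases in chronological order (assumed here for ordering only).
RELEASES = ["newton", "ocata", "pike", "queens", "rocky", "stein", "train", "ussuri"]

def template_for(release: str) -> str:
    """Newton and Ocata use the Newton templates; Pike and later use Pike."""
    release = release.lower()
    if release not in RELEASES:
        raise ValueError(f"unrecognized OpenStack release: {release}")
    if RELEASES.index(release) < RELEASES.index("pike"):
        return "nnSCZ930_HOT_newton.tar"
    return "nnSCZ930_HOT_pike.tar"

def extract_templates(release: str, archive: str = "nnSCZ930_HOT.tar.gz", dest: str = ".") -> str:
    inner = template_for(release)
    with tarfile.open(archive, "r:gz") as outer:
        outer.extractall(path=dest)              # yields the two inner .tar files
    with tarfile.open(f"{dest}/{inner}") as templates:
        templates.extractall(path=dest)          # unpack the selected Heat templates
    return inner

if __name__ == "__main__":
    print("Extracted", extract_templates("queens"))   # Queens is later than Pike
```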
Supported Public Cloud Platforms
You can run the SBC on the following public cloud platforms.
- Oracle Cloud Infrastructure (OCI)
After deployment, you can change the shape of your machine by, for example, adding disks and interfaces. OCI Cloud Shapes and options validated in this release are listed in the table below.
Shape | OCPUs/vCPUs | vNICs | Tx/Rx Queues | Max Forwarding Cores | DoS Protection | Memory (GB) |
---|---|---|---|---|---|---|
VM.Standard2.4 | 4/8 | 4 | 2 | 2 | Y | 60 |
VM.Standard2.8 | 8/16 | 8 | 2 | 2 | Y | 120 |
VM.Standard2.16 | 16/32 | 16 | 2 | 2 | Y | 240 |
VM.Optimized3.Flex-Small | 4/8 | 4 | 8 | 6Foot 1 | Y | 16 |
VM.Optimized3.Flex-Medium | 8/16 | 8 | 15 | 14Foot 2 | Y | 32 |
VM.Optimized3.Flex-Large | 16/32 | 16 | 15 | 15 | Y | 64 |
Footnote 1 This maximum is 5 when using DoS Protection.
Footnote 2 This maximum is 13 when using DoS Protection.
Networking using image mode [SR-IOV mode - Native] is supported on OCI. PV and Emulated modes are not currently supported.
Note:
Although the VM.Optimized3.Flex OCI shape is flexible, allowing you to choose from 1-18 OCPUs and 1-256 GB of memory, the vSBC requires a minimum of 4 OCPUs and 16 GB of memory per instance on these Flex shapes. An illustrative launch sketch for these Flex shapes follows this platform list.
- Amazon Web Services (EC2)
This table lists the AWS instance sizes that apply to the SBC.
Instance Type | vNICs | RAM (GB) | vCPUs | Max Forwarding Cores | DOS Protection |
---|---|---|---|---|---|
c4.xlarge | 4 | 7.5 | 4 | | |
c4.2xlarge | 4 | 15 | 8 | | |
c4.4xlarge | 8 | 30 | 16 | | |
c5.xlarge | 4 | 8 | 4 | 1 | N |
c5.2xlarge | 4 | 16 | 8 | 2 | Y |
c5.4xlarge | 8 | 32 | 16 | 6 | Y |
c5n.xlarge | 4 | 10.5 | 4 | 1 | N |
c5n.2xlarge | 4 | 21 | 8 | 2 | Y |
c5n.4xlarge | 8 | 42 | 16 | 6 | Y |
Driver support detail includes:
- ENA is supported on C5/C5n family only.
Note:
C5 instances use the Nitro hypervisor.
- Microsoft Azure
The following table lists the Azure instance sizes that you can use for the SBC.
Size (Fs series) | vNICs | RAM (GB) | vCPUs | DOS Protection |
---|---|---|---|---|
Standard_F4s | 4 | 8 | 4 | N |
Standard_F8s | 8 | 16 | 8 | Y |
Standard_F16s | 8 | 32 | 16 | Y |
Size (Fsv2 series) | vNICs | RAM (GB) | vCPUs | DOS Protection |
---|---|---|---|---|
Standard_F8s_v2 | 4 | 16 | 8 | Y |
Standard_F16s_v2 | 4 | 32 | 16 | Y |
Size types define architectural differences and cannot be changed after deployment. During deployment, you choose a size for the SBC based on pre-packaged Azure sizes. After deployment, you can change the details of these sizes to, for example, add disks or interfaces. Azure presents multiple size options for multiple size types.
For higher performance and capacity on media interfaces, use the Azure CLI to create a network interface with accelerated networking. You can also use the Azure GUI to enable accelerated networking.
Note:
The SBC does not support Data Disks deployed over any Azure instance sizes.
Note:
Azure v2 instances have hyperthreading enabled.
- Google Cloud Platform
The following table lists the GCP instance sizes that you can use for the SBC.
Table 1-1 GCP Machine Types
Machine Type | vCPUs | Memory (GB) | vNICs | Egress Bandwidth (Gbps) | Max Tx/Rx Queues per VMFoot 3 |
---|---|---|---|---|---|
n2-standard-4 | 4 | 16 | 4 | 10 | 4 |
n2-standard-8 | 8 | 32 | 8 | 16 | 8 |
n2-standard-16 | 16 | 64 | 8 | 32 | 16 |
Footnote 3 With virtio or a custom driver, the VM is allocated 1 queue per vCPU, with a minimum of 1 and a maximum of 32 queues. Each NIC is then assigned a fixed number of queues, calculated by dividing the number of queues assigned to the VM by the number of NICs and rounding down to the nearest whole number. For example, each NIC has five queues if a VM has 16 vCPUs and three NICs. You can also assign a custom queue count; to create a VM with specific queue counts per NIC, use the GCP API or Terraform. The GCP console does not yet provide this option.
Use the n2-standard-4 machine type if you are deploying an SBC that requires one management interface and only two or three media interfaces. Use the n2-standard-8 or n2-standard-16 machine types for an SBC that requires one management interface and four media interfaces. Also use the n2-standard-4, n2-standard-8, or n2-standard-16 machine types if deploying the SBC in HA mode.
Before deploying your SBC, check the Available regions and zones to confirm that your region and zone support N2 shapes.
On GCP, the SBC must use the virtio network interface. The SBC does not work with the gVNIC.
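Referring back to the OCI Flex-shape note above, the following sketch uses the OCI Python SDK to show how the vSBC minimum of 4 OCPUs and 16 GB of memory could be expressed at launch time. All OCIDs, the availability domain, and the image reference are placeholders, and details such as additional VNICs and the hardware-assisted (SR-IOV) launch option are omitted; treat it as a sketch under those assumptions, not a complete deployment procedure.

```python
# Minimal sketch (OCI Python SDK): launch a VM.Optimized3.Flex instance sized
# to the vSBC minimum of 4 OCPUs and 16 GB of memory. All identifiers below
# are placeholders for your environment.
import oci

config = oci.config.from_file()                       # reads ~/.oci/config by default
compute = oci.core.ComputeClient(config)

launch_details = oci.core.models.LaunchInstanceDetails(
    availability_domain="AD-1",                       # placeholder
    compartment_id="ocid1.compartment.oc1..example",  # placeholder
    display_name="vsbc-1",
    shape="VM.Optimized3.Flex",
    # Flex shapes take an explicit OCPU/memory configuration; the vSBC
    # requires at least 4 OCPUs and 16 GB per instance on these shapes.
    shape_config=oci.core.models.LaunchInstanceShapeConfigDetails(
        ocpus=4,
        memory_in_gbs=16,
    ),
    image_id="ocid1.image.oc1..example-sbc-image",    # placeholder vSBC image OCID
    create_vnic_details=oci.core.models.CreateVnicDetails(
        subnet_id="ocid1.subnet.oc1..example",        # placeholder management subnet
    ),
)

response = compute.launch_instance(launch_details)
print(response.data.lifecycle_state)
```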
Platform Hyperthreading Support
Some platforms support simultaneous multithreading (SMT) and enable it by default; others support SMT but do not enable it by default; others support SMT only for certain machine shapes; and some do not support SMT at all. Check your platform documentation to determine its level of SMT support.
DPDK Reference
The SBC relies on DPDK for packet processing and related functions. Refer to the Tested Platforms section of the DPDK release notes, available at https://doc.dpdk.org. Use that information in conjunction with these Release Notes to set a baseline of the following (an illustrative collection sketch follows the DPDK version list below):
- CPU
- Host OS and version
- NIC driver and version
- NIC firmware version
Note:
Oracle only qualifies a specific subset of platforms. Not all of the hardware listed as supported by DPDK is enabled and supported in this software.
The DPDK version used in this release, prior to S-Cz9.3.0p5, is:
- 22.11
The DPDK version used as of the S-Cz9.3.0p5 release is:
- 23.11
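To help gather the baseline items listed above on a Linux host, the following sketch reads the CPU model and OS release and calls ethtool for the NIC driver and firmware versions. It assumes a Linux host with ethtool installed; the interface names are placeholders, and the script is illustrative rather than an Oracle-provided tool.

```python
# Illustrative baseline collector for a Linux host: reports CPU model, OS
# release, and NIC driver/firmware versions so they can be compared against
# the "Tested Platforms" section of the DPDK release notes.
import platform
import subprocess

def cpu_model() -> str:
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("model name"):
                return line.split(":", 1)[1].strip()
    return "unknown"

def nic_info(interface: str) -> str:
    # "ethtool -i <interface>" prints driver, version, and firmware-version lines.
    return subprocess.run(["ethtool", "-i", interface],
                          capture_output=True, text=True).stdout

if __name__ == "__main__":
    print("CPU:     ", cpu_model())
    print("Host OS: ", platform.platform())
    for ifname in ["eth0", "eth1"]:          # replace with your media interfaces
        print(f"--- {ifname} ---")
        print(nic_info(ifname))
```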
Requirements for Machines on Private Virtual Infrastructures
In private virtual infrastructures, you choose the compute resources required by your deployment. These include CPU cores, memory, disk size, and network interfaces. Deployment details, such as the use of distributed DoS protection, dictate resource utilization beyond the defaults.
Default vSBC Resources
The default compute for the SBC image files is as follows:
- 4 vCPU Cores
- 8 GB RAM
- 20 GB hard disk (pre-formatted)
- 8 interfaces as follows:
- 1 for management (wancom0)
- 2 for HA (wancom1 and wancom2)
- 1 spare
- 4 for media
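As an illustration of how these defaults could map onto a KVM host, the following sketch assembles a virt-install command that requests the default vCPU, memory, disk, and interface counts. The bridge names and image path are placeholders, additional options (OS variant, graphics, and so on) are omitted, and this is not the documented installation procedure; follow the platform installation guide for actual deployments.

```python
# Illustrative only: assemble a KVM virt-install command that reflects the
# default vSBC resource profile (4 vCPUs, 8 GB RAM, 20 GB disk, 8 interfaces).
# Bridge names and the image path are placeholders for your environment.
import shlex

DEFAULTS = {"vcpus": 4, "memory_mib": 8192, "disk_gib": 20}

# wancom0 (management), wancom1/wancom2 (HA), one spare, and four media interfaces
BRIDGES = ["br-wancom0", "br-wancom1", "br-wancom2", "br-spare",
           "br-media0", "br-media1", "br-media2", "br-media3"]

def build_virt_install(image_path: str = "/var/lib/libvirt/images/sbc.qcow2") -> str:
    cmd = [
        "virt-install", "--name", "vsbc1",
        "--vcpus", str(DEFAULTS["vcpus"]),
        "--memory", str(DEFAULTS["memory_mib"]),
        "--disk", f"path={image_path},size={DEFAULTS['disk_gib']}",
        "--import",                       # boot the pre-formatted SBC disk image
    ]
    for bridge in BRIDGES:                # one virtio vNIC per bridge, 8 in total
        cmd += ["--network", f"bridge={bridge},model=virtio"]
    return " ".join(shlex.quote(part) for part in cmd)

if __name__ == "__main__":
    print(build_virt_install())
```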
Interface Host Mode for Private Virtual Infrastructures
The SBC VNF supports interface architectures using Hardware Virtualization Mode - Paravirtualized (HVM-PV):
- ESXi - No manual configuration required.
- KVM - HVM mode is enabled by default. Specifying PV as the interface type results in HVM plus PV.
Supported Interface Input-Output Modes for Private Virtual Infrastructures
- Para-virtualized
- SR-IOV
- PCI Passthrough
- Emulated - Emulated is supported for management interfaces only.
Supported Ethernet Controller, Driver, and Traffic Type based on Input-Output Modes
Note:
Virtual SBCs do not support media interfaces when media interfaces of different NIC models are attached. Media interfaces are supported only when all media interfaces are of the same model, belong to the same Ethernet Controller, and have the same PCI Vendor ID and Device ID.
For KVM and VMware, accelerated media/signaling using SR-IOV and PCI-pt modes is supported for the following card types.
Ethernet Controller | Driver | SR-IOV | PCI Passthrough |
---|---|---|---|
Intel 82599 / X520 / X540 | ixgbe | M | M |
Intel i210 / i350 | igb | M | M |
Intel X710 / XL710 / XXV710 | i40e (i40en)Foot 4Foot 5Foot 6, iavfFoot 7 | M | M |
Intel E810 (validated with E810-XXVDA4 at 10G switch speeds)Foot 8 | iavfFoot 9 | M | N/A |
Mellanox ConnectX-4 | mlx5 | M | M |
Mellanox ConnectX-5Foot 10Foot 11 | mlx5Foot 12Foot 13 | M | N/A |
Footnote 4 This driver is supported on VMware only.
Footnote 5 ESXi 7.0 deployments utilizing VLANs require the 1.14.1.0 version of this driver (or newer).
Footnote 6 ESXi 8.0 deployments utilizing VLANs require the 2.6.5.0 version of this driver (or newer).
Footnote 7 The iavf driver is supported in SR-IOV network mode.
Footnote 8 Intel E810-XXVDA2, E810-XXVDA4, E810-XXVDA4T all use the same driver.
Footnote 9 The iavf driver is supported in SR-IOV network mode over KVM and VMware.
Footnote 10 KVM only.
Footnote 11 Supported as of S-Cz9.3.0p5.
Footnote 12 Device part number: 7603662, Oracle Dual Port 25 Gb Ethernet Adapter, Mellanox (for factory installation).
Footnote 13 Validated at 10G speed using SFP fibre cables with the 7604269 Oracle 10/25 GbE Dual Rate SFP28 Short Range (SR) Transceiver.
Note:
Although the OCI VM.Optimized3.Flex shapes provide three launch options to select networking modes, always select Option 3, Hardware-assisted (SR-IOV), for the SBC.
For PV mode (the default on all supported hypervisors), the following virtual network interface types are supported. You can use any make or model of NIC on the host as long as the hypervisor presents it to the VM as one of these vNIC types.
Virtual Network Interface | Driver | W/M |
---|---|---|
Emulated | e1000 | W |
KVM (PV) | virtio | W/M |
VMware (PV) | VMXNET3 | W/M |
Emulated NICs do not provide sufficient bandwidth/QoS and are suitable for management interfaces only.
- W - wancom (management) interface
- M - media interface
Note:
Accelerated media/signaling using SR-IOV (VF) or PCI-pt (DDA) modes is not currently supported for Hyper-V when running on Private Virtual Infrastructures.
CPU Core Resources for Private Virtual Infrastructures
Virtual SBCs for this release require an Intel Core i7 processor or higher, or a fully emulated equivalent, including 64-bit, SSSE3, and SSE4.2 support.
If the hypervisor uses CPU emulation (for example, qemu), Oracle recommends that you set the deployment to pass the full set of host CPU features to the VM.
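As a quick, illustrative check from inside a Linux guest, the following sketch reads /proc/cpuinfo and reports whether the required instruction-set features (64-bit long mode, SSSE3, SSE4.2) are visible to the VM. Missing flags usually indicate that the hypervisor is not passing the full host CPU feature set through. This is an assumption-laden convenience check, not an Oracle-provided diagnostic.

```python
# Quick check (illustrative): verify that the CPU presented to a Linux VM
# exposes the instruction-set features the vSBC requires (64-bit, SSSE3,
# SSE4.2), as visible in /proc/cpuinfo.
REQUIRED_FLAGS = {"lm": "64-bit (long mode)", "ssse3": "SSSE3", "sse4_2": "SSE4.2"}

def guest_cpu_flags(path: str = "/proc/cpuinfo") -> set:
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

if __name__ == "__main__":
    flags = guest_cpu_flags()
    for flag, name in REQUIRED_FLAGS.items():
        status = "present" if flag in flags else "MISSING"
        print(f"{name:22s} {status}")
```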
PCIe Transcoding Card Requirements
For virtual SBC (vSBC) deployments, you can install an Artesyn SharpMedia™ PCIe-8120 media processing accelerator with 4, 8, or 12 DSPs in a full-height, full-length PCI slot in the server chassis to provide high-density media transcoding.
- VMware and KVM are supported.
- PCIe pass-through mode is supported.
- Each vSBC can support two PCIe-8120 cards, and the server can support four PCIe-8120 cards.
- Each PCIe-8120 card supports only one vSBC instance.
- Do not configure transcoding cores for software-based transcoding when using a PCIe media card.
Session Router Recommendations
Oracle recommends the following resources when operating the Session Router (SR) or Enterprise Session Router (ESR), release S-Cz9.3.0, on Oracle servers.
Supported Platforms
The Session Router and Enterprise Session Router support the same virtual platforms as the SBC. See the Supported Private Virtual Infrastructures and Public Clouds section for these platform lists.
Recommendations for Oracle Server X8-2
Processor | Memory |
---|---|
2x 24-core Intel Platinum 8260 | 32GB DDR4 SDRAM |
Recommendations for Oracle Server X9-2
Processor | Memory |
---|---|
2x 32-core Intel Platinum 8358 | 64GB DDR4 SDRAM |