Platform Support
The S-Cz8.2.0 software supports the following platforms.
Acme Packet Platforms
- Acme Packet 1100
- Acme Packet 3900
- Acme Packet 4600
- Acme Packet 6300
- Acme Packet 6350
Virtual Platforms
Qualified Hypervisors
Oracle qualified the following components for deploying version S-Cz8.2.0 as a Virtual Network Function.
- XEN 4.4: Specifically using Oracle Virtual Machine (OVM) 3.4.2
- KVM: Using the version embedded in Oracle Linux 7 with RHCK 3.10
Note the use of the following KVM component versions (a version-check sketch follows the hypervisor list):
- QEMU
  - 2.9.0-16.el7_4.13.1 for qemu-img-ev, qemu-kvm-ev
  - 3.9.0-14.el7_5.2 for libvirt-daemon-driver-qemu
- LIBVIRT
  - 3.9.0-14.el7_5.2 for all components except libvirt-python
  - 3.2.0-3.el7_4.1 for libvirt-python
- VMware: Using ESXi 6.5 U1 on VMware vCenter Server
- Hyper-V: Windows Server 2012 R2 (Generation 1)
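As an illustrative aid, and not part of the qualification statement above, the following sketch compares the QEMU and libvirt package versions installed on an Oracle Linux 7 KVM host against the qualified versions listed above. It assumes an RPM-based host and uses only the standard rpm query command; adjust the package list to match your installation.

```python
#!/usr/bin/env python3
"""Sketch: compare installed KVM component versions on an Oracle Linux 7 host
against the qualified versions listed above. Assumes an RPM-based system."""
import subprocess

# Qualified versions copied from the list above.
EXPECTED = {
    "qemu-img-ev": "2.9.0-16.el7_4.13.1",
    "qemu-kvm-ev": "2.9.0-16.el7_4.13.1",
    "libvirt-daemon-driver-qemu": "3.9.0-14.el7_5.2",
    "libvirt-python": "3.2.0-3.el7_4.1",
}

for pkg, qualified in EXPECTED.items():
    try:
        installed = subprocess.check_output(
            ["rpm", "-q", "--queryformat", "%{VERSION}-%{RELEASE}", pkg],
            text=True,
        ).strip()
    except subprocess.CalledProcessError:
        installed = "not installed"
    status = "OK" if installed == qualified else "CHECK"
    print(f"{pkg}: installed={installed} qualified={qualified} [{status}]")
```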
Supported Cloud Computing Platforms
- OpenStack (including support for Heat template versions Newton and Pike)
Note:
For information about deploying Heat, see the README in the TAR file that contains the Heat templates.
Public Cloud Support
- Microsoft Azure: The E-SBC can run in stand-alone mode in Microsoft Azure with version S-Cz8.2.0p3 and later. Customers must contact Oracle support prior to using this platform for important information and approval.
Supported Interface Input-Output Modes
- Para-virtualized
- SR-IOV
- PCI Passthrough
Supported Ethernet Controller, Driver, and Input-Output Modes
The following table lists the supported Ethernet controllers (chipset families) and their supported drivers. Refer to the hardware specifications of the host that runs your hypervisor to determine the Ethernet controller in use; a driver-check sketch follows the table.
| Ethernet Controller | Driver | PV | SR-IOV | PCI Passthrough |
|---|---|---|---|---|
| Intel 82599 / X520 / X540 | ixgbe | W, M | M | M |
| Intel i210 / i350 | igb | W, M | M | M |
| Intel X710 / XL710 | i40e | W, M | M | M |
| Broadcom (QLogic Everest) | bnx2x | W, M | NA | NA |
| Broadcom BCM57417 | bnxt | W, M | NA | NA |
| Mellanox ConnectX-4 | mlx5 | NA | M | M |
| Mellanox ConnectX-5 | mlx5 | NA | M | M |
- W - wancom interface
- M - media interface
- NA - not applicable
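As a convenience, and not something the release notes themselves provide, the sketch below reports which kernel driver backs each network interface on a Linux host so that you can match it against the table above. It assumes the standard sysfs layout (/sys/class/net/<iface>/device/driver); the driver names in the table may appear with suffixes such as mlx5_core or bnxt_en, so the check matches on prefixes.

```python
#!/usr/bin/env python3
"""Sketch: report the kernel driver behind each network interface on a Linux
host, for comparison with the supported-driver table above. Assumes the
standard sysfs layout under /sys/class/net."""
import os

# Driver-name prefixes taken from the table above.
SUPPORTED = {"ixgbe", "igb", "i40e", "bnx2x", "bnxt", "mlx5"}

for iface in sorted(os.listdir("/sys/class/net")):
    link = f"/sys/class/net/{iface}/device/driver"
    if not os.path.islink(link):
        continue  # virtual interfaces (lo, bridges) have no backing driver
    driver = os.path.basename(os.readlink(link))
    note = "listed above" if any(driver.startswith(d) for d in SUPPORTED) else "not listed above"
    print(f"{iface}: {driver} ({note})")
```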
Virtual Machine Platform Resources
A Virtual Network Function (VNF) requires the CPU cores, memory, disk size, and network interfaces specified in this section for operation. Deployment details, such as the use of distributed DoS protection, can push resource requirements beyond these defaults.
Default VNF Resources
VM resource configuration defaults to the following (an example flavor sketch follows this list):
- 4 CPU Cores
- 8 GB RAM
- 20 GB hard disk (pre-formatted)
- 8 interfaces, as follows:
  - 1 for management (wancom0)
  - 2 for HA (wancom1 and wancom2)
  - 1 spare
  - 4 for media
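If you deploy on OpenStack, the defaults above map naturally onto a Nova flavor. The following sketch assumes the openstacksdk package and a clouds.yaml entry; the cloud and flavor names are hypothetical. It creates a flavor with the default vCPU, RAM, and disk values; the eight interfaces are attached as Neutron ports when the instance is launched, not as part of the flavor.

```python
#!/usr/bin/env python3
"""Sketch: create a Nova flavor matching the default VNF resource profile.
Assumes the openstacksdk package and a clouds.yaml entry named "mycloud"
(hypothetical); sizes come from the defaults listed above."""
import openstack

conn = openstack.connect(cloud="mycloud")  # hypothetical cloud name

flavor = conn.compute.create_flavor(
    name="esbc-vnf-default",  # hypothetical flavor name
    vcpus=4,                  # 4 CPU cores
    ram=8 * 1024,             # 8 GB RAM, expressed in MB
    disk=20,                  # 20 GB disk
)
print("Created flavor:", flavor.name)
```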
Interface Host Mode
The E-SBC S-Cz8.2.0 VNF supports interface architectures using Hardware Virtualization Mode - Paravirtualized (HVM-PV):
- ESXi - No manual configuration required.
- KVM - HVM mode is enabled by default. Specifying PV as the interface type results in HVM plus PV.
- XEN (OVM) - You must configure HVM+PV mode.
Note:
When deploying the E-SBC over VMware and using PV interface mode, the number of forwarding cores you may configure is limited to 2, 4, or 8 cores.
CPU Core Resources
The E-SBC S-Cz8.2.0 VNF requires an Intel Core i7 processor or higher, or a fully emulated equivalent that includes 64-bit, SSSE3, and SSE4.2 support.
If the hypervisor uses CPU emulation (for example, QEMU), Oracle recommends that you configure the deployment to pass the full set of host CPU features to the VM.
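On KVM with libvirt, passing the full host feature set typically corresponds to the host-passthrough CPU mode. As a quick sanity check, and purely as an illustration rather than an Oracle-provided tool, the sketch below reads /proc/cpuinfo on a Linux host or guest and reports whether the required 64-bit (lm), SSSE3, and SSE4.2 flags are exposed.

```python
#!/usr/bin/env python3
"""Sketch: confirm that a Linux host or guest exposes the CPU features the
VNF requires. Assumes the standard /proc/cpuinfo flag names: lm (64-bit),
ssse3, and sse4_2."""

REQUIRED_FLAGS = {"lm", "ssse3", "sse4_2"}

def cpu_flags(path="/proc/cpuinfo"):
    """Return the set of CPU feature flags reported by the kernel."""
    with open(path) as cpuinfo:
        for line in cpuinfo:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

missing = REQUIRED_FLAGS - cpu_flags()
if missing:
    print("Missing CPU features:", ", ".join(sorted(missing)))
else:
    print("CPU exposes 64-bit, SSSE3, and SSE4.2 support.")
```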
PCIe Transcoding Card Requirements
For virtual SBC deployments, you can install an Artesyn SharpMedia™ PCIe-8120 media processing accelerator with either 4, 8, or 12 DSPs in the server chassis, in a full-height, full-length PCI slot, to provide high-density media transcoding.
- VMware and KVM are supported
- PCIe pass-through mode is supported
- Each vSBC can support 2 PCIe-8120 cards, and the server can support 4 PCIe-8120 cards
- Each PCIe-8120 card can be dedicated to only one vSBC instance
- Transcoding cores for software-based transcoding may not be configured in conjunction with a PCIe media card