Requirements for Machines on Private Virtual Infrastructures
In private virtual infrastructures, you choose the compute resources required by your deployment. These include CPU cores, memory, disk size, and network interfaces. Deployment details, such as the use of distributed DoS protection, dictate resource utilization beyond the defaults.
Default VSBC Resources
The default compute resources for the ESBC image files are as follows:
- 4 CPU Cores
- 8 GB RAM
- 20 GB hard disk (pre-formatted)
- 8 interfaces as follows:
- 1 for management (wancom0)
- 2 for HA (wancom1 and wancom2)
- 1 spare
- 4 for media
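As a point of reference, the following sketch shows one way these defaults might be requested when deploying the image on a KVM host with virt-install. The VM name, image path, and bridge names are placeholders for this sketch, not values defined in this guide; substitute values from your own environment.

```
# Default-profile sizing on KVM (illustrative; names and paths are placeholders).
# Interface order: wancom0 (management), wancom1/wancom2 (HA), spare, 4 x media.
virt-install \
  --name esbc01 \
  --vcpus 4 \
  --memory 8192 \
  --disk path=/var/lib/libvirt/images/esbc01.qcow2,size=20 \
  --import \
  --osinfo detect=on,require=off \
  --network bridge=br-mgmt,model=virtio \
  --network bridge=br-ha1,model=virtio \
  --network bridge=br-ha2,model=virtio \
  --network bridge=br-spare,model=virtio \
  --network bridge=br-media0,model=virtio \
  --network bridge=br-media1,model=virtio \
  --network bridge=br-media2,model=virtio \
  --network bridge=br-media3,model=virtio \
  --graphics none --noautoconsole
```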
Small Footprint VSBC
The minimum resources for a small footprint ESBC, which performs SIP trunking to a PBX with low traffic volume and does not support transcoding or encryption, are as follows:
- 2 CPU Cores
- 4 GB RAM
- 20 GB hard disk (pre-formatted)
- 2 interfaces as follows:
- 1 for management (wancom0)
- 1 for media
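If you deploy the small footprint profile on KVM, the same approach applies with reduced sizing; as in the earlier sketch, the VM name, image path, and bridge names below are placeholders.

```
# Small-footprint sizing (illustrative; names and paths are placeholders).
# Interface order: wancom0 (management), then 1 x media.
virt-install \
  --name esbc-small \
  --vcpus 2 \
  --memory 4096 \
  --disk path=/var/lib/libvirt/images/esbc-small.qcow2,size=20 \
  --import \
  --osinfo detect=on,require=off \
  --network bridge=br-mgmt,model=virtio \
  --network bridge=br-media0,model=virtio \
  --graphics none --noautoconsole
```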
The Small Footprint VSBC does not support the following:
- IMS-AKA Feature
- Transcoding
- IP-Sec Tunnels
- MSRP
Interface Host Mode for Private Virtual Infrastructures
The ESBC VNF supports interface architectures using Hardware Virtualization Mode - Paravirtualized (HVM-PV):
- ESXi - No manual configuration required.
- KVM - HVM mode is enabled by default. Specifying PV as the interface type results in HVM plus PV.
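For example, on a KVM host the PV model is simply declared on the vNIC; no in-guest configuration is required. The fragment below is a sketch only, and the bridge and domain names are assumptions.

```
# Declare a virtio (PV) vNIC and attach it to an existing domain.
# Bridge and domain names are placeholders.
cat > virtio-nic.xml <<'EOF'
<interface type='bridge'>
  <source bridge='br-media0'/>
  <model type='virtio'/>
</interface>
EOF
virsh attach-device esbc01 virtio-nic.xml --config
```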
Supported Interface Input-Output Modes for Private Virtual Infrastructures
- Para-virtualized
- SR-IOV
- PCI Passthrough
- Emulated - Supported for management interfaces only.
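As an illustration of the accelerated modes on KVM, an SR-IOV virtual function can be handed to the VM as a hostdev-type interface. The PCI address and domain name below are placeholders; use the values reported by your own host.

```
# Identify Ethernet functions on the host, then attach one VF to the VM.
lspci -nn | grep -i ethernet

cat > sriov-vf.xml <<'EOF'
<interface type='hostdev' managed='yes'>
  <source>
    <address type='pci' domain='0x0000' bus='0x3b' slot='0x10' function='0x0'/>
  </source>
</interface>
EOF
virsh attach-device esbc01 sriov-vf.xml --config
```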
Supported Ethernet Controller, Driver, and Traffic Type based on Input-Output Modes
Note:
Virtual SBCs do not support mixing media interfaces of different NIC models. Media interfaces are supported only when all of them are the same model, belong to the same Ethernet Controller, and have the same PCI Vendor ID and Device ID.

For KVM and VMware, accelerated media/signaling using SR-IOV and PCI-pt modes is supported for the following card types.
Ethernet Controller | Driver | SR-IOV | PCI Passthrough |
---|---|---|---|
Intel 82599 / X520 / X540 | ixgbe | M | M |
Intel i210 / i350 | igb | M | M |
Intel X710 / XL710 / XXV710 | i40e, i40en¹, iavf² | M | M |
Mellanox Connect X-4 | mlx5 | M | M |
Footnote 1: The i40en driver is not supported on KVM.
Footnote 2: The iavf driver is supported in SR-IOV network mode.
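To confirm that the host NICs you intend to use for media satisfy the same-model requirement in the note above, you can compare their drivers and PCI vendor/device IDs on the host. The interface names below are examples only.

```
# Each media NIC should report the same driver and the same [vendor:device] ID.
for nic in ens2f0 ens2f1 ens3f0 ens3f1; do
  echo "== $nic =="
  ethtool -i "$nic" | grep -E '^(driver|bus-info)'
done
# Vendor and device IDs appear in brackets, e.g. [8086:1572] for an X710 port.
lspci -nn | grep -i ethernet
```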
For PV mode (default, all supported hypervisors), the following virtual network interface types are supported. You can use any make/model NIC card on the host as long as the hypervisor presents it to the VM as one of these vNIC types.
Virtual Network Interface | Driver | Traffic Type (W/M) |
---|---|---|
Emulated | e1000 | W |
KVM (PV) | virtio | W/M |
Hyper-V (PV) | NetVSC | M |
VMware (PV) | VMXNET3 | W/M |
Emulated NICs do not provide sufficient bandwidth and QoS, and are suitable for use as management interfaces only.
- W - wancom (management) interface
- M - media interface
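On a KVM host, you can confirm which vNIC model each interface presents to the VM directly from the domain definition; the domain name below is an assumption for this sketch.

```
# List each interface and the vNIC model it exposes (virtio, e1000, and so on).
virsh dumpxml esbc01 | grep -E '<interface|<model'
```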
Note:
Accelerated media/signaling using SR-IOV (VF) or PCI-pt (DDA) modes are not currently supported for Hyper-V when running on Private Virtual Infrastructures.

CPU Core Resources for Private Virtual Infrastructures
The ESBC S-Cz9.0.0 VNF requires an Intel Core i7 processor or higher, or a fully emulated equivalent, including 64-bit, SSSE3, and SSE4.2 support.
If the hypervisor uses CPU emulation (for example, qemu), Oracle recommends that you set the deployment to pass the full set of host CPU features to the VM.
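For KVM/QEMU deployments, the sketch below shows one way to verify that the required instruction sets are present and to request host CPU passthrough. The virt-install flag and the libvirt XML element shown are equivalents; apply whichever fits your tooling.

```
# Both flags should appear in the output.
grep -m1 '^flags' /proc/cpuinfo | grep -oE 'ssse3|sse4_2' | sort -u

# Pass the full host CPU feature set to the VM:
#   virt-install ... --cpu host-passthrough ...
# or, in the libvirt domain XML:
#   <cpu mode='host-passthrough'/>
```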