Supported Private Virtual Infrastructures and Public Clouds

You can run the SBC on the following private virtual infrastructures, which include individual hypervisors as well as private clouds based on architectures such as VMware or OpenStack.

Note:

The SBC does not support automatic, dynamic disk resizing.

Note:

Virtual SBCs do not support mixing media interfaces of different NIC models. Media interfaces are supported only when all media interfaces are of the same model, belong to the same Ethernet controller, have the same PCI vendor ID and device ID, and share the same network I/O mode (SR-IOV, PV, or PCI-PT).
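This homogeneity requirement can be expressed as a simple pre-flight check. The following is a minimal sketch, assuming hypothetical NIC descriptor fields (model, PCI IDs, I/O mode); it is not an SBC or hypervisor API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MediaNic:
    # Illustrative fields only; not an actual SBC or hypervisor API.
    model: str
    pci_vendor_id: str
    pci_device_id: str
    io_mode: str  # "SR-IOV", "PV", or "PCI-PT"

def media_nics_supported(nics: list[MediaNic]) -> bool:
    """Return True when all media interfaces share the same NIC model,
    PCI vendor/device IDs, and network I/O mode."""
    if not nics:
        return True
    first = nics[0]
    return all(
        (n.model, n.pci_vendor_id, n.pci_device_id, n.io_mode)
        == (first.model, first.pci_vendor_id, first.pci_device_id, first.io_mode)
        for n in nics
    )
```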

Supported Hypervisors for Private Virtual Infrastructures

Oracle supports installation of the SBC on the following hypervisors:

  • KVM (the following versions or later)
    • Linux kernel (4.1.12-124)
    • QEMU (2.9.0_16)
    • libvirt (3.9.0_14)
  • VMware: vSphere ESXi (6.5 or later)
  • Microsoft Hyper-V: Microsoft Server (2012 R2 or later)

Compatibility with OpenStack Private Virtual Infrastructures

Oracle distributes Heat templates for the Newton and Pike versions of OpenStack. Download the source, nnSCZ1000p1_HOT.tar.gz, and follow the OpenStack Heat Template instructions.

The nnSCZ1000p1_HOT.tar.gz file contains two files:

  • nnSCZ1000p1_HOT_pike.tar
  • nnSCZ1000p1_HOT_newton.tar

Use the Newton template when running either the Newton or Ocata versions of OpenStack. Use the Pike template when running Pike or a later version of OpenStack.
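Because OpenStack release names are alphabetical and therefore sort chronologically (Newton, Ocata, Pike, Queens, ...), the template choice above can be sketched as a string comparison. This is an illustrative helper, not part of any Oracle tooling:

```python
def heat_template_for(release: str) -> str:
    """Pick the Heat template archive for a given OpenStack release name.
    Release names sort alphabetically in chronological order, so a
    case-insensitive comparison against "pike" is sufficient. Assumes
    the release is Newton or later, per the supported versions above."""
    r = release.strip().lower()
    if r < "pike":
        return "nnSCZ1000p1_HOT_newton.tar"  # Newton and Ocata
    return "nnSCZ1000p1_HOT_pike.tar"        # Pike and later
```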

Supported Public Cloud Platforms

You can run the SBC on the following public cloud platforms.

  • Oracle Cloud Infrastructure (OCI)

After deployment, you can change the configuration of your machine shape, for example, by adding disks and interfaces. The OCI cloud shapes and options validated in this release are listed in the table below.

Shape | OCPUs/vCPUs | vNICs | Tx/Rx Queues | Max Forwarding Cores | DoS Protection | Memory (GB)
VM.Optimized3.Flex-Small | 4/8 | 4 | 8 | 6 (footnote 1) | Y | 16
VM.Optimized3.Flex-Medium | 8/16 | 8 | 15 | 14 (footnote 2) | Y | 32
VM.Optimized3.Flex-Large | 16/32 | 16 | 15 | 15 | Y | 64

    Footnote 1: The maximum is 5 when DoS protection is enabled.

    Footnote 2: The maximum is 13 when DoS protection is enabled.

    Networking using image mode [SR-IOV mode - Native] is supported on OCI. PV and Emulated modes are not currently supported.

    Note:

    Although the VM.Optimized3.Flex OCI shape is flexible, allowing you to choose from 1-18 OCPUs and 1-256GB of memory, the virtual SBC requires a minimum of 4 OCPUs and 16GB of memory per instance on these Flex shapes.
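The Flex-shape minimums above can be captured as a small validation helper. A minimal sketch under the stated assumptions (4 OCPUs, 16 GB memory); the function name is illustrative and not part of any OCI SDK:

```python
# Minimum resources the virtual SBC requires on VM.Optimized3.Flex shapes,
# per the note above. (Illustrative helper; not part of any OCI SDK.)
MIN_OCPUS = 4
MIN_MEMORY_GB = 16

def flex_shape_ok(ocpus: int, memory_gb: int) -> bool:
    """Check a proposed Flex shape configuration against the SBC minimums."""
    return ocpus >= MIN_OCPUS and memory_gb >= MIN_MEMORY_GB
```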
  • Amazon Web Services (EC2)
    This table lists the AWS instance sizes that apply to the SBC when the use-sibling-core-datapath attribute is disabled and DoS protection is enabled.

    Note:

    The Subscriber-Aware Load Balancer is not supported on any c4 shape.
    Instance Type | vNICs | RAM (GB) | vCPUs | Max Forwarding Cores (with DoS Protection) | DoS Protection
    c4.xlarge | 4 | 7.5 | 4 | 1 (footnote 3) | N (footnote 4)
    c4.2xlarge | 4 | 15 | 8 | 2 | Y
    c4.4xlarge | 8 | 30 | 16 | 6 | Y
    c5.xlarge | 4 | 8 | 4 | 1 | N
    c5.2xlarge | 4 | 16 | 8 | 2 | Y
    c5.4xlarge | 8 | 32 | 16 | 6 | Y
    c5n.xlarge | 4 | 10.5 | 4 | 1 | N
    c5n.2xlarge | 4 | 21 | 8 | 2 | Y
    c5n.4xlarge | 8 | 42 | 16 | 6 | Y

    Footnote 3: 2 forwarding cores are available if use-sibling-core-datapath is enabled and no DoS core is configured.

    Footnote 4: Enable use-sibling-core-datapath to support DoS protection. If a DoS core is configured, only 1 forwarding core can be used.

    For the c4.xlarge instance, you can have:
    • 2 forwarding cores, if use-sibling-core-datapath is enabled and no DoS core is configured
    • 1 forwarding core, if use-sibling-core-datapath is enabled and a DoS core is configured
    • 1 forwarding core and no DoS core, if use-sibling-core-datapath is disabled
    For the c4.2xlarge instance, you can have:
    • 6 forwarding cores, if use-sibling-core-datapath is enabled and no DoS core is configured
    • 5 forwarding cores, if use-sibling-core-datapath is enabled and a DoS core is configured
    • 3 forwarding cores, if use-sibling-core-datapath is disabled and no DoS core is configured
    • 2 forwarding cores, if use-sibling-core-datapath is disabled and a DoS core is configured
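The bullet lists above can be transcribed into a small lookup table. This is an illustrative sketch only; unsupported combinations (for example, c4.xlarge with use-sibling-core-datapath disabled and a DoS core configured) are simply absent and raise KeyError:

```python
# (instance, sibling_datapath_enabled, dos_core_configured) -> forwarding cores,
# transcribed from the c4.xlarge and c4.2xlarge bullet lists above.
FORWARDING_CORES = {
    ("c4.xlarge", True, False): 2,
    ("c4.xlarge", True, True): 1,
    ("c4.xlarge", False, False): 1,
    ("c4.2xlarge", True, False): 6,
    ("c4.2xlarge", True, True): 5,
    ("c4.2xlarge", False, False): 3,
    ("c4.2xlarge", False, True): 2,
}

def max_forwarding_cores(instance: str, sibling: bool, dos_core: bool) -> int:
    """Look up the maximum forwarding cores for a supported combination."""
    return FORWARDING_CORES[(instance, sibling, dos_core)]
```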

    Driver support details:

    • ENA is supported on the C5/C5n families only.

    Note:

    C5 instances use the Nitro hypervisor.
  • Microsoft Azure

    The following table lists the Azure instance sizes that you can use for the SBC.

    Size (Fs series) | vNICs | RAM (GB) | vCPUs | DoS Protection
    Standard_F4s | 4 | 8 | 4 | Y
    Standard_F8s | 8 | 16 | 8 | Y
    Standard_F16s | 8 | 32 | 16 | Y

    Note:

    The Subscriber-Aware Load Balancer is not supported on any Standard_F(x)s shape.
    Size | vNICs | RAM (GB) | vCPUs | DoS Protection
    Standard_F8s_v2 | 4 | 16 | 8 | Y
    Standard_F16s_v2 | 4 | 32 | 16 | Y

    Note:

    The Subscriber-Aware Load Balancer is not supported on any Standard_F(x)s_v2 shape.

    Size types define architectural differences and cannot be changed after deployment. During deployment, you choose a size for the SBC from the pre-packaged Azure sizes. After deployment, you can change the details of a size, for example, to add disks or interfaces. Azure presents multiple size options for multiple size types.

    For higher performance and capacity on media interfaces, use the Azure CLI to create a network interface with accelerated networking. You can also use the Azure GUI to enable accelerated networking.

    Note:

    The SBC does not support Data Disks deployed over any Azure instance sizes.

    Note:

    Azure v2 instances have hyperthreading enabled.
  • Google Cloud Platform

    The following table lists the GCP instance sizes that you can use for the SBC.

    Table 1-1 GCP Machine Types

    Machine Type | vCPUs | Memory (GB) | vNICs | Egress Bandwidth (Gbps) | Max Tx/Rx Queues per VM (footnote 5)
    n2-standard-4 | 4 | 16 | 4 | 10 | 4
    n2-standard-8 | 8 | 32 | 8 | 16 | 8
    n2-standard-16 | 16 | 64 | 8 | 32 | 16

    Footnote 5: With virtio or a custom driver, the VM is allocated one queue per vCPU, with a minimum of 1 and a maximum of 32 queues. Each NIC is then assigned a fixed number of queues, calculated by dividing the number of queues assigned to the VM by the number of NICs and rounding down to the nearest whole number. For example, each NIC has five queues if a VM has 16 vCPUs and three NICs. You can also assign a custom queue count: to create a VM with specific queue counts per NIC, use the API or Terraform; the GCP console does not yet support this.
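The default allocation described in footnote 5 can be sketched as:

```python
def queues_per_nic(vcpus: int, nics: int) -> int:
    """Per-NIC queue count GCP assigns by default with virtio:
    the VM gets one queue per vCPU (clamped to the 1..32 range),
    divided evenly across NICs and rounded down."""
    vm_queues = max(1, min(vcpus, 32))
    return vm_queues // nics
```

For the example in the footnote, 16 vCPUs across three NICs yields five queues per NIC.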

    Use the n2-standard-4 machine type if you're deploying an SBC that requires one management interface and only two or three media interfaces. Otherwise, use the n2-standard-8 or n2-standard-16 machine types for an SBC that requires one management interface and four media interfaces. Also use the n2-standard-4, n2-standard-8, or n2-standard-16 machine types if deploying the SBC in HA mode.

    Before deploying your SBC, check the Available regions and zones to confirm that your region and zone support N2 shapes.

    On GCP, the SBC must use the virtio network interface card. The SBC does not work with the GVNIC.

Platform Hyperthreading Support

Some platforms support SMT and enable it by default; others support SMT but don't enable it by default; others support SMT only for certain machine shapes; and others don't support SMT. Check your platform documentation to determine its level of SMT support.

DPDK Reference

The SBC relies on DPDK for packet processing and related functions. See the Tested Platforms section of the DPDK release notes, available at https://doc.dpdk.org. Use that information in conjunction with this Release Notes document to establish a baseline of:

  • CPU
  • Host OS and version
  • NIC driver and version
  • NIC firmware version

Note:

Oracle only qualifies a specific subset of platforms. Not all the hardware listed as supported by DPDK is enabled and supported in this software.

The DPDK version used in this release is:

  • 23.11