KVM Network Configuration

To configure and manage KVM virtual networks, see these topics:

Overview: Virtual Networking

Networking within a KVM environment is achieved by creating virtual Network Interface Cards (vNICs) on the KVM guest. vNICs are mapped to the host system's own network infrastructure in any of the following ways:

  • Connecting to the virtual network running on the host.
  • Directly using a physical interface on the host.
  • Using Single Root I/O Virtualization (SR-IOV) capabilities on a PCIe device.
  • Using a network bridge that enables a vNIC to share a physical network interface on the host.

vNICs are often defined when the KVM guest is first created; however, the libvirt API can be used to add or remove vNICs as required. Because libvirt supports hot plugging, these actions can be performed on a running virtual machine without significant interruption, as shown in the example that follows.
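
For example, a minimal hot-plug sketch, assuming a running guest named guest1 attached to the default libvirt virtual network (both names are placeholders; the MAC address passed to detach-interface must match the vNIC being removed):

sudo virsh attach-interface --domain guest1 --type network --source default --live --config
sudo virsh domiflist guest1
sudo virsh detach-interface --domain guest1 --type network --mac <vNIC-MAC-address> --live --config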

Virtual Network Types:

A brief summary of the different types of virtual networks you can set up within a KVM environment is as follows:

  • Default Virtual Networking With NAT – KVM networking can be complex because it involves: (1) physical components directly configured on the host system, (2) KVM configuration within libvirt, and (3) network configuration within the running guest OS. Therefore, for many development and testing environments, it's often enough to configure vNICs to use the virtual network provided by libvirt. By default, the libvirt virtual network uses Network Address Translation (NAT) to enable KVM guests to gain access to external network resources. This approach is easier to configure and typically gives guests network access similar to what is already configured on the host system (see the example interface definition after this list).
  • Bridged Network and Mapped Virtual Interfaces – In cases where VMs might need to belong to specific subnetworks, a bridged network can be used. Network bridges use virtual interfaces that are mapped to and share a physical interface on the host. In this approach, network traffic from a KVM behaves as if it's coming from an independent system on the same physical network as the host system. Depending on the tools used, some manual changes to the host network configuration might be required before configuring it for KVM use.
  • Host Physical Network Interface – Networking for VMs can also be configured to directly use a physical interface on the host system. This configuration provides network behavior similar to a bridged network interface, in that the vNIC behaves as if it's connected directly to the physical network. Direct connections typically use the macvtap driver, which extends a physical network interface to provide a range of functions, including a virtual bridge mode that behaves similarly to a bridged network but is easier to configure and maintain and more likely to offer improved performance.
  • Direct and Shared PCIe Passthrough – Another KVM networking method is to configure PCIe passthrough, where a PCIe interface supports the KVM network functionality. When using this method, administrators can choose to configure direct or shared PCIe passthrough networking. Direct PCIe passthrough allocates exclusive use of a PCIe device on the host system to a single KVM guest. Shared PCIe passthrough allocates shared use of an SR-IOV (Single Root I/O Virtualization) capable PCIe device to multiple KVM guests. Both of these configuration methods require some hardware setup and configuration on the host system before attaching the PCIe device to KVM guests for network use.
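
As an illustration of the default NAT option in the first item, the following is a minimal sketch of the interface definition such a vNIC uses in the guest's XML configuration, placed within the <devices> section (the MAC address is optional and is generated by libvirt when omitted):

<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
</interface>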

KVM Tools for Configuring Virtual Network

For network configurations that are likely to be more complex, we recommend using Oracle Linux Virtualization Manager. The CLI networking configurations and operations described in this guide cover the most basic KVM network deployment scenarios.

For details about using Oracle Linux Virtualization Manager for more complex network configurations, see the Oracle Linux Virtualization Manager documentation.

Command Usage: Manage Virtual Network

To manage virtual networks in a KVM environment, use the virsh net-* commands. For example:

  • virsh net-list --all – List all virtual networks configured on a host system.
    virsh net-list --all
    Output example:
     Name                 State      Autostart     Persistent
    ----------------------------------------------------------
     default              active     yes           yes      
  • virsh net-info – Display information about a network.
    virsh net-info default
    Output example:
    Name:           default
    UUID:           16318035-eed4-45b6-99f8-02f1ed0661d9
    Active:         yes
    Persistent:     yes
    Autostart:      yes
    Bridge:         virbr0
    Where:
    • Name = assigned network name.
    • UUID = assigned network identifier.
    • Bridge = the virtual network bridge (virbr0 in this example).

      Note:

      virbr0 should not be confused with traditional bridge networking. In this case, the virtual bridge isn't connected to a physical interface. The virtual network bridge relies on NAT and IP forwarding to connect VMs to the physical network.
  • virsh net-dumpxml – View the full configuration of a network.
    virsh net-dumpxml default
    Output example:
    <network>
      <name>default</name>
      <uuid>16318035-eed4-45b6-99f8-02f1ed0661d9</uuid>
      <forward mode='nat'>
        <nat>
          <port start='1024' end='65535'/>
        </nat>
      </forward>
      <bridge name='virbr0' stp='on' delay='0'/>
      <mac address='52:54:00:82:75:1d'/>
      <ip address='192.168.122.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.122.2' end='192.168.122.254'/>
        </dhcp>
      </ip>
    </network>

    In this example, the virtual network uses a network bridge, called virbr0, which should not be confused with traditional bridged networking. The virtual bridge isn't connected to a physical interface and relies on NAT and IP forwarding to connect VMs to the physical network beyond. libvirt also handles IP address assignment for VMs using DHCP; the default network typically uses the 192.168.122.0/24 subnet, with the host bridge at 192.168.122.1.

  • virsh net-start – Start an inactive, previously defined virtual network.
     sudo virsh net-start [--network] <network-identifier>
    Where network-identifier is either the network name or the network UUID.
  • virsh net-destroy – Stop an active network and deallocate all resources used by it, for example, stopping the appropriate dnsmasq process and releasing the bridge. See the usage example after this list.
    sudo virsh net-destroy [--network] <network-identifier>
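
For example, a brief sketch using the default network: stop it, start it again, and then list the DHCP leases that libvirt has handed out to guests by using virsh net-dhcp-leases:

sudo virsh net-destroy default
sudo virsh net-start default
sudo virsh net-dhcp-leases default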

For a more complete list of libvirt's network management commands, see the section 'Basic Command-line Usage for Virtual Networks' on the libvirt Virtual Networking site (https://wiki.libvirt.org/VirtualNetworking.html#virsh-xml-commands).

Command Usage: Add or Remove vNIC

You can use the virsh attach-interface command to add a new vNIC to an existing KVM guest. This command can be used to create a vNIC that uses any of the networking types available in KVM.

virsh attach-interface --domain guest --type network --source default --config

You must specify the following parameters with this command:

  • --domain – The KVM name, ID, or UUID.
  • --type – The type of networking that the vNIC uses.

    Available options include:

    • network for a libvirt virtual network using NAT

    • bridge for a bridge device on the host

    • direct for a direct mapping to one of the host's network interfaces or bridges

    • hostdev for a passthrough connection using a PCI device on the host.

  • --source – The source to be used for the network type specified.

    These values vary depending on the type:

    • For a network, specify the name of the virtual network.
    • For a bridge, specify the name of the bridge device.
    • For a direct connection, specify the name of the host's interface or bridge.
    • For a hostdev connection, specify the PCI address of the host's interface formatted as domain:bus:slot.function.
  • --config – Changes the stored XML configuration for the guest VM and takes effect when the guest is started.
  • --live – The guest VM must be running and the change takes place immediately, thus hot plugging the vNIC.
  • --current – Affects the current state of the guest VM: the running configuration if the guest is running, otherwise the stored configuration.

More options are available to further customize the interface, such as setting the MAC address or configuring the target macvtap device when using some other network types. You can also use the --model option to change the model of network interface that's presented to the VM. By default, the virtio model is used, but other models, such as e1000 or rtl8139, are available. Run virsh help attach-interface for more information, or see the virsh(1) manual page. A combined example follows.
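
For example, a hedged sketch that hot plugs a bridge-backed vNIC while also setting its MAC address and model (the guest name, bridge name, and MAC address are placeholder values):

sudo virsh attach-interface --domain guest --type bridge --source vmbridge1 --mac 52:54:00:aa:bb:cc --model virtio --live --config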

Remove a vNIC from a VM using the virsh detach-interface command. For example:

virsh detach-interface --domain guest --type network --mac 52:54:00:41:6a:65 --config

The domain or VM name and type are required parameters. If the VM has more than one vNIC attached, you must specify the mac parameter to provide the MAC address of the vNIC that you want to remove. You can obtain this value by listing the vNICs that are attached to a VM. For example, you can run:

virsh domiflist guest

Output similar to the following is displayed:

Interface  Type       Source     Model       MAC
-------------------------------------------------------
vnet0      network    default    virtio      52:54:00:8c:d2:44
vnet1      network    default    virtio      52:54:00:41:6a:65

Bridged Networking: Setup

Using the CLI, administrators can set up a KVM bridged network with direct Virtual Network Interface Cards (vNICs). For more details, see these topics:

Setup Guidelines: Bridged Network

Traditional network bridging using Linux bridges is configurable by using the virsh iface-bridge command. With this command, administrators can create a bridge on a host system and add a physical interface to it. For example, the following command syntax creates a bridge named vmbridge1 with the Ethernet port named enp0s31f6:

virsh iface-bridge vmbridge1 enp0s31f6

After establishing a bridged network interface, administrators can then attach it to a VM by using the virsh attach-interface command.
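
For example, a minimal sketch that attaches the vmbridge1 bridge created above to a guest named guest1 (a placeholder name), persisting the change in the stored configuration:

sudo virsh attach-interface --domain guest1 --type bridge --source vmbridge1 --config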

Traditional Linux Bridge Networking Complexities

Consider the following when using traditional Linux bridged networking for KVM guests:

  • Setting up a software bridge on a wireless interface is considered complex because of the limited number of addresses available in 802.11 frames.
  • The complexity of the code to handle software bridges can result in reduced throughput, increased latency, and additional configuration complexity.
Bridge Networking Advantages Using MacVTap Driver

The main advantage of a bridged network is that it lets the host system communicate across the network stack directly with any guests configured to use bridged networking.

Most of the issues related to using traditional Linux bridges can be easily overcome by using the macvtap driver which simplifies virtualized bridge networking. For most bridged network configurations in KVM, this is the preferred approach because it offers better performance and it's easier to configure. The macvtap driver is used when the network type in the KVM XML configuration file is set to direct. For example:
<interface type="direct">
  <mac address="#:##:##:##:#:##"/>   
  <source dev="kvm-host-network-interface- name" mode="bridge"/> 
  <model type="virtio"/> 
  <driver name="vhost"/> 
</interface>
Where:
  • mac address="##:##:##:##:##:##" – The MAC address field is optional. If it is omitted, the libvirt daemon generates a unique address.
  • interface type="direct" – Used for MacVTap. Specifies a direct mapping to an existing KVM host device.
  • source dev="kvm-host-network-interface-name" mode="bridge" – Specifies the KVM host network interface name that the KVM guest's MacVTap interface uses. The mode keyword defines which MacVTap mode is used. A filled-in example follows this list.
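
To make the template concrete, a filled-in sketch assuming the host Ethernet interface enp0s31f6 (an example name) and letting libvirt generate the MAC address:

<interface type="direct">
  <source dev="enp0s31f6" mode="bridge"/>
  <model type="virtio"/>
  <driver name="vhost"/>
</interface>
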
MacVTap Driver Modes

The macvtap driver creates endpoint devices that follow the tun/tap ioctl interface model and extend an existing network interface so that KVM guests can connect to the physical network interface directly. The driver supports different network functions, which are controlled by setting a mode on the interface. The following modes are available:

  • vepa (Virtual Ethernet Port Aggregator) is the default mode and forces all data from a vNIC out of the physical interface to a network switch. If the switch supports a hairpin mode, different vNICs connected to the same physical interface can communicate through the switch. Many switches today don't support a hairpin mode, which means that virtual machines with direct connection interfaces running in VEPA mode are unable to communicate with each other, but can still connect to the external network through the switch.

  • bridge mode connects all vNICs directly to each other so that traffic between virtual machines on the same physical interface isn't sent out to the switch but is delivered directly. The bridge mode option is the most useful for switches that don't support a hairpin mode, and when you need maximum performance for communications between VMs. Note that in bridge mode, unlike with a traditional software bridge, the host is unable to use this interface to communicate directly with the guest.

  • private mode behaves like a vNIC in vepa mode when the switch doesn't support a hairpin mode. However, even if the switch does support hairpin mode, two VMs connected to the same physical interface are unable to communicate with each other. This mode has limited use cases.

  • passthrough mode attaches a physical interface device or an SR-IOV Virtual Function (VF) directly to the vNIC without losing the migration capability. All packets are sent directly to the configured network device. A one-to-one mapping exists between network devices and VMs when configured in passthrough mode because a network device can't be shared between VMs in this configuration.

Note:

The virsh attach-interface command doesn't provide an option to specify the mode when attaching a direct type interface that uses the macvtap driver; the interface defaults to vepa mode. The graphical virt-manager utility makes setting up bridged networks using macvtap easier and provides options for each mode.

Create: Bridge Network Connection

The following information describes how to create and attach a virtual bridged network interface to a KVM guest using the MacVTap driver.
What Do You Need?
  • Root privileges.
  • An existing KVM guest on the host system.
  • To use Ethernet devices as ports of the bridge, the physical or virtual Ethernet devices must be installed on the host system.

Steps

Follow these steps to configure a bridge network using the macvtap driver on an existing host KVM instance.

  1. Create a bridge device and attach it to the physical network device interface on the host using the virsh iface-bridge command.
    Example:
    sudo virsh iface-bridge [bridge_name] [enp0s31f6]
    Where:
    • bridge_name – The name assigned to the bridge.
    • enp0s31f6 – The name of the physical Ethernet interface used in this example.
  2. Attach the bridge interface to the KVM instance using the virsh attach-interface command.
    Example:
    sudo virsh attach-interface --domain My_KVM_Guest_Name --type direct --source wlp4s0 --config
    Where:
    • My_KVM_Guest_Name – The name of the KVM instance.
    • wlp4s0 – The source interface name used in this example.
    • --config – Persists the change in the guest's stored configuration.

    For more details about using the virsh attach-interface command, see Command Usage: Add or Remove vNIC.

  3. Shut down the KVM instance. For details, see KVM: Shut Down Instance
  4. Edit the KVM XML configuration to set the source interface mode to bridge. For example:
    1. Use virsh edit to edit the file:
      sudo virsh edit [My_KVM_Guest_Name]

      Note:

      The virsh edit command opens the XML file in the text editor specified by the $EDITOR shell parameter. The vi editor is set by default.
    2. Set the source interface mode to bridge.

      Note:

      The source interface mode is set, by default, to vepa.

      For more details about how to set the source interface mode to bridge, see the macvtap driver example in Setup Guidelines: Bridged Network.

  5. Save the KVM XML configuration changes.
  6. Inform the libvirt daemon of the KVM XML configuration changes by using the virsh undefine and virsh define commands.
    Example:
    sudo virsh undefine [My_KVM_Guest_Name]
    sudo virsh define [My_KVM_Guest_Name-libvirt-xml-file]
    The virsh undefine command removes the existing KVM configuration, and the virsh define command replaces it with the updated configuration in the XML file.
  7. Start the KVM instance. For details, see KVM: Start Instance.
    The direct network interface is attached in bridge mode and starts automatically when the KVM instance starts. A verification sketch follows these steps.
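
To confirm the result, a quick check with virsh domiflist (replace My_KVM_Guest_Name with the actual guest name); the output should list an interface of type direct whose source is the host interface configured above:

sudo virsh domiflist My_KVM_Guest_Name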

Bonded Interfaces for Increased Throughput

The use of bonded interfaces for increased network throughput is common when hosts run several concurrent VMs that provide multiple services at the same time. Where a single physical interface might have provided enough bandwidth for applications hosted on a physical server, the increase in network traffic when running multiple VMs can degrade network performance if a single physical interface is shared. By using bonded interfaces, you can significantly increase KVM network throughput and also take advantage of the high availability features that network bonding provides.

Because the physical network interfaces that a VM might use are on the host and not on the VM, any form of bonded networking, whether for greater throughput or for high availability, must be configured on the host system. This process involves configuring network bonds on the host, and then attaching a virtual network interface, such as a network bridge, directly to the bonded network on the host.

To achieve high availability networking for any VMs, you must first configure a network bond on the host system. For details on how to set up network bonding, see Working With Network Bonding in the applicable Oracle Linux networking guide.

After the bond is configured, you can then configure the virtual machine network to use the bonded interface when you configure a network bridge. This can be done by either using: (1) the bridge mode for the interface type, or (2) a direct interface configured to use the macvtap driver's bridge mode. Note that the bonded interface can be used instead of a physical network interface when configuring the virtual network interface.
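
For example, a minimal sketch of a direct (macvtap) interface definition that uses a host bond named bond0 (an assumed name) in bridge mode, added to the guest's <devices> section:

<interface type='direct'>
  <source dev='bond0' mode='bridge'/>
  <model type='virtio'/>
</interface>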

PCIe Passthrough: Setup

This section describes the following methods for configuring PCIe passthrough to KVM guests:
  • Direct PCIe Passthrough to KVM Guest Using libvirt. Use this method to allocate exclusive use of a PCIe device on a host system to a single KVM guest. This method uses libvirt device assignment to configure a direct I/O path to a single KVM guest.

    Note:

    Using direct PCIe passthrough can result in increased consumption of host system CPU resources and, thereby, decrease the overall performance of the host system.

    For more information about configuring PCIe passthrough using this method, see Create: Direct PCIe Passthrough Connection.

  • Shared PCIe Passthrough to KVM Guests Using SR-IOV. Use this method to allocate shared use of SR-IOV (Single Root I/O Virtualization) capable PCIe devices to multiple KVM guests. This method uses SR-IOV device assignment to configure a PCIe resource to be shared amongst several KVM guests. SR-IOV device assignment is beneficial in workloads with high packet rates or low latency requirements. For more information about SR-IOV PCIe passthrough, see Setup Guidelines: SR-IOV PCIe Passthrough and Create: SR-IOV PCIe Passthrough Connection.

Create: Direct PCIe Passthrough Connection

The following information describes how to create a direct PCIe connection to a single KVM guest.

Exclusive PCIe Device Control

KVM guests can be configured to directly access the PCIe devices available on the host system and to have exclusive control over their capabilities. Use the virsh command to assign host PCIe devices to KVM guests. Note that after a PCIe device is assigned to a guest, the guest has exclusive access to the device and it's no longer available for use by the host or other guests on the system.

Note:

The following procedure doesn't cover enabling passthrough of SR-IOV Ethernet virtual functions. For instructions on how to configure passthrough for SR-IOV capable PCIe devices, see Create: SR-IOV PCIe Passthrough Connection.

Steps

Follow these steps to directly assign a host PCIe device to a KVM guest:

  1. Shut down the KVM guest.
    sudo virsh shutdown GuestName
  2. To identify the host attached PCIe devices and their assigned IDs, use the lspci command as follows:
    lspci -D|awk '{gsub("[:\\.]","_",$0); sub("^","pci_",$0); print;}' 
    Where:
    • lspci lists all PCIe devices.
    • -D option lists the PCIe domain numbers for each device.
    • awk reformats the device IDs into the format used by the virsh command.
    For example, the output might look as follows:
    pci_0000_00_00_0 Host bridge_ Intel Corporation 11th Gen Core Processor Host Bridge/DRAM Registers (rev 01)
    pci_0000_00_02_0 VGA compatible controller_ Intel Corporation TigerLake-LP GT2 [Iris Xe Graphics] (rev 01)
    pci_0000_00_04_0 Signal processing controller_ Intel Corporation TigerLake-LP Dynamic Tuning Processor Participant (rev 01)
    pci_0000_00_06_0 PCI bridge_ Intel Corporation 11th Gen Core Processor PCI Express Controller (rev 01)
    pci_0000_00_07_0 PCI bridge_ Intel Corporation Tiger Lake-LP Thunderbolt 4 PCI Express Root Port #0 (rev 01)
    ...
  3. Select the device that you want to configure for passthrough and create a variable containing the device ID. For example:
    pci_dev="pci_0000_00_07_0" 
  4. Use the virsh nodedev-dumpxml command to capture the PCIe device domain, bus, slot, and function parameters in shell variables. For example:
    domain=$(virsh nodedev-dumpxml $pci_dev --xpath '//domain/text()') 
    bus=$(virsh nodedev-dumpxml $pci_dev --xpath '//bus/text()') 
    slot=$(virsh nodedev-dumpxml $pci_dev --xpath '//slot/text()') 
    function=$(virsh nodedev-dumpxml $pci_dev --xpath '//function/text()')
  5. To identify the device source domain address required for passthrough, use the printf command to convert the PCIe domain, bus, slot, and function variables to hexadecimal values.
    For example:
    printf "<address domain='0x%x' bus='0x%x' slot='0x%x' function='0x%x'/>\n" $domain $bus $slot $function
  6. Assign the PCIe device to a KVM guest.
    Run virsh edit, specify the KVM guest name, and add the PCIe device domain address in the <source> section.

    For example:

    # virsh edit GuestName
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
         <address domain='0x0' bus='0x0' slot='0x14' function='0x3'/>
      </source>
    </hostdev>

    Note:

    libvirt recognizes two management modes for handling PCIe devices: managed='yes' (the default) or managed='no'. When the mode is set to managed='yes', libvirt handles unbinding the device from its existing driver, resetting the device, and binding it to the vfio-pci driver before starting the domain. When the domain is stopped or the device is removed from the domain, libvirt unbinds the device from the vfio-pci driver and rebinds it to its original driver. When the mode is set to managed='no', you must manually detach the PCIe device from the host and then manually attach it to the vfio-pci driver.
    For example, to detach:
    sudo virsh nodedev-detach pci_0000_device_ID_#
    To reattach:
    sudo virsh nodedev-reattach pci_0000_device_ID_#
    Alternatively, you can use Cockpit to attach and remove host devices. For more details, see Add or Remove VM Host Devices in the Oracle Linux: Using the Cockpit Web Console guide.
  7. On the host system, enable guest management for virtual PCIe pass-through.
    sudo setsebool -P virt_use_sysfs 1
  8. Start the KVM guest.
    sudo virsh start GuestName

    The PCIe device is now assigned to the KVM guest, and the guest OS has exclusive control over its capabilities. A verification sketch follows these steps.
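
To confirm the assignment, a hedged check: the guest's XML configuration should now contain the hostdev entry, and the device should appear inside the guest OS (for example, in lspci output run within the guest).

sudo virsh dumpxml GuestName | grep -A 5 "<hostdev"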

Setup Guidelines: SR-IOV PCIe Passthrough

The Single Root I/O Virtualization (SR-IOV) specification is a standard for device assignment that can share a single PCIe resource among multiple KVM guests. SR-IOV provides the ability to partition a physical PCIe resource into virtual PCIe functions that can be discovered, managed, and configured as normal PCIe devices.

Passthrough configuration of PCIe devices using SR-IOV involves these functions:
  • Physical Function (PF) – The physical function (PF) refers to the physical PCIe adapter device. Each physical PCIe adapter can have up to eight functions (although the most common case is one function). Each function has a full configuration space and is seen by software as a separate PCIe device. When the configuration space of a PCIe function includes SR-IOV support, that function is considered an SR-IOV physical function. SR-IOV physical functions enable you to manage and configure SR-IOV settings, enabling virtualization and exposing virtual functions (VFs).
  • Virtual Function (VF) – The virtual function (VF) refers to a virtualized instance of the PCIe device. Each VF is designed to move data in and out. VFs are derived from the physical function (PF): each VF is attached to an underlying PF, and each PF can have zero or more VFs. VFs have a reduced configuration space because they inherit most of their settings from the PF. A quick check of how many VFs a device supports is shown after this list.
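
For example, a minimal sketch that reads how many VFs an SR-IOV capable device supports and how many are currently enabled, assuming a host network interface named em1 (an example name) backed by the PF:

cat /sys/class/net/em1/device/sriov_totalvfs
cat /sys/class/net/em1/device/sriov_numvfs
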
SR-IOV Advantages
Some key benefits for using SR-IOV for PCIe passthrough include:
  • Optimized performance and capacity by enabling efficient sharing of PCIe resources.
  • Reduced hardware costs through the creation of hundreds of VFs associated with a single PF.
  • Dynamic control by the PF through registers designed to turn on the SR-IOV capability, eliminating the need for time-intensive integration.
  • Increased performance through direct access to hardware from the virtual guest environment.

Create: SR-IOV PCIe Passthrough Connection

The following information describes how to create a SR-IOV PCIe passthrough connection for KVM guests.
SR-IOV Advantages and Capabilities

Single Root I/O Virtualization (SR-IOV) further extends the ability of Oracle Linux to operate as a high performance virtualization solution. With SR-IOV, Oracle Linux can assign virtual resources from PCIe devices that have SR-IOV capabilities. These virtual resources, known as virtual functions (VFs), appear as new assignable PCIe devices to KVM guests.

SR-IOV provides the same capability as assigning a physical PCIe device to a guest. However, key benefits of using SR-IOV include optimized I/O performance (because the guest OS interacts directly with the device hardware) and reduced hardware costs (eliminating the need to manage a large configuration of peripheral devices).

Steps

To configure SR-IOV PCIe passthrough to KVM guests, follow these steps:

  1. Verify that the Intel VT-d or AMD IOMMU options are enabled in the system firmware at the BIOS/UEFI level. For more details, see the applicable Oracle server model documentation.
  2. Verify that the Intel VT-d or AMD IOMMU options are activated in the kernel. If these kernel options haven't been enabled, perform the following:
    • For Intel virtualization, add the intel_iommu=on and iommu=pt parameters to the end of the GRUB_CMDLINE_LINUX line, within the quotes, in the /etc/default/grub file.

      Note:

      A symlink exists between /etc/sysconfig/grub and /etc/default/grub, therefore, you could alternatively choose to configure the /etc/sysconfig/grub file.
    • For AMD virtualization, add the amd_iommu=on and iommu=pt parameters to the end of the GRUB_CMDLINE_LINUX line, within the quotes, in the /etc/default/grub file.
      Regenerate the grub.cfg file and then reboot the system for the changes to take effect. For example, on a BIOS-based system:
      sudo grub2-mkconfig -o /boot/grub2/grub.cfg
  3. Use the lspci command to verify if an SR-IOV capable PCIe device is detected on the host system. For example:
    lspci -D|awk '{gsub("[:\\.]","_",$0); sub("^","pci_",$0); print;}' 
    Where:
    • lspci lists all PCIe devices.
    • -D option lists the PCIe domain numbers for each device.
    • awk reformats the device IDs into the format used by the virsh command.
    For example, the output might look as follows:
    pci_0000_00_00_0 Host bridge_ Intel Corporation 11th Gen Core Processor Host Bridge/DRAM Registers (rev 01)
    pci_0000_00_02_0 VGA compatible controller_ Intel Corporation TigerLake-LP GT2 [Iris Xe Graphics] (rev 01)
    pci_0000_00_04_0 Signal processing controller_ Intel Corporation TigerLake-LP Dynamic Tuning Processor Participant (rev 01)
    pci_0000_00_06_0 PCI bridge_ Intel Corporation 11th Gen Core Processor PCI Express Controller (rev 01)
    pci_0000_00_07_0 PCI bridge_ Intel Corporation Tiger Lake-LP Thunderbolt 4 PCI Express Root Port #0 (rev 01)
    ...

    Note:

    For a list of SR-IOV compatible PCIe devices, see SR-IOV Enabled PCIe Devices.
  4. Load the device driver kernel module.

    If an SR-IOV PCIe device is detected, the driver kernel module automatically loads.

    If required, you can pass parameters to the module by using the modprobe command. The following example loads the igb driver for an 82576 network interface card and then verifies that the module is loaded.
    sudo modprobe igb [<option>=<VAL1>,<VAL2>,]
    sudo lsmod |grep igb
    igb    82576  0
    dca    6708    1 igb
  5. Activate the virtual functions (VFs) by performing the following:
    • To set the maximum VFs offered by a kernel driver, perform the following:
      1. First, remove the device driver kernel module. For example:
        sudo modprobe -r drivername

        In the previous example in Step 4, igb is the name of the driver. To find the device driver name, use the ethtool command. For example:

        ethtool -i em1 | grep ^driver
      2. Start the module with max_vfs set to 7 (or up to the maximum number allowed). For example:
        sudo modprobe drivername max_vfs=7
      3. Make the VFs persistent at boot.

        Add the line options drivername max_vfs=7 to any file in /etc/modprobe.d, for example:

        sudo echo "options drivername max_vfs=7" >>/etc/modprobe.d/igb.conf
    • To allocate the required number of VFs, issue the following:
      echo N > /sys/bus/pci/devices/${PF_DEV}/sriov_numvfs

      Where:

      • N is the number of VFs that you want the kernel driver to create.
      • ${PF_DEV} is the PCI domain:bus:device.function ID of the physical device, for example, 0000:02:00.0 (the colon and period form of the device IDs shown in Step 3).
  6. Use the lspci | grep command to list the newly added VFs.

    For example, the following output lists VFs associated with the 82576 Network Controller.

    sudo lspci | grep 82576
    0b:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
    0b:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
    0b:10.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
    0b:10.1 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
    0b:10.2 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
    0b:10.3 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
    0b:10.4 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
    0b:10.5 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
    0b:10.6 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
    0b:10.7 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
    0b:11.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
    0b:11.1 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
    0b:11.2 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
    0b:11.3 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
    0b:11.4 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
    0b:11.5 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)

    The physical functions (PFs) correspond to the 0b:00.0 and 0b:00.1 entries, while all the VFs are identified by Virtual Function in the description.

  7. Verify that libvirt can detect the SR-IOV devices by using the virsh nodedev-list | grep command.

    For the Intel 82576 network device example, the filtered output appears as follows:

    virsh nodedev-list | grep 0b
    pci_0000_0b_00_0
    pci_0000_0b_00_1
    pci_0000_0b_10_0
    pci_0000_0b_10_1
    pci_0000_0b_10_2
    pci_0000_0b_10_3
    pci_0000_0b_10_4
    pci_0000_0b_10_5
    pci_0000_0b_10_6
    pci_0000_0b_10_7
    pci_0000_0b_11_0
    pci_0000_0b_11_1
    pci_0000_0b_11_2
    pci_0000_0b_11_3
    pci_0000_0b_11_4
    pci_0000_0b_11_5

    Note that libvirt uses a notation similar to the lspci output, except that punctuation characters, such as the colon (:) and the period (.), are replaced with underscores (_) in the libvirt device names.

  8. Use the virsh nodedev-dumpxml command to review the SR-IOV physical and virtual function device details.

    For example, the following output shows details for the pci_0000_0b_00_0 physical function and its first corresponding virtual function (pci_0000_0b_10_0):

    sudo virsh nodedev-dumpxml pci_0000_0b_00_0
    <device>
       <name>pci_0000_0b_00_0</name>
       <parent>pci_0000_00_01_0</parent>
       <driver>
          <name>igb</name>
       </driver>
       <capability type='pci'>
          <domain>0</domain>
          <bus>11</bus>
          <slot>0</slot>
          <function>0</function>
          <product id='0x10c9'>82576 Gigabit Network Connection</product>
          <vendor id='0x8086'>Intel Corporation</vendor>
       </capability>
    </device>
    sudo virsh nodedev-dumpxml pci_0000_0b_10_0
    <device>
       <name>pci_0000_0b_10_0</name>
       <parent>pci_0000_00_01_0</parent>
       <driver>
          <name>igbvf</name>
       </driver>
       <capability type='pci'>
          <domain>0</domain>
          <bus>11</bus>
          <slot>16</slot>
          <function>0</function>
          <product id='0x10ca'>82576 Virtual Function</product>
          <vendor id='0x8086'>Intel Corporation</vendor>
       </capability>
    </device>

    Note the bus, slot and function parameters of the VF. These parameters are required in the next step to assign a VF to a KVM guest.

    Copy these VF parameters into a temporary XML file, such as /tmp/new-interface.xml. For example:
    <interface type='hostdev' managed='yes'>
      <source>
        <address type='pci' domain='0' bus='11' slot='16' function='0'/>
      </source>
    </interface>

    Note:

    • A MAC address is automatically generated if one isn't specified.
    • The <virtualport> element is only used when connecting to an 802.1Qbh hardware switch.
    • The <vlan> element transparently places the guest's traffic on the VLAN tagged 42 (in this example).

      When the KVM guest starts, it sees a network device of the type provided by the physical adapter, with the configured MAC address. This MAC address remains unchanged across host and guest reboots.

      The following <interface> example shows the syntax for the optional <mac address>, <virtualport>, and <vlan> elements. In practice, use either the <vlan> or the <virtualport> element, not both at the same time; the example includes both only to illustrate the syntax:

    ...
     <devices>
       ...
       <interface type='hostdev' managed='yes'>
         <source>
           <address type='pci' domain='0' bus='11' slot='16' function='0'/>
         </source>
         <mac address='52:54:00:6d:90:02'/>
         <vlan>
           <tag id='42'/>
         </vlan>
         <virtualport type='802.1Qbh'>
           <parameters profileid='finance'/>
         </virtualport>
       </interface>
       ...
     </devices>
  9. Using the new-interface.xml file created in the previous step and the virsh attach-device command, assign a VF of an SR-IOV PCIe device to a KVM guest.
    For example:
    sudo virsh attach-device MyGuestName /tmp/new-interface.xml --config

    The --config option ensures that the new VF is available after future restarts of the KVM guest. A verification sketch follows these steps.
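
To confirm that the VF is attached to the guest, a hedged check of the stored configuration (the guest name is the one used above):

sudo virsh dumpxml MyGuestName | grep -A 5 "interface type='hostdev'"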

SR-IOV Enabled PCIe Devices

Note:

Because of the continuous development of new SR-IOV PCIe devices and the Linux kernel, other SR-IOV capable PCIe devices might be available over time and aren't captured in the following table.

Table 5-1 PCIe Devices and Drivers

  • Intel 82599ES 10 Gigabit Ethernet Controller – Intel xgbe Linux Base Drivers for Intel(R) Ethernet Network Connections. For the latest xgbe drivers, see http://e1000.sourceforge.net or http://downloadcenter.intel.com.
  • Intel Ethernet Controller XL710 Series and Intel Ethernet Network Adapter XXV710 – Intel i40e Linux Base Drivers for Intel(R) Ethernet Network Connections. For the latest i40e drivers, see http://e1000.sourceforge.net or http://downloadcenter.intel.com.
  • NVIDIA (Mellanox) ConnectX-5, ConnectX-6 DX, and ConnectX-7 – NVIDIA (Mellanox) mlx5_core driver.
  • Intel 82576 Gigabit Ethernet Controller – Intel igb Linux Base Drivers for Intel(R) Ethernet Network Connections. For the latest igb drivers, see http://e1000.sourceforge.net or http://downloadcenter.intel.com.
  • Broadcom NetXtreme II BCM57810 – Broadcom bnx2x Linux Base Drivers for Broadcom NetXtreme II Network Connections.
  • Ethernet Controller E810-C for QSFP – Oracle Linux base driver packages available for Intel(R) Ethernet Network Connections.
  • SFC9220 10/40G Ethernet Controller – sfc Linux base driver.
  • FastLinQ QL41000 Series 10/25/40/50GbE Controller – qede Poll Mode Driver for FastLinQ Ethernet Network Connections.