KVM Network Configuration
To configure and manage KVM virtual networks, see these topics:
Overview: Virtual Networking
Networking within a KVM environment is achieved by creating virtual Network Interface Cards (vNICs) on the KVM guest. vNICs are mapped to the host system's own network infrastructure in any of the following ways:
- Connecting to the virtual network running on the host.
- Directly using a physical interface on the host.
- Using Single Root I/O Virtualization (SR-IOV) capabilities on a PCIe device.
- Using a network bridge that enables a vNIC to share a physical network interface on the host.
vNICs are often defined when the KVM guest is first created; however, the libvirt API can be used to add or remove vNICs as required. Because the API can handle hot plugging, these actions can be performed on a running virtual machine without significant interruption.
Virtual Network Types:
A brief summary of the different types of virtual networks you can set up within a KVM environment is as follows:
- Default Virtual Networking With NAT – KVM networking can be complex because it involves: (1) physical components directly configured on the host system, (2) KVM configuration within libvirt, and (3) network configuration within the running guest OS. Therefore, for many development and testing environments, it's often enough to configure vNICs to use the virtual network provided by libvirt. By default, the libvirt virtual network uses Network Address Translation (NAT) to enable KVM guests to gain access to external network resources. This approach is considered easier to configure and typically provides network access similar to what's already configured on the host system.
- Bridged Network and Mapped Virtual Interfaces – In cases where VMs might need to belong to specific subnetworks, a bridged network can be used. Network bridges use virtual interfaces that are mapped to and share a physical interface on the host. In this approach, network traffic from a KVM guest behaves as if it's coming from an independent system on the same physical network as the host system. Depending on the tools used, some manual changes to the host network configuration might be required before configuring it for KVM use.
- Host Physical Network Interface – Networking for VMs can also be configured to directly use a physical interface on the host system. This configuration provides network behavior similar to using a bridged network interface in that the vNIC behaves as if it's connected to the physical network directly. Direct connections tend to use the macvtap driver, which extends a physical network interface to provide a range of functions, including a virtual bridge that behaves similarly to a bridged network but is considered easier to configure and maintain and more likely to offer improved performance.
- Direct and Shared PCIe Passthrough – Another KVM networking method is configuring PCIe passthrough, where a PCIe interface supports the KVM network functionality. When using this method, administrators can choose to configure direct or shared PCIe passthrough networking. Direct PCIe passthrough allocates exclusive use of a PCIe device on the host system to a single KVM guest. Shared PCIe passthrough allocates shared use of an SR-IOV (Single Root I/O Virtualization) capable PCIe device to multiple KVM guests. Both of these configuration methods require some hardware setup and configuration on the host system before attaching the PCIe device to KVM guests for network use.
KVM Tools for Configuring Virtual Network
In cases where network configurations are likely to be more complex, we recommend using Oracle Linux Virtualization Manager. The CLI networking configurations and operations described in this guide cover only the most basic KVM network deployment scenarios.
For details about using Oracle Linux Virtualization Manager for more complex network configurations, see the Oracle Linux Virtualization Manager documentation.
Command Usage: Manage Virtual Network
To manage virtual networks in a KVM environment, use the virsh net-* commands. For example:
virsh net-list --all – List all virtual networks configured on a host system.
Output example:
virsh net-list --all
 Name      State    Autostart   Persistent
----------------------------------------------
 default   active   yes         yes
virsh net-info – Display information about a network.
Output example:
virsh net-info default
Name:           default
UUID:           16318035-eed4-45b6-99f8-02f1ed0661d9
Active:         yes
Persistent:     yes
Autostart:      yes
Bridge:         virbr0
Where:
- Name = assigned network name.
- UUID = assigned network identifier.
- virbr0 = virtual network bridge.
Note: virbr0 should not be confused with traditional bridge networking. In this case, the virtual bridge isn't connected to a physical interface. The virtual network bridge relies on NAT and IP forwarding to connect VMs to the physical network.
virsh net-dumpxml – View the full configuration of a network.
Output example:
virsh net-dumpxml default
<network>
  <name>default</name>
  <uuid>16318035-eed4-45b6-99f8-02f1ed0661d9</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:82:75:1d'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>
In this example, the virtual network uses a network bridge called virbr0, not to be confused with traditional bridged networking. The virtual bridge isn't connected to a physical interface and relies on NAT and IP forwarding to connect VMs to the physical network beyond. libvirt also handles IP address assignment for VMs using DHCP. The default network typically uses the 192.168.122.0/24 address range.
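Related to the DHCP assignment, you can list the leases that libvirt has currently handed out on a network, here the default network, with:
virsh net-dhcp-leases default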
virsh net-start – Start an inactive, previously defined virtual network.
sudo virsh net-start [--network] <network-identifier>
Where: network-identifier is either the network name or the network UUID.
virsh net-destroy – Stop an active network and deallocate all resources used by it, for example, stopping the appropriate dnsmasq process and releasing the bridge.
sudo virsh net-destroy [--network] <network-identifier>
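For example, assuming the default network, a stop and restart cycle that also marks the network to start automatically at boot might look as follows:
sudo virsh net-destroy default
sudo virsh net-start default
sudo virsh net-autostart default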
For a more complete list of libvirt's network management commands, see the section 'Basic Command-line Usage for Virtual Networks' on the libvirt Virtual Networking site (https://wiki.libvirt.org/VirtualNetworking.html#virsh-xml-commands).
Command Usage: Add or Remove vNIC
You can use the virsh attach-interface command to add a new vNIC to an existing KVM guest. This command can create a vNIC that uses any of the networking types available in KVM.
virsh attach-interface --domain guest --type network --source default --config
You must specify the following parameters with this command:
- --domain – The KVM name, ID, or UUID.
- --type – The type of networking that the vNIC uses. Available options include:
  - network for a libvirt virtual network using NAT
  - bridge for a bridge device on the host
  - direct for a direct mapping to one of the host's network interfaces or bridges
  - hostdev for a passthrough connection using a PCI device on the host
- --source – The source to be used for the network type specified. These values vary depending on the type:
  - For a network, specify the name of the virtual network.
  - For a bridge, specify the name of the bridge device.
  - For a direct connection, specify the name of the host's interface or bridge.
  - For a hostdev connection, specify the PCI address of the host's interface formatted as domain:bus:slot.function.
- --config – Changes the stored XML configuration for the guest VM; the change takes effect when the guest is next started.
- --live – Requires the guest VM to be running; the change takes place immediately, hot plugging the vNIC (see the example after this list).
- --current – Affects the current state of the guest VM.
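For example, a hot plug of a NAT-based vNIC onto a running guest named guest might look as follows; the MAC address shown is an arbitrary example:
sudo virsh attach-interface --domain guest --type network --source default \
    --model virtio --mac 52:54:00:4b:73:5f --live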
More options are available to further customize the interface, such as setting the MAC address or configuring the target macvtap device when using some other network types. You can also use the --model option to change the model of network interface that's presented to the VM. By default, the virtio model is used, but other models, such as e1000 or rtl8139, are available.
Run virsh help attach-interface for more information, or see the virsh(1) manual page.
Remove a vNIC from a VM using the virsh detach-interface command. For example:
virsh detach-interface --domain guest --type network --mac 52:54:00:41:6a:65 --config
The domain (the VM name) and type are required parameters. If the VM has more than one vNIC attached, you must specify the mac parameter to provide the MAC address of the vNIC that you want to remove. You can obtain this value by listing the vNICs that are attached to a VM. For example, you can run:
virsh domiflist guest
Output similar to the following is displayed:
Interface Type Source Model MAC
-------------------------------------------------------
vnet0 network default virtio 52:54:00:8c:d2:44
vnet1 network default virtio 52:54:00:41:6a:65
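To hot unplug a vNIC from a running guest instead of changing only the stored configuration, you can use the same detach command with the --live option:
sudo virsh detach-interface --domain guest --type network --mac 52:54:00:41:6a:65 --live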
Bridged Networking: Setup
Setup Guidelines: Bridged Network
Traditional network bridging using Linux bridges is configurable by using the virsh iface-bridge command. With this command, administrators can create a bridge on a host system and add a physical interface to it. For example, the following command creates a bridge named vmbridge1 with the Ethernet port named enp0s31f6:
virsh iface-bridge vmbridge1 enp0s31f6
After establishing a bridged network interface, administrators can then attach it to a VM by using the virsh attach-interface command.
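For example, to attach the newly created bridge to the guest from the earlier examples, you might run:
sudo virsh attach-interface --domain guest --type bridge --source vmbridge1 --config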
Consider the following when using traditional Linux bridged networking for KVM guests:
- Setting up a software bridge on a wireless interface is considered complex because of the limited number of addresses available in 802.11 frames.
- The complexity of the code to handle software bridges can result in reduced throughput, increased latency, and additional configuration complexity.
The main advantage of a bridged network is that it lets the host system communicate across the network stack directly with any guests configured to use bridged networking.
As an alternative, the macvtap driver simplifies virtualized bridge networking. For most bridged network configurations in KVM, this is the preferred approach because it offers better performance and it's easier to configure. The macvtap driver is used when the network type in the KVM XML configuration file is set to direct. For example:
<interface type="direct">
  <mac address="#:##:##:##:#:##"/>
  <source dev="kvm-host-network-interface-name" mode="bridge"/>
  <model type="virtio"/>
  <driver name="vhost"/>
</interface>
Where:
- mac address="#:##:##:##:#:##" – The MAC address field is optional. If it's omitted, the libvirt daemon generates a unique address.
- interface type="direct" – Used for macvtap. Specifies a direct mapping to an existing KVM host device.
- source dev="kvm-host-device" mode="bridge" – Specifies the KVM host network interface name that the KVM guest's macvtap interface uses. The mode keyword defines which macvtap mode is used.
The macvtap driver creates endpoint devices that follow the tun/tap ioctl interface model to extend an existing network interface so that KVM can use it to connect to the physical network interface directly to support different network functions. These functions can be controlled by setting a different mode for the interface. The following modes are available:
- vepa (Virtual Ethernet Port Aggregator) is the default mode and forces all data from a vNIC out of the physical interface to a network switch. If the switch supports a hairpin mode, different vNICs connected to the same physical interface can communicate through the switch. Many switches today don't support a hairpin mode, which means that virtual machines with direct connection interfaces running in vepa mode are unable to communicate with each other, but can still connect to the external network by using the switch.
- bridge mode connects all vNICs directly to each other so that traffic between virtual machines on the same physical interface is sent directly rather than out to the switch. The bridge mode option is the most useful for switches that don't support a hairpin mode, and when you need maximum performance for communications between VMs. Note that in bridge mode, unlike with a traditional software bridge, the host is unable to use this interface to communicate directly with the KVM guest.
- private mode behaves like a vepa mode vNIC in the absence of a switch supporting a hairpin mode. However, even if the switch does support a hairpin mode, two VMs connected to the same physical interface are unable to communicate with each other. This option supports limited use cases.
- passthrough mode attaches a physical interface device or an SR-IOV Virtual Function (VF) directly to the vNIC without losing the migration capability. All packets are sent directly to the configured network device. A one-to-one mapping exists between network devices and VMs when configured in passthrough mode because a network device can't be shared between VMs in this configuration.
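For example, selecting a different mode only requires changing the mode attribute on the source element of the interface definition; the interface name enp0s31f6 is an example:
<source dev="enp0s31f6" mode="vepa"/>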
Note: The virsh attach-interface command doesn't provide an option for you to specify the different modes available when attaching a direct type interface that uses the macvtap driver, and it defaults to vepa mode. The graphical virt-manager utility makes setting up bridged networks using macvtap easier and provides options for each different mode.
Create: Bridge Network Connection
Before you begin, ensure the following requirements are met:
- Root privileges.
- An existing KVM guest on the host system.
- To use Ethernet devices as ports of the bridge, the physical or virtual Ethernet devices must be installed on the host system.
Steps
Follow these steps to configure a bridge network using the macvtap driver on an existing host KVM instance.
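As a minimal sketch of what such a configuration can look like, assuming a guest named guest and a host interface named enp0s31f6, you can define the interface in an XML file, for example macvtap-bridge.xml, and attach it with virsh attach-device:
<!-- macvtap-bridge.xml: a direct (macvtap) interface in bridge mode -->
<interface type="direct">
  <source dev="enp0s31f6" mode="bridge"/>
  <model type="virtio"/>
</interface>
To attach the interface to the stored guest configuration:
sudo virsh attach-device guest macvtap-bridge.xml --config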
Bonded Interfaces for Increased Throughput
The use of bonded interfaces for increased network throughput is common when hosts might run several concurrent VMs that are providing multiple services at the same time. In this case, where a single physical interface might have provided enough bandwidth for applications hosted on a physical server, the increase in network traffic when running multiple VMs can have a negative impact on network performance when a single physical interface is shared. By using bonded interfaces, the KVM network throughput can significantly increase, thereby enabling you to take advantage of the high availability features available with network bonding.
Because the physical network interfaces that a VM might use are on the host and not on the VM, setting up any form of bonded networking for greater throughput or for high availability, must be configured on the host system. This process involves configuring network bonds on the host, and then attaching a virtual network interface such as a network bridge directly to the bonded network on the host.
To achieve high availability networking for any VMs, you must first configure a network bond on the host system. For details on how to set up network bonding, see Working With Network Bonding in one of the following guides:
- Oracle Linux 7: Setting Up Networking
- Oracle Linux 8: Setting Up Networking
- Oracle Linux 9: Setting Up Networking
After the bond is configured, you can then configure the virtual machine network to use the bonded interface when you configure a network bridge. This can be done by using either: (1) the bridge mode for the interface type, or (2) a direct interface configured to use the macvtap driver's bridge mode. Note that the bonded interface can be used instead of a physical network interface when configuring the virtual network interface.
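For example, assuming a bond named bond0 already exists on the host, the direct interface definition shown earlier can point at the bond instead of a physical NIC:
<interface type="direct">
  <source dev="bond0" mode="bridge"/>
  <model type="virtio"/>
</interface>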
PCIe Passthrough: Setup
Choose one of the following PCIe passthrough methods:
- Direct PCIe Passthrough to KVM Guest Using libvirt. Use this method to allocate exclusive use of a PCIe device on a host system to a single KVM guest. This method uses libvirt device assignment to configure a direct I/O path to a single KVM guest.
Note: Using direct PCIe passthrough can result in increased consumption of host system CPU resources and, thereby, decrease the overall performance of the host system.
For more information about configuring PCIe passthrough using this method, see Create: Direct PCIe Passthrough Connection.
- Shared PCIe Passthrough to KVM Guests Using SR-IOV. Use this method to allocate shared use of SR-IOV (Single Root I/O Virtualization) capable PCIe devices to multiple KVM guests. This method uses SR-IOV device assignment to configure a PCIe resource to be shared among several KVM guests. SR-IOV device assignment is beneficial in workloads with high packet rates or low latency requirements. For more information about SR-IOV PCIe passthrough, see Setup Guidelines: SR-IOV PCIe Passthrough and Create: SR-IOV PCIe Passthrough Connection.
Create: Direct PCIe Passthrough Connection
Exclusive PCIe Device Control
KVM guests can be configured to directly access the PCIe devices available on the host system and to have exclusive control over their capabilities. Use the virsh command to assign host PCIe devices to KVM guests. Note that after a PCIe device is assigned to a guest, the guest has exclusive access to the device and it's no longer available for use by the host or other guests on the system.
Note:
The following procedure doesn't cover configuring passthrough for SR-IOV Ethernet virtual devices. For instructions on how to configure passthrough for SR-IOV capable PCIe devices, see Create: SR-IOV PCIe Passthrough Connection.
Steps
Follow these steps to directly assign a host PCIe device to a KVM guest:
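As an illustrative sketch, assuming the PCI address 0000:03:00.0, identifying and detaching a host PCI device typically involves the virsh nodedev-* commands:
# List PCI devices on the host and inspect the one to assign
virsh nodedev-list --cap pci
virsh nodedev-dumpxml pci_0000_03_00_0
# Detach the device from the host so it can be assigned to a guest
sudo virsh nodedev-detach pci_0000_03_00_0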
Setup Guidelines: SR-IOV PCIe Passthrough
The Single Root I/O Virtualization (SR-IOV) specification is a standard for device assignment that can share a single PCIe resource among multiple KVM guests. SR-IOV provides the ability to partition a physical PCIe resource into virtual PCIe functions that can be discovered, managed, and configured as normal PCIe devices.
SR-IOV defines the following function types:
- Physical Function (PF). The physical function (PF) refers to the physical PCIe adapter device. Each physical PCIe adapter can have up to eight functions (although the most common case is one function). Each function has a full configuration space and is seen by software as a separate PCIe device. When the configuration space of a PCIe function includes SR-IOV support, that function is considered an SR-IOV physical function. SR-IOV physical functions enable you to manage and configure SR-IOV settings for enabling virtualization and exposing virtual functions (VFs).
- Virtual Function (VF). The virtual function (VF) refers to a virtualized instance of the PCIe device. Each VF is designed to move data in and out. VFs are derived from the physical function (PF): each VF is attached to an underlying PF, and each PF can have zero or more VFs. VFs have a reduced configuration space because they inherit most of their settings from the PF.
Benefits of SR-IOV include:
- Optimized performance and capacity by enabling efficient sharing of PCIe resources.
- Reduced hardware costs through the creation of hundreds of VFs associated with a single PF.
- Dynamic control by the PF through registers designed to turn on the SR-IOV capability, eliminating the need for time-intensive integration.
- Increased performance through direct access to hardware from the virtual guest environment.
Create: SR-IOV PCIe Passthrough Connection
Single Root I/O Virtualization (SR-IOV) further extends the ability of Oracle Linux to operate as a high performance virtualization solution. With SR-IOV, Oracle Linux can assign virtual resources from PCI devices that have SR-IOV capabilities. These virtual resources, known as virtual functions (VFs), appear as new assignable PCIe devices to KVM guests.
SR-IOV provides the same capabilities as assigning a physical PCI device to a guest. However, key benefits of using SR-IOV include optimized I/O performance (the guest OS interacts directly with the device hardware) and reduced hardware costs (eliminating the need to manage a large configuration of peripheral devices).
Steps
To configure SR-IOV PCIe passthrough to KVM guests, follow these steps:
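As a brief sketch of one common starting point, where the PF interface name enp3s0f0 and the VF count of 4 are assumptions, VFs can be created through the sriov_numvfs sysfs interface:
# Create 4 virtual functions on the physical function
echo 4 | sudo tee /sys/class/net/enp3s0f0/device/sriov_numvfs
# Verify that the VFs appear as PCI devices
lspci | grep -i "Virtual Function"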
SR-IOV Enabled PCIe Devices
Note:
Because of the continuous development of new SR-IOV PCIe devices and the Linux kernel, other SR-IOV capable PCIe devices might be available over time and aren't captured in the following table.
Table 5-1 PCIe Devices and Drivers

| Device Name | Device Driver |
|---|---|
| Intel 82599ES 10 Gigabit Ethernet Controller | Intel ixgbe Linux Base Driver for Intel(R) Ethernet Network Connections |
| Intel Ethernet Controller XL710 Series, Intel Ethernet Network Adapter XXV710 | Intel i40e Linux Base Driver for Intel(R) Ethernet Network Connections |
| NVIDIA (Mellanox) ConnectX-5, ConnectX-6 DX, and ConnectX-7 | NVIDIA (Mellanox) mlx5_core driver |
| Intel 82576 Gigabit Ethernet Controller | Intel igb Linux Base Driver for Intel(R) Ethernet Network Connections |
| Broadcom NetXtreme II BCM57810 | Broadcom bnx2x Linux Base Driver for Broadcom NetXtreme II Network Connections |
| Ethernet Controller E810-C for QSFP | Oracle Linux base driver packages available for Intel(R) Ethernet Network Connections |
| SFC9220 10/40G Ethernet Controller |  |
| FastLinQ QL41000 Series 10/25/40/50GbE Controller | qede Poll Mode Driver for FastLinQ Ethernet Network Connections |