KVM Storage Configuration

Libvirt supports several storage mechanisms that you can configure for use by KVMs. These mechanisms are organized into storage pools and storage volumes. By default, libvirt uses directory-based storage pools for the creation of new disks, but pools can be configured for other storage types, including physical disk, NFS, and iSCSI.

Depending on the storage pool type that's configured, different storage volumes can be made available to KVMs for use as block devices. Sometimes, such as when using iSCSI pools, volumes don't need to be defined because the LUNs for the iSCSI target are automatically presented to the KVM.

Note that you don't need to define different storage pools and volumes to use libvirt with KVM. These tools help you to manage how storage is used and consumed by KVMs as they need it. You can use the default directory-based storage and take advantage of manually mounted storage at the default locations.

We recommend using Oracle Linux Virtualization Manager to easily manage and configure complex storage requirements for KVM environments. Alternatively, you can use Cockpit to manage KVM storage. For more details, see Storage Management Tasks in Oracle Linux: Using the Cockpit Web Console.

For more details on how to use the command line to manage storage configurations for KVM use, see these topics:

Storage Pools: Create and Manage

Storage pools provide logical groupings of storage types that are available to host the volumes that VMs can use as virtual disks. A wide variety of storage types is supported. Local storage can be used in the form of directory-based storage pools, file system storage, and disk-based storage. Other storage types, such as NFS and iSCSI, provide standard network-based storage, while the RBD type provides distributed storage. More information is provided at https://libvirt.org/storage.html.

Storage pools help abstract underlying storage resources from the VM configurations. This abstraction is useful if you expect that resources such as virtual disks might change physical location or media type. Abstraction becomes even more important with network-based storage, because target paths, DNS, or IP addressing might change over time. By abstracting this configuration information, you can manage resources in a consolidated way without needing to update multiple KVM instances.
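For example, a minimal directory-based pool definition captures the target path in one place, so VMs reference the pool rather than the path itself. The pool name mypool and the directory /share/storage_pool in this sketch are placeholders:

```xml
<pool type='dir'>
  <name>mypool</name>
  <target>
    <path>/share/storage_pool</path>
  </target>
</pool>
```

If the directory later moves, only this definition needs updating; VM configurations that reference volumes in mypool are unaffected.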

You can create transient storage pools that are available until the host reboots, or you can define persistent storage pools that are restored after a reboot.

Transient storage pools are started automatically as soon as they're created, and the volumes within them are made available to VMs immediately. However, any configuration information about a transient storage pool is lost after the pool is stopped, the host reboots, or the libvirtd service is restarted. The storage itself is unaffected, but VMs configured to use resources in a transient storage pool lose access to those resources. Transient storage pools are created using the virsh pool-create command.

For most use cases, consider creating persistent storage pools. Persistent storage pools are defined as a configuration entry that's stored within /etc/libvirt. Persistent storage pools can be stopped and started and can be configured to start when the host system boots. Libvirt can take care of automatically mounting and enabling access to network based resources when persistent storage is configured. Persistent storage pools are created using the virsh pool-define command, and usually need to be started after they have been created before you can use them.
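The contrast between the two approaches can be sketched as a dry run that prints the commands rather than executing them, since running them requires a libvirt host; the pool name and target directory are placeholders:

```shell
#!/bin/sh
# Dry-run sketch: commands are printed, not executed, because they
# require a libvirt host. Pool name and target directory are placeholders.
POOL=mypool
TARGET=/share/storage_pool

# Transient: started immediately; definition lost on stop or host reboot.
TRANSIENT="virsh pool-create-as $POOL dir --target $TARGET"

# Persistent: definition stored under /etc/libvirt; started separately.
PERSISTENT="virsh pool-define-as $POOL dir --target $TARGET"

echo "$TRANSIENT"
echo "$PERSISTENT"
echo "virsh pool-start $POOL"
```

Note that the persistent form needs the separate pool-start step, whereas the transient form is active as soon as it's created.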

For more details on how to use the command line to create and manage storage pools for KVM use, see these topics:

Creating a Storage Pool

Use the virsh tool to create a persistent storage pool.
  1. Define the pool.
    virsh pool-define-as pool_name type
    Where:
    • pool_name – The name you assign to the pool.
    • type – The storage type the pool uses.
    See the libvirt storage documentation (https://libvirt.org/storage.html) for details about the storage types you can specify.

    The following examples show different pool types you can define:

    virsh pool-define-as pool_name dir \
    --target /share/storage_pool
    Defines a pool named pool_name for a directory that's at /share/storage_pool on the host system.

    virsh pool-define-as pool_name fs \
    --source-dev /dev/sdc1 \
    --target /share/storage_mount
    Defines file system based storage that mounts a formatted block device, /dev/sdc1, at the mount point /share/storage_mount.

    virsh pool-define-as pool_name netfs \
    --source-path /ISO \
    --source-host nfs.example.com \
    --target /share/storage_nfs
    Defines a storage pool backed by the NFS share /ISO that's exported by nfs.example.com and mounted at /share/storage_nfs.
  2. Confirm the pool was defined.
    virsh pool-info pool_name
    You can also view a list of all pools on the system.
    virsh pool-list --all
  3. If the target path doesn't exist, build the directory.
    virsh pool-build pool_name
  4. Start the pool.
    virsh pool-start pool_name
  5. Configure the pool to start automatically when the system boots.
    virsh pool-autostart pool_name

After you create a pool, you can create a storage volume within the pool. See Creating a Storage Volume for more information.

You can also indicate which pool to use when you create a VM using virt-install. Include the --disk argument with the pool and size suboptions. For example:

virt-install \
...
--disk pool=pool_name,size=80

Creating a Storage Pool from XML

Use the virsh tool to load a storage pool configuration from an XML file and create the pool.
  1. Create an XML file with definitions for the storage pool.

    For more information on the XML format for a storage pool definition, see Storage pool and volume XML format.

    For example, you could create a storage pool for an iSCSI volume by creating an XML file named pool_definition.xml with the following content:

    <pool type='iscsi'>
      <name>pool_name</name>
      <source>
        <host name='192.0.2.1'/>
        <device path='iqn.2024-12.com.mycompany:my-iscsi-host'/>
      </source>
      <target>
        <path>/dev/disk/by-path</path>
      </target>
    </pool>

    The previous example assumes that an iSCSI server is already configured and running on a host with IP address 192.0.2.1 and that the iSCSI Qualified Name (IQN) is iqn.2024-12.com.mycompany:my-iscsi-host.

  2. Run virsh pool-define to load the configuration information from the XML file into libvirt.
    For example, to load the pool_definition.xml file from the previous step, run:
    virsh pool-define pool_definition.xml
  3. Confirm the pool was defined.
    virsh pool-info pool_name
    You can also view a list of all pools on the system.
    virsh pool-list --all
  4. If the target path doesn't exist, build the directory.
    virsh pool-build pool_name
  5. Start the pool.
    virsh pool-start pool_name
  6. Configure the pool to start automatically when the system boots.
    virsh pool-autostart pool_name

Removing a Storage Pool

Use the virsh tool to stop and remove a persistent storage pool.
  1. Stop the storage pool.
    virsh pool-destroy pool_name
  2. Delete the directory of the storage pool.

    Note:

    The directory must be empty for this command to succeed.
    virsh pool-delete pool_name
  3. Remove the storage pool definition from the system.
    virsh pool-undefine pool_name
  4. Confirm the removal of the storage pool.
    virsh pool-list --all

Storage Volumes: Create and Manage

Storage volumes are created within a storage pool and represent the virtual disks that can be loaded as block devices within one or more VMs. Some storage pool types don't need storage volumes to be created individually as the storage mechanism might present these to VMs as block devices already. For example, iSCSI storage pools present the individual logical unit numbers (LUNs) for an iSCSI target as separate block devices.

Sometimes, such as when using directory or file system based storage pools, storage volumes are individually created for use as virtual disks. In these cases, several disk image formats can be used, although some formats, such as qcow2, might require extra tools, such as qemu-img, for creation.

For disk-based pools, standard partition type labels are used to represent individual volumes, while for pools based on the logical volume manager, the volumes themselves are presented individually within the pool.

Storage volumes can be sparsely allocated when they're created by setting the allocation value for the initial size of the volume to a value lower than the capacity of the volume. The allocation indicates the initial or current physical size of the volume, while the capacity indicates the size of the virtual disk as it's presented to the KVM. Sparse allocation is often used to over-subscribe physical disk space where KVMs might eventually require more disk space than is initially available. For a non-sparsely allocated volume, the allocation matches or exceeds the capacity of the volume. Exceeding the capacity of the disk provides space for metadata, if required.
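Sparse allocation itself is a file system feature and can be demonstrated without libvirt. The following sketch creates a plain sparse file whose apparent size (the capacity a guest would see) far exceeds the space it actually consumes (the allocation); the file name is arbitrary:

```shell
#!/bin/sh
# Create a sparse file: its apparent size (capacity) is 1 GiB, but it
# consumes almost no disk blocks (allocation) until data is written.
IMG=$(mktemp)
truncate -s 1G "$IMG"
APPARENT=$(stat -c '%s' "$IMG")     # bytes as seen by a reader
ALLOCATED=$(du -k "$IMG" | cut -f1) # KiB actually allocated on disk
echo "capacity:   $APPARENT bytes"
echo "allocation: $ALLOCATED KiB"
rm -f "$IMG"
```

A sparsely allocated storage volume behaves the same way: the host only commits physical blocks as the guest writes data, which is what makes over-subscription possible.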

If volumes in different pools on the same system have matching names, use the --pool option to specify which pool a virsh volume operation applies to. The examples that follow use this option.

For more details on how to use the command line to create and manage storage volumes for KVM use, see these topics:

Creating a Storage Volume

Depending on the storage pool type, you can create a storage volume using the virsh vol-create-as command.
  1. Run virsh vol-create-as and include the pool, volume name, and capacity as required arguments.
    For example:
    virsh vol-create-as \
    --pool pool_name \
    --name volume_name \
    --capacity 10G

    Many of the available options, such as the allocation or format, have default values, so you typically only need to specify the name of the storage pool where the volume should be created, the name of the volume, and the required capacity.

  2. Verify the creation of the storage volume.
    virsh vol-info --pool pool_name volume_name

    Output similar to the following is displayed:

    Name:           volume_name
    Type:           file
    Capacity:       9.31 GiB
    Allocation:     8.00 GiB
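The defaults noted in step 1 can also be overridden explicitly. For example, requesting a sparse qcow2 volume by setting the initial allocation to zero might look like the following; the command is printed rather than run because it requires a libvirt host, and the pool and volume names are placeholders:

```shell
#!/bin/sh
# Printed, not executed: requires a libvirt host.
# Pool and volume names are placeholders.
CMD="virsh vol-create-as --pool mypool --name vol1 --capacity 10G --allocation 0 --format qcow2"
echo "$CMD"
```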

Creating a Storage Volume from XML

Depending on the storage pool type, you can create a storage volume from an XML file using the virsh vol-create command. This command expects you to provide an XML file representation of the volume parameters.

  1. Create an XML file where you define the storage volume.

    The XML for a volume might depend on the pool type and the volume that's being created, but in the case of a sparsely allocated 10 GB image in qcow2 format, the XML might look similar to the following:

    <volume>
      <name>volume1</name>
      <allocation>0</allocation>
      <capacity unit="G">10</capacity>
      <target>
        <path>/home/testuser/.local/share/libvirt/images/volume1.qcow2</path>
        <permissions>
          <owner>107</owner>
          <group>107</group>
          <mode>0744</mode>
          <label>virt_image_t</label>
        </permissions>
      </target>
    </volume>

    For more information, see Storage pool and volume XML format in the libvirt documentation.

  2. Run virsh vol-create and include the pool and source XML file as required arguments.
    For example, to create a volume in storage pool named pooldir with an XML file named volume1.xml, run the following command:
    virsh vol-create pooldir volume1.xml

Cloning a Storage Volume

You can clone a storage volume using the virsh vol-clone command.

  1. Run the virsh vol-clone command and include the name of the original volume and the name of the cloned volume as required arguments.
    For example:
    virsh vol-clone --pool pool_name volume1 volume1-clone

    The clone is created in the same storage pool with identical parameters.

  2. Verify the creation of the cloned volume.
    virsh vol-list --pool pool_name --details

Resizing a Storage Volume

If a storage volume isn't being used by a VM, you can resize it by using the virsh vol-resize command.
  1. Run the virsh vol-resize command and provide the volume and capacity as required arguments.
    For example:
    virsh vol-resize --pool pool_name volume1 15G

    Caution:

    Reducing the size of an existing volume risks destroying data. If you do need to shrink a volume, you must specify the --shrink option along with the new, smaller size value.
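Growing and shrinking differ only in the --shrink flag. The following sketch prints both forms rather than running them, since a libvirt host is required; the pool and volume names are placeholders:

```shell
#!/bin/sh
# Printed, not executed: requires a libvirt host, and shrinking risks
# data loss. Pool and volume names are placeholders.
GROW="virsh vol-resize --pool mypool vol1 20G"
SHRINK="virsh vol-resize --pool mypool vol1 5G --shrink"
echo "$GROW"
echo "$SHRINK"
```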

Deleting a Storage Volume

You can delete a storage volume by running the virsh vol-delete command.

  1. Run virsh vol-delete and provide the volume name as a required argument.

    For example, to delete the volume named volume1 in the storage pool named pool_name, run the following command:

    virsh vol-delete volume1 --pool pool_name

Virtual Disks: Create and Manage

Virtual disks are typically attached to VMs as block devices based on disk images stored at a given path. Virtual disks can be defined for a VM when it's created, or can be added to an existing VM.

Note:

Command line tools available for managing virtual disks aren't completely consistent in terms of their handling of storage volumes and storage pools.
For more details about how to create and manage virtual disks for KVM use, see these topics:

Attaching a Virtual Disk to an Existing VM

You can use the virsh attach-disk command to attach a disk image to an existing VM.

Command line tools for attaching a volume to an existing VM are limited, and GUI tools such as Cockpit are better suited for this operation. If you expect to work with volumes frequently, consider using Oracle Linux Virtualization Manager.

  1. If the disk image is a volume, obtain its path by running the virsh vol-list command.
    virsh vol-list storage_pool_1

    Output similar to the following is displayed:

     Name            Path                                    
    --------------------------------------------------------------------
     volume1         /share/disk-images/volume1.qcow2
  2. Attach the disk image within the existing VM configuration so that it is persistent and attaches itself on each subsequent restart of the VM:
    virsh attach-disk --config \
    --domain guest_name \
    --source /share/disk-images/volume1.qcow2 \
    --target sdb1

    This command requires that you provide the path to the disk image when you attach it to the VM.

    You can use the following options:

    • --live – temporarily attach a disk image to a running VM.
    • --persistent – attach a disk image to a running VM and also update its configuration so that the disk is attached on each subsequent restart.

Attaching a Virtual Disk when Creating a VM

You can attach a storage volume to a VM as a virtual disk when the VM is created. The virt-install command enables you to specify the volume or storage pool directly for any use of the --disk option.

  1. Create a VM using virt-install and include the required --disk argument.

    To use an existing volume when creating a VM, include the vol option. For example:

    virt-install \
    --name guest \
    --disk vol=storage_pool/volume1.qcow2
    ...

    To create a virtual disk as a volume within an existing storage pool automatically at install, include the pool option. In this case, the size option is also required. For example:

    virt-install \
    --name guest \
    --disk pool=storage_pool,size=10
    ...

Detaching a Virtual Disk

You can remove a virtual disk from a VM by using the virsh detach-disk command.

Caution:

Before you detach a disk from a running VM, ensure that you first perform the appropriate actions within the guest OS to take the disk offline correctly. Otherwise, you might corrupt the file system. For example, unmount the disk in the guest OS so that any pending sync operations complete before you detach the disk.
  1. Display a list of the block devices attached to a guest to identify the disk target.
    virsh domblklist guest_name
  2. Detach the virtual disk.
    virsh detach-disk --config guest_name target_name

    You can use the following options:

    • --live – temporarily detach a disk image from a running KVM.
    • --persistent – detach a disk image from a running KVM and also update its configuration so that the disk is permanently detached from the KVM on subsequent restarts.

    Detaching a virtual disk from the VM doesn't delete the disk image file or volume from the host system. If you need to delete a virtual disk, you can either manually delete the source image file or delete the volume from the host.

    For example, to remove the disk at the target sdb1 from the configuration for the KVM named guest1, you could run:

    virsh detach-disk --config guest1 sdb1

Resizing a Virtual Disk

You can resize a virtual disk image while a VM is running by using the virsh blockresize command.

  1. Check the current size of all block devices attached to the VM.
    virsh domblkinfo guest_name --all --human
  2. Find the path to the disk image and note the location.
    virsh domblklist guest_name --details
  3. Run virsh blockresize and include the guest name, path to the disk, and intended size as required arguments.
    For example, to increase the size of the disk image at the source location /share/disk-images/volume1.qcow2 on the running VM named guest1 to 20 GB, run:
    virsh blockresize guest1 /share/disk-images/volume1.qcow2 20GB

    The value that you provide for the size is a scaled integer that defaults to KiB if you omit a suffix.

    The virsh blockresize command enables you to scale up a disk on a live VM, but it doesn't guarantee that the VM immediately recognizes the additional disk space. For some guest operating systems, you might need to restart the VM before the guest recognizes the additional resources.

    Individual partitions and file systems on the block device aren't scaled using this command. You need to perform these operations manually from within the guest, as required.

  4. Verify that resizing has worked as expected by checking the block device information of the VM again.
    virsh domblkinfo guest_name --all --human
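Inside a Linux guest, a common follow-up is to grow the partition and file system so they use the new space. The following sketch assumes the grown disk appears as /dev/vdb with an ext4 file system on partition 1 and that growpart (from the cloud-utils-growpart package) is available in the guest; the commands are printed rather than run because they only make sense inside the guest:

```shell
#!/bin/sh
# Printed, not executed: these commands must run inside the guest with
# appropriate privileges. Device names are examples.
DISK=/dev/vdb
GROW_PART="growpart $DISK 1"     # grow partition 1 to fill the device
GROW_FS="resize2fs ${DISK}1"     # grow the ext4 file system on it
echo "$GROW_PART"
echo "$GROW_FS"
```

For other file systems, substitute the matching tool, such as xfs_growfs for XFS.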