5 Working With Software RAID
The Redundant Array of Independent Disks (RAID) feature provides the capability to spread data across multiple drives to increase capacity, implement data redundancy, and increase performance. RAID is implemented either in hardware through intelligent disk storage that exports the RAID volumes as LUNs, or in software by the operating system. The Oracle Linux kernel uses the multiple device (MD) driver to support software RAID to create virtual devices from two or more physical storage devices. MD enables you to organize disk drives into RAID devices and implement different RAID levels.
You can create RAID devices using mdadm or Logical Volume Manager (LVM).
Software RAID Levels
The following software RAID levels are commonly implemented with Oracle Linux:
- Linear RAID (spanning)
  Combines drives as one larger virtual drive. This level provides neither data redundancy nor a performance benefit. Resilience decreases because the failure of a single drive renders the array unusable.
- RAID-0 (striping)
  Increases performance but doesn't provide data redundancy. Data is broken down into units (stripes) and written to all the drives in the array. Resilience decreases because the failure of a single drive renders the array unusable.
- RAID 0+1 (mirroring of striped disks)
  Combines RAID-0 and RAID-1 by mirroring a striped array to provide both increased performance and data redundancy. Failure of a single disk causes one of the mirrors to be unusable until you replace the disk and repopulate it with data. Resilience is degraded while only a single mirror remains available. RAID 0+1 is usually as expensive as or slightly more expensive than RAID-1.
- RAID-1 (mirroring)
  Provides data redundancy and resilience by writing identical data to each drive in the array. If one drive fails, a mirror can satisfy I/O requests. Mirroring is an expensive solution because the same information is written to all of the disks in the array.
- RAID 1+0 (striping of mirrored disks or RAID-10)
  Combines RAID-0 and RAID-1 by striping a mirrored array to provide both increased performance and data redundancy. Failure of a single disk causes part of one mirror to be unusable until you replace the disk and repopulate it with data. Resilience is degraded while only a single mirror retains a complete copy of the data. RAID 1+0 is typically as expensive as or slightly more expensive than RAID-1.
- RAID-5 (striping with distributed parity)
  Increases read performance by using striping and provides data redundancy. The parity is distributed across all the drives in an array, but it doesn't take up as much space as a complete mirror. Write performance is reduced to some extent because parity information must be calculated and written in addition to the data. If one disk in the array fails, the parity information is used to reconstruct data to satisfy I/O requests. In this mode, read performance and resilience are degraded until you replace the failed drive and repopulate the new drive with data and parity information. RAID-5 is intermediate in expense between RAID-0 and RAID-1.
- RAID-6 (striping with double distributed parity)
  A more resilient variant of RAID-5 that can recover from the loss of two drives in an array. The double parity is distributed across all the drives in an array, which ensures redundancy at the expense of taking up more space than RAID-5. For example, in an array of four disks, if two disks fail, the parity information is used to reconstruct data; the number of usable disks is the total number of disks minus two. Use RAID-6 when data redundancy and resilience are important but performance is not. RAID-6 is intermediate in expense between RAID-5 and RAID-1.
Creating Software RAID Devices using mdadm
- Run the mdadm command to create the MD RAID device as follows:
sudo mdadm --create md_device --level=RAID_level [options] --raid-devices=N devices
- md_device
  Name of the RAID device, for example, /dev/md0.
- RAID_level
  Level number of the RAID to create, for example, 5 for a RAID-5 configuration.
- --raid-devices=N
  Number of devices to become part of the RAID configuration.
- devices
  Devices to be configured as RAID, for example, /dev/sd[bcd] for 3 devices in the RAID configuration. The devices you list must total the number that you specified for --raid-devices.
This example creates a RAID-5 device /dev/md1 from /dev/sdb, /dev/sdc, and /dev/sdd:
sudo mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sd[bcd]
The previous example creates a RAID-5 device /dev/md1 out of 3 devices. You can instead use 4 devices, where one device is configured as a spare for expansion, reconfiguration, or replacement of failed drives:
sudo mdadm --create /dev/md1 --level=5 --raid-devices=3 --spare-devices=1 /dev/sd[bcde]
- (Optional) Add the RAID configuration to /etc/mdadm.conf:
sudo mdadm --examine --scan >> /etc/mdadm.conf
Based on the configuration file, mdadm assembles the arrays at boot time.
For example, the following entries define the devices and arrays that correspond to /dev/md0 and /dev/md1:
DEVICE /dev/sd[c-g]
ARRAY /dev/md0 devices=/dev/sdf,/dev/sdg
ARRAY /dev/md1 spares=1 devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde
For more examples, see the sample configuration file /usr/share/doc/mdadm-3.2.1/mdadm.conf-example.
An MD RAID device is used in the same way as any physical storage device. For example, the RAID device can be configured as an LVM physical volume, a file system, a swap partition, an Automatic Storage Management (ASM) disk, or a raw device.
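For example, the following minimal sketch (assuming the /dev/md1 array created earlier and an empty /mnt mount point) formats the MD device with ext4 and mounts it:
# Create an ext4 file system directly on the MD RAID device.
sudo mkfs.ext4 /dev/md1
# Mount it like any other block device.
sudo mount /dev/md1 /mnt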
To check the status of the MD RAID devices, view /proc/mdstat:
cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdg[1] sdf[0]
To display a summary or detailed information about MD RAID devices, use the --query or --detail option, respectively, with mdadm.
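For example, the following commands, assuming the /dev/md1 device created earlier, show a brief summary and full details, respectively; the exact output depends on your system:
# One-line summary of the array.
sudo mdadm --query /dev/md1
# Detailed state, member devices, and spares.
sudo mdadm --detail /dev/md1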
For more information, see the md(4), mdadm(8), and mdadm.conf(5) manual pages.
Creating and Managing Software RAID Devices using LVM
- Ensure that you have created a sufficient number of physical volumes in a volume group to accommodate your LVM RAID logical volume requirements. For more information about creating physical volumes and volume groups, see Working With Logical Volume Manager.
- Review the raid_fault_policy value in the /etc/lvm/lvm.conf file, which specifies how a RAID instance that uses redundant devices reacts to a drive failure. The default value is "warn", which configures RAID to log a warning in the system logs. This means that in the event of a device failure, manual action is required to replace the failed device.
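As a quick check, the effective policy can be printed with lvmconfig (shown here as an illustrative sketch; lvmconfig is part of the lvm2 package):
# Print the current RAID fault policy setting.
sudo lvmconfig activation/raid_fault_policy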
- Run the lvcreate command to create the LVM RAID device. See the following sections for examples.
- Create the file system that you want on your device. For example, the following command creates an ext4 file system on a RAID 6 logical volume:
sudo mkfs.ext4 /dev/myvg/mylvraid6
mke2fs 1.46.2 (28-Feb-2021)
Creating filesystem with 264192 4k blocks and 66096 inodes
Filesystem UUID: 05a78be5-8a8a-44ee-9f3f-1c4764c957e8
Superblock backups stored on blocks:
32768, 98304, 163840, 229376
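If you want to confirm the UUID of the new file system before adding it to /etc/fstab in the next step, one option is to query the device with blkid (a sketch; the UUID shown above will differ on your system):
# Print the UUID and file system type of the logical volume.
sudo blkid /dev/myvg/mylvraid6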
- Consider persisting the logical volume by editing your /etc/fstab file. For example, adding the following line, which includes the UUID created in the previous step, ensures that the logical volume is remounted after a reboot:
UUID=05a78be5-8a8a-44ee-9f3f-1c4764c957e8 /mnt ext4 defaults 0 0
For more information about using UUIDs with the /etc/fstab file, see Automatic Device Mappings for Partitions and File Systems.
- In the event that a device failure occurs for LVM RAID levels 5, 6, and 10, ensure that you have a replacement physical volume attached to the volume group that contains the failed RAID device, and do one of the following:
- Use the following command to switch to a random spare physical volume present in the volume group:
sudo lvconvert --repair volume_group/logical_volume
In the previous example, volume_group is the volume group and logical_volume is the LVM RAID logical volume.
- Use the following command to switch to a specific physical volume present in the volume group:
sudo lvconvert --repair volume_group/logical_volume physical_volume
In the previous example, physical_volume is the specific volume with which you want to replace the failed physical volume, for example, /dev/sdb1.
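As a minimal end-to-end sketch of this recovery step (the replacement device /dev/sdh1 and the volume names are assumptions for illustration), you might first add the replacement physical volume to the volume group and then run the repair:
# Add a replacement physical volume to the volume group.
sudo vgextend myvg /dev/sdh1
# Rebuild the failed image of the RAID logical volume onto the available space.
sudo lvconvert --repair myvg/mylvraid6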
RAID Level 0 (Striping) LVM Examples
The following lvcreate command creates a RAID 0 logical volume mylvraid0 of size 2 GB in the volume group myvg:
sudo lvcreate --type raid0 --size 2g --stripes 3 --stripesize 4 -n mylvraid0 myvg
The logical volume contains three stripes, which is the number of devices to use in the myvg volume group. The stripe size of four kilobytes is the amount of data that is written to one device before moving to the next device.
The following output is displayed:
Rounding size 2.00 GiB (512 extents) up to stripe boundary size 2.00 GiB (513 extents).
Logical volume "mylvraid0" created.
The lsblk command shows that three out of the four physical volumes are now part of the myvg-mylvraid0 RAID 0 logical volume. Additionally, each instance of myvg-mylvraid0 is built on a data subvolume on the corresponding device; the data subvolumes are labeled myvg-mylvraid0_rimage_0, myvg-mylvraid0_rimage_1, and myvg-mylvraid0_rimage_2.
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
...
sdb 8:16 0 50G 0 disk
└─myvg-mylvraid0_rimage_0 252:2 0 684M 0 lvm
└─myvg-mylvraid0 252:5 0 2G 0 lvm
sdc 8:32 0 50G 0 disk
└─myvg-mylvraid0_rimage_1 252:3 0 684M 0 lvm
└─myvg-mylvraid0 252:5 0 2G 0 lvm
sdd 8:48 0 50G 0 disk
└─myvg-mylvraid0_rimage_2 252:4 0 684M 0 lvm
└─myvg-mylvraid0 252:5 0 2G 0 lvm
sde 8:64 0 50G 0 disk
To display information about logical volumes, use the lvdisplay, lvs, and lvscan commands.
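For example, the following lvs command is one way to confirm the striping layout (a sketch; segtype, stripes, and stripe_size are standard lvs report fields):
# Report the segment type, stripe count, and stripe size for volumes in myvg.
sudo lvs -a -o name,segtype,stripes,stripe_size myvg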
To remove a RAID 0 logical volume from a volume group, use the lvremove command:
sudo lvremove vol_group/logical_vol
Other commands that are available for managing logical volumes include lvchange, lvconvert, lvmdiskscan, lvrename, lvextend, lvreduce, and lvresize.
RAID Level 1 (Mirroring) LVM Examples
The following lvcreate command creates a RAID 1 logical volume mylvraid1 of size 1 GB in the volume group myvg:
sudo lvcreate --type raid1 -m 1 --size 1G -n mylvraid1 myvg
The following output is displayed:
Logical volume "mylvraid1" created.
The -m option specifies that you want one mirror device in the myvg volume group, where identical data is written to the first device and the second, mirror device. You can specify additional mirror devices if you want; for example, -m 2 would create two mirrors of the first device, as shown in the sketch that follows. If one device fails, the other device mirrors can continue to process requests.
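For example, a two-mirror variant might be created as follows (a sketch only; the volume name mylvraid1x3 is an assumption, and the volume group needs at least three available physical volumes):
# Create a RAID 1 volume with two mirrors (three copies of the data in total).
sudo lvcreate --type raid1 -m 2 --size 1G -n mylvraid1x3 myvg
The remaining examples in this section continue to use the two-device mylvraid1 volume created above.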
The lsblk command shows that two out of the four available physical volumes are now part of the myvg-mylvraid1 RAID 1 logical volume. Additionally, each instance of myvg-mylvraid1 includes subvolume pairs for data and metadata. The data subvolumes are labeled myvg-mylvraid1_rimage_0 and myvg-mylvraid1_rimage_1. The metadata subvolumes are labeled myvg-mylvraid1_rmeta_0 and myvg-mylvraid1_rmeta_1.
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
...
sdb 8:16 0 50G 0 disk
├─myvg-mylvraid1_rmeta_0 252:2 0 4M 0 lvm
│ └─myvg-mylvraid1 252:6 0 1G 0 lvm
└─myvg-mylvraid1_rimage_0 252:3 0 1G 0 lvm
└─myvg-mylvraid1 252:6 0 1G 0 lvm
sdc 8:32 0 50G 0 disk
├─myvg-mylvraid1_rmeta_1 252:4 0 4M 0 lvm
│ └─myvg-mylvraid1 252:6 0 1G 0 lvm
└─myvg-mylvraid1_rimage_1 252:5 0 1G 0 lvm
└─myvg-mylvraid1 252:6 0 1G 0 lvm
sdd 8:48 0 50G 0 disk
sde 8:64 0 50G 0 disk
To display the synchronization progress and the devices that the RAID 1 logical volume uses in the volume group myvg, run the following lvs command:
sudo lvs -a -o name,sync_percent,devices myvg
LV Cpy%Sync Devices
mylvraid1 21.58 mylvraid1_rimage_0(0),mylvraid1_rimage_1(0)
[mylvraid1_rimage_0] /dev/sdf(1)
[mylvraid1_rimage_1] /dev/sdg(1)
[mylvraid1_rmeta_0] /dev/sdf(0)
[mylvraid1_rmeta_1] /dev/sdg(0)
To remove a RAID 1 logical volume from a volume group, use the lvremove command:
sudo lvremove vol_group/logical_vol
Other commands that are available for managing logical volumes include lvchange, lvconvert, lvmdiskscan, lvrename, lvextend, lvreduce, and lvresize.
When creating a RAID 1 logical volume, you can add the --raidintegrity y option. This creates subvolumes that are used to detect and correct data corruption in your RAID images. You can also add or remove these subvolumes after creating the logical volume by using the following lvconvert command:
sudo lvconvert --raidintegrity y myvg/mylvraid1
Creating integrity metadata LV mylvraid1_rimage_0_imeta with size 20.00 MiB.
Logical volume "mylvraid1_rimage_0_imeta" created.
Creating integrity metadata LV mylvraid1_rimage_1_imeta with size 20.00 MiB.
Logical volume "mylvraid1_rimage_1_imeta" created.
Limiting integrity block size to 512 because the LV is active.
Using integrity block size 512 for file system block size 4096.
Logical volume myvg/mylvraid1 has added integrity.
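To remove the integrity subvolumes again, set the same option to n, as in the following sketch:
# Remove the integrity layer from each RAID image of the logical volume.
sudo lvconvert --raidintegrity n myvg/mylvraid1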
You can also use lvconvert to split a mirror into individual linear logical volumes. For example, the following command splits the mirror:
sudo lvconvert --splitmirrors 1 -n lvnewlinear myvg/mylvraid1
Are you sure you want to split raid1 LV myvg/mylvraid1 losing all resilience? [y/n]: y
If you had a three-instance mirror, the same command would create a two-way mirror and a linear logical volume.
You can also use lvconvert to change the number of mirrors in an existing RAID 1 logical volume. For example, the following command converts the two-way mirror to a three-way mirror:
sudo lvconvert -m 2 myvg/mylvraid1
Are you sure you want to convert raid1 LV myvg/mylvraid1 to 3 images enhancing resilience? [y/n]: y
Logical volume myvg/mylvraid1 successfully converted.
To reduce the number of mirrors again, specify a lower value for -m, optionally naming the physical volume from which to remove the mirror image:
sudo lvconvert -m1 myvg/mylvraid1 /dev/sdd
Are you sure you want to convert raid1 LV myvg/mylvraid1 to 2 images reducing resilience? [y/n]: y
Logical volume myvg/mylvraid1 successfully converted.
For more information, see the lvmraid, lvcreate, and lvconvert manual pages.
RAID Level 5 (Striping with Distributed Parity) LVM Examples
The following lvcreate command creates a RAID 5 logical volume mylvraid5 of size 1 GB in the volume group myvg:
sudo lvcreate --type raid5 -i 2 --size 1G -n mylvraid5 myvg
The following output is displayed:
Using default stripesize 64.00 KiB.
Rounding size 1.00 GiB (256 extents) up to stripe boundary size <1.01 GiB (258 extents).
Logical volume "mylvraid5" created.
The logical volume contains two stripes, which is the number of devices to use for data in the myvg volume group. However, an additional device is needed to hold the parity information. So, a stripe count of two requires three available drives: striping and parity information are spread across all three, even though the total usable space available for striping is only equivalent to two devices. The parity information across all three devices is sufficient to deal with the loss of one of the devices.
The stripesize is not specified in the creation command, so the default of 64 kilobytes is used. This is the size of data that can be written to one device before moving to the next device.
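If you want a different stripe size, you can set it explicitly with the -I (--stripesize) option, as in the following sketch (the 128 KiB value is only an illustration):
# Create the RAID 5 volume with an explicit 128 KiB stripe size.
sudo lvcreate --type raid5 -i 2 -I 128 -L 1G -n mylvraid5 myvg
The remaining examples in this section use the default stripe size.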
The lsblk command shows that three out of the four available physical volumes are now part of the myvg-mylvraid5 RAID 5 logical volume. Additionally, each instance of myvg-mylvraid5 includes subvolume pairs for data and metadata. The data subvolumes are labeled myvg-mylvraid5_rimage_0, myvg-mylvraid5_rimage_1, and myvg-mylvraid5_rimage_2. The metadata subvolumes are labeled myvg-mylvraid5_rmeta_0, myvg-mylvraid5_rmeta_1, and myvg-mylvraid5_rmeta_2.
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
...
sdb 8:16 0 50G 0 disk
├─myvg-mylvraid5_rmeta_0 252:2 0 4M 0 lvm
│ └─myvg-mylvraid5 252:8 0 1G 0 lvm
└─myvg-mylvraid5_rimage_0 252:3 0 512M 0 lvm
└─myvg-mylvraid5 252:8 0 1G 0 lvm
sdc 8:32 0 50G 0 disk
├─myvg-mylvraid5_rmeta_1 252:4 0 4M 0 lvm
│ └─myvg-mylvraid5 252:8 0 1G 0 lvm
└─myvg-mylvraid5_rimage_1 252:5 0 512M 0 lvm
└─myvg-mylvraid5 252:8 0 1G 0 lvm
sdd 8:48 0 50G 0 disk
├─myvg-mylvraid5_rmeta_2 252:6 0 4M 0 lvm
│ └─myvg-mylvraid5 252:8 0 1G 0 lvm
└─myvg-mylvraid5_rimage_2 252:7 0 512M 0 lvm
└─myvg-mylvraid5 252:8 0 1G 0 lvm
sde 8:64 0 50G 0 disk
To display the synchronization progress and the devices that the RAID 5 logical volume uses in the volume group myvg, run the following lvs command:
sudo lvs -a -o name,copy_percent,devices myvg
LV Cpy%Sync Devices
mylvraid5 25.00 mylvraid5_rimage_0(0),mylvraid5_rimage_1(0),mylvraid5_rimage_2(0)
[mylvraid5_rimage_0] /dev/sdf(1)
[mylvraid5_rimage_1] /dev/sdg(1)
[mylvraid5_rimage_2] /dev/sdh(1)
[mylvraid5_rmeta_0] /dev/sdf(0)
[mylvraid5_rmeta_1] /dev/sdg(0)
[mylvraid5_rmeta_2] /dev/sdh(0)
To remove a RAID 5 logical volume from a volume group, use the lvremove command:
sudo lvremove vol_group/logical_vol
Other commands that are available for managing logical volumes include lvchange, lvconvert, lvmdiskscan, lvrename, lvextend, lvreduce, and lvresize.
When creating a RAID 5 logical volume, you can add the --raidintegrity y option. This creates subvolumes that are used to detect and correct data corruption in your RAID images. You can also add or remove these subvolumes after creating the logical volume by using the following lvconvert command:
sudo lvconvert --raidintegrity y myvg/mylvraid5
Creating integrity metadata LV mylvraid5_rimage_0_imeta with size 12.00 MiB.
Logical volume "mylvraid5_rimage_0_imeta" created.
Creating integrity metadata LV mylvraid5_rimage_1_imeta with size 12.00 MiB.
Logical volume "mylvraid5_rimage_1_imeta" created.
Creating integrity metadata LV mylvraid5_rimage_2_imeta with size 12.00 MiB.
Logical volume "mylvraid5_rimage_2_imeta" created.
Limiting integrity block size to 512 because the LV is active.
Using integrity block size 512 for file system block size 4096.
Logical volume myvg/mylvraid5 has added integrity.
For more information, see the lvmraid, lvcreate, and lvconvert manual pages.
RAID Level 6 (Striping with Double Distributed Parity) LVM Examples
The following lvcreate command creates a RAID 6 logical volume mylvraid6 of size 1 GB in the volume group myvg:
sudo lvcreate --type raid6 -i 3 -L 1G -n mylvraid6 myvg
The following output is displayed:
Using default stripesize 64.00 KiB.
Rounding size 1.00 GiB (256 extents) up to stripe boundary size <1.01 GiB (258 extents).
Logical volume "mylvraid6" created.
The logical volume contains three stripes, which is the number of devices to use for data in the myvg volume group. However, two additional devices are needed to hold the double parity information. So, a stripe count of three requires five available drives: striping and double parity information are spread across all five, even though the total usable space available for striping is only equivalent to three devices. The parity information across all five devices is sufficient to deal with the loss of two of the devices.
The stripesize is not specified in the creation command, so the default of 64 kilobytes is used. This is the size of data that can be written to one device before moving to the next device.
The lsblk command shows that all five of the available physical volumes are now part of the myvg-mylvraid6 RAID 6 logical volume. Additionally, each instance of myvg-mylvraid6 includes subvolume pairs for data and metadata. The data subvolumes are labeled myvg-mylvraid6_rimage_0, myvg-mylvraid6_rimage_1, myvg-mylvraid6_rimage_2, myvg-mylvraid6_rimage_3, and myvg-mylvraid6_rimage_4. The metadata subvolumes are labeled myvg-mylvraid6_rmeta_0, myvg-mylvraid6_rmeta_1, myvg-mylvraid6_rmeta_2, myvg-mylvraid6_rmeta_3, and myvg-mylvraid6_rmeta_4.
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
...
sdb 8:16 0 50G 0 disk
├─myvg-mylvraid6_rmeta_0 252:2 0 4M 0 lvm
│ └─myvg-mylvraid6 252:12 0 1G 0 lvm
└─myvg-mylvraid6_rimage_0 252:3 0 344M 0 lvm
└─myvg-mylvraid6 252:12 0 1G 0 lvm
sdc 8:32 0 50G 0 disk
├─myvg-mylvraid6_rmeta_1 252:4 0 4M 0 lvm
│ └─myvg-mylvraid6 252:12 0 1G 0 lvm
└─myvg-mylvraid6_rimage_1 252:5 0 344M 0 lvm
└─myvg-mylvraid6 252:12 0 1G 0 lvm
sdd 8:48 0 50G 0 disk
├─myvg-mylvraid6_rmeta_2 252:6 0 4M 0 lvm
│ └─myvg-mylvraid6 252:12 0 1G 0 lvm
└─myvg-mylvraid6_rimage_2 252:7 0 344M 0 lvm
└─myvg-mylvraid6 252:12 0 1G 0 lvm
sde 8:64 0 50G 0 disk
├─myvg-mylvraid6_rmeta_3 252:8 0 4M 0 lvm
│ └─myvg-mylvraid6 252:12 0 1G 0 lvm
└─myvg-mylvraid6_rimage_3 252:9 0 344M 0 lvm
└─myvg-mylvraid6 252:12 0 1G 0 lvm
sdf 8:80 0 50G 0 disk
├─myvg-mylvraid6_rmeta_4 252:10 0 4M 0 lvm
│ └─myvg-mylvraid6 252:12 0 1G 0 lvm
└─myvg-mylvraid6_rimage_4 252:11 0 344M 0 lvm
└─myvg-mylvraid6 252:12 0 1G 0 lvm
To display the synchronization progress and the devices that the RAID 6 logical volume uses in the volume group myvg, run the following lvs command:
sudo lvs -a -o name,sync_percent,devices myvg
LV Cpy%Sync Devices
mylvraid6 31.26 mylvraid6_rimage_0(0),mylvraid6_rimage_1(0),mylvraid6_rimage_2(0),mylvraid6_rimage_3(0),mylvraid6_rimage_4(0)
[mylvraid6_rimage_0] /dev/sdf(1)
[mylvraid6_rimage_1] /dev/sdg(1)
[mylvraid6_rimage_2] /dev/sdh(1)
[mylvraid6_rimage_3] /dev/sdi(1)
[mylvraid6_rimage_4] /dev/sdj(1)
[mylvraid6_rmeta_0] /dev/sdf(0)
[mylvraid6_rmeta_1] /dev/sdg(0)
[mylvraid6_rmeta_2] /dev/sdh(0)
[mylvraid6_rmeta_3] /dev/sdi(0)
[mylvraid6_rmeta_4] /dev/sdj(0)
To remove a RAID 6 logical volume from a volume group, use the lvremove command:
sudo lvremove vol_group/logical_vol
Other commands that are available for managing logical volumes include lvchange, lvconvert, lvmdiskscan, lvrename, lvextend, lvreduce, and lvresize.
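You can also scrub a RAID logical volume periodically to verify its data and parity. The following is a sketch of such a check using the lvchange --syncaction operation and standard lvs report fields:
# Start a background consistency check of data and parity blocks.
sudo lvchange --syncaction check myvg/mylvraid6
# Review the scrubbing state and any mismatch count that was found.
sudo lvs -a -o name,raid_sync_action,raid_mismatch_count myvg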
When creating a RAID 6 logical volume, you can add the --raidintegrity y option. This creates subvolumes that are used to detect and correct data corruption in your RAID images. You can also add or remove these subvolumes after creating the logical volume by using the following lvconvert command:
sudo lvconvert --raidintegrity y myvg/mylvraid6
Creating integrity metadata LV mylvraid6_rimage_0_imeta with size 8.00 MiB.
Logical volume "mylvraid6_rimage_0_imeta" created.
Creating integrity metadata LV mylvraid6_rimage_1_imeta with size 8.00 MiB.
Logical volume "mylvraid6_rimage_1_imeta" created.
Creating integrity metadata LV mylvraid6_rimage_2_imeta with size 8.00 MiB.
Logical volume "mylvraid6_rimage_2_imeta" created.
Creating integrity metadata LV mylvraid6_rimage_3_imeta with size 8.00 MiB.
Logical volume "mylvraid6_rimage_3_imeta" created.
Creating integrity metadata LV mylvraid6_rimage_4_imeta with size 8.00 MiB.
Logical volume "mylvraid6_rimage_4_imeta" created.
Limiting integrity block size to 512 because the LV is active.
Using integrity block size 512 for file system block size 4096.
Logical volume myvg/mylvraid6 has added integrity.
For more information, see the lvmraid, lvcreate, and lvconvert manual pages.
RAID Level 10 (Striping of Mirrored Disks) LVM Examples
The following lvcreate command creates a RAID 10 logical volume mylvraid10 of size 10 GB in the volume group myvg:
sudo lvcreate --type raid10 -i 2 -m 1 --size 10G -n mylvraid10 myvg
The following output is displayed:
Logical volume "mylvraid10" created.
The -m option specifies that you want one mirror in the myvg volume group: identical data is written to pairs of mirrored devices, and data is also striped across those mirrored sets. Logical volume data remains available if at least one device remains functional in each mirrored set.
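For example, a wider RAID 10 layout with three stripes might be created as follows (a sketch only; the name mylvraid10wide is an assumption, and with -i 3 -m 1 the volume group needs at least six available physical volumes):
# Stripe data across three mirrored pairs (six devices in total).
sudo lvcreate --type raid10 -i 3 -m 1 -L 10G -n mylvraid10wide myvg
The remaining output in this section is for the four-device mylvraid10 volume created above.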
The lsblk command shows that four out of the five available physical volumes are now part of the myvg-mylvraid10 RAID 10 logical volume. Additionally, each instance of myvg-mylvraid10 includes subvolume pairs for data and metadata. The data subvolumes are labeled myvg-mylvraid10_rimage_0, myvg-mylvraid10_rimage_1, myvg-mylvraid10_rimage_2, and myvg-mylvraid10_rimage_3. The metadata subvolumes are labeled myvg-mylvraid10_rmeta_0, myvg-mylvraid10_rmeta_1, myvg-mylvraid10_rmeta_2, and myvg-mylvraid10_rmeta_3.
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
...
sdb 8:16 0 50G 0 disk
├─myvg-mylvraid10_rmeta_0 252:2 0 4M 0 lvm
│ └─myvg-mylvraid10 252:10 0 10G 0 lvm
└─myvg-mylvraid10_rimage_0 252:3 0 5G 0 lvm
└─myvg-mylvraid10 252:10 0 10G 0 lvm
sdc 8:32 0 50G 0 disk
├─myvg-mylvraid10_rmeta_1 252:4 0 4M 0 lvm
│ └─myvg-mylvraid10 252:10 0 10G 0 lvm
└─myvg-mylvraid10_rimage_1 252:5 0 5G 0 lvm
└─myvg-mylvraid10 252:10 0 10G 0 lvm
sdd 8:48 0 50G 0 disk
├─myvg-mylvraid10_rmeta_2 252:6 0 4M 0 lvm
│ └─myvg-mylvraid10 252:10 0 10G 0 lvm
└─myvg-mylvraid10_rimage_2 252:7 0 5G 0 lvm
└─myvg-mylvraid10 252:10 0 10G 0 lvm
sde 8:64 0 50G 0 disk
├─myvg-mylvraid10_rmeta_3 252:8 0 4M 0 lvm
│ └─myvg-mylvraid10 252:10 0 10G 0 lvm
└─myvg-mylvraid10_rimage_3 252:9 0 5G 0 lvm
└─myvg-mylvraid10 252:10 0 10G 0 lvm
sdf 8:80 0 50G 0 disk
To display the synchronization progress and the devices that the RAID 10 logical volume uses in the volume group myvg, run the following lvs command:
sudo lvs -a -o name,sync_percent,devices myvg
LV Cpy%Sync Devices
mylvraid10 68.82 mylvraid10_rimage_0(0),mylvraid10_rimage_1(0),mylvraid10_rimage_2(0),mylvraid10_rimage_3(0)
[mylvraid10_rimage_0] /dev/sdf(1)
[mylvraid10_rimage_1] /dev/sdg(1)
[mylvraid10_rimage_2] /dev/sdh(1)
[mylvraid10_rimage_3] /dev/sdi(1)
[mylvraid10_rmeta_0] /dev/sdf(0)
[mylvraid10_rmeta_1] /dev/sdg(0)
[mylvraid10_rmeta_2] /dev/sdh(0)
[mylvraid10_rmeta_3] /dev/sdi(0)
To remove a RAID 10 logical volume from a volume group, use the lvremove command:
sudo lvremove vol_group/logical_vol
Other commands that are available for managing logical volumes include lvchange, lvconvert, lvmdiskscan, lvrename, lvextend, lvreduce, and lvresize.
When creating a RAID 10 logical volume, you can add the --raidintegrity y option. This creates subvolumes that are used to detect and correct data corruption in your RAID images. You can also add or remove these subvolumes after creating the logical volume by using the following lvconvert command:
sudo lvconvert --raidintegrity y myvg/mylvraid10
Creating integrity metadata LV mylvraid10_rimage_0_imeta with size 108.00 MiB.
Logical volume "mylvraid10_rimage_0_imeta" created.
Creating integrity metadata LV mylvraid10_rimage_1_imeta with size 108.00 MiB.
Logical volume "mylvraid10_rimage_1_imeta" created.
Creating integrity metadata LV mylvraid10_rimage_2_imeta with size 108.00 MiB.
Logical volume "mylvraid10_rimage_2_imeta" created.
Creating integrity metadata LV mylvraid10_rimage_3_imeta with size 108.00 MiB.
Logical volume "mylvraid10_rimage_3_imeta" created.
Using integrity block size 512 for unknown file system block size, logical block size 512, physical block size 4096.
Logical volume myvg/mylvraid10 has added integrity.
For more information, see the lvmraid, lvcreate, and lvconvert manual pages.