3 Working With Software RAID
WARNING:
Oracle Linux 7 is now in Extended Support. See Oracle Linux Extended Support and Oracle Open Source Support Policies for more information.
Migrate applications and data to Oracle Linux 8 or Oracle Linux 9 as soon as possible.
This chapter describes RAID features, with special focus on the use of software RAID for storage redundancy.
About Software RAID
The Redundant Array of Independent Disks (RAID) feature enables you to spread data across multiple drives to increase capacity, implement data redundancy, and increase performance. RAID is usually implemented either in hardware on intelligent disk storage that exports the RAID volumes as LUNs, or in software by the operating system. The Oracle Linux kernel uses the multidisk (MD) driver to support software RAID by creating virtual devices from two or more physical storage devices. You can use MD to organize disk drives into RAID devices and implement different RAID levels.
The following software RAID levels are commonly used with Oracle Linux:
- Linear RAID (spanning)
  Combines drives into a larger virtual drive. There is no data redundancy or performance benefit. Resilience decreases because the failure of a single drive renders the array unusable.
- RAID-0 (striping)
  Increases performance but does not provide data redundancy. Data is broken down into units (stripes) and written to all the drives in the array. Resilience decreases because the failure of a single drive renders the array unusable.
- RAID-1 (mirroring)
  Provides data redundancy and resilience by writing identical data to each drive in the array. If one drive fails, a mirror can satisfy I/O requests. Mirroring is an expensive solution because the same information is written to all of the disks in the array.
- RAID-5 (striping with distributed parity)
  Increases read performance by using striping and provides data redundancy. The parity is distributed across all the drives in an array, but it does not take up as much space as a complete mirror. Write performance is somewhat reduced relative to RAID-0 because the array must calculate parity information and write it in addition to the data. If one disk in the array fails, the parity information is used to reconstruct data to satisfy I/O requests. In this mode, read performance and resilience are degraded until you replace the failed drive and it is repopulated with data and parity information. RAID-5 is intermediate in expense between RAID-0 and RAID-1.
- RAID-6 (striping with double distributed parity)
  A more resilient variant of RAID-5 that can recover from the loss of two drives in an array. RAID-6 is used when data redundancy and resilience are important, but performance is not. RAID-6 is intermediate in expense between RAID-5 and RAID-1.
- RAID 0+1 (mirroring of striped disks)
  Combines RAID-0 and RAID-1 by mirroring a striped array to provide both increased performance and data redundancy. Failure of a single disk causes one of the mirrors to be unusable until you replace the disk and repopulate it with data. Resilience is degraded while only a single mirror remains available. RAID 0+1 is usually as expensive as or slightly more expensive than RAID-1.
- RAID 1+0 (striping of mirrored disks, or RAID-10)
  Combines RAID-0 and RAID-1 by striping a mirrored array to provide both increased performance and data redundancy. Failure of a single disk causes part of one mirror to be unusable until you replace the disk and repopulate it with data. Resilience is degraded while only a single mirror retains a complete copy of the data. RAID 1+0 is usually as expensive as or slightly more expensive than RAID-1.
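The redundancy trade-offs above translate directly into usable capacity. As a rough sketch, assuming N equal-size drives of S GiB each and ignoring metadata overhead (the function name is illustrative, not part of any tool):

```shell
# Approximate usable capacity per RAID level, for N equal-size
# drives of S GiB each. Metadata overhead is ignored.
usable_capacity() {
  level=$1 n=$2 s=$3
  case $level in
    linear|0) echo $(( n * s )) ;;        # no redundancy overhead
    1)        echo "$s" ;;                # every drive holds a full copy
    5)        echo $(( (n - 1) * s )) ;;  # one drive's worth of parity
    6)        echo $(( (n - 2) * s )) ;;  # two drives' worth of parity
    10)       echo $(( n * s / 2 )) ;;    # mirrored pairs, then striped
  esac
}

usable_capacity 5 4 100   # four 100 GiB drives in RAID-5: prints 300
```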
Creating Software RAID Devices
To create a software RAID device:
- Use the mdadm command to create the MD RAID device:

  sudo mdadm --create md_device --level=RAID_level [options] --raid-devices=N device ...

  For example, to create a RAID-1 device /dev/md0 from /dev/sdf and /dev/sdg:

  sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sd[fg]
  To create a RAID-5 device /dev/md1 from /dev/sdb, /dev/sdc, and /dev/sdd:

  sudo mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sd[bcd]
If you want to include spare devices that are available for expansion, reconfiguration, or replacing failed drives, use the --spare-devices option to specify their number, for example:
  sudo mdadm --create /dev/md1 --level=5 --raid-devices=3 --spare-devices=1 /dev/sd[bcde]
Note:
The sum of the --raid-devices and --spare-devices values must equal the number of devices that you specify.
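The rule in the note can be checked with simple arithmetic before running mdadm. A minimal sketch, using the device names from the example above:

```shell
# The device list must contain exactly raid_devices + spare_devices
# entries, or mdadm will refuse to create the array.
raid_devices=3
spare_devices=1
devices="/dev/sdb /dev/sdc /dev/sdd /dev/sde"

count=$(echo $devices | wc -w)
if [ "$count" -eq $(( raid_devices + spare_devices )) ]; then
  echo "device count OK"
else
  echo "device count mismatch: $count given" >&2
fi
```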
- Add the RAID configuration to /etc/mdadm.conf:

  # mdadm --examine --scan >> /etc/mdadm.conf
Note:
This step is optional. It helps mdadm to assemble the arrays at boot time.
For example, the following entries in /etc/mdadm.conf define the devices and arrays that correspond to /dev/md0 and /dev/md1:

  DEVICE /dev/sd[b-g]
  ARRAY /dev/md0 devices=/dev/sdf,/dev/sdg
  ARRAY /dev/md1 spares=1 devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde
For more examples, see the sample configuration file /usr/share/doc/mdadm-3.2.1/mdadm.conf-example.
Having created an MD RAID device, you can configure and use it in the same way that you would a physical storage device. For example, you can configure it as an LVM physical volume, file system, swap partition, Automatic Storage Management (ASM) disk, or raw device.
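For example, after creating a file system on the device with mkfs, a hypothetical /etc/fstab entry could mount it at boot. The mount point and options below are illustrative, not part of the original example:

```
# /etc/fstab entry for an ext4 file system on the RAID-1 device /dev/md0
/dev/md0    /data    ext4    defaults    0 0
```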
You can view /proc/mdstat to check the status of the MD RAID devices, for example:

  cat /proc/mdstat
  Personalities : [raid1]
  md0 : active raid1 sdg[1] sdf[0]
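The status of each array can also be extracted programmatically. A minimal sketch using awk; the sample text below mirrors the output above, and on a live system you would read /proc/mdstat itself:

```shell
# Sample mdstat-style text; on a real system, use: cat /proc/mdstat
mdstat='Personalities : [raid1]
md0 : active raid1 sdg[1] sdf[0]'

# Array lines begin with the device name (md0, md1, ...); the third
# whitespace-separated field is the array state.
echo "$mdstat" | awk '/^md/ { printf "%s is %s\n", $1, $3 }'   # prints: md0 is active
```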
To display summary and detailed information about MD RAID devices, you can use the --query and --detail options with mdadm.
For more information, see the md(4), mdadm(8), and mdadm.conf(5) manual pages.