Oracle Solaris Administration: ZFS File Systems (Oracle Solaris 11 Information Library)
Components of a ZFS Storage Pool
The following sections provide detailed information about these storage pool components: disks, disk slices, and files.

Using Disks in a ZFS Storage Pool
The most basic element of a storage pool is physical storage. Physical storage can be any block device of at least 128 MB in size. Typically, this device is a hard drive that is visible to the system in the /dev/dsk directory.
A storage device can be a whole disk (c1t0d0) or an individual slice (c0t0d0s7). The recommended mode of operation is to use an entire disk, in which case the disk does not require special formatting. ZFS formats the disk using an EFI label to contain a single, large slice. When used in this way, the partition table that is displayed by the format command appears similar to the following:
Current partition table (original):
Total disk sectors available: 286722878 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm               34     136.72GB          286722911
  1 unassigned    wm                0          0                    0
  2 unassigned    wm                0          0                    0
  3 unassigned    wm                0          0                    0
  4 unassigned    wm                0          0                    0
  5 unassigned    wm                0          0                    0
  6 unassigned    wm                0          0                    0
  8   reserved    wm        286722912       8.00MB          286739295
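For example, a whole-disk pool can be created with a single command. The following is a minimal sketch in which the pool name tank and the device c1t0d0 are illustrative assumptions; ZFS applies an EFI label such as the one shown above automatically:

# zpool create tank c1t0d0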
Review the following considerations when using whole disks in your ZFS storage pools:
When using a whole disk, the disk is generally named by using the /dev/dsk/cNtNdN naming convention. Some third-party drivers use a different naming convention or place disks in a location other than the /dev/dsk directory. To use these disks, you must manually label the disk and provide a slice to ZFS.
On x86 based systems, the disk must have a valid Solaris fdisk partition. For more information about creating or changing a Solaris fdisk partition, see Chapter 13, x86: Setting Up Disks (Tasks), in Oracle Solaris Administration: Devices and File Systems.
ZFS applies an EFI label when you create a storage pool with whole disks. For more information about EFI labels, see EFI Disk Label in Oracle Solaris Administration: Devices and File Systems.
A disk that is intended for a ZFS root pool must be created with an SMI (VTOC) label, not an EFI label. You can relabel a disk with an SMI label by using the format -e command, or you can use the following command shortcuts to relabel a disk. Note that the shortcut commands do not include error checking.
The following commands can be used on an x86 based system to relabel a disk with an SMI label. The second command creates a single Solaris fdisk partition that uses the entire disk.
x86# format -L vtoc -d c0t1d0
x86# fdisk -B /dev/rdsk/c0t1d0p0
On a SPARC based system, the following command relabels the disk with an SMI label and the default partition table. The s0 slice in the default partition table might not be large enough for the root pool.
sparc# format -L vtoc -d c0t1d0
For more information about converting an EFI label to an SMI (VTOC) label or changing the default partition table, see Chapter 12, SPARC: Setting Up Disks (Tasks), in Oracle Solaris Administration: Devices and File Systems.
Disks can be specified by using either the full path, such as /dev/dsk/c1t0d0, or a shorthand name that consists of the device name within the /dev/dsk directory, such as c1t0d0. For example, the following are valid disk names:
c1t0d0
/dev/dsk/c1t0d0
/dev/foo/disk
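Either form can be used with zpool commands. As an illustrative sketch (the pool name tank is an assumption), the following two commands are equivalent because the shorthand name is resolved within the /dev/dsk directory:

# zpool create tank c1t0d0
# zpool create tank /dev/dsk/c1t0d0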
Using Slices in a ZFS Storage Pool

Disks can be labeled with a traditional Solaris VTOC (SMI) label when you create a storage pool with a disk slice.
For a bootable ZFS root pool, the disks in the pool must contain slices and the disks must be labeled with an SMI label. The simplest configuration would be to put the entire disk capacity in slice 0 and use that slice for the root pool.
On a SPARC based system, a 72-GB disk has 68 GB of usable space located in slice 0 as shown in the following format output:
# format
.
.
.
Specify disk (enter its number): 4
selecting c1t1d0
partition> p
Current partition table (original):
Total disk cylinders available: 14087 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       0 - 14086       68.35GB    (14087/0/0) 143349312
  1 unassigned    wm       0                0         (0/0/0)             0
  2     backup    wm       0 - 14086       68.35GB    (14087/0/0) 143349312
  3 unassigned    wm       0                0         (0/0/0)             0
  4 unassigned    wm       0                0         (0/0/0)             0
  5 unassigned    wm       0                0         (0/0/0)             0
  6 unassigned    wm       0                0         (0/0/0)             0
  7 unassigned    wm       0                0         (0/0/0)             0
On an x86 based system, a 72-GB disk has 68 GB of usable disk space located in slice 0, as shown in the following format output. A small amount of boot information is contained in slice 8. Slice 8 requires no administration and cannot be changed.
# format
.
.
.
selecting c1t0d0
partition> p
Current partition table (original):
Total disk cylinders available: 49779 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       1 - 49778       68.36GB    (49778/0/0) 143360640
  1 unassigned    wu       0                0         (0/0/0)             0
  2     backup    wm       0 - 49778       68.36GB    (49779/0/0) 143363520
  3 unassigned    wu       0                0         (0/0/0)             0
  4 unassigned    wu       0                0         (0/0/0)             0
  5 unassigned    wu       0                0         (0/0/0)             0
  6 unassigned    wu       0                0         (0/0/0)             0
  7 unassigned    wu       0                0         (0/0/0)             0
  8       boot    wu       0 -     0        1.41MB    (1/0/0)          2880
  9 unassigned    wu       0                0         (0/0/0)             0
An fdisk partition also exists on Solaris x86 systems. An fdisk partition is represented by a /dev/dsk/cN[tN]dNpN device name and acts as a container for the disk's available slices. Do not use a cN[tN]dNpN device for a ZFS storage pool component because this configuration is neither tested nor supported.
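As a sketch, creating a pool from a slice looks the same as creating one from a whole disk; the pool name tank and the slice c0t0d0s7 below are assumptions. When given a slice, ZFS uses the existing SMI label and slice boundaries rather than relabeling the disk:

# zpool create tank c0t0d0s7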
Using Files in a ZFS Storage Pool

ZFS also allows you to use files as virtual devices in your storage pool. This feature is aimed primarily at testing and simple experimentation; it is not intended for production use.
If you create a ZFS pool backed by files on a UFS file system, then you are implicitly relying on UFS to guarantee correctness and synchronous semantics.
If you create a ZFS pool backed by files or volumes that are created on another ZFS pool, then the system might deadlock or panic.
However, files can be quite useful when you are first trying out ZFS or experimenting with more complicated configurations when insufficient physical devices are present. All files must be specified as complete paths and must be at least 64 MB in size.
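For example, the following sketch creates a test pool backed by two files; the pool name testpool and the /export/test paths are assumptions. Each file is created with mkfile at 100 MB, which satisfies the 64-MB minimum, and the files are specified as complete paths:

# mkfile 100m /export/test/file1 /export/test/file2
# zpool create testpool /export/test/file1 /export/test/file2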
Considerations for ZFS Storage Pools

Review the following considerations when creating and managing ZFS storage pools.
Using whole physical disks is the easiest way to create ZFS storage pools. ZFS configurations become progressively more complex, from management, reliability, and performance perspectives, when you build pools from disk slices, LUNs in hardware RAID arrays, or volumes presented by software-based volume managers. The following considerations might help you determine how to configure ZFS with other hardware or software storage solutions:
If you construct a ZFS configuration on top of LUNs from hardware RAID arrays, you need to understand the relationship between ZFS redundancy features and the redundancy features offered by the array. Certain configurations might provide adequate redundancy and performance, but other configurations might not.
You can construct logical devices for ZFS using volumes presented by software-based volume managers. However, these configurations are not recommended. Although ZFS functions properly on such devices, performance might be less than optimal.
For additional information about storage pool recommendations, see Chapter 13, Recommended Oracle Solaris ZFS Practices.
Disks are identified both by their path and by their device ID, if available. On systems where device ID information is available, this identification method allows devices to be reconfigured without updating ZFS. Because device ID generation and management can vary by system, export the pool before moving devices, such as moving a disk from one controller to another. A system event, such as a firmware update or other hardware change, might change the device IDs in your ZFS storage pool, which can cause the devices to become unavailable.
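For example, before moving a disk from one controller to another, you might export the pool and re-import it after the move. This sketch assumes a pool named tank:

# zpool export tank
(physically move or recable the devices)
# zpool import tank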