Oracle Solaris Administration: ZFS File Systems
Chapter 5 Managing ZFS Root Pool Components
Managing ZFS Root Pool Components (Overview)
ZFS Root Pool Space Requirements
ZFS Root Pool Configuration Requirements
Troubleshooting ZFS Root Pool Installation Problems
How to Update Your ZFS Boot Environment
How to Configure a Mirrored Root Pool
Managing Your ZFS Swap and Dump Devices
Adjusting the Sizes of Your ZFS Swap and Dump Devices
Troubleshooting ZFS Dump Device Issues
Booting From a ZFS Root File System
Booting From an Alternate Disk in a Mirrored ZFS Root Pool
Booting From a ZFS Root File System on a SPARC Based System
Booting From a ZFS Root File System on an x86 Based System
Booting For Recovery Purposes in a ZFS Root Environment
How to Boot the System For Recovery Purposes
The following sections provide information about installing and updating a ZFS root pool and configuring a mirrored root pool.
The Oracle Solaris 11 Live CD installation method installs a default ZFS root pool on a single disk. With the Oracle Solaris 11 automated installation (AI) method, you can create an AI manifest to identify the disk or mirrored disks for the ZFS root pool.
The AI installer provides the flexibility of installing a ZFS root pool on the default boot disk or on a target disk that you identify. You can specify the logical device, such as c1t0d0s0, or the physical device path. In addition, you can use the MPxIO identifier or the device ID for the device to be installed.
After the installation, review your ZFS storage pool and file system information, which can vary by installation type and customizations. For example:
# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c1t3d0s0  ONLINE       0     0     0

errors: No known data errors

# zfs list
NAME                      USED  AVAIL  REFER  MOUNTPOINT
rpool                    6.49G  60.4G    40K  /rpool
rpool/ROOT               3.46G  60.4G    31K  legacy
rpool/ROOT/solaris       3.46G  60.4G  3.16G  /
rpool/ROOT/solaris/var    303M  60.4G   216M  /var
rpool/dump               2.00G  60.5G  1.94G  -
rpool/export             96.5K  60.4G    32K  /rpool/export
rpool/export/home        64.5K  60.4G    32K  /rpool/export/home
rpool/export/home/admin  32.5K  60.4G  32.5K  /rpool/export/home/admin
rpool/swap               1.03G  60.5G  1.00G  -
Review your ZFS BE information. For example:
# beadm list
BE      Active Mountpoint Space Policy Created
--      ------ ---------- ----- ------ -------
solaris NR     /          3.85G static 2011-09-26 08:37
In the above output, the Active field indicates whether the BE is active now (N), active on reboot (R), or both (NR).
The ZFS boot environment (BE) is named solaris by default. You can identify your BEs by using the beadm list command. For example:
# beadm list
BE      Active Mountpoint Space Policy Created
--      ------ ---------- ----- ------ -------
solaris NR     /          8.41G static 2011-01-13 15:31
In the above output, NR means the BE is active now and will be the active BE on reboot.
You can use the pkg update command to update your ZFS boot environment. When you update your ZFS BE in this way, a new BE is created and activated automatically, unless the changes to the existing BE are very minimal.
# pkg update

            DOWNLOAD        PKGS        FILES     XFER (MB)
Completed    707/707  10529/10529  194.9/194.9
.
.
.
A new BE, solaris-1, is created automatically and activated.
# init 6
.
.
.
# beadm list
BE        Active Mountpoint Space Policy Created
--        ------ ---------- ----- ------ -------
solaris   -      -          6.25M static 2011-09-26 08:37
solaris-1 NR     /          3.92G static 2011-09-26 09:32
If you need to return to the original BE, activate it and reboot. For example:

# beadm activate solaris
# init 6
You might need to copy or access a file from another BE for recovery purposes.
# beadm mount solaris-1 /mnt
# ls /mnt
bin        export    media     pkg        rpool     tmp
boot       home      mine      platform   sbin      usr
dev        import    mnt       proc       scde      var
devices    java      net       project    shared
doe        kernel    nfs4      re         src
etc        lib       opt       root       system
# beadm umount solaris-1
If you do not configure a mirrored root pool during an automatic installation, you can easily configure a mirrored root pool after the installation.
For information about replacing a disk in a root pool, see How to Replace a Disk in a ZFS Root Pool.
# zpool status rpool
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c2t0d0s0  ONLINE       0     0     0

errors: No known data errors
SPARC: Confirm that the disk has an SMI (VTOC) disk label and a slice 0. If you need to relabel the disk and create a slice 0, see Creating a Disk Slice for a ZFS Root File System in Oracle Solaris Administration: Devices and File Systems.
x86: Confirm that the disk has an fdisk partition, an SMI disk label, and a slice 0. If you need to repartition the disk and create a slice 0, see Creating a Disk Slice for a ZFS Root File System in Oracle Solaris Administration: Devices and File Systems.
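On either platform, one way to check the existing label is to print the disk's VTOC with the prtvtoc command. The device name below assumes the second disk used later in this procedure (c2t1d0), and slice 2 is used only because it conventionally covers the whole disk on an SMI-labeled device:

# prtvtoc /dev/rdsk/c2t1d0s2

If the output lists a partition 0, the disk already has a usable slice 0. Otherwise, relabel the disk as described in the references above.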
# zpool attach rpool c2t0d0s0 c2t1d0s0
Make sure to wait until resilver is done before rebooting.
# zpool status rpool
  pool: rpool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Thu Sep 29 18:09:09 2011
    1.55G scanned out of 5.36G at 36.9M/s, 0h1m to go
    1.55G resilvered, 28.91% done
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c2t0d0s0  ONLINE       0     0     0
            c2t1d0s0  ONLINE       0     0     0  (resilvering)

errors: No known data errors
In the above output, the resilvering process is not complete. Resilvering is complete when you see messages similar to the following:
resilvered 5.36G in 0h10m with 0 errors on Thu Sep 29 18:19:09 2011
SPARC: Set up the system to boot automatically from the new disk, either by using the eeprom command or the setenv command from the boot PROM, as shown in the example below.
x86: Reconfigure the system BIOS.
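For example, on a SPARC based system you can set the default boot device either from the running OS with the eeprom command or from the ok prompt with setenv. The device path below is the path to the c2t1d0 disk used elsewhere in this chapter and will differ on your system:

# eeprom boot-device=/pci@1f,700000/scsi@2/disk@1,0

Or, from the boot PROM:

ok setenv boot-device /pci@1f,700000/scsi@2/disk@1,0
ok printenv boot-device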
You might need to replace a disk in the root pool for the following reasons:
The root pool disk is too small and you want to replace it with a larger disk.
The root pool disk is failing. In a non-redundant pool, if the disk is failing such that the system won't boot, you must boot from alternate media, such as a CD or the network, before you replace the root pool disk (a SPARC example follows this list).
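For example, on a SPARC based system, booting from local installation media or from the network for recovery might look like the following; which of these boot sources is available depends on your configuration:

ok boot cdrom
ok boot net

On an x86 based system, you would instead boot from installation media or a network install server through the system BIOS boot menu.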
In a mirrored root pool configuration, you might be able to attempt a disk replacement without having to boot from alternate media. You can replace a failed disk by using the zpool replace command or, if you have an additional disk, you can use the zpool attach command. See the steps below for an example of attaching an additional disk and detaching a root pool disk.
Systems with SATA disks require that you offline and unconfigure a disk before attempting the zpool replace operation to replace a failed disk. For example:
# zpool offline rpool c1t0d0s0
# cfgadm -c unconfigure c1::dsk/c1t0d0
<Physically remove failed disk c1t0d0>
<Physically insert replacement disk c1t0d0>
# cfgadm -c configure c1::dsk/c1t0d0
<Confirm that the new disk has an SMI label and a slice 0>
# zpool replace rpool c1t0d0s0
# zpool online rpool c1t0d0s0
# zpool status rpool
<Let disk resilver before installing the boot blocks>
SPARC# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0
x86# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0
On some hardware, you do not have to online or reconfigure the replacement disk after it is inserted.
For information about relabeling a disk that is intended for the root pool, see How to Label a Disk in Oracle Solaris Administration: Devices and File Systems.
For example:
# zpool attach rpool c2t0d0s0 c2t1d0s0
Make sure to wait until resilver is done before rebooting.
For example:
# zpool status rpool
  pool: rpool
 state: ONLINE
  scan: resilvered 5.36G in 0h2m with 0 errors on Thu Sep 29 18:11:53 2011
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c2t0d0s0  ONLINE       0     0     0
            c2t1d0s0  ONLINE       0     0     0

errors: No known data errors
For example, on a SPARC based system:
ok boot /pci@1f,700000/scsi@2/disk@1,0
Identify the boot device pathnames of the current disk and the new disk so that you can test booting from the replacement disk and, if necessary, boot manually from the existing disk if the replacement disk fails (one way to look up these pathnames is shown below). In the example below, the current root pool disk (c2t0d0s0) is:
/pci@1f,700000/scsi@2/disk@0,0
In the example below, the replacement boot disk (c2t1d0s0) is:
boot /pci@1f,700000/scsi@2/disk@1,0
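One way to map a cXtYdZsN device name to its physical pathname is to list its entry under /dev/dsk, which is a symbolic link into the /devices tree. The device name below assumes the replacement disk in this example:

# ls -l /dev/dsk/c2t1d0s0

The link target shows the physical device path. Note that on SPARC systems the boot PROM path typically uses a disk@ node name where the /devices path shows the driver name, such as sd@.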
For example:
# zpool detach rpool c2t0d0s0
SPARC: Set up the system to boot automatically from the new disk, either by using the eeprom command or the setenv command from the boot PROM.
x86: Reconfigure the system BIOS.
If you want to re-create your existing BE in another root pool, follow the steps below. You can modify the steps based on whether you want two root pools with similar BEs that have independent swap and dump devices or whether you just want a BE in another root pool that shares the swap and dump devices.
After you activate and boot from the new BE in the second root pool, it will have no information about the previous BE in the first root pool. If you want to boot back to the original BE, you will need to boot the system manually from the original root pool's boot disk.
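For example, on a SPARC based system you might boot the original root pool's disk by its boot PROM device alias or by its full device path; the alias disk0 below is hypothetical and depends on your hardware. You can list the available aliases at the ok prompt with the devalias command.

ok boot disk0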
Create the second root pool. For example:

# zpool create rpool2 c4t2d0s0

Create the new BE in the second root pool. For example:

# beadm create -p rpool2 solaris2

Set the bootfs property on the second root pool to the new BE's root dataset. For example:

# zpool set bootfs=rpool2/ROOT/solaris2 rpool2

Activate the new BE. For example:

# beadm activate solaris2

Boot from the new BE, booting the system specifically from the second root pool's boot disk. For example, on a SPARC based system where disk2 is the boot PROM alias for that disk:

ok boot disk2
Your system should be running under the new BE.
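To confirm that you are running from the new BE in the second root pool, you might list the BEs and check the pool's bootfs property. Assuming the names used above, solaris2 should show the N and R flags, and rpool2 should report rpool2/ROOT/solaris2:

# beadm list
# zpool get bootfs rpool2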
If you want the new root pool to have its own swap device, create a swap volume. For example:

# zfs create -V 4g rpool2/swap
Add an entry for the new swap volume to the /etc/vfstab file, similar to the following:

/dev/zvol/dsk/rpool2/swap    -    -    swap    -    no    -
If you want the new root pool to have its own dump device, create a dump volume and reset the dump device. For example:

# zfs create -V 4g rpool2/dump
# dumpadm -d /dev/zvol/dsk/rpool2/dump
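The vfstab entry makes the new swap volume available at the next boot. To start using it immediately and to confirm the dump configuration, you could run the standard swap and dumpadm utilities; the device name below assumes the rpool2 swap volume created above:

# swap -a /dev/zvol/dsk/rpool2/swap
# swap -l
# dumpadm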
SPARC – Set up the system to boot automatically from the new disk, either by using the eeprom command or the setenv command from the boot PROM.
x86 – Reconfigure the system BIOS.
# init 6