Oracle Solaris Cluster System Administration Guide (Oracle Solaris Cluster 4.0)
Restoring Cluster Files
You can restore the ZFS root file system to a new disk.
Before you start to restore files or file systems, you need to know the following information.
Which tapes you need
The raw device name on which you are restoring the file system
The type of tape drive you are using
The device name (local or remote) for the tape drive
The partition scheme of any failed disk, because the partitions and file systems must be exactly duplicated on the replacement disk
Table 12-2 Task Map: Restoring Cluster Files
How to Restore the ZFS Root (/) File System (Solaris Volume Manager)
Use this procedure to restore the ZFS root (/) file system to a new disk, such as after replacing a bad root disk. The node being restored should not be booted. Ensure that the cluster is running without errors before performing the restore procedure. UFS is supported, except as a root file system. UFS can be used on metadevices in Solaris Volume Manager metasets on shared disks.
Note - Because you must partition the new disk by using the same format as the failed disk, identify the partitioning scheme before you begin this procedure, and recreate file systems as appropriate.
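For example, if the failed root disk carried an SMI (VTOC) label, one way to duplicate its layout is to save the partition table (before the failure, or from an identically partitioned surviving disk) with prtvtoc and write it to the replacement disk with fmthard. The device names below are placeholders rather than part of this procedure, and this sketch does not apply to disks that use a whole-disk EFI layout.
[Save the partition table of the original disk:]
# prtvtoc /dev/rdsk/c0t0d0s2 > /var/tmp/c0t0d0.vtoc
[Write the saved partition table to the replacement disk:]
# fmthard -s /var/tmp/c0t0d0.vtoc /dev/rdsk/c0t0d0s2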
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
Assume a role that provides solaris.cluster.modify RBAC authorization on a cluster node with access to the disk sets to which the node to be restored is attached. Use a node other than the node that you are restoring.
Remove the node that is being restored from all disk sets. Run this command from a node in the metaset other than the node that you are removing. Because the recovering node is offline, the system displays an RPC: Rpcbind failure - RPC: Timed out error. Ignore this error and continue to the next step.
# metaset -s setname -f -d -h nodelist
-s setname
    Specifies the disk set name.
-f
    Deletes the last host from the disk set.
-d
    Deletes from the disk set.
-h nodelist
    Specifies the name of the node to delete from the disk set.
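As an optional sanity check, you can display the disk set from the surviving node before and after the removal to confirm that the restored node no longer appears in the host list. The disk set name below matches the one used in Example 12-1 and is only illustrative.
[Display the hosts, owner, and drives of the disk set:]
phys-schost-2# metaset -s schost-1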
Replace the failed disk and restore the root (/) file system. To recover the ZFS root pool or root pool snapshots, follow the procedure in How to Replace a Disk in a ZFS Root Pool in Oracle Solaris Administration: ZFS File Systems.
Note - Ensure that you create the /global/.devices/node@nodeid file system.
If the /.globaldevices backup file exists in the backup directory, it is restored along with ZFS root restoration. The file is not created automatically by the globaldevices SMF service.
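The exact commands come from the referenced ZFS documentation, but as a rough sketch, assuming the root pool is named rpool and the replacement disk is c0t0d0s0 (both placeholders), the disk-replacement portion of that procedure looks similar to the following. Monitor resilvering with zpool status, and install the boot blocks as directed by the referenced procedure.
[Replace the failed device in the root pool:]
# zpool replace rpool c0t0d0s0
[Watch the resilver progress:]
# zpool status rpool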
Reboot the node.
# reboot
Replace the disk ID.
# cldevice repair rootdisk
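Example 12-1 below shows this command with a concrete device path. If you want to confirm that the DID instance was updated for the new disk, you can list the device-to-DID mappings afterward; no particular device name is assumed here.
[List DID mappings for all devices:]
# cldevice list -v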
Re-create the state database replicas with the metadb command.
# metadb -c copies -af raw-disk-device
-c copies
    Specifies the number of replicas to create.
raw-disk-device
    Raw disk device on which to create replicas.
-a
    Adds replicas.
See the metadb(1M) man page for more information.
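After the replicas are created, you can list them to verify that the expected number exists on the new disk; the -i option adds a legend that explains the status flags. This check is optional and not part of the documented procedure.
[List state database replicas with a status-flag legend:]
# metadb -i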
From a cluster node other than the restored node, add the restored node to all disk sets.
phys-schost-2# metaset -s setname -a -h nodelist
-a
    Creates and adds the host to the disk set.
The node is rebooted into cluster mode. The cluster is ready to use.
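As an optional final check, which is not part of the documented procedure, you can confirm from any cluster node that the restored node has rejoined the cluster and that its disk paths are healthy.
[Verify node membership and disk-path status:]
# clnode status
# cldevice status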
Example 12-1 Restoring the ZFS Root (/) File System (Solaris Volume Manager)
The following example shows the root (/) file system restored to the node phys-schost-1. The metaset command is run from another node in the cluster, phys-schost-2, to remove and later add back node phys-schost-1 to the disk set schost-1. All other commands are run from phys-schost-1. A new boot block is created on /dev/rdsk/c0t0d0s0, and three state database replicas are recreated on /dev/rdsk/c0t0d0s4. For more information on restoring data, see Repairing Damaged Data in Oracle Solaris Administration: ZFS File Systems.
[Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization on a cluster node other than the node to be restored.]
[Remove the node from the metaset:]
phys-schost-2# metaset -s schost-1 -f -d -h phys-schost-1
[Replace the failed disk and boot the node:]
Restore the root (/) and /usr file system using the procedure in the Solaris system administration documentation
[Reboot:]
# reboot
[Replace the disk ID:]
# cldevice repair /dev/dsk/c0t0d0
[Re-create state database replicas:]
# metadb -c 3 -af /dev/rdsk/c0t0d0s4
[Add the node back to the metaset:]
phys-schost-2# metaset -s schost-1 -a -h phys-schost-1