Perform this task for each cluster file system you create after your initial Oracle Solaris Cluster installation.
Caution - Be sure that you specify the correct disk device name. Creating a cluster file system destroys any data on the disks. If you specify the wrong device name, you will erase data that you might not intend to delete.
Ensure the following prerequisites have been completed prior to adding an additional cluster file system:
The root role privilege is established on a node in the cluster.
Volume manager software is installed and configured on the cluster.
A device group (such as a Solaris Volume Manager device group) or block disk slice exists on which to create the cluster file system (an example check is shown after this list).
If you used Oracle Solaris Cluster Manager to install data services, one or more cluster file systems already exist if there were sufficient shared disks on which to create the cluster file systems.
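If you want to confirm that a suitable device group already exists before you proceed, one way is to list the device groups that are configured in the cluster with the cldevicegroup command (see the cldevicegroup(1CL) man page). The following is a minimal sketch; the oracle device group name is only an illustration.
phys-schost# cldevicegroup list
phys-schost# cldevicegroup status oracle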
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
Caution - Any data on the disks is destroyed when you create a file system. Be sure that you specify the correct disk device name. If you specify the wrong device name, you might erase data that you did not intend to delete.
Create a file system:
phys-schost# newfs raw-disk-device
The following examples show typical names for the raw-disk-device argument. Note that naming conventions differ for each volume manager.
Solaris Volume Manager: /dev/md/nfs/rdsk/d1 (raw disk device d1 within the nfs disk set)
None (raw disk): /dev/global/rdsk/d1s3 (raw disk device d1s3)
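For instance, if you are creating the file system on a Solaris Volume Manager device d1 in a hypothetical device group named oracle, the command might look like the following:
phys-schost# newfs /dev/md/oracle/rdsk/d1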
On each node in the cluster, create a mount-point directory for the cluster file system. A mount point is required on each node, even if the cluster file system is not accessed on that node.
phys-schost# mkdir -p /global/device-group/mount-point/
device-group: Name of the directory that corresponds to the name of the device group that contains the device.
mount-point: Name of the directory on which to mount the cluster file system.
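For example, using the oracle device group and the logs mount point that appear in the boot-order example below, you would create the directory on each node as follows:
phys-schost# mkdir -p /global/oracle/logs/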
On each node of the cluster, add an entry for the mount point to the /etc/vfstab file. See the vfstab(4) man page for details.
For example, consider the scenario where phys-schost-1 mounts disk device d0 on /global/oracle/ and phys-schost-2 mounts disk device d1 on /global/oracle/logs/. With this configuration, phys-schost-2 can boot and mount /global/oracle/logs/ only after phys-schost-1 boots and mounts /global/oracle/.
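As a minimal sketch, a vfstab entry for the /global/oracle/ file system in the preceding example might look like the following line. It assumes a Solaris Volume Manager device d0 in a device group named oracle and typical UFS mount options; adjust the device paths and options for your configuration. The global mount option is what makes the file system available on all nodes.
/dev/md/oracle/dsk/d0  /dev/md/oracle/rdsk/d0  /global/oracle  ufs  2  yes  global,logging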
On any node in the cluster, run the configuration check utility:
phys-schost# cluster check -k vfstab
The configuration check utility verifies that the mount points exist. The utility also verifies that /etc/vfstab file entries are correct on all nodes of the cluster. If no errors occur, no output is returned.
For more information, see the cluster(1CL) man page.
From any node in the cluster, mount the cluster file system:
phys-schost# mount /global/device-group/mount-point/
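For example, with the /global/oracle/ mount point used earlier:
phys-schost# mount /global/oracle/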
On each node of the cluster, verify that the cluster file system is mounted. You can use either the df command or the mount command to list mounted file systems. For more information, see the df(1M) man page or the mount(1M) man page.
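For example, either of the following commands (using the hypothetical /global/oracle/ mount point) shows whether the cluster file system is mounted on a node:
phys-schost# df -k /global/oracle
phys-schost# mount | grep /global/oracle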