Oracle Solaris Cluster System Administration Guide, Oracle Solaris Cluster 4.1
The cluster file system is a globally available file system that can be read and accessed from any node of the cluster.
Table 5-4 Task Map: Administering Cluster File Systems

Add a cluster file system after the initial Oracle Solaris Cluster installation - See How to Add a Cluster File System.
Remove a cluster file system - See How to Remove a Cluster File System.
Check global mount points in a cluster - See How to Check Global Mounts in a Cluster.
How to Add a Cluster File System

Perform this task for each cluster file system you create after your initial Oracle Solaris Cluster installation.
Caution - Be sure you specify the correct disk device name. Creating a cluster file system destroys any data on the disks. If you specify the wrong device name, you will erase data that you might not intend to delete.
Ensure the following prerequisites have been completed prior to adding an additional cluster file system:
The root role privilege is established on a node in the cluster.
Volume manager software is installed and configured on the cluster.
A device group (such as a Solaris Volume Manager device group) or block disk slice exists on which to create the cluster file system (see the verification sketch after this list).
If you used Oracle Solaris Cluster Manager to install data services, one or more cluster file systems might already exist, provided that sufficient shared disks were available on which to create them.
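To confirm the device-group prerequisite, you can list the configured device groups. The following is a minimal sketch that uses only the cldevicegroup command described in the cldevicegroup(1CL) man page.

phys-schost# cldevicegroup list

The output should include the device group (for example, a Solaris Volume Manager device group such as oracle, the name used in Example 5-22) on which you plan to create the cluster file system.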
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
Tip - For faster file system creation, become the root role on the current primary of the global device for which you create a file system.
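To follow this tip, first identify the current primary of the device group. A minimal sketch, assuming the device group is named oracle as in Example 5-22:

phys-schost# cldevicegroup status oracle

The Primary column of the status output identifies the node on which to become the root role before you create the file system.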
Caution - Any data on the disks is destroyed when you create a file system. Be sure that you specify the correct disk device name. If you specify the wrong device name, you might erase data that you did not intend to delete.
On the cluster, create a UFS file system by using the newfs command.

phys-schost# newfs raw-disk-device
The following examples show names for the raw-disk-device argument. Note that naming conventions differ for each volume manager.

Volume Manager              Sample Disk Device Name      Description
Solaris Volume Manager      /dev/md/oracle/rdsk/d1       Raw disk device d1 within the oracle disk set
None                        /dev/global/rdsk/d1s3        Raw disk device d1s3
On each node in the cluster, create a mount-point directory for the cluster file system. A mount point is required on each node, even if the cluster file system is not accessed on that node.
Tip - For ease of administration, create the mount point in the /global/device-group/ directory. This location enables you to easily distinguish cluster file systems, which are globally available, from local file systems.
phys-schost# mkdir -p /global/device-group/mount-point/
device-group - Name of the directory that corresponds to the name of the device group that contains the device.

mount-point - Name of the directory on which to mount the cluster file system.
On each node in the cluster, add an entry for the mount point to the /etc/vfstab file, and include the global mount option in each entry. See the vfstab(4) man page for details.
For example, consider the scenario where phys-schost-1 mounts disk device d0 on /global/oracle/ and phys-schost-2 mounts disk device d1 on /global/oracle/logs/. With this configuration, phys-schost-2 can boot and mount /global/oracle/logs/ only after phys-schost-1 boots and mounts /global/oracle/.
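As a hedged illustration of this ordering requirement, the /etc/vfstab entries on phys-schost-2 might look like the following sketch. The Solaris Volume Manager device paths are assumptions chosen only to match the scenario (disk set oracle, devices d0 and d1), not values from an actual configuration.

#device                 device                  mount                FS   fsck mount   mount
#to mount               to fsck                 point                type pass at boot options
#
/dev/md/oracle/dsk/d0   /dev/md/oracle/rdsk/d0  /global/oracle       ufs  2    yes     global,logging
/dev/md/oracle/dsk/d1   /dev/md/oracle/rdsk/d1  /global/oracle/logs  ufs  2    yes     global,logging

Because /global/oracle/logs is nested inside /global/oracle, the second entry can be mounted only after its parent file system is available, which is the dependency described above.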
phys-schost# cluster check -k vfstab
The configuration check utility verifies that the mount points exist. The utility also verifies that /etc/vfstab file entries are correct on all nodes of the cluster. If no errors occur, no output is returned.
For more information, see the cluster(1CL) man page.
Mount the cluster file system from any node in the cluster.

phys-schost# mount /global/device-group/mount-point/
On each node of the cluster, verify that the cluster file system is mounted. You can use either the df command or the mount command to list mounted file systems. For more information, see the df(1M) and mount(1M) man pages.
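A minimal verification sketch, assuming the mount point /global/oracle/d1 from Example 5-22:

phys-schost# df -k /global/oracle/d1
phys-schost# mount | grep /global/oracle/d1

Run the same check on every node; each node should report the file system with the global mount option.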
Example 5-22 Creating a UFS Cluster File System
The following example creates a UFS cluster file system on the Solaris Volume Manager volume /dev/md/oracle/rdsk/d1. An entry for the cluster file system is added to the vfstab file on each node. Then from one node the cluster check command is run. After configuration check processing is completed successfully, the cluster file system is mounted from one node and verified on all nodes.
phys-schost# newfs /dev/md/oracle/rdsk/d1
…
phys-schost# mkdir -p /global/oracle/d1
phys-schost# vi /etc/vfstab
#device                device                  mount              FS   fsck mount   mount
#to mount              to fsck                 point              type pass at boot options
#
/dev/md/oracle/dsk/d1  /dev/md/oracle/rdsk/d1  /global/oracle/d1  ufs  2    yes     global,logging
…
phys-schost# cluster check -k vfstab
phys-schost# mount /global/oracle/d1
phys-schost# mount
…
/global/oracle/d1 on /dev/md/oracle/dsk/d1 read/write/setuid/global/logging/largefiles on Sun Oct 3 08:56:16 2005
How to Remove a Cluster File System

You remove a cluster file system by merely unmounting it. To also remove or delete the data, remove the underlying disk device (or metadevice or volume) from the system.
Note - Cluster file systems are automatically unmounted as part of the system shutdown that occurs when you run cluster shutdown to stop the entire cluster. A cluster file system is not unmounted when you run shutdown to stop a single node. However, if the node being shut down is the only node with a connection to the disk, any attempt to access the cluster file system on that disk results in an error.
Ensure that the following prerequisites have been completed prior to unmounting cluster file systems:
The root role privilege is established on a node in the cluster.
The file system is not busy. A file system is considered busy if a user is working in a directory in the file system, or if a program has a file open in that file system. The user or program could be running on any node in the cluster.
Determine which cluster file systems are mounted.

# mount -v
On each node, list the processes that are using the cluster file system, so that you know which processes you are going to stop.

# fuser -c [ -u ] mountpoint

-c - Reports on files that are mount points for file systems and any files within those mounted file systems.

-u - (Optional) Displays the user login name for each process ID.

mountpoint - Specifies the name of the cluster file system for which you want to stop processes.
On each node, stop all processes for the cluster file system. Use your preferred method for stopping processes. If necessary, use the following command to force termination of processes associated with the cluster file system.
# fuser -c -k mountpoint
A SIGKILL is sent to each process that uses the cluster file system.
On each node, verify that no processes are using the file system.

# fuser -c mountpoint
From just one node, unmount the file system.

# umount mountpoint

mountpoint - Specifies the name of the cluster file system that you want to unmount. This can be either the directory name where the cluster file system is mounted or the device-name path of the file system.
(Optional) Edit the /etc/vfstab file to delete the entry for the cluster file system that is being removed. Perform this step on each cluster node that has an entry for this cluster file system in its /etc/vfstab file.
(Optional) Remove the disk device group, metadevice, or volume. See your volume manager documentation for more information.
Example 5-23 Removing a Cluster File System
The following example removes a UFS cluster file system that is mounted on the Solaris Volume Manager metadevice or volume /dev/md/oracle/rdsk/d1.
# mount -v
...
/global/oracle/d1 on /dev/md/oracle/dsk/d1 read/write/setuid/global/logging/largefiles
# fuser -c /global/oracle/d1
/global/oracle/d1: 4006c
# fuser -c -k /global/oracle/d1
/global/oracle/d1: 4006c
# fuser -c /global/oracle/d1
/global/oracle/d1:
# umount /global/oracle/d1

(On each node, remove the highlighted entry:)
# vi /etc/vfstab
#device                device                  mount              FS   fsck mount   mount
#to mount              to fsck                 point              type pass at boot options
#
/dev/md/oracle/dsk/d1  /dev/md/oracle/rdsk/d1  /global/oracle/d1  ufs  2    yes     global,logging
[Save and exit.]
To remove the data on the cluster file system, remove the underlying device. See your volume manager documentation for more information.
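As a hedged sketch of what removing the underlying device might look like with Solaris Volume Manager, the following clears the volume from Example 5-23 by using the metaclear command. The disk-set name oracle and volume d1 are assumptions carried over from that example; your volume manager documentation remains the authoritative reference.

# metaclear -s oracle d1

The metaclear command deletes the volume configuration and makes its data inaccessible, so run it only after the cluster file system has been unmounted on all nodes.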
How to Check Global Mounts in a Cluster

The cluster(1CL) utility verifies the syntax of the entries for cluster file systems in the /etc/vfstab file. If no errors occur, nothing is returned.
Note - Run the cluster check command after making cluster configuration changes, such as removing a cluster file system, that have affected devices or volume management components.
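For example, after removing a cluster file system and deleting its vfstab entries, you might rerun the same check that was used when the file system was added; this sketch simply repeats the cluster check invocation shown earlier in this section.

phys-schost# cluster check -k vfstab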