Oracle Solaris Cluster Data Service for Network File System (NFS) Guide, Oracle Solaris Cluster 4.0
1. Installing and Configuring HA for NFS
Overview of the Installation and Configuration Process for HA for NFS
Planning the HA for NFS Installation and Configuration
Service Management Facility Restrictions
Installing the HA for NFS Package
How to Install the HA for NFS Package
Registering and Configuring HA for NFS
Setting HA for NFS Extension Properties
Tools for Registering and Configuring HA for NFS
How to Register and Configure HA for NFS (clsetup)
How to Register and Configure HA for NFS (Command Line Interface)
How to Change Share Options on an NFS File System
How to Dynamically Update Shared Paths on an NFS File System
How to Tune HA for NFS Method Timeouts
Configuring SUNW.HAStoragePlus Resource Type
How to Set Up the HAStoragePlus Resource Type for an NFS-Exported ZFS File System
Tuning the HA for NFS Fault Monitor
Operations of HA for NFS Fault Monitor During a Probe
NFS System Fault Monitoring Process
NFS Resource Fault Monitoring Process
Upgrading the SUNW.nfs Resource Type
Information for Registering the New Resource Type Version
Information for Migrating Existing Instances of the Resource Type
This section contains the information that you need to plan the installation and configuration of HA for NFS.
The following Service Management Facility (SMF) services are related to NFS.
/network/nfs/cbd
/network/nfs/mapid
/network/nfs/server
/network/nfs/rquota
/network/nfs/client
/network/nfs/status
/network/nfs/nlockmgr
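On a cluster node, you can check the current state of these services with the svcs command. The following transcript is illustrative; the timestamps in the output will differ on your system.

```
# List the NFS-related SMF services and their states.
$ svcs network/nfs/server network/nfs/status network/nfs/nlockmgr
STATE          STIME    FMRI
online         10:15:04 svc:/network/nfs/status
online         10:15:04 svc:/network/nfs/nlockmgr
online         10:15:05 svc:/network/nfs/server
```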
The HA for NFS data service sets the property application/auto_enable to FALSE and the property startd/duration to transient for three of these services.
/network/nfs/server
/network/nfs/status
/network/nfs/nlockmgr
These property settings have the following consequences for these services.
Because application/auto_enable is set to FALSE, enabling a service that depends on one of these services does not automatically enable that service.
Because startd/duration is set to transient, SMF does not restart these services, or the daemons that are associated with them, in the event of any failure.
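You can confirm that the data service has applied these settings by querying each property with svcprop. The commands below show the check for nfs/server; repeat them for nfs/status and nfs/nlockmgr.

```
# Verify the property values that HA for NFS sets on nfs/server.
$ svcprop -p application/auto_enable svc:/network/nfs/server
false
$ svcprop -p startd/duration svc:/network/nfs/server
transient
```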
If you are mounting file systems on the cluster nodes from external NFS servers, such as NAS filers, and you are using the NFSv3 protocol, you cannot run NFS client mounts and the HA for NFS data service on the same cluster node. If you do, certain HA for NFS data-service activities might cause the NFS daemons to stop and restart, interrupting NFS services. However, you can safely run the HA for NFS data service if you use the NFSv4 protocol to mount external NFS file systems on the cluster nodes.
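If you must mount a file system from an external NFS server on a node that also runs HA for NFS, specify the NFSv4 protocol explicitly. The host name and paths in this sketch are placeholders.

```
# Mount an external NAS file system using NFSv4, which can safely
# coexist with HA for NFS on the same cluster node.
# "filer" and the paths shown are placeholders.
$ mount -F nfs -o vers=4 filer:/export/data /mnt/data
```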
Do not use the loopback file system (LOFS) if both conditions in the following list are met:
HA for NFS is configured on a highly available local file system.
The automountd daemon is running.
If both of these conditions are met, LOFS must be disabled to avoid switchover problems or other failures. If only one of these conditions is met, it is safe to enable LOFS.
If you require both LOFS and the automountd daemon to be enabled, exclude from the automounter map all files that are part of the highly available local file system that is exported by HA for NFS.
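One way to exclude such a path is to add a -null entry for it to the /etc/auto_master map, which cancels any automounter map entry for that directory. The path shown below is hypothetical; substitute the mount point of the file system that HA for NFS exports.

```
# /etc/auto_master excerpt: prevent the automounter from managing
# the highly available local file system exported by HA for NFS.
# /global/nfsdata is a placeholder for your exported path.
/global/nfsdata  -null
```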
If you are using ZFS as the exported file system, you must set the sharenfs property to off.
To set the sharenfs property to off, run the following command.
$ zfs set sharenfs=off file_system/volume
To verify that the sharenfs property is set to off, run the following command.
$ zfs get sharenfs file_system/volume
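For example, with a hypothetical pool and dataset named tank/export, the full sequence looks like the following; the output columns are standard for zfs get.

```
$ zfs set sharenfs=off tank/export
$ zfs get sharenfs tank/export
NAME         PROPERTY  VALUE  SOURCE
tank/export  sharenfs  off    local
```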