Note:
- This tutorial is available in an Oracle-provided free lab environment.
- It uses example values for Oracle Cloud Infrastructure credentials, tenancy, and compartments. When completing your lab, substitute these values with ones specific to your cloud environment.
Use Oracle Cluster File System Tools on Oracle Linux
Introduction
Oracle Cluster File System 2 (OCFS2) is a general-purpose clustered file system used in clustered environments to increase storage performance and availability. In Oracle Cloud Infrastructure, you can deploy OCFS2 clustered file systems on block storage volumes attached to instances in read/write, shareable mode.
Note: For optimal OCFS2 file system performance, reduce the number of files within the OCFS2 file system. Applications such as Oracle E-Business Suite (EBS) or Oracle WebCenter Content (WCC) should use a separate NFS file system for directories containing large numbers of temporary files. During runtime, system administrators should actively archive, purge, or move any non-current files to one or more separate subdirectories or file systems while regularly monitoring the OCFS2 file system's file and inode usage.
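As a quick illustration of that monitoring, standard tools report file and inode usage. The paths below are placeholders (this tutorial mounts the volume at /ocfs2 later, and the temporary-file subdirectory is only an example):

# Report inode usage on the OCFS2 mount point (example path).
df -i /ocfs2

# Count the files under a directory that accumulates temporary files (example path).
sudo find /ocfs2/appltmp -type f | wc -l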
This tutorial provides instructions on using ocfs2-tools
to deploy and test a two-node Oracle Cluster File System version 2 (OCFS2) on Oracle Cloud Infrastructure.
Objectives
In this tutorial, you’ll learn how to:
- Prepare for an OCFS2 configuration
- Configure the security list
- Create and attach a block volume
- Install or upgrade the software required for OCFS2
- Configure the cluster layout
- Configure and start the O2CB cluster stack service
- Create an OCFS2 volume
- Mount an OCFS2 volume
Prerequisites
- Two Oracle Linux systems running the UEK kernel
- Each system should have Oracle Linux installed and configured with:
  - A non-root user account with sudo permissions
- A single 50 GB block volume attached to each instance using iSCSI as read/write and shareable, for use with OCFS2
- OCI ingress rules allowing TCP and UDP traffic on port 7777
Security lists control the traffic in and out of the various subnets associated with the VCN. When configuring an OCFS2 cluster, add an ingress rule allowing access to instances through TCP and UDP port 7777.
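If you manage the rule from the command line rather than the Console, a minimal sketch with the OCI CLI might look like the following. The security list OCID and the source CIDR are placeholders, and note that this update call replaces the security list's existing ingress rules, so include your current rules in the file as well:

# Placeholder values: substitute your security list OCID and subnet CIDR.
cat << 'EOF' > ocfs2-ingress.json
[
  {"protocol": "6",  "source": "10.0.0.0/24",
   "tcpOptions": {"destinationPortRange": {"min": 7777, "max": 7777}}},
  {"protocol": "17", "source": "10.0.0.0/24",
   "udpOptions": {"destinationPortRange": {"min": 7777, "max": 7777}}}
]
EOF

oci network security-list update \
  --security-list-id ocid1.securitylist.oc1..exampleuniqueID \
  --ingress-security-rules file://ocfs2-ingress.json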
Deploy Oracle Linux
Note: If running in your own tenancy, read the linux-virt-labs
GitHub project README.md and complete the prerequisites before deploying the lab environment.
- Open a terminal on the Luna Desktop.

- Clone the linux-virt-labs GitHub project.

git clone https://github.com/oracle-devrel/linux-virt-labs.git
- Change into the working directory.

cd linux-virt-labs/ol
- Install the required collections.

ansible-galaxy collection install -r requirements.yml
- Update the Oracle Linux instance configuration.

cat << EOF | tee instances.yml > /dev/null
compute_instances:
  1:
    instance_name: "ol-sys0"
    type: "server"
    add_bv: true
  2:
    instance_name: "ol-sys1"
    type: "server"
    add_bv: true
passwordless_ssh: true
volume_type: "iscsi"
use_ocfs2: true
EOF
- Deploy the lab environment.

ansible-playbook create_instance.yml -e localhost_python_interpreter="/usr/bin/python3.6" -e "@instances.yml"

The free lab environment requires the extra variable localhost_python_interpreter, which sets ansible_python_interpreter for plays running on localhost. This variable is needed because the environment installs the RPM package for the Oracle Cloud Infrastructure SDK for Python, located under the python3.6 modules.

The default deployment shape uses the AMD CPU and Oracle Linux 8. To use an Intel CPU or Oracle Linux 9, add -e instance_shape="VM.Standard3.Flex" or -e os_version="9" to the deployment command.

Important: Wait for the playbook to run successfully and reach the pause task. At this stage of the playbook, the installation of Oracle Linux is complete, and the instances are ready. Take note of the previous play, which prints the public and private IP addresses of the nodes it deploys and any other deployment information needed while running the lab.
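For example, a deployment that selects the Intel shape and Oracle Linux 9 (a variation on the command above; not needed in the free lab environment) would look like this:

ansible-playbook create_instance.yml -e localhost_python_interpreter="/usr/bin/python3.6" -e "@instances.yml" -e instance_shape="VM.Standard3.Flex" -e os_version="9"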
Install the Cluster Software
- Open a terminal and connect using SSH to the ol-sys0 instance.

ssh oracle@<ip_address_of_instance>

Substitute your own cluster names, OCFS2 volume names, disk labels, instance host names, and private IP addresses where appropriate.
- Install the OCFS2 Tools packages on both instances.

for host in ol-sys0 ol-sys1
do
  ssh $host "sudo dnf install -y ocfs2-tools"
done
- Configure the firewall rules on both instances.

for host in ol-sys0 ol-sys1
do
  ssh $host "sudo firewall-cmd --add-port=7777/tcp --add-port=7777/udp --permanent; sudo firewall-cmd --complete-reload"
done

If you use iSCSI with OCFS2, you must also enable port 3260 for TCP using --add-port=3260/tcp, as shown in the example after this list.
- Check that both cluster members have the same kernel version.

for host in ol-sys0 ol-sys1
do
  ssh $host "echo $host: \$(uname -r)"
done
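For reference, the additional iSCSI rule mentioned above follows the same pattern; a minimal example for both nodes:

for host in ol-sys0 ol-sys1
do
  ssh $host "sudo firewall-cmd --add-port=3260/tcp --permanent; sudo firewall-cmd --complete-reload"
done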
Configure the Cluster Layout
- From ol-sys0, create the cluster.

sudo o2cb add-cluster ociocfs2
- List the cluster information in the cluster layout.

sudo o2cb list-cluster ociocfs2

Sample output:

cluster:
        node_count = 0
        heartbeat_mode = local
        name = ociocfs2

OCFS2 writes this information to the /etc/ocfs2/cluster.conf file.

Note: The default heartbeat mode is local.
- Add ol-sys0 to the cluster.

sudo o2cb add-node ociocfs2 ol-sys0 --ip $(hostname -i)
- Add ol-sys1 to the cluster.

sudo o2cb add-node ociocfs2 ol-sys1 --ip $(ssh ol-sys1 "hostname -i")
- List the cluster information.

sudo o2cb list-cluster ociocfs2

Sample output:

node:
        number = 0
        name = ol-sys0
        ip_address = 10.0.0.150
        ip_port = 7777
        cluster = ociocfs2

node:
        number = 1
        name = ol-sys1
        ip_address = 10.0.0.151
        ip_port = 7777
        cluster = ociocfs2

cluster:
        node_count = 2
        heartbeat_mode = local
        name = ociocfs2
- Show the contents of the /etc/ocfs2/cluster.conf file.

cat /etc/ocfs2/cluster.conf
- Create a cluster configuration file on ol-sys1.

ssh ol-sys1 "sudo mkdir -p /etc/ocfs2; sudo tee -a /etc/ocfs2/cluster.conf" < /etc/ocfs2/cluster.conf > /dev/null
Configure and Start the O2CB Cluster Stack
- On ol-sys0, get the list of options available for the O2CB program.

sudo /sbin/o2cb.init

Command usage output:

Usage: /sbin/o2cb.init {start|stop|restart|force-reload|enable|disable|configure|load|unload|online|offline|force-offline|status|online-status}
- Configure the node.

The configuration process prompts you for additional information. Some of the parameters you need to set are the following:

  - Answer y to "Load O2CB driver on boot"
  - Accept the default (press Enter), "o2cb", as the cluster stack
  - Enter ociocfs2 as the cluster to start on boot
  - Accept the defaults (press Enter) for all other queries

sudo /sbin/o2cb.init configure

Command output:

Load O2CB driver on boot (y/n) [n]: y
Cluster stack backing O2CB [o2cb]:
Cluster to start on boot (Enter "none" to clear) [ocfs2]: ociocfs2
Specify heartbeat dead threshold (>=7) [31]:
Specify network idle timeout in ms (>=5000) [30000]:
Specify network keepalive delay in ms (>=1000) [2000]:
Specify network reconnect delay in ms (>=2000) [2000]:
Writing O2CB configuration: OK
checking debugfs...
Loading stack plugin "o2cb": OK
Loading filesystem "ocfs2_dlmfs": OK
Creating directory '/dlm': OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Setting cluster stack "o2cb": OK
- Run the same command and enter the identical responses on ol-sys1.

ssh ol-sys1 "sudo /sbin/o2cb.init configure"
- Check the cluster status of both members.

for host in ol-sys0 ol-sys1
do
  echo -e "$host:\n"
  ssh $host "sudo /sbin/o2cb.init status"
  echo -e "\n"
done

The output shows that the O2CB cluster is online; however, the O2CB heartbeat is inactive. The heartbeat becomes active after mounting a disk volume.
- Enable the O2CB and OCFS2 services on each cluster member.

for host in ol-sys0 ol-sys1
do
  ssh $host "sudo systemctl enable o2cb"
done

for host in ol-sys0 ol-sys1
do
  ssh $host "sudo systemctl enable ocfs2"
done
- Add the following kernel settings to each cluster member.

for host in ol-sys0 ol-sys1
do
  ssh $host "printf 'kernel.panic = 30\nkernel.panic_on_oops = 1\n' | sudo tee -a /etc/sysctl.d/99-sysctl.conf > /dev/null"
done
- Implement the kernel changes immediately on both members.

for host in ol-sys0 ol-sys1
do
  ssh $host "sudo sysctl -p"
done
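To confirm the new values are active on both members, you can query them directly; a quick optional check:

for host in ol-sys0 ol-sys1
do
  ssh $host "sysctl kernel.panic kernel.panic_on_oops"
done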
Create OCFS2 Volumes
Create different types of OCFS2 volumes on the available block volume. Enter y
when prompted to overwrite the existing OCFS2 partition.
- Create an OCFS2 file system.

sudo mkfs.ocfs2 /dev/sdb

Note: Review the default values reported in the output (you can also inspect them afterward, as shown in the example following this list):

  - Features
  - Block size and cluster size
  - Node slots
  - Journal size
- Create a file system with the -T mail option.

  - Specify this type when using the file system as a mail server.
  - Mail servers perform many metadata changes to many small files, which requires using a large journal.

sudo mkfs.ocfs2 -T mail /dev/sdb

Note: Review the output and note the larger journal size.
- Create a file system with the -T vmstore option.

  - Specify this type when you intend to store virtual machine images.
  - These file types are sparsely allocated large files and require moderate metadata updates.

sudo mkfs.ocfs2 -T vmstore /dev/sdb

Note: Review the output and note the differences from the default file system:

  - Cluster size
  - Cluster groups
  - Extent allocator size
  - Journal size
- Create a file system with the -T datafiles option.

  - Specify this type when you intend to use the file system for database files.
  - These file types use fewer fully allocated large files, with fewer metadata changes, and do not benefit from a large journal.

sudo mkfs.ocfs2 -T datafiles /dev/sdb

Note: Review the output and note the differences in the journal size.
- Create a file system with the label ocfs2vol.

sudo mkfs.ocfs2 -L "ocfs2vol" /dev/sdb
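After the final mkfs.ocfs2 run, you can verify the label and review the on-disk parameters (block size, cluster size, node slots, and features) without reformatting. This optional check uses blkid and the stats command of debugfs.ocfs2:

# Show the label and file system type reported by the device.
sudo blkid /dev/sdb

# Dump the OCFS2 superblock parameters.
sudo debugfs.ocfs2 -R "stats" /dev/sdb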
Mount and Test the OCFS2 Volume
Mount the clustered OCFS2 volume on ol-sys0 and ol-sys1, and then create and modify a shared file from one host and verify the changes from the other.
- Mount the OCFS2 volume on ol-sys0.

  - Make a mount point for the OCFS2 volume.

sudo mkdir /ocfs2

  - Add an entry to the fstab file.

echo '/dev/sdb /ocfs2 ocfs2 _netdev,defaults 0 0' | sudo tee -a /etc/fstab > /dev/null

Where /ocfs2 is the volume's mount point, and _netdev tells the system that the file system requires network access, so it mounts the volume at boot time only after networking is available.

  - Mount the OCFS2 volume.

sudo systemctl daemon-reload
sudo mount -a

  - Display the status of the O2CB heartbeat mode.

sudo /sbin/o2cb.init status

Note: After mounting the volume, the output shows that the heartbeat mode is active.

  - Create a test file in the mount point directory.

echo "File created on ol-sys0" | sudo tee /ocfs2/shared.txt > /dev/null

The tee command reads from standard input and writes the output to the shared.txt file.

  - View the contents of the file.

sudo cat /ocfs2/shared.txt
- Mount the OCFS2 volume on ol-sys1.
  - Make a mount point for the OCFS2 volume.

ssh ol-sys1 "sudo mkdir /ocfs2"

  - Mount the OCFS2 volume by label at the mount point.

ssh ol-sys1 "sudo mount -L ocfs2vol /ocfs2"
If the mount command fails with a can't find LABEL error message, then:

    - Display the /proc/partitions file.

ssh ol-sys1 "sudo cat /proc/partitions"

The /proc/partitions file displays a table of partitioned devices.

    - If the sdb partition is not listed, use the partprobe command on /dev/sdb to inform the OS of partition table changes.

ssh ol-sys1 "sudo partprobe /dev/sdb"

    - Retry displaying the partition table.

ssh ol-sys1 "sudo cat /proc/partitions"

Confirm sdb appears in the table.

    - Retry mounting the volume.

ssh ol-sys1 "sudo mount -L ocfs2vol /ocfs2"
  - Add an entry to the fstab file.

ssh ol-sys1 "echo '/dev/sdb /ocfs2 ocfs2 _netdev,defaults 0 0' | sudo tee -a /etc/fstab > /dev/null"

This entry enables the mount point to automount on system startup.
  - List the contents of the mount point directory.

ssh ol-sys1 "sudo ls /ocfs2"

The output displays the shared.txt file and verifies that OCFS2 shares the clustered file system between both cluster members.
  - Modify the contents of the shared.txt file.

ssh ol-sys1 "echo 'Modified on ol-sys1' | sudo tee -a /ocfs2/shared.txt > /dev/null"
- From ol-sys0, display the contents of the shared.txt file.

sudo cat /ocfs2/shared.txt

The output showing the added text within the file's contents confirms both cluster members have read/write access.
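As an optional final check (not part of the original procedure), you can confirm from ol-sys0 that the volume is mounted and the O2CB heartbeat is active on both nodes:

for host in ol-sys0 ol-sys1
do
  ssh $host "findmnt /ocfs2; sudo /sbin/o2cb.init status | grep -i heartbeat"
done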
Next Steps
This tutorial introduced you to using OCFS2 and the benefits of a clustered file system. Check out the Oracle Linux documentation to learn more about OCFS2 and find additional learning opportunities at the Oracle Linux Training Station.
Related Links
- Oracle Linux Documentation
- Managing the Oracle Cluster File System Version 2
- Oracle Learning Library
- Oracle Linux Training Station
More Learning Resources
Explore other labs on docs.oracle.com/learn or access more free learning content on the Oracle Learning YouTube channel. Additionally, visit education.oracle.com/learning-explorer to become an Oracle Learning Explorer.
For product documentation, visit Oracle Help Center.
Use Oracle Cluster File System Tools on Oracle Linux
F54500-06
Copyright ©2022, Oracle and/or its affiliates.