6 Hyperconverged Infrastructure Deployment Using GlusterFS Storage
Note:
If you are deploying a self-hosted engine as hyperconverged infrastructure with GlusterFS storage, you must deploy GlusterFS before you deploy the self-hosted engine or any KVM hosts. For more information about using GlusterFS, including prerequisites, see the Oracle Linux GlusterFS documentation.

Oracle Linux Virtualization Manager is integrated with GlusterFS, an open source scale-out distributed filesystem, to provide a hyperconverged infrastructure (HCI) cluster where both compute and storage are provided from the same hosts. The HCI cluster with Gluster storage uses DAS disks to provide shared volumes and implements a KVM host on each node. The Gluster volumes are used as storage domains in the Manager to store the virtual machine images, and the Manager runs as a self-hosted engine within a virtual machine on these hosts.
For instructions on creating a GlusterFS storage domain, refer to the My Oracle Support (MOS) article How to Create Glusterfs Storage Domain (Doc ID 2679824.1).
Important:
You must deploy GlusterFS before you deploy the self-hosted engine or any KVM hosts. For more information about using GlusterFS, including prerequisites, see the Oracle Linux GlusterFS documentation.

To deploy Oracle Linux Virtualization Manager in an HCI architecture, you need three KVM hosts with local disks. These disks can be combined into a RAID array or used individually as JBOD. All KVM hosts must have the same number of disks, and corresponding disks must be the same size on each host. If you want more than three KVM hosts, add them in multiples of three.
For example, two valid disk layouts:

Host 1            Host 2            Host 3
disk 1 - 250GB    disk 1 - 250GB    disk 1 - 250GB
disk 2 - 2TB      disk 2 - 2TB      disk 2 - 2TB

Host 1            Host 2            Host 3
disk 1 - 250GB    disk 1 - 250GB    disk 1 - 250GB
disks 2-8 - 4TB   disks 2-8 - 4TB   disks 2-8 - 4TB
Configure KVM Hosts for HCI Deployment
Before you can create Gluster volumes or deploy the Engine on the hyperconverged hosts, you must perform a fresh installation of Oracle Linux 8.8 (or later) and enable the required repositories. For detailed instructions, see Preparing a KVM host in the Installation and Configuration section of Oracle Linux Virtualization Manager: Getting Started. (Do not proceed with Adding a KVM host.)
Important:
You must have at least three (3) KVM hosts. If you want more than three KVM hosts, add them in multiples of three.
After installing the operating system on each host, prepare for deployment by completing the prerequisite tasks:
- Clean up host partitions and volumes
- Configure KVM hosts and choose one as a deployment host
- Install required packages
Ensure that the hosts have no existing partitions or LVM volumes on the disks intended for Gluster use. If you find any partitions or LVM volumes, remove them before continuing. For example, inspect the disks with:
[root@host1 ~]# lvscan | grep -i gluster
[root@host1 ~]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda             8:0    0  250G  0 disk
|-sda1          8:1    0    1G  0 part /boot
+-sda2          8:2    0  249G  0 part
  |-ol-root   252:0    0  247G  0 lvm  /
  +-ol-swap   252:1    0  2.1G  0 lvm  [SWAP]
sdb             8:16   0  500G  0 disk
sr0            11:0    1 1024M  0 rom
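If leftover Gluster LVM objects turn up, they must be removed before deployment. The following dry-run helper prints the removal commands rather than executing them; the volume group and device names are assumptions to be replaced with the names from your own lvscan and lsblk output:

```shell
# Hypothetical dry-run: print, rather than execute, the commands that would
# remove a leftover Gluster volume group and wipe its backing disk.
print_cleanup_cmds() {
  vg="$1"; disk="$2"
  echo "lvremove -y $vg"      # remove all logical volumes in the group
  echo "vgremove -y $vg"      # remove the volume group itself
  echo "pvremove -y $disk"    # remove the physical volume label
  echo "wipefs -a $disk"      # clear any remaining filesystem signatures
}
# Example names; substitute your own before running the printed commands:
print_cleanup_cmds gluster_vg_sdb /dev/sdb
```

Review the printed commands carefully and run them by hand only once you are certain of the target, because they destroy all data on that disk.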
Configure KVM hosts
- Choose a deployment host, referred to here as kvmhost1. The deployment host is used to start your Gluster and self-hosted engine deployment.
- On the deployment host, use the ssh-keygen command to create an ssh key pair. This is used to configure the Gluster nodes and volumes.
  [root@kvmhost1 ~]# ssh-keygen
- Publish the ssh public key to the deployment host itself using its FQDN. For example:
  [root@kvmhost1 ~]# ssh-copy-id kvmhost1.example.com
- Publish the ssh public key from the deployment host to all other hosts using their FQDNs. For example:
  [root@kvmhost1 ~]# ssh-copy-id kvmhost2.example.com
  [root@kvmhost1 ~]# ssh-copy-id kvmhost3.example.com
- On the deployment host only, create a hard link to $HOME/.ssh/known_hosts for Gluster. For example:
  [root@kvmhost1 ~]# ln $HOME/.ssh/known_hosts $HOME/.known_hosts
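To confirm that the key distribution worked before moving on, you can loop over the nodes with a non-interactive ssh check. This is an illustrative sketch using the example FQDNs above:

```shell
# Sketch: verify passwordless root ssh from the deployment host to each node.
# BatchMode prevents password prompts, so a missing key shows up as a failure.
check_ssh() {
  ssh -o BatchMode=yes -o ConnectTimeout=5 "root@$1" true \
    && echo "$1: ok" || echo "$1: ssh failed"
}
for h in kvmhost1.example.com kvmhost2.example.com kvmhost3.example.com; do
  check_ssh "$h"
done
```

Every node should report "ok"; a "ssh failed" line means the key was not published to that host.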
Install common rpm packages on all hosts and additional packages on the deployment host.
- On all hosts, log in as root and install the following packages:
  - cockpit-ovirt-dashboard, which provides a web UI for installation
  - vdsm-gluster, which manages Gluster services
  - ovirt-host, which configures the host as a KVM hypervisor when it is added to the Engine console

  For example, run the following command on kvmhost1, kvmhost2, and kvmhost3:
  # dnf install cockpit-ovirt-dashboard ovirt-host vdsm-gluster

  Then run the following commands to ensure that cockpit.socket is enabled and started and to open the cockpit port in firewalld. For example, run the following commands on kvmhost1, kvmhost2, and kvmhost3:
  # systemctl enable --now cockpit.socket
  # firewall-cmd --permanent --add-service cockpit
  # firewall-cmd --reload
- On the deployment host only, install the ovirt-engine-appliance and gluster-ansible-roles packages:
  [root@kvmhost1 ~]# dnf install ovirt-engine-appliance gluster-ansible-roles
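Because the same packages and firewall changes are needed on every node, you can optionally push the common setup from the deployment host over the ssh keys configured earlier. This is a sketch with illustrative FQDNs; the package and service names are the ones listed above:

```shell
# Sketch: run the common host setup on each node over ssh from the deployment
# host. Assumes passwordless root ssh to each node is already in place.
install_common() {
  echo "installing on $1"
  ssh "root@$1" 'dnf -y install cockpit-ovirt-dashboard ovirt-host vdsm-gluster &&
      systemctl enable --now cockpit.socket &&
      firewall-cmd --permanent --add-service cockpit &&
      firewall-cmd --reload' || echo "$1: install failed"
}
for h in kvmhost1.example.com kvmhost2.example.com kvmhost3.example.com; do
  install_common "$h"
done
```

Remember to run the deployment-host-only dnf command separately on kvmhost1 afterwards.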
Deploy GlusterFS Storage Using Cockpit
To deploy GlusterFS using the Cockpit web interface, complete the following steps.
Important:
Before you deploy Gluster, ensure you have read about deploying Oracle Linux Virtualization Manager in a HCI architecture and completed the required configuration for all KVM hosts.
- From the deployment host, access the Cockpit web interface at https://host_IP_or_FQDN:9090, for example, https://kvmhost1.example.com:9090.
. - Log in using the user name and password of the root account.
- From the Cockpit left navigation, click Virtualization.
- From the Virtualization menu, click Hosted Manager.
- On the Hosted Engine Setup page there are two Start buttons. Click Start under the Hyperconverged option, which reads "Configure Gluster storage and Oracle Linux Virtualization Manager".
- From the Gluster Configuration popup, click Run Gluster Wizard.
The Gluster Deployment wizard displays.
- On the Hosts screen, enter the FQDN for each Gluster host.
- If the host has different network connections for the public network and the storage network, enter those different hostnames.
- If hosts have only one network connection, check Use same hostname for Storage and Public Network.
- Click Next.
- On the Packages screen, do not enter any information. Click Next.
- On the Volumes screen, create the minimum required volumes, engine and data. You can also create export and iso volumes. Be sure to check the Arbiter box next to each volume you create. For example:
  - Name: engine
    Volume Type: Replicate (default)
    Arbiter: Ensure the check box is selected.
    Brick Dirs: /gluster_bricks/engine/engine (default)
  - Name: data
    Volume Type: Replicate (default)
    Arbiter: Ensure the check box is selected.
    Brick Dirs: /gluster_bricks/data/data (default)
- Click Next.
- On the Bricks screen:
- Select the appropriate Raid Type. Use JBOD for internal disks or select the appropriate RAID level if internal disks are configured as RAID devices.
- Under Multipath Configuration, ensure the Blacklist Gluster Devices checkbox is selected.
- (Optional) Under Brick Configuration, adjust the LV size for each host's block device.
- Click Next.
- On the Review screen, review the configuration and then click Next to deploy the Gluster configuration and create volumes.
This process takes some time to complete as the gdeploy tool installs the required packages and configures the Gluster volumes and their underlying storage. If successful, Cockpit displays the "Successfully deployed Gluster" message and your Gluster deployment is ready for use.
- Click the Continue to Hosted Engine Deployment button.
Important:
You can only continue with deploying the hosted engine in Cockpit if your hosts have a direct connection to the internet. If you do not have a direct internet connection, are behind a proxy, or click Close to continue deployment at a later date, you must use the command line to deploy the self-hosted engine.
Deploy Self-Hosted Engine Using Cockpit
If your hosts do not have a direct internet connection, are behind a proxy, or you clicked Close in Cockpit after deploying Gluster, you must use the command line to deploy the self-hosted engine.
To deploy the self-hosted engine using the Cockpit web interface immediately after deploying Gluster, you should have clicked Continue to Hosted Engine Deployment in the last step of the Gluster deployment instructions.
Complete the following steps using the Hosted Engine Deployment wizard.
- On the VM screen, fill in the following VM settings information:
- In the Engine VM FQDN field, enter the Engine virtual machine FQDN, which must be resolvable by a DNS search. Do not use the FQDN of the host.
- In the MAC Address field, enter a MAC address for the Engine virtual machine only if you do not want to use the auto-generated address.
- From the Network Configuration list, select either DHCP or Static.
- To use DHCP, you must have a DHCP reservation (a pre-set IP address on the DHCP server) for the Engine virtual machine.
- To use Static, enter the virtual machine IP, the netmask and gateway addresses, and DNS server. The IP address must belong to the same subnet as the host.
- From the Bridge Interface list, select the physical network interface to configure the bridge on.
- Enter and confirm the virtual machine’s Root Password.
- Specify whether to allow Root SSH Access.
- Enter the Number of Virtual CPUs for the virtual machine.
- Enter the Memory Size (MiB). The available memory is displayed next to the field.
- (Optional) Click Advanced to provide any of the following information.
- Enter a Root SSH Public Key to use for root access to the Engine virtual machine.
- Select the Edit Hosts File check box if you want to add entries for the Engine virtual machine and the base host to the virtual machine's /etc/hosts file. You must ensure that the host names are resolvable.
- Change the management Bridge Name, or accept the default of ovirtmgmt.
- Enter the Gateway Address for the management bridge.
- Enter the Host FQDN of the first host to add to the Engine. This is the FQDN of the host you are using for the deployment.
- Click Next.
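The FQDN rules in the VM settings above (the Engine FQDN must resolve in DNS and must not be the host's own FQDN) can be sanity-checked before you run the wizard. This is an illustrative sketch; both names passed in are assumptions, not values from your deployment:

```shell
# Sketch only: flag an Engine FQDN that matches the host's own FQDN or that
# does not resolve. engine.example.com is an assumed, illustrative name.
check_engine_fqdn() {
  engine="$1"; host="$2"
  if [ "$engine" = "$host" ]; then
    echo "invalid: Engine FQDN must not match the host FQDN"
  elif ! getent hosts "$engine" >/dev/null 2>&1; then
    echo "warning: $engine does not resolve in DNS"
  else
    echo "ok"
  fi
}
check_engine_fqdn engine.example.com "$(hostname -f)"
```

A "warning" result means you need to fix DNS (or plan to use the Edit Hosts File option) before deployment.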
- On the Engine screen, enter a password for the Admin user in the Admin Portal Password field. Do not change any other fields.
- Click Next.
-
Review the options on the Prepare VM screen. Click Prepare VM to continue, or click Back if you need to change any options.
-
When the Prepare VM completes successfully, click Next.
-
On the Storage screen, select Gluster as the Storage Type. The Storage Connection should have the deployment node as the primary connection and other nodes as backup mount servers.
Do not change any other fields.
- Click Next.
-
On the Finish screen, review the mount information and click Finish Deployment.
This process:
- transfers the Hosted Engine virtual disk to the Gluster engine volume
- creates a virtual machine named Hostedengine
- configures services to start this instance automatically when the hyperconverged host boots
- configures the deployment host as a KVM host in the Administration Portal
- Add the remaining hyperconverged nodes as KVM hosts. See Add Hyperconverged Hosts to Cluster for instructions.
Add Hyperconverged Hosts to Cluster
After deploying the self-hosted engine, you must add the remaining hyperconverged hosts to the virtualization cluster.
- Log in to the Administration Portal.
- Go to Compute and then click Hosts.
- On the Hosts pane, click New.
The New Host dialog box opens with the General tab selected on the sidebar.
- From the Host Cluster drop-down list, select the data center and host cluster for the host.
The Default data center is auto-selected.
When you install Oracle Linux Virtualization Manager, a data center and cluster named Default is created. You can rename and configure this data center and cluster, or you can add new data centers and clusters, to meet your needs. See the Data Centers or Clusters tasks in the Oracle Linux Virtualization Manager: Administration Guide.
- In the Name field, enter a name for the host. This is the name you see in the UI.
- In the Hostname field, enter the fully-qualified domain name or IP address of the host.
- In the SSH Port field, change the standard SSH port 22 if the SSH server on the host uses a different port.
- Under Authentication, select the authentication method to use.
Oracle recommends that you select SSH PublicKey authentication. If you select this option, copy the key displayed in the SSH PublicKey field to the /root/.ssh/authorized_keys file on the host. Otherwise, enter the root user's password to use password authentication.
- In the Power Management tab, check Enable Power Management and click + (plus) to configure an IPMI, iDRAC, ILO, or other available hardware management connection.
  Note:
  You should configure KVM host Power Management to allow the Engine application and system administrators to manage (reboot or power off) hosts in the NonResponsive or NonOperational state when recovering from a host failure. In these states, ssh management might not be able to recover the host, forcing manual intervention. See Configure Power Management and Fencing for Host for more information.
- In the Hosted Engine tab, select Deploy from the Choose hosted engine deployment action dropdown.
- Click OK to configure the host as a virtualization node.
- Repeat this process for all remaining hyperconverged hosts.
Important:
Do not deploy the hosted engine on more than seven KVM hosts.