Deploy the Self-Hosted Engine
You must perform a fresh installation of Oracle Linux 8.8 or later (8.x), or 9.6 or later (9.x) on an Oracle Linux Virtualization Manager host before deploying a self-hosted engine. You can download the installation ISO from the Oracle Software Delivery Cloud at https://edelivery.oracle.com.
Configure the host
Complete the following steps to prepare the host for deployment.
- Install Oracle Linux 8.8 or later (8.x), or 9.6 or later (9.x) on the host using the Minimal Install base environment.
Caution:
Do NOT select any base environment other than Minimal Install, or the hosts will have incorrect qemu and libvirt versions, incorrectly configured repositories, and no access to virtual machine consoles.
Don't install any extra packages until after you have installed the Manager packages, because they might cause dependency issues.
Follow the instructions in the appropriate Oracle Linux installation guide.
- Ensure that the firewalld service is enabled and started.
  For more information about configuring firewalld, see Configuring a Packet Filtering Firewall in the appropriate guide.
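  For example, a quick way to enable firewalld and confirm that it's running (standard systemctl and firewall-cmd invocations):
  sudo systemctl enable --now firewalld
  sudo firewall-cmd --state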
- Complete one of the following sets of steps:
  - For ULN registered hosts or Oracle Linux Manager registered hosts:
    Subscribe the system to the required channels.
    - For ULN registered hosts, sign in to https://linux.oracle.com with a ULN username and password. For Oracle Linux Manager registered hosts, access the internal server URL.
    - On the Systems tab, select the link named for the host in the list of registered machines.
    - On the System Details page, select Manage Subscriptions.
    - On the System Summary page, select each required channel from the list of available channels and select the right arrow to move the channel to the list of subscribed channels. Subscribe the system to the following channels, where n is the major Oracle Linux version (8 or 9):
      - oln_x86_64_baseos_latest
      - oln_x86_64_appstream
      - ol8_x86_64_kvm_appstream or ol9_x86_64_kvm_utils
      - oln_x86_64_ovirt45
      - oln_x86_64_ovirt45_extras
      - (Oracle Linux 8 only) ol8_x86_64_gluster_appstream
      - (For VDSM) oln_x86_64_UEKR7, or ol9_x86_64_UEKR8 (Oracle Linux 9 only)
Note:
Gluster is only available on hosts running Oracle Linux 8.
    - Select Save Subscriptions.
    - Install the Oracle Linux Virtualization Manager Release 4.5 package, which automatically enables/disables the required repositories.
      dnf install oracle-ovirt-release-45-eln
  - For Oracle Linux yum server hosts:
    Install the Oracle Linux Virtualization Manager Release 4.5 package and enable the required repositories. In the following instructions, n is the major Oracle Linux version (8 or 9):
    - Enable the oln_baseos_latest yum repository.
      dnf config-manager --enable oln_baseos_latest
    - Install the Oracle Linux Virtualization Manager Release 4.5 package, which automatically enables/disables the required repositories.
      dnf install oracle-ovirt-release-45-eln
    - Use the dnf command to verify that the required repositories are enabled.
      - Clear the yum cache.
        dnf clean all
      - List the configured repositories and verify that the required repositories are enabled.
        dnf repolist
        The following repositories must be enabled:
        - oln_x86_64_baseos_latest
        - oln_x86_64_appstream
        - ol8_x86_64_kvm_appstream or ol9_x86_64_kvm_utils
        - oln_x86_64_ovirt45
        - oln_x86_64_ovirt45_extras
        - oln_x86_64_addons
        - (For Oracle Linux 8 only) ol8_x86_64_gluster_appstream
        - (For VDSM) oln_x86_64_UEKR7, or ol9_x86_64_UEKR8 (Oracle Linux 9 only)
Note:
Gluster is only available on hosts running Oracle Linux 8.
      - If a required repository isn't enabled, use the dnf config-manager command to enable it.
        dnf config-manager --enable repository
- If the host runs the Unbreakable Enterprise Kernel (UEK):
  - Install the extra kernel modules package.
    dnf install kernel-uek-modules-extra
  - Reboot the host.
Check host configuration
To ensure that the hosted engine host is configured correctly, run the precheck script BEFORE you deploy the hosted engine. You must also run the precheck script on all KVM hosts in the environment.
Note:
To run the script on several hosts simultaneously, we recommend using an Ansible playbook.
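As a sketch, assuming the KVM hosts are collected under a hypothetical kvm_hosts group in an Ansible inventory and that the script is on each host's PATH, a single ad-hoc command can run the check everywhere at once:
ansible kvm_hosts -i inventory.ini -b -m command -a "olvm-pre-check.py"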
- Connect to the hosted engine host from a command line and run the precheck script:
  sudo olvm-pre-check.py
  A series of checks begins and you see output similar to the following:
  -----------------------------------
       OLVM 4.5.5 PRE-CHECK SCRIPT
  -----------------------------------
  +++ Checking oracle-ovirt-release-45 [PASS]
  +++ Checking if Host is installed [WARN]
      The 'ovirt-engine' package is already installed.
      DO NOT configure this Server as a KVM Host.
  +++ Checking if a Minimal Installation [PASS]
  +++ Validating the 'Minimal Install' Group [PASS]
  +++ Checking enabled repositories [WARN]
      Extra repositories are enabled: update-pcp
      Please run the command: dnf config-manager --set-disabled update-pcp
  +++ Running 'dnf makecache' [PASS]
  +++ Dry run 'dnf update --assumeno' [PASS]
  +++ Checking Linux Kernel [PASS]
  +++ Checking kernel-uek-modules-extra [PASS]
  +++ Checking Firewalld status [PASS]
  +++ Checking SELinux status [PASS]
  +++ Checking FIPS status [PASS]
      FIPS is disabled.
  +++ If installed, check ansible version [PASS]
  +++ If installed, check qemu-kvm version [PASS]
  +++ If installed, check libvirt version [PASS]
  +++ Checking Hostname/FQDN [PASS]
- If any checks are marked WARN or FAIL, the script output provides information that can help you resolve the issues:
  +++ Checking if Host is installed [WARN]
      The 'ovirt-engine' package is already installed.
      DO NOT configure this Server as a KVM Host.
  +++ Checking enabled repositories [WARN]
      Extra repositories are enabled: update-pcp
      Please run the command: dnf config-manager --set-disabled update-pcp
- If you had warnings or failures to address, rerun the script to ensure that the system passes all configuration checks. For example:
  sudo olvm-pre-check.py
  -----------------------------------
       OLVM 4.5.5 PRE-CHECK SCRIPT
  -----------------------------------
  +++ Checking oracle-ovirt-release-45 [PASS]
  +++ Checking if Host is installed [PASS]
  +++ Checking if a Minimal Installation [PASS]
  +++ Validating the 'Minimal Install' Group [PASS]
  +++ Checking enabled repositories [PASS]
  +++ Running 'dnf makecache' [PASS]
  +++ Dry run 'dnf update --assumeno' [PASS]
  +++ Checking Linux Kernel [PASS]
  +++ Checking kernel-uek-modules-extra [PASS]
  +++ Checking Firewalld status [PASS]
  +++ Checking SELinux status [PASS]
  +++ Checking FIPS status [PASS]
      FIPS is disabled.
  +++ If installed, check ansible version [PASS]
  +++ If installed, check qemu-kvm version [PASS]
  +++ If installed, check libvirt version [PASS]
  +++ Checking Hostname/FQDN [PASS]
Install the engine
After you have successfully configured and verified the hosted engine host, install the hosted engine deployment tool and engine appliance:
dnf install ovirt-hosted-engine-setup ovirt-engine-appliance
Proceed to Use Command Line to Deploy Self-Hosted Engine or Use Cockpit to Deploy Self-Hosted Engine.
Use Command Line to Deploy Self-Hosted Engine
You can deploy the self-hosted engine from the command line. A script collects the details of the environment and uses them to configure the host and the engine.
- Start the deployment. IPv6 is used by default. To use IPv4, specify the --4 option:
  hosted-engine --deploy --4
  Optionally, use the --ansible-extra-vars option to define variables for the deployment. For example:
  hosted-engine --deploy --4 --ansible-extra-vars="@/root/extra-vars.yml"
  cat /root/extra-vars.yml
  ---
  he_pause_host: true
  he_proxy: "http://<host>:<port>"
  he_enable_keycloak: false
  See the oVirt Documentation for more information.
- Enter Yes to begin deployment.
  Continuing will configure this host for serving as hypervisor and will create a local VM with a running engine. The locally running engine will be used to configure a new storage domain and create a VM there. At the end the disk of the local VM will be moved to the shared storage. Are you sure you want to continue? (Yes, No)[Yes]:
Note:
The hosted-engine script creates a virtual machine and uses cloud-init to configure it. The script also runs engine-setup and reboots the system so that the virtual machine can be managed by the high availability agent.
- Enter the name of the data center or accept the default.
  Please enter the name of the data center where you want to deploy this hosted-engine host. Data center [Default]:
- Enter a name for the cluster or accept the default.
  Please enter the name of the cluster where you want to deploy this hosted-engine host. Cluster [Default]:
- Keycloak integration is a technology preview feature that provides an internal Single Sign-On (SSO) provider for the Engine and deprecates AAA. The default response is Yes. However, because this is a preview feature, enter No.
  Configure Keycloak integration on the engine (Yes, No) [Yes]: No
- Configure the network.
  - If the gateway that displays is correct, press Enter to configure the network.
  - Enter a pingable address on the same subnet so the script can check the host's connectivity.
    Please indicate a pingable gateway IP address [X.X.X.X]:
  - The script detects possible NICs to use as a management bridge for the environment. Select the default.
    Please indicate a nic to set ovirtmgmt bridge on: (eth1, eth0) [eth1]:
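    If you're unsure which NIC to pick, you can list the host's interfaces and their state first (a standard iproute2 command):
    ip -br link show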
- Enter the path to an OVA archive to use a custom appliance for the virtual machine installation. Otherwise, leave this field empty to use the oVirt Engine Appliance.
  If you want to deploy with a custom engine appliance image, please specify the path to the OVA archive you would like to use. Entering no value will use the image from the ovirt-engine-appliance rpm, installing it if needed. Appliance image path []:
- Specify the fully qualified domain name for the engine virtual machine.
  Please provide the FQDN you would like to use for the engine appliance. Note: This will be the FQDN of the engine VM you are now going to launch, it should not point to the base host or to any other existing machine. Engine VM FQDN: manager.example.com
  Please provide the domain name you would like to use for the engine appliance. Engine VM domain: [example.com]
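  Before you continue, it's worth confirming that the FQDN resolves to the address you intend for the engine virtual machine, not to the base host. For example, using the illustrative name from the prompt above:
  getent hosts manager.example.com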
- Enter and confirm a root password for the engine.
  Enter root password that will be used for the engine appliance:
  Confirm appliance root password:
- Optionally, enter an SSH public key to enable you to sign in to the engine as the root user, and specify whether to enable SSH access for the root user.
  You may provide an SSH public key, that will be added by the deployment script to the authorized_keys file of the root user in the engine appliance. This should allow you passwordless login to the engine machine after deployment. If you provide no key, authorized_keys will not be touched.
  SSH public key []:
  [WARNING] Skipping appliance root ssh public key
  Do you want to enable ssh access for the root user? (yes, no, without-password) [yes]:
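  If you don't yet have a key pair, a sketch of generating one on your workstation and printing the public half to paste at the prompt (standard OpenSSH commands; the comment string is illustrative):
  ssh-keygen -t ed25519 -C "olvm-engine-root"
  cat ~/.ssh/id_ed25519.pub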
- Enter the virtual machine's CPU and memory configuration.
  Please specify the number of virtual CPUs for the VM (Defaults to appliance OVF value): [4]:
  Please specify the memory size of the VM in MB. The default is the appliance OVF value [16384]:
- Enter a MAC address for the engine virtual machine or accept a randomly generated MAC address.
  You may specify a unicast MAC address for the VM or accept a randomly generated default [00:16:3e:3d:34:47]:
Note:
To provide the engine virtual machine with an IP address using DHCP, ensure that you have a valid DHCP reservation for this MAC address. The deployment script doesn't configure the DHCP server for you.
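As an illustration only, if your DHCP service were dnsmasq, the reservation would be a single dhcp-host entry tying the VM's MAC address to a fixed IP (all values below are hypothetical):
# /etc/dnsmasq.d/olvm-engine.conf
dhcp-host=00:16:3e:3d:34:47,192.168.1.50,manager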
- Enter the virtual machine's networking details.
  How should the engine VM network be configured (DHCP, Static)[DHCP]?
Note:
If you specified Static, enter the IP address of the Engine. The static IP address must belong to the same subnet as the host. For example, if the host is in 10.1.1.0/24, the Engine virtual machine's IP must be in the same subnet range (10.1.1.1-254/24).
  Please enter the IP address to be used for the engine VM [x.x.x.x]:
  Please provide a comma-separated list (max 3) of IP addresses of domain name servers for the engine VM
  Engine VM DNS (leave it empty to skip):
- Specify whether to add entries in the virtual machine's /etc/hosts file for the engine virtual machine and the base host. Ensure that the host names are resolvable.
  Add lines for the appliance itself and for this host to /etc/hosts on the engine VM? Note: ensuring that this host could resolve the engine VM hostname is still up to you.
  Add lines to /etc/hosts? (Yes, No)[Yes]:
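  The entries that the script adds follow the usual /etc/hosts format; a sketch with hypothetical addresses and names:
  # /etc/hosts on the engine VM (illustrative values)
  192.168.1.50   manager.example.com   manager
  192.168.1.10   host1.example.com     host1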
- Provide the name and TCP port number of the SMTP server, the email address used to send email notifications, and a comma-separated list of email addresses to receive these notifications. Or, press Enter to accept the defaults.
  Please provide the name of the SMTP server through which we will send notifications [localhost]:
  Please provide the TCP port number of the SMTP server [25]:
  Please provide the email address from which notifications will be sent [root@localhost]:
  Please provide a comma-separated list of email addresses which will get notifications [root@localhost]:
- Enter and confirm a password for the admin@internal user to access the Administration Portal.
  Enter engine admin password:
  Confirm engine admin password:
  The script creates the virtual machine, which can take time if it needs to install the oVirt Engine Appliance. After creating the virtual machine, the script continues gathering information.
- Select the type of storage to use.
  Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]:
  - If you selected NFS, enter the version, full address, and path to the storage, and any mount options.
    Please specify the nfs version you would like to use (auto, v3, v4, v4_1)[auto]:
    Please specify the full shared storage connection path to use (example: host:/path): storage.example.com:/hosted_engine/nfs
    If needed, specify additional mount options for the connection to the hosted-engine storage domain []:
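    On the storage server side, one common pattern for a hosted-engine NFS export maps anonymous access to the vdsm user and kvm group (UID and GID 36); a sketch of an /etc/exports entry with an illustrative path:
    # /etc/exports on the storage server (illustrative)
    /hosted_engine/nfs   *(rw,anonuid=36,anongid=36,all_squash)
    # Re-export after editing
    exportfs -ra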
  - If you selected iSCSI, enter the portal details and select a target and LUN from the automatically detected lists. You can select only one iSCSI target during the deployment, but multipathing is supported to connect all portals of the same portal group.
    Note:
    To specify more than one iSCSI target, you must enable multipathing before deploying the self-hosted engine. There's also a Multipath Helper tool that generates a script to install and configure multipath with different options.
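    A sketch of enabling multipathing with the device-mapper-multipath tooling before rerunning the deployment (assumes the package is available from the enabled repositories):
    dnf install device-mapper-multipath
    mpathconf --enable --with_multipathd y
    # Verify the detected paths
    multipath -ll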
    Please specify the iSCSI portal IP address:
    Please specify the iSCSI portal port [3260]:
    Please specify the iSCSI discover user:
    Please specify the iSCSI discover password:
    Please specify the iSCSI portal login user:
    Please specify the iSCSI portal login password:
    The following targets have been found:
        [1] iqn.2017-10.com.redhat.example:he
            TPGT: 1, portals: 192.168.1.xxx:3260 192.168.2.xxx:3260 192.168.3.xxx:3260
    Please select a target (1) [1]: 1
    The following luns have been found on the requested target:
        [1] 360003ff44dc75adcb5046390a16b4beb 199GiB MSFT Virtual HD status: free, paths: 1 active
    Please select the destination LUN (1) [1]:
  - If you selected GlusterFS, enter the full address and path to the storage, and any mount options. Only replica 3 Gluster storage is supported. Configure the volume as described in Gluster Volume Options for Virtual Machine Image Store in the administration guide.
    Please specify the full shared storage connection path to use (example: host:/path): storage.example.com:/hosted_engine/gluster_volume
    If needed, specify additional mount options for the connection to the hosted-engine storage domain []:
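    As a sketch, a matching replica 3 volume could be created on the Gluster nodes like this (node names and brick paths are hypothetical; the virt option group applies the recommended settings for virtual machine image stores):
    gluster volume create gluster_volume replica 3 node1:/bricks/he node2:/bricks/he node3:/bricks/he
    gluster volume set gluster_volume group virt
    gluster volume start gluster_volume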
  - If you selected Fibre Channel, select a LUN from the automatically detected list. The host bus adapters must be configured and connected, and the deployment script automatically detects the available LUNs. The LUN must not contain any existing data.
    The following luns have been found on the requested target:
        [1] 3514f0c5447600351 30GiB XtremIO XtremApp status: used, paths: 2 active
        [2] 3514f0c5447600352 30GiB XtremIO XtremApp status: used, paths: 2 active
    Please select the destination LUN (1, 2) [1]:
- Enter the engine disk size:
  Please specify the size of the VM disk in GB: [50]:
  When the deployment completes successfully, one data center, cluster, host, storage domain, and the engine virtual machine are running.
- Optionally, sign in to the Oracle Linux Virtualization Manager Administration Portal to add any other resources.
In the Administration Portal, the engine virtual machine, the host running it, and the self-hosted engine storage domain are flagged with a gold crown.
- Enable the required repositories on the Engine virtual machine.
- Optionally, add a directory server using the ovirt-engine-extension-aaa-ldap-setup interactive setup script so you can add users to the environment.
Use Cockpit to Deploy Self-Hosted Engine
Caution:
- If the system is behind a proxy, you must use the command line option to deploy the self-hosted engine.
- Cockpit deployment is available only for Oracle Linux 8 hosts.
To deploy the self-hosted engine using the Cockpit portal, complete the following steps.
- Install the Cockpit dashboard.
  dnf install cockpit-ovirt-dashboard -y
- Open the Cockpit port 9090 on firewalld.
  firewall-cmd --permanent --zone=public --add-port=9090/tcp
  firewall-cmd --reload
- Enable and start the Cockpit service.
  systemctl enable --now cockpit.socket
- Sign in to the Cockpit portal at the following URL:
  https://host_IP_or_FQDN:9090
- To start the self-hosted engine deployment, select Virtualization and select Hosted Manager.
- Select Start under Hosted Manager.
- Provide the following details for the Engine virtual machine.
  - In the Engine VM FQDN field, enter the Engine virtual machine FQDN. Don't use the FQDN of the host.
  - In the MAC Address field, enter a MAC address for the Engine virtual machine, or leave the field blank to have the system provide a randomly generated address.
  - From the Network Configuration dropdown list, select DHCP or Static.
    - To use DHCP, you must have a DHCP reservation (a preset IP address on the DHCP server) for the Engine virtual machine. In the MAC Address field, enter the MAC address.
    - To use Static, enter the virtual machine IP, the gateway address, and the DNS servers. The IP address must belong to the same subnet as the host.
  - Select the Bridge Interface from the dropdown list.
  - Enter and confirm the virtual machine’s Root Password.
  - Specify whether to enable Root SSH Access.
  - Enter the Number of Virtual CPUs for the virtual machine.
  - Enter the Memory Size (MiB). The available memory is displayed next to the field.
- Optionally, select Advanced to provide any of the following information.
  - Enter a Root SSH Public Key to use for root access to the Engine virtual machine.
  - Select the Edit Hosts File checkbox to add entries for the Engine virtual machine and the base host to the virtual machine's /etc/hosts file. You must ensure that the host names are resolvable.
  - Change the management Bridge Name, or accept the default of ovirtmgmt.
  - Enter the Gateway Address for the management bridge.
  - Enter the Host FQDN of the first host to add to the Engine. This is the FQDN of the host you're using for the deployment.
- Select Next.
- Enter and confirm the Admin Portal Password for the admin@internal user.
- Optionally, configure event notifications.
  - Enter the Server Name and Server Port Number of the SMTP server.
  - Enter a Sender E-Mail Address.
  - Enter Recipient E-Mail Addresses.
- Select Next.
- Review the configuration of the Engine and its virtual machine. If the details are correct, select Prepare VM.
- When the virtual machine installation is complete, select Next.
- Select the Storage Type from the dropdown list and enter the details for the self-hosted engine storage domain.
  - For NFS:
    - In the Storage Connection field, enter the full address and path to the storage.
    - If required, enter any Mount Options.
    - Enter the Disk Size (GiB).
    - Select the NFS Version from the dropdown list.
    - Enter the Storage Domain Name.
  - For iSCSI:
    - Enter the Portal IP Address, Portal Port, Portal Username, and Portal Password.
    - Select Retrieve Target List and select a target. You can select only one iSCSI target during the deployment, but multipathing is supported to connect all portals of the same portal group.
      Note:
      To specify more than one iSCSI target, you must enable multipathing before deploying the self-hosted engine. You can use the Multipath Helper tool to generate a script that installs and configures multipath with different options.
    - Enter the Disk Size (GiB).
    - Enter the Discovery Username and Discovery Password.
  - For Fibre Channel:
    - Enter the LUN ID. The host bus adapters must be configured and connected, and the LUN must not contain any existing data.
    - Enter the Disk Size (GiB).
  - For Gluster Storage:
    - In the Storage Connection field, enter the full address and path to the storage.
    - If required, enter any Mount Options.
    - Enter the Disk Size (GiB).
- Select Next.
- Review the storage configuration. If the details are correct, select Finish Deployment.
- When the deployment is complete, select Close.
  When the deployment completes successfully, one data center, cluster, host, storage domain, and the engine virtual machine are running.
- Optionally, sign in to the Oracle Linux Virtualization Manager Administration Portal to add any other resources.
In the Administration Portal, the engine virtual machine, the host running it, and the self-hosted engine storage domain are flagged with a gold crown.
- Enable the required repositories on the Engine virtual machine.
- Optionally, add a directory server using the ovirt-engine-extension-aaa-ldap-setup interactive setup script so you can add more users to the environment.
- To view the self-hosted engine’s status in Cockpit, under Virtualization select Hosted Engine.