3 Technology Preview
For the Red Hat Compatible Kernel in the current Oracle Linux 8 release, the following features are under technology preview:
Infrastructure Services
The following features for infrastructure services are available as technology previews.
Socket API for TuneD
The socket API for TuneD maps one-to-one with the D-Bus API and provides an alternative communication method for cases where D-Bus isn't available. With the socket API, you can control the TuneD daemon to optimize performance and change the values of various tuning parameters. The socket API is disabled by default. You can enable it in the tuned-main.conf file.
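For example, settings similar to the following in tuned-main.conf illustrate how the socket API might be enabled. This is a minimal sketch based on the upstream TuneD configuration; the option names shown here (enable_unix_socket, unix_socket_path) are assumptions and should be verified against the comments in tuned-main.conf for your TuneD version:
# enable the Unix domain socket API as an alternative to D-Bus (assumed option name)
enable_unix_socket = 1
# path of the socket that the TuneD daemon listens on (assumed option name)
unix_socket_path = /run/tuned/tuned.sock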
Networking
The following networking features are available as technology previews.
Multi-Protocol Label Switching
Multi-Protocol Label Switching (MPLS) is an in-kernel data-forwarding mechanism that routes traffic flows across enterprise networks. In an MPLS network, the router that receives a packet decides how to forward it based on the labels attached to the packet. By using labels, the MPLS network can handle packets with particular characteristics.
XDP Features
XDP programs can be loaded on architectures other than AMD and Intel® 64-bit. Note, however, that the libxdp library is available only for AMD and Intel® 64-bit platforms.
Likewise, as part of this technology preview, XDP programs can be offloaded to supported hardware.
XDP also includes the Address Family eXpress Data Path (AF_XDP) socket for high-performance packet processing. It enables efficient redirection of programmatically selected packets to user space applications for further processing.
act_mpls Module
The act_mpls module in the kernel-modules-extra rpm applies Multi-Protocol Label Switching (MPLS) actions with Traffic Control (TC) filters, for example, to push and pop MPLS label stack entries with TC filters. The module also enables the Label, Traffic Class, Bottom of Stack, and Time to Live fields to be set independently.
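The following is a minimal sketch of such TC filters; the interface names, label values, and the use of the matchall classifier are placeholders for illustration only:
sudo modprobe act_mpls
sudo tc qdisc add dev eth0 clsact
# push an MPLS label stack entry onto IP traffic leaving eth0
sudo tc filter add dev eth0 egress protocol ip matchall action mpls push protocol mpls_uc label 100 tc 3 ttl 64
# pop the MPLS header from MPLS traffic arriving on eth1
sudo tc filter add dev eth1 ingress protocol mpls_uc matchall action mpls pop protocol ip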
systemd-resolved Service
The systemd-resolved service provides name resolution to local applications. Its components include a caching and validating DNS stub resolver, a Link-Local Multicast Name Resolution (LLMNR) resolver and responder, and a Multicast DNS resolver and responder.
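As an illustrative sketch, you might enable the service and query it with the resolvectl utility; the host name used here is a placeholder:
sudo systemctl enable --now systemd-resolved
resolvectl status
resolvectl query example.com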
nispor Package
The nispor package provides a unified interface for querying the Linux network state of all running network interfaces. Version 1.2.10 includes the following features and changes:
- NetstateFilter can use the kernel filter on network routes and interfaces.
- SR-IOV interfaces can query SR-IOV Virtual Function (SR-IOV VF) information for every VF.
- The lacp_active, missed_max, and ns_ip6_target bonding options are available.
You can install nispor in one of two ways:
- As an individual package: sudo dnf install nispor
- As a dependency of nmstate: sudo dnf install nmstate (nispor is listed as a dependency)
For more information on using nispor, see the /usr/share/doc/nispor/README.md file.
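As a brief sketch, after installing the package you can print the current network state with the npc command that nispor provides; see the README for the full command set:
npc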
Kernel
The following kernel features are available as technology previews.
kexec Fast Reboot
The kexec fast reboot feature is available as a technology preview feature in Oracle Linux 8. This feature significantly speeds up the boot process by enabling the kernel to boot directly into the second kernel without first passing through the Basic Input/Output System (BIOS). To use this feature, load the kexec kernel first, then reboot the system.
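The following is a minimal sketch of this workflow; the kernel and initramfs paths are derived from the running kernel and might differ on your system:
# load the currently installed kernel as the second kernel, reusing the current command line
sudo kexec -l /boot/vmlinuz-$(uname -r) --initrd=/boot/initramfs-$(uname -r).img --reuse-cmdline
# reboot directly into the loaded kernel, skipping firmware and boot loader
sudo systemctl kexec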
SGX Available
Software Guard Extensions (SGX) from Intel® protects software code and data from disclosure and modification. The Linux kernel partially supports SGX v1 and SGX v1.5. Version 1 enables platforms that use the Flexible Launch Control mechanism to use the SGX technology.
Soft-RoCE Driver
The Soft-RoCE driver, rdma_rxe, is a software implementation of the Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) network protocol for processing RDMA over Ethernet. Soft-RoCE maintains two protocol versions, RoCE v1 and RoCE v2.
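As an illustrative sketch, you can create a Soft-RoCE device on top of an existing Ethernet interface with the rdma utility from the iproute package; eth0 and rxe0 are placeholder names:
sudo modprobe rdma_rxe
sudo rdma link add rxe0 type rxe netdev eth0
rdma link show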
Extended Berkeley Packet Filter (eBPF)
eBPF is an in-kernel virtual machine whose code is processed in the kernel space, in a restricted sandbox environment with access to a limited set of functions.
eBPF includes a new system call, bpf(), for creating various types of maps and for loading programs that can be attached onto various points (sockets, tracepoints, packet reception) to receive and process data.
One eBPF component is AF_XDP, a socket for connecting the eXpress Data Path (XDP) path to user space, for applications that prioritize packet processing performance.
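As a brief, hedged example, the bpftool utility (packaged separately) can be used to inspect the eBPF programs and maps that are currently loaded in the kernel:
sudo dnf install bpftool
sudo bpftool prog show
sudo bpftool map show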
File Systems and Storage
The following features that are related to file systems and storage are available as technology preview.
DAX File System Available
In this release, the DAX file system is available as a technology preview for the ext4 and XFS file systems. DAX enables an application to directly map persistent memory into its address space. The system must have some form of persistent memory available to use DAX. Persistent memory can be in the form of one or more Non-Volatile Dual In-line Memory Modules (NVDIMMs). In addition, a file system that supports DAX must be created on the NVDIMMs, and the file system must be mounted with the dax mount option. Then, an mmap of a file on the DAX-mounted file system results in a direct mapping of storage into the application's address space.
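The following is a minimal sketch, assuming persistent memory is exposed as /dev/pmem0 and that XFS is used; on XFS, reflink must be disabled for the dax mount option to be usable:
sudo mkfs.xfs -m reflink=0 /dev/pmem0
sudo mount -o dax /dev/pmem0 /mnt/pmem
# confirm that the dax option is in effect
mount | grep /mnt/pmem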
NVMe/TCP Available
The NVMe over Fabrics TCP host and target drivers are included in RHCK as a technology preview in this release.
Note:
Support for NVMe/TCP is already available in Unbreakable Enterprise Kernel Release 6.
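As an illustrative sketch, a host might connect to an NVMe/TCP target with the nvme-cli utility; the target address, port, and NQN below are placeholders:
sudo modprobe nvme-tcp
sudo nvme connect -t tcp -a 192.0.2.10 -s 4420 -n nqn.2014-08.com.example:nvme-target
sudo nvme list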
OverlayFS
OverlayFS is a type of union file system. With OverlayFS, you can overlay one file system on top of another. Changes are recorded in the upper file system, while the lower file system remains unmodified. Several users can then share a file-system image, such as a container or a DVD-ROM, where the base image is on read-only media.
As a technology preview, use of OverlayFS with containers is subject to certain restrictions. Certain cases of OverlayFS use aren't POSIX compliant. Therefore, you must test any applications before deploying them with OverlayFS.
To check whether an existing XFS file system can be used as an overlay, run the following command and verify that ftype is enabled (ftype=1):
# xfs_info /mount-point | grep ftype
For more information about OverlayFS, including known issues, see the Linux kernel documentation.
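As a general illustration of the union mount described above, the lower, upper, work, and merged directories below are placeholders; the lower layer remains read-only while changes are recorded in the upper layer:
sudo mount -t overlay overlay -o lowerdir=/lower,upperdir=/upper,workdir=/work /merged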
Stratis
A local storage manager, Stratis manages file systems on top of pools of storage and provides features such as the following:
- Manage snapshots and thin provisioning
- Automatically grow file system sizes as needed
- Maintain file systems
You administer Stratis storage through the stratis utility, which communicates with the stratisd background service.
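The following is a minimal sketch of creating and mounting a Stratis file system; /dev/sdb is a placeholder for an empty block device:
sudo dnf install stratisd stratis-cli
sudo systemctl enable --now stratisd
sudo stratis pool create mypool /dev/sdb
sudo stratis filesystem create mypool myfs
sudo mount /dev/stratis/mypool/myfs /mnt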
High Availability and Clusters
The following features for high availability and clusters are available as technology previews.
Pacemaker Podman Bundles
Pacemaker container bundles now run on Podman, with the container bundle feature being available as a Technology Preview.
Heuristics in corosync-qdevice
Heuristics are a set of commands that run locally on startup, on cluster membership change, on a successful connection to corosync-qnetd, and, optionally, on a periodic basis. When all commands finish successfully, the heuristics have passed; otherwise, they have failed. The heuristics result is sent to corosync-qnetd, where it's used in calculations to decide which partition is quorate.
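As a hedged sketch, heuristics are configured in the device section of corosync.conf; the host name, ping command, and address below are placeholders, and the option names should be verified against the corosync-qdevice manual page:
quorum {
    provider: corosync_votequorum
    device {
        model: net
        net {
            host: qnetd-server.example.com
        }
        heuristics {
            mode: on
            exec_ping: /usr/bin/ping -q -c 1 192.0.2.1
        }
    }
}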
Fence Agent
The fence_heuristics_ping agent is available with Pacemaker. The agent aims to open a class of experimental fence agents that do no actual fencing by themselves but instead exploit the behavior of fencing levels in a new way.
Through the agent, particularly by its issuing an off action, Pacemaker can be informed whether fencing would succeed. The heuristics agent can prevent the agent that does the actual fencing from fencing a node under certain conditions.
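As a hedged example, the heuristics agent is typically combined with a real fence device at the same fencing level; the device names, addresses, and credentials below are placeholders:
sudo pcs stonith create ping-check fence_heuristics_ping ping_targets=192.0.2.1
sudo pcs stonith create node1-ipmi fence_ipmilan ip=192.0.2.100 username=admin password=secret pcmk_host_list=node1
# run the heuristics agent before the real fence device for node1
sudo pcs stonith level add 1 node1 ping-check,node1-ipmi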
Desktop
The following desktop features are available as technology previews.
GNOME for 64-Bit Arm
You can use the GNOME desktop on an aarch64 system as a technology preview.
A limited set of graphical applications is available, including:
- The Firefox web browser
- Firewall Configuration (firewall-config)
- Disk Usage Analyzer (baobab)
You can use Firefox to connect to the Cockpit service on the server.
Certain applications, such as LibreOffice, provide only a command-line interface, and their graphical interface is disabled.
Graphics
The following graphics features are available as technology previews in Oracle Linux.
Virtualization
The following virtualization features are available as technology previews.
KVM Virtualization
Nested KVM virtualization can be used on the Microsoft Hyper-V hypervisor. You can create virtual machines on an Oracle Linux 8 guest system running on a Hyper-V host.
Note that currently, this feature only works on Intel® and AMD systems. In addition, nested virtualization is sometimes not enabled by default on Hyper-V. To enable it, see https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/user-guide/nested-virtualization.
SEV and SEV-ES
The Secure Encrypted Virtualization (SEV) feature is provided for AMD EPYC host machines that use the KVM hypervisor. It encrypts a virtual machine's memory and protects the VM from access by the host.
SEV's enhanced Encrypted State version (SEV-ES) encrypts all CPU register contents when a VM stops running, thus preventing the host from modifying the VM's CPU registers or reading any information from them.
Note that SEV is supported in UEK.
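As a brief check on an AMD EPYC host, the kvm_amd module exposes whether SEV is enabled:
cat /sys/module/kvm_amd/parameters/sev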
Intel® vGPU
A physical Intel® GPU device can be divided into several virtual devices referred to as mediated devices. These mediated devices can then be assigned to several virtual machines (VMs) as virtual GPUs. Thus, these VMs share the performance of a single physical Intel® GPU.
Note that only selected Intel® GPUs are compatible with the vGPU feature.
You can also enable a VNC console operated by Intel® vGPU. Then, users can connect to a VNC console of the VM and see the VM's desktop hosted by Intel® vGPU. However, this functionality currently only works for Oracle Linux guest operating systems.
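As a hedged sketch, mediated devices are created through the sysfs interface of the physical GPU; the PCI address and vGPU type below are placeholders that depend on the host GPU:
# list the vGPU types supported by the physical GPU
ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/
# create a mediated device of one of the listed types
uuidgen | sudo tee /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_4/create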
Nested Virtual Machines
Nested KVM virtualization is provided for KVM virtual machines (VMs) running on Intel® based systems and AMD64 systems. With this feature, an Oracle Linux 7 VM or an Oracle Linux 8 VM that runs on a physical Oracle Linux 8 host can act as a hypervisor, and host its own VMs.
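As a quick check, the KVM modules expose whether nested virtualization is enabled on the physical host:
cat /sys/module/kvm_intel/parameters/nested
cat /sys/module/kvm_amd/parameters/nested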
SR-IOV Adapters
Oracle Linux guest operating systems running on a Hyper-V hypervisor can now use the single-root I/O virtualization (SR-IOV) feature for Intel® network adapters that are supported by the ixgbevf and iavf drivers. This feature is enabled when the following conditions are met:
- SR-IOV support is enabled for the network interface controller (NIC), the virtual NIC, and the virtual switch.
- The virtual function (VF) from the NIC is attached to the virtual machine.
The feature is currently provided with Microsoft Windows Server 2016 and later.
Containers
The following features for containers are available as technology previews.
Podman Sigstore Signatures
Podman recognizes the sigstore format of container image signatures. Sigstore signatures can be stored in the container registry together with the container image, without the need for a separate signature server to store image signatures.
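As an illustrative sketch, an image can be pushed with a sigstore signature created from a local private key; the registry, image name, and key path below are placeholders:
podman push --sign-by-sigstore-private-key ~/mykey.private registry.example.com/myimage:latest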
Quadlet for Podman
Quadlet for Podman v4.4 and later can be used to automatically generate a systemd service file from a container description. Quadlet-formatted descriptions are easier to write and maintain than systemd unit files. See the upstream documentation for more information.
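The following is a minimal sketch of a Quadlet container description, placed for example in /etc/containers/systemd/example.container; the unit name, image, and command are placeholders:
[Container]
Image=registry.example.com/myimage:latest
Exec=sleep infinity
[Install]
WantedBy=multi-user.target
After reloading systemd with systemctl daemon-reload, the generated example.service unit can be started with systemctl start example.service.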
Creating Sigstore Signatures With Fulcio and Rekor
With Fulcio and Rekor servers, you can create signatures by using short-term certificates based on OpenID Connect (OIDC) server authentication instead of manually managing a private key. This functionality provides client-side support only and doesn't include either the Fulcio or Rekor servers. To use Fulcio, add the fulcio section to the policy.json file.
To sign container images, use the podman push --sign-by-sigstore=file.yml or skopeo copy --sign-by-sigstore=file.yml command, where file.yml is the sigstore signing parameter file.
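As a hedged sketch, the signing parameter file is a small YAML file similar to the following; the URLs and OIDC settings are placeholders, and the exact key names should be verified against the containers-sigstore-signing-params.yaml documentation:
fulcio:
  fulcioURL: "https://fulcio.example.com"
  oidcMode: "interactive"
  oidcIssuerURL: "https://oidc.example.com"
  oidcClientID: "sigstore"
rekorURL: "https://rekor.example.com"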
To verify signatures, add the fulcio section and the rekorPublicKeyPath or rekorPublicKeyData field to the policy.json file. For more information, see the containers-policy.json manual page.