Note:
- This tutorial is available in an Oracle-provided free lab environment.
- It uses example values for Oracle Cloud Infrastructure credentials, tenancy, and compartments. When completing your lab, substitute these values with ones specific to your cloud environment.
Use Systemd on Oracle Linux
Introduction
Systemd is a collection of software components that manages Oracle Linux system services and settings. During system startup, systemd initializes as PID 1, the first process the kernel starts, and systemd or one of its child processes then starts every subsequent process. Just as it is the first process to start after the system boots, it is the last to finish running when the system shuts down. Administrators and users interact with systemd primarily through systemctl for service management and journalctl for troubleshooting.
Within systemd, different types of units manage various types of system behavior or functions. For example, daemon processes or system services are run as service units, while target units usually define system states. You also have timer units to schedule tasks, similar to how you might use the system cron service, and mount units to configure a mount point instead of configuring it in the system fstab.
While systemd manages all system-level processes and functions, it can also manage processes running in user space. Users can manage services and timers they create without administrator access and even configure them to continue after the user session has terminated.
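As a brief illustration of the user-space side, systemctl accepts a --user flag that targets the per-user systemd instance instead of the system-wide one. The unit name below is hypothetical; any unit file placed under ~/.config/systemd/user/ works the same way.

```shell
# Operate on your own per-user systemd instance; no sudo required.
systemctl --user list-units --type service    # services in your user session
systemctl --user daemon-reload                # reload after editing a user unit file
systemctl --user enable --now myapp.service   # hypothetical user-defined unit
```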
Objectives
In this tutorial, you’ll learn to:
- Discover different systemd unit types
- Use systemd target units
- Run various systemctl commands
- Configure systemd to allow user space processes to run after logout
Prerequisites
-
Minimum of a single Oracle Linux system
-
Each system should have Oracle Linux installed and configured with:
- A non-root user account with sudo access
- Access to the Internet
Deploy Oracle Linux
Note: If running in your own tenancy, read the linux-virt-labs
GitHub project README.md and complete the prerequisites before deploying the lab environment.
-
Open a terminal on the Luna Desktop.
-
Clone the linux-virt-labs GitHub project.
git clone https://github.com/oracle-devrel/linux-virt-labs.git
-
Change into the working directory.
cd linux-virt-labs/ol
-
Install the required collections.
ansible-galaxy collection install -r requirements.yml
-
Deploy the lab environment.
ansible-playbook create_instance.yml -e localhost_python_interpreter="/usr/bin/python3.6"
The free lab environment requires the extra variable localhost_python_interpreter, which sets ansible_python_interpreter for plays running on localhost. This variable is needed because the environment installs the RPM package for the Oracle Cloud Infrastructure SDK for Python, located under the python3.6 modules.
The default deployment shape uses the AMD CPU and Oracle Linux 8. To use an Intel CPU or Oracle Linux 9, add -e instance_shape="VM.Standard3.Flex" or -e os_version="9" to the deployment command.
Important: Wait for the playbook to run successfully and reach the pause task. At this stage of the playbook, the installation of Oracle Linux is complete, and the instances are ready. Take note of the previous play, which prints the public and private IP addresses of the nodes it deploys and any other deployment information needed while running the lab.
Explore Systemd Unit Files
-
Open a terminal and connect via SSH to the ol-node-01 instance.
ssh oracle@<ip_address_of_instance>
-
List all systemd units that are currently loaded.
systemctl list-units
If you run systemctl without any parameters, it returns the same information. To navigate the output, use the Up or Down arrow, Space Bar, or PgDn/PgUp keys. When done, press q to exit.
The output shows all currently active configuration units that systemd is managing. These units have names with different suffixes based on their type, including .device, .mount, .service, .target, and .timer.
-
List all systemd units present on your system, regardless of their status.
systemctl list-units --all | column
The output consists of five columns:
- UNIT: The systemd unit name
- LOAD: Indicates whether the unit definition is loaded properly
- ACTIVE: The high-level unit activation state, a generalization of SUB
- SUB: The low-level unit activation state, whose values depend on the unit type
- DESCRIPTION: A short description of the unit’s purpose
Active units are started, running, mounted, or plugged in, depending on their purpose, while inactive units are stopped, unmounted, or disconnected. The output also shows a selection of the different systemd unit types.
- .automount: Provides automount capabilities for the matching .mount unit
- .mount: Details any mount point managed by systemd. It is usually named the same as the defined mount path
- .path: Can activate services when the file system path information changes
- .scope: Similar to service units but manages external system processes
- .service: Defines how to start, stop, and control services
- .slice: Groups services, scopes, and other slices together and organizes them in a hierarchical cgroup tree structure for resource management
- .socket: Encapsulates local interprocess communication (IPC) or network sockets and is used for socket-based activation
- .target: Groups and synchronizes related units during boot-up
- .timer: Use this unit type to trigger the activation of other service units on a given schedule. You can use it as an alternative to using the cron service
- .device: Describes any udev or sysfs device managed by systemd. Not all devices use this file type
- .swap: Defines system swap space
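To make the timer unit type more concrete, here is a minimal sketch of a timer and its matching service. The unit names, schedule, and script path are invented for illustration and are not part of the lab environment.

```ini
# /etc/systemd/system/nightly-report.service (hypothetical example)
[Unit]
Description=Generate the nightly report

[Service]
Type=oneshot
ExecStart=/usr/local/bin/nightly-report.sh

# /etc/systemd/system/nightly-report.timer (hypothetical example)
[Unit]
Description=Run nightly-report.service daily at 02:00

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

You would then enable the schedule with sudo systemctl enable --now nightly-report.timer, much as you would add a cron entry.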
-
Display all the loaded .service unit types.
systemctl list-units --type service
-
Display all the .service unit types regardless of status.
systemctl list-units --type service --all
You can repeat this command for any service type by changing the
--type
to the unit type you want to focus on.
Work With Targets
Target units define and group different units together so the system can achieve its configured runlevel. Target units are analogous to the older SysVinit Run Levels. Examples of commonly used systemd targets include:
- poweroff.target: Shuts down and powers off the system
- rescue.target: Starts up in single-user mode for maintenance and recovery
- multi-user.target: Starts a console-based multi-user system, with or without networking, or in a user-defined mode
- graphical.target: Starts a multi-user system with network and GUI services
- reboot.target: Stops all services and reboots the system
-
List the available target units.
systemctl list-units --type target
When creating units, the developer can make the unit dependent on other units to load, or they can configure a unit to conflict with particular units. For example, the multi-user.target requires the basic.target to function but conflicts with the rescue.service and the rescue.target units. Units also specify other units that they want to load to be able to function.
This flexibility enables targets to be chained together to set up a particular state but is also modular enough to be reused to trigger an alternate state.
-
View the default target that the systemd process tries to achieve during boot up.
systemctl get-default
The output shows the system defaults to the multi-user.target as its runlevel.
-
Confirm this matches the setting on the file system.
ls -l /etc/systemd/system/default.target
The default.target file is a symbolic link to the current target unit file, which is multi-user.target.
-
Change the default target unit.
sudo systemctl set-default graphical.target
The output shows the default.target symbolic link now redirects to the graphical.target.
-
Display the properties of a unit.
systemctl show multi-user.target
The output shows all the key=value pair properties defined in the associated unit file. This information can help you identify which units the target expects and their starting order.
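Because the full property list is long, you can limit systemctl show to the dependency-related properties; the target name here is the same multi-user.target used above.

```shell
# Limit the output to selected properties.
systemctl show multi-user.target -p Wants -p Requires -p After -p Conflicts
```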
-
Display the dependency tree of a unit.
systemctl list-dependencies default.target
The output returns a list of all units invoked when starting the default.target. A white dot at the beginning of a line indicates the unit is inactive, while a green dot means it is active.
Review the Current System State
Being able to explore the overall state of your system’s services allows you to review and update the system as needed.
-
List all the units available on the system.
systemctl list-unit-files
The output shows the units marked with one of the following statuses:
- enabled: Configured to start at boot
- disabled: Available on the system but not configured to start at boot
- static: The unit file does not contain an [Install] section and therefore cannot be enabled; it typically runs as a dependency of another unit
- masked: The unit is present but configured not to start
- generated: Systemd creates these units dynamically using a systemd generator and places a symbolic link in an ephemeral location, such as /run/systemd/generator/local-fs.target.requires/boot-efi.mount, early in the boot process to ensure wanted services are available when called.
- transient: These units represent a temporary service or timer created using the systemd-run command. A transient unit only runs for the duration of the current session, and the system forgets about it after restarting.
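As a sketch of how a transient unit comes to exist, systemd-run wraps a one-off command in a temporary service or timer; the unit name below is arbitrary.

```shell
# Start a transient service unit around an ordinary command.
sudo systemd-run --unit=demo-sleep /usr/bin/sleep 300
systemctl status demo-sleep.service   # visible until the command exits or the host reboots

# Start a transient timer that runs a command two minutes from now.
sudo systemd-run --on-active=2m /usr/bin/touch /tmp/demo-file
```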
Check the Status of a Service
-
Check the status of an inactive service.
systemctl status nfs-server.service
The output confirms the service is inactive, as it shows the Active: value as inactive (dead).
-
Check the status of an active service.
systemctl status chronyd.service
In this case, the output shows the Active: value as active (running) and provides additional pid, memory, and cgroup information, along with a small section of the most recent log entries.
-
Check if a unit is running.
systemctl is-active chronyd.service
This command returns one of two states, active or inactive.
-
Check if a unit is enabled.
systemctl is-enabled chronyd.service
This command returns one of two states, enabled or disabled.
-
Check if a unit has any problems starting.
systemctl is-failed chronyd.service
The command returns failed if the unit encountered a problem starting; otherwise, it prints the unit's current state, such as active when running as intended, or inactive or unknown if a process or someone intentionally stopped the unit after boot.
The short output for these last three commands can help verify the status of a unit in scripted solutions.
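A minimal sketch of such a scripted check, assuming a systemd host: the --quiet flag suppresses the printed state so the script can rely on the exit code alone (0 means active).

```shell
#!/bin/sh
# Restart a unit if it is not currently active.
unit="chronyd.service"

if systemctl is-active --quiet "$unit"; then
    echo "$unit is running"
else
    echo "$unit is not running; restarting"
    sudo systemctl restart "$unit"
fi
```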
Enable and Disable a Service
If a systemd service reports a different state than needed at boot time, you must be able to automatically tell systemd to start or stop the service.
-
Configure a service to start at boot.
sudo systemctl enable nfs-server.service
You can use the --now option to additionally start the service while you enable it.
The output shows the creation of a symbolic link from under the current default.target .wants location to the unit’s service file definition.
-
Confirm the status of the service.
systemctl status nfs-server
The service unit shows as loaded but inactive because the enable command ran without the --now option.
-
Start the service.
sudo systemctl start nfs-server
Checking the status will show the service as active.
-
Stop a service.
sudo systemctl stop nfs-server
Checking the status confirms the service is stopped but still enabled.
-
Disable a service.
sudo systemctl disable nfs-server
This command removes the symbolic link and prevents the service from starting at boot time.
-
Confirm the service is disabled.
systemctl is-enabled nfs-server
Mask and Unmask a Service
Rather than merely stopping or disabling a systemd service, you can mark it as completely unstartable. The systemctl mask and unmask commands prevent a service from starting either manually or automatically.
-
Mask a systemd service.
sudo systemctl mask nfs-server
Masking a service creates a symbolic link pointing the systemd unit configuration to /dev/null, thus preventing the service from being enabled or started.
-
Confirm the service no longer starts.
sudo systemctl start nfs-server
The output confirms the service fails to start and explains why.
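You can also see the effect of masking directly on the file system; while the unit is masked, the link under /etc/systemd/system points at /dev/null.

```shell
# Inspect the symbolic link that the mask command created.
ls -l /etc/systemd/system/nfs-server.service
file /etc/systemd/system/nfs-server.service
```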
-
Unmask the service.
sudo systemctl unmask nfs-server
The command removes the symbolic link to /dev/null and leaves the service in its original state of disabled and inactive.
User Behavior and Events
Enable Lingering For Users
Administrators can use the loginctl command to change a specific user’s default behavior, enabling that user’s processes to linger, or remain running, after the user terminates their session.
-
Enable lingering for a specific user.
sudo loginctl enable-linger oracle
-
Verify that lingering is enabled for the user.
sudo loginctl show-user oracle | grep -i linger
The output shows Linger=yes. You can also check for the presence of a file with the user’s name in the
/var/lib/systemd/linger
directory.
Review the Systemd Login Manager Configuration File
Systemd manages user login events and provides an editable configuration file to set the default behavior for different events related to the user’s session. You can locate this configuration file at /etc/systemd/logind.conf
.
-
Review the contents of the configuration file.
cat /etc/systemd/logind.conf
The majority of options are commented out but display the compile-time default values. There are three options in this file that can control how systemd handles processes running within the user space when the user’s session terminates.
- KillUserProcesses: Controls whether systemd terminates user processes by default when the session ends. Setting this option to no allows user processes to keep running after any user logs out of the system.
- KillExcludeUsers: If the KillUserProcesses option is enabled, this option specifies a space-separated list of users whose processes systemd allows to continue running after the session terminates. Adding a username to this list behaves similarly to enabling lingering for that user with the loginctl command.
- KillOnlyUsers: If the KillUserProcesses option is disabled, this option specifies a space-separated list of users whose processes systemd should terminate after logout.
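As an illustrative excerpt, the values below (not the shipped defaults) would terminate most users' processes at logout while exempting root and oracle; restarting the systemd-logind service or rebooting applies the change.

```ini
# /etc/systemd/logind.conf (hypothetical excerpt)
[Login]
KillUserProcesses=yes
KillExcludeUsers=root oracle
```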
Next Steps
By completing this tutorial, you’ll better understand the basics of using systemd on Oracle Linux. There are more features to explore, so continue your learning by checking out the links below and any of the tutorial’s suggested man pages.
Related Links
More Learning Resources
Explore other labs on docs.oracle.com/learn or access more free learning content on the Oracle Learning YouTube channel. Additionally, visit education.oracle.com/learning-explorer to become an Oracle Learning Explorer.
For product documentation, visit Oracle Help Center.