8 Podman Networking

Understand how to configure networking for containers that run in Podman, and for images and temporary containers in Buildah.

Podman handles the networking of containers differently depending on whether the containers are run by the root or privileged user, or by a standard user on the host system. The root user has considerable power to change the network infrastructure on the host, whereas a standard user has only a limited ability to alter it.

In a network setup for Podman, containers that are running in a pod or a group share the same networking namespace, and therefore have access to the same IP and MAC addresses and port mappings. The shared namespace eases network communication between different containers, or between the host and the containers running on it. For more information about pods, see Podman Pods.

Unless indicated otherwise, all the networking procedures in this section can only be performed by the root or privileged user.

Setting a Proxy Server

Configure Podman to use proxy servers, both system-wide and for the Podman API systemd service.

Podman automatically uses the system proxy settings for commands that you run and for any containers that you provision.

You can apply proxy settings on a system-wide basis by adding these lines to /etc/profile. The variables must be exported so that they're visible to child processes, including Podman:

export HTTP_PROXY=proxy_URL:port
export HTTPS_PROXY=proxy_URL:port

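As a sketch, the same settings can be staged in a scratch file and verified in the current shell before you edit /etc/profile itself. The proxy address proxy.example.com:3128 and the NO_PROXY variable (a commonly supported bypass list) are illustrative assumptions, not values from this guide:

```shell
# Sketch: stage proxy settings in a scratch profile snippet.
# proxy.example.com:3128 is a placeholder; NO_PROXY is an optional,
# commonly supported variable listing hosts that bypass the proxy.
snippet=$(mktemp)
cat > "$snippet" <<'EOF'
export HTTP_PROXY="http://proxy.example.com:3128"
export HTTPS_PROXY="http://proxy.example.com:3128"
export NO_PROXY="localhost,127.0.0.1"
EOF

# Source the snippet as a login shell would source /etc/profile,
# then confirm the variables are present in the environment.
. "$snippet"
env | grep -i _proxy
rm -f "$snippet"
```

Once the snippet behaves as expected, the same export lines can be appended to /etc/profile, after which they apply to new login shells.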
Some services, such as the Cockpit web console, use the Podman API systemd service to interact with Podman.

Note:

The Podman API service isn't required to use Podman. Only run this service if you use applications that use the API to interact with Podman, such as the Cockpit web console.

To set the system proxy environment variables for services that use the Podman API, you can create a systemd service drop-in.

  1. Create the /etc/systemd/system/podman.service.d directory, if it doesn't already exist, to host systemd service drop-in configuration specific to the Podman API service.
    sudo mkdir -p /etc/systemd/system/podman.service.d
  2. Create or edit the /etc/systemd/system/podman.service.d/http-proxy.conf file to contain contents similar to:
    [Service]
    Environment="HTTP_PROXY=proxy_URL:port"
    Environment="HTTPS_PROXY=proxy_URL:port"

    Replace proxy_URL:port with the URL and port number for the proxy server that you need to use.

  3. Reload the systemd configuration changes and restart the Podman API service:
    sudo systemctl daemon-reload
    sudo systemctl restart podman
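As a non-destructive sketch, the drop-in file from step 2 can first be generated into a scratch directory and checked before it's copied to /etc/systemd/system/podman.service.d/. The proxy URL here is a placeholder assumption:

```shell
# Sketch: generate the http-proxy.conf drop-in into a scratch directory
# so its contents can be reviewed before copying it into
# /etc/systemd/system/podman.service.d/. The proxy URL is a placeholder.
dropin_dir=$(mktemp -d)
cat > "$dropin_dir/http-proxy.conf" <<'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
EOF

# Confirm both variables are declared as Environment entries.
env_count=$(grep -c '^Environment=' "$dropin_dir/http-proxy.conf")
echo "$env_count"
rm -rf "$dropin_dir"
```

After the real file is in place and you have run the daemon-reload and restart commands from step 3, sudo systemctl show podman --property=Environment displays the environment that the service received.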

Configuring Port Mapping for Containers

Map network ports for use with unprivileged Podman containers.

For unprivileged containers that are run by the standard user, without root permissions, Podman relies on port mapping to use the existing network infrastructure that's available on the host system. A standard user can't, and doesn't need to, configure specific network settings such as assigning IP addresses for containers. Podman handles the networking functionality for these containers automatically by performing port forwarding to container-based services.

In previous releases of Oracle Linux and Podman, slirp4netns provided a separate network configuration for each container, set its own gateway address, and provided Network Address Translation (NAT) for communication between unprivileged containers and the host when restricted port access was needed. Port publishing for an unprivileged user was limited to IPv4 port numbers 1024 to 65535, and there was no native IPv6 port mapping functionality.

For systems running Oracle Linux 9.5 or newer, and Podman 5.3 or newer, unprivileged containers use pasta networking by default. Oracle Linux 10 systems also use pasta networking by default.

Pasta networking uses the Plug A Simple Socket Transport (passt) network driver to copy the IP addresses from the host network adapter configuration and provide a translation layer between layer 2 network interfaces and layer 4 sockets for protocols such as TCP, UDP, and ICMP. This translation layer is accessible to unprivileged containers, provides access to IPv4 and IPv6 port mappings, and manages network access between containers on the same host. For more information, see upstream passt documentation.

Note:

To learn more about Pasta and slirp4netns networking, see the Use Pasta Networking with Podman on Oracle Linux tutorial.

Example 8-1 Verify pasta networking in a container

To verify that a running container on an Oracle Linux 9 host system is using pasta networking, use the podman inspect command and check the HostConfig.NetworkMode setting. For example:

podman inspect --format='{{.HostConfig.NetworkMode}}' container_id

Replace container_id with the name or ID of the container. If the container is using pasta networking, the output looks similar to:

pasta

Example 8-2 Map a port on the host to a container

This example maps port 8080 on the host to the container port 80:

podman run --name mynginx -d -p 8080:80 quay.io/libpod/alpine_nginx:latest

The -P option can be used when running the container to have Podman automatically publish all exposed container ports to available host ports. However, the resulting mappings are less predictable than explicit -p mappings.

After a port mapping is established, that port can be accessed directly from the host where the container is running. In this example, the host can access port 80 on the container by opening a web browser to http://localhost:8080, or using a curl command. For example:

curl http://127.0.0.1:8080

In this example, the output looks similar to:

podman rulez

To view a container's port mappings directly, use the podman port command. For example:

podman port mynginx

If the container has active port mappings, the command output looks similar to:

80/tcp -> 0.0.0.0:8080

You can use the podman port -a command to view all port mappings for all the containers running on the host.
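When scripting against these mappings, the host port can be extracted from a podman port output line. This sketch operates on a sample line matching the output format shown above, rather than on a live container:

```shell
# Sketch: extract the host port from a "podman port" mapping line.
# The sample line stands in for live output such as:
#   80/tcp -> 0.0.0.0:8080
mapping='80/tcp -> 0.0.0.0:8080'

# The host port is the text after the final colon.
host_port=${mapping##*:}
echo "$host_port"    # prints 8080
```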

Example 8-3 Verify port access between containers

Because pasta networking copies the IP address configuration from the host, a container can communicate with another container by using the mapped port together with the IP address or host name of the Oracle Linux host system.

Extending the previous example, run a second container with the following command:

podman run -it --rm oraclelinux:9-slim curl http://hostname:8080

Where hostname is the host name or IP address of the Oracle Linux host system.

To review inbound web traffic on the container running a web server, use the podman logs command:

podman logs container 2> /dev/null | grep "GET /"

Replace container with the container name or ID.

HTTP requests between Podman containers on the same host are logged with the host system gateway IP address rather than the public-facing IP address, because the pasta translation layer routes this traffic locally through the host.

If the host system has firewall software running, inbound traffic must be allowed on the mapped port for the container to be externally accessible to other host systems and remote clients. For example:

sudo firewall-cmd --add-port=8080/tcp --permanent
sudo firewall-cmd --reload

You can then connect to the container from a remote host using:

curl http://hostname:8080

Inspecting Container Networking

You can inspect the networking information for any container that you have created to obtain details such as IP addresses, attached networks, and port mappings. If a container is running under the root account, prefix the following commands with sudo.

To view IP addresses for a container, run:

podman inspect --format='{{.NetworkSettings.IPAddress}}' container

To view networks that are attached to a container, run:

podman inspect --format='{{.NetworkSettings.Networks}}' container

To view port mappings for a container, run:

podman inspect --format='{{.NetworkSettings.Ports}}' container
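The same data can also be retrieved in one call with --format='{{json .NetworkSettings}}' and post-processed. This sketch parses an abbreviated, illustrative sample of that JSON (the address and port values are assumptions), rather than querying a live container:

```shell
# Sketch: parse a saved "podman inspect" NetworkSettings document.
# The JSON below is an abbreviated, illustrative sample of output from:
#   podman inspect --format='{{json .NetworkSettings}}' container
settings='{"IPAddress":"10.89.0.2","Ports":{"80/tcp":[{"HostIp":"0.0.0.0","HostPort":"8080"}]}}'

# Pull the IP address and the first mapped host port with python3.
echo "$settings" | python3 -c '
import json, sys
ns = json.load(sys.stdin)
print(ns["IPAddress"])
print(ns["Ports"]["80/tcp"][0]["HostPort"])
'
```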

Advanced Networking for Containers

Advanced Podman network configuration can only be performed by the root user, and therefore applies only to containers that are run by the root user.

Advanced networking, which might require assigning IP addresses, enables containers to take advantage of particular features within the network stack to communicate with other containers in a pod. In this case, Podman implements a bridged network stack that can handle IP address assignment and full network access for each container.

For containers that are run by the root user, network management is achieved by using one of two backend network stacks:

  • Container Network Interface (CNI): A deprecated networking stack written in Golang and designed around the concept of plugins that can implement different networking functions for a wide range of container-related projects. For more information, see the upstream CNI documentation.
  • Netavark: A network stack designed for Podman, but compatible with other Open Container Initiative container management applications. For more information, see the upstream Netavark documentation.

Podman selects which network stack to use automatically, depending on which network stacks are available on a system. You can identify which network stack a system is using by running:

podman info --format '{{.Host.NetworkBackend}}'

Caution:

The podman network commands that are described in this section only work for containers that are run with root permissions. Running these commands with standard user containers returns an error code.

About CNI Networks

Note:

The CNI network stack is now deprecated and might be removed in future releases of Podman. Consider using Netavark instead. To change the network stack, see Changing the Network Backend. Although CNI is deprecated, Netavark doesn't support plugins that are available in CNI, such as the ability to connect to Kubernetes networks created using Flannel. You can continue to use CNI networking if you require this facility.

CNI is a set of network tooling that caters to container-based network requirements. CNI uses a plugin development model to accommodate different network functions and requirements. Podman can use many of these plugins directly to easily set up basic networking for individual containers, or for containers running within a pod.

You can opt to use CNI as the default network backend to maintain consistent configuration with older Podman deployments.

To use CNI, you must have the containernetworking-plugins package installed. To check whether this package is installed, use:

rpm -q containernetworking-plugins

If it's not installed, install it using:

sudo dnf install -y containernetworking-plugins

For each network you create in Podman, a new configuration file in JSON format is generated in the /etc/cni/net.d/ directory. In most instances, you don't need to edit or manage the files in this directory.

A typical network configuration file might look similar to:

{
   "cniVersion": "0.4.0",
   "name": "mynetwork",
   "plugins": [
      {
         "type": "bridge",
         "bridge": "cni-podman1",
         "isGateway": true,
         "ipMasq": true,
         "hairpinMode": true,
         "ipam": {
            "type": "host-local",
            "routes": [
               {
                  "dst": "0.0.0.0/0"
               }
            ],
            "ranges": [
               [
                  {
                     "subnet": "10.89.0.0/24",
                     "gateway": "10.89.0.1"
                  }
               ]
            ]
         },
         "capabilities": {
            "ips": true
         }
      },
      {
         "type": "portmap",
         "capabilities": {
            "portMappings": true
         }
      },
      {
         "type": "firewall",
         "backend": ""
      },
      {
         "type": "tuning"
      }
   ]
}

About Netavark Networks

Netavark is a high-performance network stack that can be used to configure network bridges, firewall rules, and system settings for containers. Netavark doesn't use plugins to perform configuration setup. All network setup actions are performed directly by the tool itself, which reduces overhead and improves network setup performance when you run a container. Netavark provides improved handling of IPv6, Network Address Translation (NAT), and port forwarding. DNS is also automatically configured across networks, so that a container attached to several networks can connect to any container on any shared network by using the container name as a resolvable DNS reference.

Use the Netavark backend if all deployments use Podman version 4.0 or later and you intend to run containers only within Podman. Netavark provides better performance, improved DNS resolution, and features that make containers integrate more easily into existing network infrastructure.

To use Netavark, you must have the netavark package installed. To check whether this package is installed, use:

rpm -q netavark

If it's not installed, install it using:

sudo dnf install -y netavark

For each network that you create within Podman, a new configuration file in JSON format is generated in the /etc/containers/networks/ directory. In most instances, you don't need to edit or manage the files in this directory.

A typical network configuration file might appear as follows:

{
     "name": "mynetwork",
     "id": "3977b0c90383b8460b75547576dba6ebcf67e815f0ed0c4b614af5cb329ebb83",
     "driver": "bridge",
     "network_interface": "podman1",
     "created": "2022-09-06T12:08:12.853219229Z",
     "subnets": [
          {
               "subnet": "10.89.0.0/24",
               "gateway": "10.89.0.1"
          }
     ],
     "ipv6_enabled": false,
     "internal": false,
     "dns_enabled": true,
     "ipam_options": {
          "driver": "host-local"
     }
}
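When scripting against Netavark network definitions, values such as the subnet and gateway can be read from such a file. This sketch writes an abbreviated copy of the sample configuration above to a scratch file and parses it, rather than reading from /etc/containers/networks/ directly:

```shell
# Sketch: read the subnet and gateway from a Netavark network file.
# The file written here is an abbreviated copy of the sample
# configuration above; on a real system the files live under
# /etc/containers/networks/.
netfile=$(mktemp)
cat > "$netfile" <<'EOF'
{
  "name": "mynetwork",
  "driver": "bridge",
  "subnets": [
    { "subnet": "10.89.0.0/24", "gateway": "10.89.0.1" }
  ],
  "dns_enabled": true
}
EOF

# Print each subnet with its gateway.
subnet_info=$(python3 - "$netfile" <<'PY'
import json, sys
with open(sys.argv[1]) as f:
    net = json.load(f)
for s in net["subnets"]:
    print(s["subnet"], s["gateway"])
PY
)
echo "$subnet_info"
rm -f "$netfile"
```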

You can display the contents of a network configuration file with the following command:

sudo podman network inspect network_name

Changing the Network Backend

You can switch between using the CNI and Netavark network backend stacks and force Podman to select one over the other. Switching from one to the other assumes that you have the packages for both network backend stacks installed, namely containernetworking-plugins and netavark.

Important:

If you change from one network backend to another, you must reset the Podman configuration. The reset removes all existing containers, images, networks, and pods from the environment.

To change and permanently set the network backend for a deployment, perform the following steps.

  1. Check whether /etc/containers/containers.conf exists. If not, copy the default configuration to this location so that you can customize it for the deployment.
    sudo cp /usr/share/containers/containers.conf /etc/containers/
  2. Edit /etc/containers/containers.conf and find the network_backend entry in the [network] section of the configuration file. Remove the entry's comment character if it exists. Change the value to match the network backend that you would prefer to use. For example, to use the CNI backend, change the entry to match:
    network_backend = "cni"
  3. Reinitialize the Podman configuration to its pristine state:
    sudo podman system reset

    This command displays a warning and prompts you for confirmation.

    WARNING! This will remove:
            - all containers
            - all pods
            - all images
            - all networks
            - all build cache
            - all machines
    Are you sure you want to continue? [y/N] y

    Note:

    If you run non-root Podman instances, you must also reset each of these individually when the network stack is changed.

  4. Reboot the system to ensure that the networking is running correctly for Podman to function.
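The edit in step 2 can be sketched non-destructively against a scratch copy before touching /etc/containers/containers.conf. The commented entry written here is an assumed starting state:

```shell
# Sketch: switch network_backend in a scratch copy of containers.conf.
# A real change edits /etc/containers/containers.conf and is followed
# by "podman system reset", as described in the steps above.
conf=$(mktemp)
printf '[network]\n# network_backend = "netavark"\n' > "$conf"

# Uncomment the entry and set the desired backend.
sed -i 's|^# *network_backend *=.*|network_backend = "cni"|' "$conf"
backend_line=$(grep '^network_backend' "$conf")
echo "$backend_line"
rm -f "$conf"
```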

Creating and Removing Networks

Use the podman network create command to generate a new network configuration. Podman automatically defines network settings based on the default network and any other existing networks. However, options are available to set the network range and subnet size, and to enable IPv6. Run podman help network create or see the podman-network-create(1) manual page for more information about these options.

Use the podman network remove command to remove a network. You must first remove any containers connected to the network, or use the -f option to force the removal of those containers along with the network.

You can also remove all unused networks on the system by using the podman network prune command.

Example 8-4 Create a network

podman network create mynetwork

Example 8-5 Remove a network

To remove a network that you have created, run:
sudo podman network rm mynetwork

Example 8-6 Remove unused networks

podman network prune

Listing Networks

Use the podman network ls command to print a list of all the Podman networks.

For more information on the podman network ls command, see the podman-network-ls(1) manual page.

Example 8-7 List all Podman networks

podman network ls

The output might look similar to:

NETWORK ID    NAME                         DRIVER
2f259bab93aa  podman                       bridge
0b723c8502a2  podman-default-kube-network  bridge

Connecting and Disconnecting Container Networks

Use the podman network connect command to add a container to a network. When you create containers, the network is automatically started and containers are assigned IP addresses within the range that's defined for a network. Likewise, when you delete a container, the network is also automatically stopped.

Use the podman network disconnect command to remove a container from a network.

For more information on the podman network connect command, see the podman-network-connect(1) manual page. For information on the podman network disconnect command, see the podman-network-disconnect(1) manual page.

Example 8-8 Add a container to a network

podman network connect mynetwork mycontainer

Example 8-9 Disconnect a container from a network

podman network disconnect mynetwork mycontainer