Note:
- This tutorial is available in an Oracle-provided free lab environment.
- It uses example values for Oracle Cloud Infrastructure credentials, tenancy, and compartments. When completing your lab, substitute these values with ones specific to your cloud environment.
Use MetalLB with Oracle Cloud Native Environment
Introduction
Network load balancers provide a method of externally exposing Kubernetes applications. A Kubernetes LoadBalancer service creates a network load balancer that provides and exposes an external IP address for connecting to an application from outside the cluster.
MetalLB is a network load balancer for Kubernetes applications deployed with Oracle Cloud Native Environment that runs on bare metal hosts. MetalLB allows you to use Kubernetes LoadBalancer services, which traditionally use a cloud provider’s network load balancer, in bare metal environments.
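For illustration, this optional sketch (using a hypothetical deployment named my-app, which is not part of this lab) shows how an application is typically exposed through a LoadBalancer service. On a bare metal cluster without MetalLB, the service's EXTERNAL-IP would remain in the pending state.

# Expose a hypothetical deployment through a LoadBalancer service.
kubectl expose deployment my-app --type=LoadBalancer --port=80

# Without a load balancer implementation such as MetalLB,
# EXTERNAL-IP stays <pending> on bare metal clusters.
kubectl get svc my-app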
Objectives
In this tutorial, you will learn how to:
- Install and configure the MetalLB module
- Use a Kubernetes application to confirm that MetalLB is working
Prerequisites
- Minimum of 7 Oracle Linux instances for the Oracle Cloud Native Environment cluster:
  - Operator node
  - 3 Kubernetes control plane nodes
  - 3 Kubernetes worker nodes
- Each system should have Oracle Linux installed and configured with:
  - An Oracle user account (used during the installation) with sudo access
  - Key-based SSH, also known as password-less SSH, between the hosts
  - Prerequisites for Oracle Cloud Native Environment
- Additional requirements include:
  - A virtual IP address for the primary control plane node:
    - Do not use this IP address on any of the nodes.
    - The load balancer dynamically sets the IP address to the control plane node assigned as the primary controller.

Note: If you are deploying to Oracle Cloud Infrastructure, your tenancy requires enabling a new feature introduced in OCI: Layer 2 Networking for VLANs within your virtual cloud networks (VCNs). The OCI Layer 2 Networking feature is not generally available, although the free lab environment's tenancy enables this feature. If you have a use case, work with your technical team to get your tenancy allowlisted to use this feature.
Deploy Oracle Cloud Native Environment
Note: If running in your own tenancy, read the linux-virt-labs GitHub project README.md and complete the prerequisites before deploying the lab environment.
-
Open a terminal on the Luna Desktop.
-
Clone the linux-virt-labs GitHub project.

git clone https://github.com/oracle-devrel/linux-virt-labs.git
-
Change into the working directory.
cd linux-virt-labs/ocne
-
Install the required collections.
ansible-galaxy collection install -r requirements.yml
-
Update the Oracle Cloud Native Environment configuration.
cat << EOF | tee instances.yml > /dev/null
compute_instances:
  1:
    instance_name: "ocne-operator"
    type: "operator"
  2:
    instance_name: "ocne-control-01"
    type: "controlplane"
  3:
    instance_name: "ocne-worker-01"
    type: "worker"
  4:
    instance_name: "ocne-worker-02"
    type: "worker"
  5:
    instance_name: "ocne-control-02"
    type: "controlplane"
  6:
    instance_name: "ocne-control-03"
    type: "controlplane"
  7:
    instance_name: "ocne-worker-03"
    type: "worker"
EOF
-
Deploy the lab environment.
ansible-playbook create_instance.yml -e localhost_python_interpreter="/usr/bin/python3.6" -e ocne_type=vlan -e use_vlan=true -e use_vlan_full=true -e use_int_lb=true -e "@instances.yml"
The free lab environment requires the extra variable localhost_python_interpreter, which sets ansible_python_interpreter for plays running on localhost. This variable is needed because the environment installs the RPM package for the Oracle Cloud Infrastructure SDK for Python, which places its modules under python3.6.

Important: Wait for the playbook to run successfully and reach the pause task. At this stage of the playbook, the installation of Oracle Cloud Native Environment is complete, and the instances are ready. Take note of the previous play, which prints the public and private IP addresses of the nodes it deploys and any other deployment information needed while running the lab.
Confirm the Number of Nodes
It helps to know the number and names of nodes in your Kubernetes cluster.
-
Open a terminal and connect via SSH to the ocne-operator node.
ssh oracle@<ip_address_of_node>
-
Set up the kubectl command on the operator node.

mkdir -p $HOME/.kube; \
ssh ocne-control-01 "sudo cat /etc/kubernetes/admin.conf" > $HOME/.kube/config; \
sudo chown $(id -u):$(id -g) $HOME/.kube/config; \
export KUBECONFIG=$HOME/.kube/config; \
echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc
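If you want to confirm that kubectl can reach the cluster before proceeding, a quick optional check is:

kubectl cluster-info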
-
List the nodes in the cluster.
kubectl get nodes
The output shows the control plane and worker nodes in a Ready state along with their current Kubernetes version.
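If you'd rather block until every node reports Ready (optional, with an assumed five-minute timeout), kubectl provides a wait command:

kubectl wait --for=condition=Ready nodes --all --timeout=300s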
Install the MetalLB Module
-
Open the firewall for MetalLB on each of the worker nodes.
for host in ocne-worker-01 ocne-worker-02 ocne-worker-03
do
  ssh $host "sudo firewall-cmd --zone=public --add-port=7946/tcp --permanent; sudo firewall-cmd --zone=public --add-port=7946/udp --permanent; sudo firewall-cmd --reload"
done
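Optionally, you can confirm the firewall change took effect by listing the open ports on each worker node:

for host in ocne-worker-01 ocne-worker-02 ocne-worker-03
do
  ssh $host "sudo firewall-cmd --zone=public --list-ports"
done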
-
Get a list of the module instances and add the --update-config flag. This flag saves the global flag values locally, so you can avoid using the --api-server flag in future olcnectl commands.

olcnectl module instances \
  --api-server 10.0.12.100:8091 \
  --environment-name myenvironment \
  --update-config
-
Create the MetalLB configuration file.
cat << 'EOF' | tee metallb-config.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb
spec:
  addresses:
  - 10.0.12.240-10.0.12.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2sample
  namespace: metallb
spec:
  ipAddressPools:
  - default
EOF
-
Deploy the MetalLB module.
olcnectl module create --environment-name myenvironment --module metallb --name mymetallb --metallb-kubernetes-module mycluster --metallb-config metallb-config.yaml

olcnectl module install --environment-name myenvironment --name mymetallb
-
Show the updated list of installed modules.
olcnectl module instances --environment-name myenvironment
Example Output:
INSTANCE          MODULE      STATE
mycluster         kubernetes  installed
mymetallb         metallb     installed
...
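Because the configuration file targets the metallb namespace, you can optionally confirm that the MetalLB pods and the resources you defined exist there (exact pod names vary by deployment):

kubectl get pods -n metallb
kubectl get ipaddresspools,l2advertisements -n metallb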
Create a Kubernetes Application
Verify that MetalLB works by deploying a Kubernetes application configured with a LoadBalancer service.
-
Create a Kubernetes Deployment and Service file.
tee echo-oci-lb.yml > /dev/null << 'EOF'
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-deployment
  labels:
    app: echo1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: echo1
  template:
    metadata:
      labels:
        app: echo1
    spec:
      containers:
      - name: echoserver
        image: k8s.gcr.io/echoserver:1.4
        ports:
        - containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
  name: echo-lb-service
spec:
  selector:
    app: echo1
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 8080
EOF
-
Deploy the application and service.
kubectl apply -f echo-oci-lb.yml
-
Confirm the Kubernetes deployment is running.
kubectl get deployments
Example Output:
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
echo-deployment   2/2     2            2           3m15s
-
Show the Kubernetes service is running.
kubectl get svc
Example Output:
NAME              TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
echo-lb-service   LoadBalancer   10.111.72.49   10.0.12.240   80:31727/TCP   4m47s
kubernetes        ClusterIP      10.96.0.1      <none>        443/TCP        58m
Notice that the EXTERNAL-IP for the echo-lb-service LoadBalancer is 10.0.12.240. MetalLB assigned this address from its pool, and you can use it to connect to the application.
Testing the Deployment
The next step is to test the newly deployed application. Because MetalLB dynamically provisions the EXTERNAL-IP value (in this scenario, from the range 10.0.12.240-10.0.12.250), the first two steps store this dynamic value in shell environment variables.
-
Capture the assigned EXTERNAL-IP address.

LB=$(kubectl get svc -o jsonpath="{.status.loadBalancer.ingress[0].ip}" echo-lb-service)
-
Capture the assigned port number.
LBPORT=$(kubectl get svc -o jsonpath="{.spec.ports[0].port}" echo-lb-service)
-
Confirm the environment variables exist and have values set.

echo $LB
echo $LBPORT
-
Use the cURL command to connect to the deployed application.
curl -i -w "\n" $LB:$LBPORT
Example Output:
HTTP/1.1 200 OK
Server: nginx/1.10.0
Date: Wed, 10 Aug 2022 10:52:10 GMT
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: keep-alive

CLIENT VALUES:
client_address=10.244.2.0
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://10.0.12.240:8080/

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=*/*
host=10.0.12.240
user-agent=curl/7.61.1
BODY:
-no body in request-
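Because the Deployment runs two replicas behind the service, you can optionally send a few more requests and watch the CLIENT VALUES change as traffic distributes across the pods:

for i in 1 2 3 4 5
do
  curl -s $LB:$LBPORT | grep client_address
done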
Summary
These results confirm that MetalLB is configured successfully and that the load balancer accepts requests for the application.
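If you want to remove the sample application after testing (optional), delete the resources the manifest created:

kubectl delete -f echo-oci-lb.yml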
For More Information
- Oracle Cloud Native Environment Documentation
- Oracle Cloud Native Environment Track
- Oracle Linux Training Station
More Learning Resources
Explore other labs on docs.oracle.com/learn or access more free learning content on the Oracle Learning YouTube channel. Additionally, visit education.oracle.com/learning-explorer to become an Oracle Learning Explorer.
For product documentation, visit Oracle Help Center.