3 Using a Service Mesh
Istio automatically populates its service registry with all services you create in the service mesh, so it knows all possible service endpoints. By default, the Envoy proxy sidecars manage traffic by distributing requests across the instances of a service in a round-robin fashion. You can configure the management of this traffic using the Istio traffic management APIs. The APIs are accessed using Kubernetes custom resource definitions (CRDs), which you set up and deploy using YAML files.
The Istio API traffic management features available are:
- Virtual services: Configure request routing to services within the service mesh. Each virtual service can contain a series of routing rules that are evaluated in order.
- Destination rules: Configure the destination of routing rules within a virtual service. Destination rules are evaluated and actioned after the virtual service routing rules, for example, to route traffic to a particular version of a service.
- Gateways: Configure inbound and outbound traffic for services in the mesh. Gateways are configured as standalone Envoy proxies, running at the edge of the mesh. An ingress and an egress gateway are deployed automatically when you install the Istio module.
- Service entries: Configure services outside the service mesh in the Istio service registry. Service entries let you manage traffic to outside services as if they're in the service mesh. Services in the mesh are automatically added to the service registry; service entries let you bring in outside services.
- Sidecars: Configure sidecar proxies to set the ports, protocols, and services to which a microservice can connect.
These Istio traffic management APIs are documented in the upstream Istio documentation.
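For example, a virtual service and a destination rule are often used together to route traffic to a particular version of a service. The following is a minimal sketch only: the my-service host and the v2 subset are hypothetical names used for illustration, not objects created by the Istio module:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service-routes
spec:
  hosts:
  - my-service
  http:
  - route:
    - destination:
        host: my-service
        subset: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: my-service-destination
spec:
  host: my-service
  subsets:
  - name: v2
    labels:
      version: v2
Here the virtual service routes all HTTP requests for my-service to the v2 subset, and the destination rule defines that subset as the pods carrying the version: v2 label.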
Enabling Proxy Sidecars
Istio abstracts network communication between services away from the services themselves; communication is instead handled by proxies. Istio uses a sidecar design, which means that communication proxies run in their own containers alongside every service container.
To enable the use of a service mesh in Kubernetes applications, you need to enable automatic proxy sidecar injection. This injects proxy sidecar containers into pods you create.
To put automatic sidecar injection into effect, the namespace to be used by an application must be labeled with istio-injection=enabled. For example, to enable automatic sidecar injection for the default namespace:
kubectl label namespace default istio-injection=enabled
To see the label is set for the default namespace, use:
kubectl get namespace -L istio-injection
The output looks similar to:
NAME STATUS AGE ISTIO-INJECTION
default Active 29h enabled
externalip-validation-system Active 29h
istio-system Active 29h
...
Any applications deployed into the default namespace have automatic sidecar injection enabled, and the sidecar runs alongside the application container in the pod. For example, create an NGINX deployment:
kubectl create deployment --image nginx hello-world
Show the details of the pod to see that an istio-proxy container is also deployed with the application:
kubectl get pods
The output looks similar to:
NAME READY STATUS RESTARTS AGE
hello-world-5fcdb6bc85-wph7h 2/2 Running 0 7m40s
You can see that an istio-proxy sidecar is created along with the nginx container by describing the pod:
kubectl describe pods hello-world-5fcdb6bc85-wph7h
The output looks similar to:
...
Normal Started 13s kubelet, worker1.example.com Started container nginx
Normal Started 12s kubelet, worker1.example.com Started container istio-proxy
Setting up a Load Balancer for an Ingress Gateway
If you're deploying the Istio module, you might also want to set up a load balancer to handle the Istio ingress gateway traffic. The information in this section shows you how to set up a load balancer to manage access to services from outside the cluster using the Istio ingress gateway.
The load balancer port mapping in this section sets ports for HTTP and HTTPS. The load balancer listens for HTTP traffic on port 80 and redirects it to the Istio ingress gateway NodePort number for http2. You query the port number to set for http2 by entering the following on a control plane node:
kubectl describe svc istio-ingressgateway -n istio-system | grep http2
The output looks similar to:
Port: http2 80/TCP
NodePort: http2 32681/TCP
In this example, the NodePort is 32681. So the load balancer must be configured to listen for HTTP traffic on port 80 and redirect it to the istio-ingressgateway service on port 32681.
For HTTPS traffic, the load balancer listens on port 443 and redirects it to the Istio ingress gateway NodePort number for https. To find the port number to set for https, enter:
kubectl describe svc istio-ingressgateway -n istio-system | grep https
The output looks similar to:
Port: https 443/TCP
NodePort: https 31941/TCP
In this example, the NodePort is 31941. So the load balancer must be configured to listen for HTTPS traffic on port 443 and redirect it to the istio-ingressgateway service on port 31941.
The load balancer must be set up with the following configuration for HTTP traffic:
- The listener listening on TCP port 80.
- The distribution set to round robin.
- The target set to the TCP port for http2 on the worker nodes. In this example it's 32681.
- The health check set to TCP.
For HTTPS traffic:
- The listener listening on TCP port 443.
- The distribution set to round robin.
- The target set to the TCP port for https on the worker nodes. In this example it's 31941.
- The health check set to TCP.
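For example, if you're using HAProxy as described in the Oracle Linux load balancing documentation, the configuration might look similar to the following minimal sketch. The worker1.example.com and worker2.example.com host names are placeholders, and the NodePort numbers are the ones from this example; substitute the host names and ports for the cluster:
# Listen for HTTP traffic on port 80 and distribute it
# round robin to the http2 NodePort on each worker node.
frontend istio-http
    bind *:80
    mode tcp
    default_backend istio-http-workers

backend istio-http-workers
    mode tcp
    balance roundrobin
    option tcp-check
    server worker1 worker1.example.com:32681 check
    server worker2 worker2.example.com:32681 check

# Listen for HTTPS traffic on port 443 and distribute it
# round robin to the https NodePort on each worker node.
frontend istio-https
    bind *:443
    mode tcp
    default_backend istio-https-workers

backend istio-https-workers
    mode tcp
    balance roundrobin
    option tcp-check
    server worker1 worker1.example.com:31941 check
    server worker2 worker2.example.com:31941 check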
For more information on setting up a load balancer, see Oracle Linux 9: Setting Up Load Balancing or Oracle Linux 8: Setting Up Load Balancing.
If you're deploying to Oracle Cloud Infrastructure, you can either set up a new load balancer or, if you have one, use the load balancer you set up for the Kubernetes module.
To set up a load balancer on Oracle Cloud Infrastructure for HTTP traffic:
- Add a backend set to the load balancer using weighted round robin.
- Add the worker nodes to the backend set. Set the port for the worker nodes to the TCP port for http2. In this example it's 32681.
- Create a listener for the backend set using TCP port 80.
To set up a load balancer on Oracle Cloud Infrastructure for HTTPS traffic:
- Add a backend set to the load balancer using weighted round robin.
- Add the worker nodes to the backend set. Set the port for the worker nodes to the TCP port for https. In this example it's 31941.
- Create a listener for the backend set using TCP port 443.
For more information on setting up a load balancer in Oracle Cloud Infrastructure, see the Oracle Cloud Infrastructure documentation.
Setting up an Ingress Gateway
An Istio ingress gateway lets you define entry points into the service mesh through which all incoming traffic flows. An ingress gateway lets you manage access to services from outside the cluster. You can monitor and set route rules for the traffic entering the cluster.
This section contains an example of configuring the automatically created ingress gateway for an NGINX web server application. The example assumes you have a load balancer available at lb.example.com that connects to the istio-ingressgateway service on TCP port 32681. The load balancer listener is set to listen on HTTP port 80, which is the port for the NGINX web server application used in the virtual service in this example.
To set up an ingress gateway:
- Create the deployment file to create the NGINX web server application. Create a file named my-nginx.yml, containing:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: my-webserver
  name: my-nginx
  namespace: my-namespace
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-webserver
  template:
    metadata:
      labels:
        app: my-webserver
    spec:
      containers:
      - image: nginx
        name: my-nginx
        ports:
        - containerPort: 80
- Create a service for the deployment. Create a file named my-nginx-service.yml containing:
apiVersion: v1
kind: Service
metadata:
  name: my-http-ingress-service
  namespace: my-namespace
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: my-webserver
  type: ClusterIP
- Create an ingress gateway for the service. Create a file named my-nginx-gateway.yml containing:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-nginx-gateway
  namespace: my-namespace
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "mynginx.example.com"
- Create a virtual service for the ingress gateway. Create a file named my-nginx-virtualservice.yml containing:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-nginx-virtualservice
  namespace: my-namespace
spec:
  hosts:
  - "mynginx.example.com"
  gateways:
  - my-nginx-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        port:
          number: 80
        host: my-http-ingress-service
- Set up a namespace for the application named my-namespace and enable automatic proxy sidecar injection:
kubectl create namespace my-namespace
kubectl label namespaces my-namespace istio-injection=enabled
- Run the deployment, service, ingress gateway, and virtual service:
kubectl apply -f my-nginx.yml
kubectl apply -f my-nginx-service.yml
kubectl apply -f my-nginx-gateway.yml
kubectl apply -f my-nginx-virtualservice.yml
- You can see the ingress gateway is running using:
kubectl get gateways.networking.istio.io --namespace my-namespace
The output looks similar to:
NAME               AGE
my-nginx-gateway   33s
- You can see the virtual service is running using:
kubectl get virtualservices.networking.istio.io --namespace my-namespace
The output looks similar to:
NAME                      GATEWAYS             HOSTS                    AGE
my-nginx-virtualservice   [my-nginx-gateway]   [mynginx.example.com]    107s
- To confirm the ingress gateway is serving the application to the load balancer, use:
curl -I -HHost:mynginx.example.com lb.example.com:80/
The output looks similar to:
HTTP/1.1 200 OK
Date: <date> 00:39:16 GMT
Content-Type: text/html
Content-Length: 612
Connection: keep-alive
last-modified: <date> 14:32:47 GMT
etag: "5e5e6a8f-264"
accept-ranges: bytes
x-envoy-upstream-service-time: 15
Setting up an Egress Gateway
The Istio egress gateway lets you set up access to external HTTP and HTTPS services from applications inside the service mesh. Calls to external services are routed through the sidecar proxy container.
The Istio egress gateway is deployed automatically. You don't need to manually deploy it. You can confirm the Istio egress gateway service is running using:
kubectl get svc istio-egressgateway -n istio-system
The output looks similar to:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-egressgateway ClusterIP 10.111.233.121 <none> 80/TCP,443/TCP,15443/TCP 9m26s
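To manage traffic to an external service from inside the mesh, you also typically create a service entry for it in the Istio service registry. The following is a minimal sketch only; the external-httpbin name and the httpbin.org host are illustrations, not objects the Istio module creates:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-httpbin
  namespace: my-namespace
spec:
  hosts:
  - httpbin.org
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS
  location: MESH_EXTERNAL
With this service entry in place, requests from sidecar-injected pods to httpbin.org can be monitored and controlled with the same traffic management APIs as in-mesh services.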
An example showing how to set up and use an Istio egress gateway is available in the upstream Istio documentation.
Testing Network Resilience
Istio network resilience and testing features let you set up and test failure recovery, and inject faults to test resilience. You set up these features dynamically at runtime to improve the reliability of applications in the service mesh. The network resilience and testing features available in this release are:
- Timeouts: The amount of time that a sidecar proxy waits for replies from a service. You can set up a virtual service to configure specific timeouts for a service. The default timeout for HTTP requests is 15 seconds.
- Retries: The number of retries allowed by the sidecar proxy to connect to a service after an initial connection failure. You can set up a virtual service to enable and configure the number of retries for a service. By default, no retries are allowed.
- Fault injection: Set up fault injection mechanisms to test failure recovery of applications. You can use a virtual service to inject faults into a service: set delays to mimic network latency or an overloaded upstream service, and set aborts to mimic failures in an upstream service.
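For example, the following virtual service is a minimal sketch that sets a timeout and retry policy for a hypothetical my-service host; the names and values are illustrations only:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service-resilience
spec:
  hosts:
  - my-service
  http:
  - route:
    - destination:
        host: my-service
    # Wait at most 10 seconds overall for a reply, retrying a
    # failed request up to 3 times with 2 seconds per attempt.
    timeout: 10s
    retries:
      attempts: 3
      perTryTimeout: 2s
To test fault injection, a fault block can be set on a route instead. For example, this fragment of the http section delays 10 percent of requests by 5 seconds to mimic network latency:
  http:
  - fault:
      delay:
        percentage:
          value: 10
        fixedDelay: 5s
    route:
    - destination:
        host: my-service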