10 Upgrading the ASAP Cloud Native Environment

This chapter describes the tasks you perform in order to apply a change or upgrade to a component in the cloud native environment.

ASAP supports only one replica per instance. If the same Docker image is used in two instances, the behavior is undefined. Due to these constraints, ASAP supports only offline upgrades.

ASAP Cloud Native Upgrade Procedures

ASAP cloud native owns its components, so the upgrade procedure for each component applies. ASAP cloud native provides the mechanism to perform the upgrade, using scripts that are bundled with the Docker image and the cloud native toolkit. The upgrade procedure consists of upgrading the ASAP cloud native Docker image and deploying that image in the instance.

Upgrading the ASAP installer, Java, WebLogic Server, or the database client, and installing or uninstalling cartridges, all require an upgrade of the ASAP Docker image.

To upgrade the ASAP Docker Image:

  1. Delete the running instance using the delete-instance.sh script.
  2. Copy the required installers to the $asap-img-builder/installers directory.
  3. Copy the required cartridges to the $asap-img-builder/cartridges directory.
  4. Run the following command to copy installers and cartridges to the volume:
    $asap-img-builder/upgradeASAPDockerImage.sh
  5. Create a new container from the previous version of the Docker image using the following command:
    docker run --name $ASAP_CONTAINER -dit -h $DOCKER_HOSTNAME -p $WEBLOGIC_PORT -v $ASAP_VOLUME:/$ASAP_VOLUME <ASAP-BASE-IMAGE>

    For example: docker run --name asap-c -dit -h asaphost -p 7601 -v dockerhost_volume:/dockerhost_volume asapcn:7.4.0.0

    The container is created with the name asap-c.

  6. Enter into the ASAP container using the following command:
    docker exec -it asap-c bash

    You are now inside the ASAP container.

  7. Upgrade the ASAP installation manually in console mode, as described in the remaining steps.
  8. Run the ./startWeblogic.sh script to start WebLogic Server in the background.
  9. Navigate to the installation directory of ASAP as shown below:
    cd /dockerhost_volume/installers/new installer
  10. Run the following command to install ASAP:
    ./asap74ServerLinux -console
  11. Enter the details for the prompted options.
  12. Enter the hostname of WebLogic Server as: 127.0.0.1
  13. Enter the port number provided in the domain.xml file.
  14. Enter the credentials of the Oracle WebLogic Server Administrator provided when you created the domain.
  15. Select the server as AdminServer.

    The ASAP installation in console mode is now complete.
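The container-creation steps above (steps 1 through 6) can be sketched as a single shell session. The project and instance names, paths, ports, and image tag below are examples only and will differ in your environment; ASAP_IMG_BUILDER stands in for the $asap-img-builder directory referenced in the steps.

```shell
# Example path for the image builder directory (an assumption).
ASAP_IMG_BUILDER=/scratch/asap-img-builder

# Step 1: delete the running instance (project "sr", instance "quick" are examples).
$ASAP_CNTK/scripts/delete-instance.sh -p sr -i quick

# Steps 2 and 3: stage the installers and cartridges for the image builder.
cp asap74ServerLinux "$ASAP_IMG_BUILDER/installers/"
cp my_cartridge.sar "$ASAP_IMG_BUILDER/cartridges/"

# Step 4: copy the installers and cartridges to the volume.
"$ASAP_IMG_BUILDER/upgradeASAPDockerImage.sh"

# Step 5: start a staging container from the previous image version.
docker run --name asap-c -dit -h asaphost -p 7601 \
  -v dockerhost_volume:/dockerhost_volume asapcn:7.4.0.0

# Step 6: open a shell inside the staging container.
docker exec -it asap-c bash
```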

Upgrading cartridges

To upgrade cartridges, uninstall the previous cartridges and install the new cartridges.

To uninstall and install cartridges:

  1. Repeat steps 1 to 6 of "To upgrade the ASAP Docker Image" to create the staging container.

    Cartridges are present in the /dockerhost_volume/cartridges directory inside the container.

  2. Start ASAP and WebLogic Server using the startALL.sh script.
  3. Navigate to the ASAP installation directory using cd $ASAP_BASE.
  4. Source the environment profile using the source Environment_Profile script.
  5. Verify the ASAP server status using the status command.
  6. Uninstall the cartridges. For more information, see "Uninstalling a Cartridge" in ASAP Installation Guide.
  7. Install the cartridges present in the /dockerhost_volume/cartridges directory. For more information, see "Installing a Cartridge" in ASAP Installation Guide.
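Inside the staging container, steps 2 through 5 above might look like the following session. This is a sketch only; the actual cartridge uninstall and install procedures are in the ASAP Installation Guide.

```shell
# Inside the staging container (docker exec -it asap-c bash):
./startALL.sh                 # start ASAP and WebLogic Server
cd $ASAP_BASE                 # navigate to the ASAP installation directory
source Environment_Profile    # load the ASAP environment
status                        # verify the ASAP server status

# List the new cartridges that were staged on the volume; uninstall the old
# cartridges and install these per the ASAP Installation Guide.
ls /dockerhost_volume/cartridges
```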

To upgrade Java, WebLogic Server, and database client, see "About Upgrading ASAP" in ASAP Installation Guide.

Creating an Image from the Staging Container

The staging container is deployed with all the required updates to provision network elements. Save this container as a Docker image to deploy in the Kubernetes cluster.

To create an image from the staging container:

  1. Run the following command to create an image from the staging container:
    docker commit asap-c imagename:version

    Where version is the version of the ASAP Docker image. This version should be higher than the previous version.

  2. To deploy a new Docker image in the Kubernetes cluster, the image must be available in the configured Docker registry or on all worker nodes. To push the Docker image to the Kubernetes Docker registry, run the following commands:
    docker images | grep <imagename>
    docker tag <imageid> <tag-imageid>
    docker push <tag-imageid>
    
  3. Stop and remove the containers using the following commands:
    docker stop asap-c
    docker rm asap-c
    
  4. Update the Docker image in the $ASAP_CNTK/charts/asap/values.yaml file.
  5. Create the ASAP instance using the following command:
    $ASAP_CNTK/scripts/create-instance.sh -p sr -i quick

The ASAP instance is now upgraded.
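The commit-tag-push sequence above can be sketched as a single session. The registry host, image name, and version tags are examples; the new version tag should be higher than the previous one.

```shell
# Commit the staging container as a new, higher-versioned image.
docker commit asap-c asapcn:7.4.0.1

# Tag the image for the cluster's Docker registry (example registry host)
# and push it so worker nodes can pull it.
docker tag asapcn:7.4.0.1 registry.example.com:5000/asapcn:7.4.0.1
docker push registry.example.com:5000/asapcn:7.4.0.1

# Clean up the staging container.
docker stop asap-c
docker rm asap-c

# After updating the image in $ASAP_CNTK/charts/asap/values.yaml,
# recreate the instance (project "sr", instance "quick" are examples).
$ASAP_CNTK/scripts/create-instance.sh -p sr -i quick
```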

Order Balancer Cloud Native Upgrade Procedures

To upgrade the Order Balancer Docker Image:

  1. Delete the running instance using the delete-instance.sh script.
  2. Copy the required installers to the $asap-img-builder/installers directory.
  3. Run the following command to copy installers to the volume:
    $asap-img-builder/upgradeOBDockerImage.sh
  4. Create a new container from the previous version of the Docker image using the following command:
    docker run --name $OB_CONTAINER -dit -h $OB_HOSTNAME -p $WEBLOGIC_PORT -v $OB_VOLUME:/$OB_VOLUME <OB-BASE-IMAGE>

    For example: docker run --name ob-c -dit -h obhost -p 7601 -v obhost_volume:/obhost_volume obcn:7.4.0.0

    The container is created with the name ob-c.

  5. Enter into the container using the following command:
    docker exec -it ob-c bash

    You are now inside the Order Balancer container. To upgrade Order Balancer, see "Updating and Redeploying Order Balancer" in ASAP System Administrator's Guide.

Creating an Image from the Staging Container

The staging container is deployed with all the required updates to route work orders to ASAP instances. Save this container as a Docker image to deploy in the Kubernetes cluster.

To create an image from the staging container:

  1. Run the following command to create an image from the staging container:
    docker commit ob-c imagename:version

    Where version is the version of the Order Balancer Docker image. This version should be higher than the previous version.

  2. To deploy the new Docker image in the Kubernetes cluster, the image must be available in the configured Docker registry or on all worker nodes. To push the Docker image to the Kubernetes Docker registry, run the following commands:
    docker images | grep <imagename>
    docker tag <imageid> <tag-imageid>
    docker push <tag-imageid>
    
  3. Stop and remove the containers using the following commands:
    docker stop ob-c
    docker rm ob-c
    
  4. Update the Docker image in the $OB_CNTK/charts/ob/values.yaml file.
  5. Create the Order Balancer instance using the following command:
    $OB_CNTK/scripts/create-instance.sh -p sr -i quick

The Order Balancer instance is now upgraded.

Upgrades to Infrastructure

From the point of view of ASAP instances, upgrades to the cloud infrastructure fall into two categories:
  • Rolling upgrades
  • One-time upgrades

Note:

All infrastructure upgrades must continue to meet the supported types and versions listed in the ASAP documentation's certification statement.

In a rolling upgrade, with proper high-availability planning (such as anti-affinity rules), the instance as a whole remains available while parts of it undergo temporary outages. Examples include Kubernetes worker node OS upgrades, Kubernetes version upgrades, and Docker version upgrades.

A one-time upgrade affects a given instance all at once. The instance as a whole suffers either an operational outage or a control outage. An example is an Ingress controller upgrade.

Kubernetes and Docker Infrastructure Upgrades

Follow standard Kubernetes and Docker practices to upgrade these components. The impact at any point should be limited to one node: master (Kubernetes and OS) or worker (Kubernetes, OS, and Docker).

If a worker node is to be upgraded, cordon and drain the node first. This moves all of its pods to other worker nodes, assuming the cluster has the capacity; you may have to temporarily add a worker node or two. Any ASAP pods on the drained worker suffer an outage until they come up on other workers. However, their messages and orders are redistributed to surviving pods, and processing continues at reduced capacity until the affected pods relocate and initialize. As each worker undergoes this process in turn, pods continue to terminate and start up elsewhere, but as long as the instance has pods on both affected and unaffected nodes, it continues to process orders.
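The per-worker sequence described above might look like the following with kubectl. The node name is an example, and the actual OS, Kubernetes, and Docker upgrade commands depend on your distribution.

```shell
# Cordon the worker so no new pods are scheduled on it.
kubectl cordon worker-node-1

# Drain the worker; its pods (including ASAP pods) are evicted and
# rescheduled on other workers. --ignore-daemonsets is typically required.
kubectl drain worker-node-1 --ignore-daemonsets --delete-emptydir-data

# Upgrade the node's OS, Kubernetes components, and Docker, then reboot.

# Return the node to service so pods can be scheduled on it again.
kubectl uncordon worker-node-1
```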

Ingress Controller Upgrade

Follow the documentation of your chosen Ingress Controller to perform an upgrade. Depending on the Ingress Controller used and its deployment in your Kubernetes environment, the ASAP instances it serves may see a wide set of impacts, ranging from no impact at all (if the Ingress Controller supports a clustered approach and can be upgraded that way) to a complete outage.

The new Traefik can be installed into a new namespace, and, one by one, projects can be unregistered from the old Traefik and registered with the new Traefik.

export TRAEFIK_NS=old-namespace
$ASAP_CNTK/scripts/unregister-namespace.sh -p project -t traefik

export TRAEFIK_NS=new-namespace
$ASAP_CNTK/scripts/register-namespace.sh -p project -t traefik

During this transition, there will be an outage in terms of the outside world interacting with ASAP. Any data that flows through the ingress will be blocked until the new Traefik takes over. This includes GUI traffic, order injection, API queries, and SAF responses from external systems. This outage will affect all the instances in the project being transitioned.

Miscellaneous Upgrade Procedures

This section describes miscellaneous upgrade scenarios.

Network File System (NFS)

If an instance is created successfully, but a change to the NFS configuration is required, then the change cannot be made to a running ASAP instance. In this case, the procedure is as follows:

  1. Delete the ASAP instance.
  2. Update the NFS details in the pv.yaml and pvc.yaml files.
  3. Start the instance.
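A minimal sketch of this procedure follows, assuming example project and instance names; the locations of pv.yaml and pvc.yaml depend on your toolkit layout.

```shell
# NFS details cannot be changed on a running instance, so delete it first
# (project "sr", instance "quick" are examples).
$ASAP_CNTK/scripts/delete-instance.sh -p sr -i quick

# Update the NFS server and export path in the persistent volume specs.
vi pv.yaml pvc.yaml

# Recreate the instance so the new NFS configuration takes effect.
$ASAP_CNTK/scripts/create-instance.sh -p sr -i quick
```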