10 Upgrading the ASAP Cloud Native Environment
This chapter describes the tasks that you perform to apply a change or an upgrade to a component in the cloud native environment.
ASAP supports only one replica per instance. If the same ASAP image is used in two instances, the behavior is undefined. Due to these constraints, ASAP supports only offline upgrades.
ASAP Cloud Native Upgrade Procedures
ASAP cloud native owns its components, and therefore the upgrade procedures for those components also belong to ASAP cloud native. ASAP cloud native provides the mechanism to perform the upgrade using the scripts that are bundled with the ASAP image and the cloud native toolkit. The upgrade procedure includes upgrading the ASAP cloud native image and deploying the ASAP image in the instance, moving from ASAP 7.3.0 or 7.4.0 to ASAP 7.4.1.
Upgrading the ASAP installer, Java, WebLogic Server, or the database client, as well as installing or uninstalling cartridges, all require an upgrade of the ASAP image.
To upgrade the ASAP Image:
1. Delete the running instance using the delete-instance.sh script.
2. Copy the required installers to the $asap-img-builder/installers directory. The following files and folders are required:
- ASAP Redhat Package Manager (RPM) Installer. For example, asap-installer-7.4.1.0.0-Bxxx.x86_64.rpm.
- Linux Installer for ASAP 7.4.1 or later.
- Installers for WebLogic Server and JDK 8.
- Oracle Database Client golden base version 19.3.0.0 for Linux 64-bit (LINUX.X64_193000_client.zip).
- OPatch Utility.
- JDK 17 (jdk-17.0.11_linux-x64_bin.tar.gz).
- Oracle Database Release Updates.
- ANT installer tar file (apache-ant-1.10.15-bin.tar.gz).
3. Copy the required cartridges to the $asap-img-builder/cartridges directory.
4. Run the following command to copy the installers and cartridges to the volume:
$ASAP_IMG_BUILDER/upgrade-asap-image.sh
5. Create a new container from the previous version of the ASAP image using the following command:
podman run --name $ASAP_CONTAINER -dit -h $PODMAN_HOSTNAME -p $WEBLOGIC_PORT -v $ASAP_VOLUME:/$ASAP_VOLUME ASAP-BASE-IMAGE
For example:
podman run --name asap-c -dit -h asaphost -p 7891 -v podmanhost_volume:/podmanhost_volume asapcn:7.4.1.0
This creates a container named asap-c.
6. Enter the ASAP container using the following command:
podman exec -it asap-c bash
You are now inside the ASAP container.
7. Start the WebLogic server by running the following command:
nohup ./startWeblogic.sh &
8. Open another shell to perform the database client upgrade and the JDK installation.
9. Run the container in detached mode:
podman run --name <CONTAINER_NAME> -dit -h <PODMAN_HOSTNAME> -p <WEBLOGIC_PORT> -v $ASAP_VOLUME:/$ASAP_VOLUME <ASAP-BASE-IMAGE>
10. Extract the JDK 17 tar file using the following command:
tar -xvf /$ASAP_VOLUME/installers/$JDK_FILE_17 -C /usr/lib/jvm/java/
In this command, JDK_FILE_17 is the file name of the JDK 17 installer file entered in build_env.sh.
11. Set the JAVA_HOME_ASAP variable to the path of JDK 17 by running the following command:
export JAVA_HOME_ASAP=/usr/lib/jvm/java/jdk-17.0.11
12. Set the ANT_HOME variable to the Ant installation location:
export ANT_HOME=ASAP_home/ant
13. Run the extract-ant.sh script available in $ASAP_IMG_BUILDER/scripts/ to install Ant:
./$ASAP_IMG_BUILDER/scripts/extract-ant.sh
14. Run installDBclient.sh to remove the existing 32-bit version of the database client and install the 64-bit database client:
./$ASAP_IMG_BUILDER/installDBclient.sh
15. Run the following command to upgrade the installed RPM package, where ASAP_home is the directory in which the ASAP software is installed:
sudo rpm -U --prefix ASAP_home asap-installer-7.4.1.0.0-Bxxx.x86_64.rpm
16. Update the $ASAP_BASE/config/sampleUpgradeConfiguration.properties file. See "Upgrading from ASAP 7.3.0.x or ASAP 7.4 to ASAP 7.4.1" in ASAP Installation Guide for information about the sampleUpgradeConfiguration.properties file.
17. Run the upgrade script:
cd $ASAP_base/scripts
./upgradeASAP -properties ../config/sampleUpgradeConfiguration.properties -d
cd ..
source Environment_Profile
18. If the NORTEL_DMS_POTS.sar cartridge is installed in the previous version, uninstall it using the following commands:
asapd -start -d | -host host -port port
uninstallCartridge $ASAP_BASE/activationModels/Nortel_DMS_POTS.sar
19. Install the NORTEL_DMS_POTS.sar cartridge using the following commands:
installCartridge $ASAP_BASE/activationModels/Nortel_DMS_POTS.sar
asapd -stop -d -url host:port
20. Start the ASAP server by running the following command:
start_asap_sys -h
Verify the ASAP server status using the status command. Ensure that there are at least five processes in the output of the status command, one process each for the Control server, Daemon server, SARM, NEP, and JNEP. In ASAP 7.4.1, on successful startup, the minimum process count is 5, compared to 8 processes in 7.3.0 and 7.4.0.
21. Update the $ASAP_CNTK/scripts/status.sh script to limit the process count to 5 instead of the default value of 8. For example, in the following line, replace 8 with 5 (see the sketch after this procedure):
if [ "$process_count" -ge 5 ]; then
22. Commit the container to an image by running the following command:
podman commit asap-c imagename:version
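The following is a minimal sketch of what the check edited in step 21 might look like in context. Only the comparison line is quoted from status.sh; the way process_count is derived here, by counting ASAP processes with ps, is a hypothetical illustration and may differ from the shipped script.

# Hypothetical derivation of process_count; the real status.sh may differ.
process_count=$(ps -ef | grep asap | grep -v grep | wc -l)
# Threshold lowered from the default of 8 to 5: one process each for the
# Control server, Daemon server, SARM, NEP, and JNEP.
if [ "$process_count" -ge 5 ]; then
    echo "ASAP is up: $process_count processes running."
else
    echo "ASAP is not fully up: only $process_count processes running."
fi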
To upgrade cartridges, uninstall the previous cartridges and install the new cartridges.
To uninstall and install cartridges:
1. Repeat steps 1 to 6 to create the staging container. The cartridges are present in the /podmanhost_volume/cartridges directory in the container.
2. Start ASAP and WebLogic Server using the startALL.sh script.
3. Navigate to the ASAP installation directory using cd $ASAP_BASE.
4. Source the environment profile using the source Environment_Profile command.
5. Verify the ASAP server status using the status command.
6. Uninstall the cartridges. For more information, see "Uninstalling a Cartridge" in ASAP Installation Guide.
7. Install the cartridges present in the /podmanhost_volume/cartridges directory. For more information, see "Installing a Cartridge" in ASAP Installation Guide. A consolidated session sketch follows these steps.
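The following is a minimal sketch of such a session inside the staging container, assuming a hypothetical cartridge file named MY_CARTRIDGE.sar; substitute your own cartridge names and see ASAP Installation Guide for the exact uninstall and install options:

startALL.sh
cd $ASAP_BASE
source Environment_Profile
status
uninstallCartridge $ASAP_BASE/activationModels/MY_CARTRIDGE.sar
installCartridge /podmanhost_volume/cartridges/MY_CARTRIDGE.sar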
To upgrade Java, WebLogic Server, and the database client, see "About Upgrading ASAP" in ASAP Installation Guide.
The staging container is deployed with all the required updates to provision network elements. Save this container as an ASAP image to deploy in the Kubernetes cluster.
To create an image from the staging container:
1. Run the following command to create an image from the staging container:
podman commit asap-c imagename:version
Where version is the version of the ASAP image. This version should be higher than the previous version.
2. To deploy a new ASAP image in the Kubernetes cluster, the image must be available in the configured registry or on all worker nodes. To push the ASAP image to the Kubernetes registry, tag the image and then push it by running the following commands (see the example after this procedure):
podman tag <imageid> <image-name_repository:tag>
podman push <image-name_repository:tag>
3. Stop and remove the container using the following commands:
podman stop asap-c
podman rm asap-c
4. Update the ASAP image in the $ASAP_CNTK/charts/asap/values.yaml file.
5. Create the ASAP instance using the following command:
$ASAP_CNTK/scripts/create-instance.sh -p sr -i quick
The ASAP instance is now upgraded successfully.
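For example, assuming a hypothetical committed image asapcn:7.4.1.1 and a hypothetical registry host registry.example.com, the tag-and-push sequence in step 2 might look like this:

podman images | grep asapcn
podman tag asapcn:7.4.1.1 registry.example.com/asap/asapcn:7.4.1.1
podman push registry.example.com/asap/asapcn:7.4.1.1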
Order Balancer Cloud Native Upgrade Procedures
To upgrade the Order Balancer Image:
1. Delete the running instance using the delete-instance.sh script.
2. Copy the required installers to the $asap-img-builder/installers directory.
3. Run the following command to copy the installers to the volume:
$asap-img-builder/upgradeOBPodmanImage.sh
4. Create a new container from the previous version of the Order Balancer image using the following command:
podman run --name $OB_CONTAINER -dit -h $OB_HOSTNAME -p $WEBLOGIC_PORT -v $OB_VOLUME:/$OB_VOLUME <OB-BASE-IMAGE>
For example:
podman run --name ob-c -dit -h obhost -p 7891 -v obhost_volume:/obhost_volume obcn:7.4.0.0
This creates a container named ob-c.
5. Enter the container using the following command:
podman exec -it ob-c bash
You are now inside the Order Balancer container. For upgrading Order Balancer, see "Updating and Redeploying Order Balancer" in ASAP System Administrator's Guide.
Creating an Image from the Staging Container
The staging container is deployed with all the required updates to route work orders to ASAP instances. Save this container as a podman image to deploy in the Kubernetes cluster.
To create an image from the staging container:
1. Run the following command to create an image from the staging container:
podman commit ob-c imagename:version
Where version is the version of the Order Balancer image. This version should be higher than the previous version.
2. To deploy the new Order Balancer image in the Kubernetes cluster, the image must be available in the configured registry or on all worker nodes. To push the image to the Kubernetes registry, find the image ID, tag the image, and then push it by running the following commands:
podman images | grep <image-name>
podman tag <imageid> <image-name_repository:tag>
podman push <image-name_repository:tag>
3. Stop and remove the container using the following commands:
podman stop ob-c
podman rm ob-c
4. Update the Order Balancer image in the $OB_CNTK/charts/ob/values.yaml file.
5. Create the Order Balancer instance using the following command:
$OB_CNTK/scripts/create-instance.sh -p sr -i quick
The Order Balancer instance is now upgraded successfully.
Upgrades to Infrastructure
Infrastructure upgrades fall into two categories:
- Rolling upgrades
- One-time upgrades
Note: All infrastructure upgrades must continue to meet the supported types and versions listed in the ASAP documentation's certification statement.
Rolling upgrades are upgrades where, with proper high-availability planning (such as anti-affinity rules), the instance as a whole remains available while parts of it undergo temporary outages. Examples are Kubernetes worker node OS upgrades, Kubernetes version upgrades, and Podman version upgrades.
One-time upgrades affect a given instance all at once. The instance as a whole suffers either an operational outage or a control outage. An example is an Ingress controller upgrade.
Kubernetes and Podman Infrastructure Upgrades
Follow standard Kubernetes and Podman practices to upgrade these components. The impact at any point should be limited to one node: master (Kubernetes and OS) or worker (Kubernetes, OS, and Podman). If a worker node is to be upgraded, cordon and drain the node first so that all pods move to other worker nodes. This assumes your cluster has the capacity to absorb them; you may have to temporarily add a worker node or two. For ASAP instances, any pods on the drained worker suffer an outage until they come up on other workers. However, their messages and orders are redistributed to surviving pods, and processing continues at reduced capacity until the affected pods relocate and initialize. As each worker undergoes this process in turn, pods continue to terminate and start up elsewhere, but as long as the instance has pods on both affected and unaffected nodes, it continues to process orders. A sketch of the node maintenance flow follows.
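The following is a minimal sketch of the standard node maintenance flow, assuming a hypothetical worker node named worker-1; adjust the drain flags to suit your environment:

kubectl cordon worker-1
kubectl drain worker-1 --ignore-daemonsets --delete-emptydir-data
# Upgrade the node OS, Kubernetes components, or Podman here, rebooting if required.
kubectl uncordon worker-1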
Ingress Controller Upgrade
Follow the documentation of your chosen Ingress Controller to perform an upgrade. Depending on the Ingress Controller used and its deployment in your Kubernetes environment, the ASAP instances it serves may see a wide set of impacts, ranging from no impact at all (if the Ingress Controller supports a clustered approach and can be upgraded that way) to a complete outage.
The new Traefik can be installed into a new namespace, and, one by one, projects can be unregistered from the old Traefik and registered with the new Traefik:

export TRAEFIK_NS=old-namespace
$ASAP_CNTK/scripts/unregister-namespace -p project -t traefik

export TRAEFIK_NS=new-namespace
$ASAP_CNTK/scripts/register-namespace -p project -t traefik
During this transition, the outside world cannot interact with ASAP: any data that flows through the ingress is blocked until the new Traefik takes over. This includes GUI traffic, order injection, API queries, and SAF responses from external systems. The outage affects all the instances in the project being transitioned. An example transition follows.
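For example, assuming the project named sr used earlier in this chapter, an old Traefik namespace named traefik, and a hypothetical new namespace named traefik-new, the transition would be:

export TRAEFIK_NS=traefik
$ASAP_CNTK/scripts/unregister-namespace -p sr -t traefik
export TRAEFIK_NS=traefik-new
$ASAP_CNTK/scripts/register-namespace -p sr -t traefik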
Miscellaneous Upgrade Procedures
This section describes miscellaneous upgrade scenarios.
Network File System (NFS)
If an instance is created successfully but a change to the NFS configuration is required, the change cannot be made to a running ASAP instance. In this case, the procedure is as follows (a consolidated sketch follows these steps):
1. Delete the ASAP instance.
2. Update the nfs details in the pv.yaml and pvc.yaml files.
3. Start the instance.
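The following is a minimal sketch of this sequence, assuming the project and instance names used earlier in this chapter (sr and quick), that delete-instance.sh accepts the same -p and -i flags as create-instance.sh, and hypothetical locations for pv.yaml and pvc.yaml; adjust all paths and flags to match your toolkit layout:

$ASAP_CNTK/scripts/delete-instance.sh -p sr -i quick
vi pv.yaml     # hypothetical location; update the NFS server and export path
vi pvc.yaml    # hypothetical location; keep the claim in sync with the PV
$ASAP_CNTK/scripts/create-instance.sh -p sr -i quick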