12 Debugging and Troubleshooting
This chapter provides information about debugging and troubleshooting issues that you may face while setting up an ASAP cloud native environment and creating ASAP cloud native instances.
- Troubleshooting Issues with Traefik and WebLogic Administration Console
- Common Error Scenarios
- Known Issues
Troubleshooting Issues with Traefik and WebLogic Administration Console
This section describes how to troubleshoot issues with access to WLST and the WebLogic Administration Console.
It is assumed that Traefik is used as the default ingress controller and that the domain name suffix is asap.org. You can modify the instructions to suit any other domain name suffix that you may have chosen.
Table 12-1 URLs for Accessing ASAP Clients
Client | If Not Using Oracle Cloud Infrastructure Load Balancer | If Using Oracle Cloud Infrastructure Load Balancer
---|---|---
WebLogic Admin Console | http://admin.instance.project.asap.org:30305/console | http://admin.instance.project.asap.org:80/console
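Before debugging further, it can help to check the console URL from the command line rather than a browser. The following is a minimal sketch; the example URL assumes the no-load-balancer case and the asap.org suffix used in this guide.

```shell
# Sketch: print the HTTP status code for an ASAP client URL.
# A 200 or 302 response means Traefik routes the request; a 404
# suggests the back end is not registered with Traefik.
check_console() {
  curl -s -o /dev/null -w '%{http_code}' "$1"
}
# Example (no-load-balancer case):
#   check_console http://admin.instance.project.asap.org:30305/console
```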
Error: HTTP 404 Page Not Found
This is the most common problem that you may encounter.
To resolve this issue:
- Verify the Domain Name System (DNS) configuration.

  Note: These steps apply to local DNS resolution via the hosts file. For any other DNS resolution, such as corporate DNS, follow the corresponding steps.

  The hosts configuration file is located at:
  - On Windows: C:\Windows\System32\drivers\etc\hosts
  - On Linux: /etc/hosts

  Verify that the following entry exists in the hosts configuration file of the client machine from which you are trying to connect to ASAP:
  - Local installation of Kubernetes without Oracle Cloud Infrastructure load balancer:

    Kubernetes_Cluster_Master_IP <hostname provided in the values.yaml file>
  - If Oracle Cloud Infrastructure load balancer is used:

    Load_balancer_IP instance.project.asap.<hostname given in the values.yaml file>

  Resolve the DNS configuration.
- Verify the browser settings and ensure that *.asap.org is added to the No proxy list, if your proxy cannot route to it.
- Verify that the Traefik pod is running. If it is not, install or update the Traefik Helm chart:

```
kubectl -n traefik get pod
NAME                                READY   STATUS    RESTARTS   AGE
traefik-operator-657b5b6d59-njxwg   1/1     Running   0          128m
```
- Verify that the Traefik service is running:

```
kubectl -n traefik get svc
NAME                         TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
oci-lb-service-traefik       LoadBalancer   10.96.136.31    100.77.18.141   80:31115/TCP                 20d    <-- expected in OCI environments only
traefik-operator             NodePort       10.98.176.16    <none>          443:30443/TCP,80:30305/TCP   141m
traefik-operator-dashboard   ClusterIP      10.103.29.101   <none>          80/TCP                       141m
```
  Note: If the Traefik service is not running, install or update the Traefik Helm chart.
- Verify that the Traefik back-end systems are registered, by using one of the following options:
  - Run the following commands to check if your project name space is being monitored by Traefik. The absence of your project name space means that your managed server back-end systems are not registered with Traefik.
```
$ cd $ASAP_CNTK
$ source scripts/common-utils.sh
$ find_namespace_list 'namespaces' traefik traefik-operator
"traefik","project_1","project_2"
```
  - Verify the Traefik dashboard and add the following DNS entry in your hosts configuration file:

    Kubernetes_Access_IP traefik.asap.org

    Add the same entry regardless of whether you are using Oracle Cloud Infrastructure load balancer or not. Navigate to http://traefik.asap.org:30305/dashboard/ and check the back-end systems that are registered. If you cannot find your project name space, install or upgrade the Traefik Helm chart. See "Installing the Traefik Container Image" for more information.
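The checks above can be combined into a single sketch. The namespace and release names follow this guide's Traefik installation; the toolkit layout ($ASAP_CNTK, common-utils.sh) is the one shown in the commands above.

```shell
# Sketch: one-shot Traefik health check covering the pod, the service,
# and the monitored project name spaces. Assumes the 'traefik'
# namespace and toolkit paths used elsewhere in this chapter.
traefik_health_check() {
  kubectl -n traefik get pod || return 1
  kubectl -n traefik get svc traefik-operator || return 1
  cd "$ASAP_CNTK" || return 1
  . scripts/common-utils.sh
  find_namespace_list 'namespaces' traefik traefik-operator
}
# Run manually and confirm your project name space appears in the output.
```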
Reloading Instance Backend Systems
If your instance's ingress is present, yet Traefik does not recognize the URLs of your instance, try to unregister and register your project name space again. You can do this by using the unregister-namespace.sh and register-namespace.sh scripts in the toolkit.
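That unregister-then-register sequence might be wrapped as follows. This is a sketch only: the -p and -t flags are assumptions, so check the usage of the toolkit scripts before running.

```shell
# Sketch: unregister and re-register a project name space with Traefik
# using the toolkit scripts. Flag names (-p, -t) are hypothetical;
# verify them against your toolkit's script usage.
reload_backends() {
  project="$1"
  "$ASAP_CNTK/scripts/unregister-namespace.sh" -p "$project" -t traefik
  "$ASAP_CNTK/scripts/register-namespace.sh"   -p "$project" -t traefik
}
# Example: reload_backends project_1
```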
Note:
Unregistering a project name space will stop access to any existing instances in that name space that were working prior to the unregistration.

Debugging Traefik Access Logs
To increase the log level and debug Traefik access logs:
- Run the following command:

```
$ helm upgrade traefik-operator traefik/traefik --version 9.11.0 --namespace traefik --reuse-values --set logs.access.enabled=true
```

  A new instance of the Traefik pod is created automatically.
- Look for the pod that was created most recently:

```
$ kubectl get po -n traefik
NAME                        READY   STATUS    RESTARTS   AGE
traefik-operator-pod_name   1/1     Running   0          5s
$ kubectl -n traefik logs -f traefik-operator-pod_name
```
- Enabling access logs generates a large amount of information in the logs. After debugging is complete, disable access logging by running the following command:

```
$ helm upgrade traefik-operator traefik/traefik --version 9.11.0 --namespace traefik --reuse-values --set logs.access.enabled=false
```
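The enable, tail, and disable steps can be wrapped so that access logging is always switched off again, even if the debug session is interrupted. In this sketch, the pod label selector is an assumption based on standard Traefik chart labels; adjust it to your installation.

```shell
# Sketch: enable Traefik access logs for a debug session and disable
# them again on shell exit (including Ctrl-C), via a trap.
debug_traefik_access_logs() {
  helm upgrade traefik-operator traefik/traefik --version 9.11.0 \
    --namespace traefik --reuse-values --set logs.access.enabled=true
  trap 'helm upgrade traefik-operator traefik/traefik --version 9.11.0 \
        --namespace traefik --reuse-values --set logs.access.enabled=false' EXIT
  # Label selector is an assumption; adjust to match your chart's labels.
  pod=$(kubectl -n traefik get pod -l app.kubernetes.io/name=traefik \
        -o jsonpath='{.items[0].metadata.name}')
  kubectl -n traefik logs -f "$pod"
}
```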
Cleaning Up Traefik
Note: Clean up is not usually required; perform it only as a last resort. Before cleaning up, make a note of the monitored project name spaces. Once Traefik is re-installed, run $ASAP_CNTK/scripts/register-namespace.sh for each of the previously monitored project name spaces.

Warning: Uninstalling Traefik in this manner will interrupt access to all ASAP instances in the monitored project name spaces.

```
helm uninstall traefik-operator -n traefik
```
Cleaning up Traefik does not impact actively running ASAP instances; however, they cannot be accessed while Traefik is uninstalled. Once the Traefik chart is re-installed and all the monitored name spaces are successfully registered as Traefik back-end systems, ASAP instances can be accessed again.
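The full clean-up cycle can be sketched as follows, capturing the monitored name spaces before uninstalling so each one can be re-registered afterwards. The register-namespace.sh flags are assumptions; verify them against your toolkit, and re-install the chart per your installation guide where indicated.

```shell
# Sketch: record monitored name spaces, uninstall Traefik, re-install
# the chart, then re-register each name space. find_namespace_list is
# the toolkit helper shown earlier in this chapter.
cleanup_traefik() {
  cd "$ASAP_CNTK" || return 1
  . scripts/common-utils.sh
  namespaces=$(find_namespace_list 'namespaces' traefik traefik-operator)
  helm uninstall traefik-operator -n traefik
  # ... re-install the Traefik Helm chart here, per your installation guide ...
  for ns in $(printf '%s' "$namespaces" | tr -d '" ' | tr ',' ' '); do
    "$ASAP_CNTK/scripts/register-namespace.sh" -p "$ns" -t traefik  # flags assumed
  done
}
```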
Setting up Logs
As described earlier in this guide, ASAP and WebLogic logs can be stored in the individual pods or in a location provided via a Kubernetes Persistent Volume. The PV approach is strongly recommended, both to allow for proper preservation of logs (as pods are ephemeral) and to avoid straining the in-pod storage in Kubernetes.
Within the pod, WebLogic logs are available at: /u01/oracle/user_projects/domains/domain/servers/AdminServer/logs

ASAP logs: /scratch/oracle/asap/DATA/logs/

When a PV is configured, logs are available at the following path starting from the root of the PV storage: project-instance/logs
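When logs are kept inside the pod, they can be followed from outside with kubectl exec. This is a sketch: the name space and pod name are placeholders, and the AdminServer.log file name assumes the WebLogic default in the logs directory above.

```shell
# Sketch: follow the WebLogic admin server log from outside the pod.
# Arguments are the name space and admin server pod name (placeholders).
tail_weblogic_log() {
  kubectl -n "$1" exec "$2" -- \
    tail -f /u01/oracle/user_projects/domains/domain/servers/AdminServer/logs/AdminServer.log
}
# Example: tail_weblogic_log project project-instance-admin-pod
```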
Common Problems and Solutions
This section describes common problems that you may experience because a script or command was run erroneously, or because the recommended procedures and guidelines for setting up your cloud environment, components, tools, and services were not followed properly. It also provides possible solutions for such problems.
Pod Status
```
kubectl get pods -n namespace
```

A healthy status looks like this:

```
NAME                                           READY   STATUS    RESTARTS   AGE
project-instance-introspect-domain-job-hzh9t   1/1     Running   0          3s
```

The READY field shows 1/1, which indicates that the pod status is healthy.
An unhealthy introspection pod looks like this:

```
NAME                                           READY   STATUS         RESTARTS   AGE
project-instance-introspect-domain-job-r2d6j   0/1     ErrImagePull   0          5s
```

or:

```
NAME                                           READY   STATUS             RESTARTS   AGE
project-instance-introspect-domain-job-r2d6j   0/1     ImagePullBackOff   0          45s
```

This shows that the introspection pod is not healthy. If the image can in fact be pulled, it is possible that pulling it simply took a long time.
To resolve this issue:
- Verify the image name and tag, and that the pod can access the image repository.
- Pull the container image manually on all Kubernetes nodes where the ASAP cloud native pods can be started up.
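A quick way to see the exact pull failure is the Events section of kubectl describe; the image can then be pre-pulled manually on each node. The image name below is a placeholder, and the pre-pull commands depend on your container runtime.

```shell
# Sketch: show only the Events section of a pod description, where
# ErrImagePull details (bad tag, auth failure, timeout) appear.
pod_events() {
  kubectl -n "$1" describe pod "$2" | sed -n '/^Events:/,$p'
}
# Then, on each worker node, pre-pull the image manually, for example:
#   docker pull <repository>/<asap-image>:<tag>    # Docker runtime
#   crictl pull <repository>/<asap-image>:<tag>    # containerd / CRI-O
```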