Oracle WebCenter Content on Kubernetes
The WebLogic Kubernetes Operator supports deployment of Oracle WebCenter Content. Follow the instructions in this document to set up Oracle WebCenter Content domains on Kubernetes.
In this release, Oracle WebCenter Content domains are supported using the “domain on a persistent volume” model only, where the domain home is located in a persistent volume (PV).
The operator has several key features to assist you with deploying and managing Oracle WebCenter Content domains in a Kubernetes environment. You can:
- Create Oracle WebCenter Content instances in a Kubernetes persistent volume (PV). This PV can reside in an NFS file system or other Kubernetes volume types.
- Start servers based on declarative startup parameters and desired states.
- Expose the Oracle WebCenter Content services for external access.
- Scale Oracle WebCenter Content domains by starting and stopping Managed Servers on demand, or by integrating with a REST API to initiate scaling based on WLDF, Prometheus, Grafana, or other rules.
- Publish operator and WebLogic Server logs to Elasticsearch and interact with them in Kibana.
- Monitor the Oracle WebCenter Content instance using Prometheus and Grafana.
Current production release
The current supported production release of the Oracle WebLogic Server Kubernetes Operator for Oracle WebCenter Content domain deployment is 4.2.9.
Recent changes and known issues
See the Release Notes for recent changes and known issues for Oracle WebCenter Content domains deployment on Kubernetes.
Limitations in WebCenter Content Domain
See here for limitations in this release.
About this documentation
This documentation includes sections targeted to different audiences. To help you find what you are looking for more easily, please consult this table of contents:
Quick Start explains how to quickly get an Oracle WebCenter Content domain instance running using the default settings. Note that this is intended for development and test purposes only.
Install Guide and Administration Guide provide detailed information about all aspects of using the Kubernetes operator including:
- Installing and configuring the operator.
- Using the operator to create and manage Oracle WebCenter Content domains.
- Configuring Kubernetes load balancers.
- Configuring Elasticsearch and Kibana to access the operator and WebLogic Server log files.
- Patching an Oracle WebCenter Content Docker image.
- Removing/deleting domains.
- And much more!
Additional reading
Oracle WebCenter Content domains deployment on Kubernetes leverages the Oracle WebLogic Server Kubernetes operator framework.
- To develop an understanding of the operator, including design, architecture, domain life cycle management, and configuration overrides, review the operator documentation.
- To learn more about the Oracle WebCenter Content architecture and components, see Understanding Oracle WebCenter Content.
Release Notes
Review the latest changes for Oracle WebCenter Content on Kubernetes.
Recent changes
Date | Version | Change |
---|---|---|
December 2024 | 14.1.2.0.0, GitHub release version 24.4.3 | First release of Oracle WebCenter Content 14.1.2.0.0 on Kubernetes. |
Known issues
Issue | Description |
---|---|
Publishing via LoadBalancer Endpoint | Currently, publishing is supported only via NodePort, as described in the section For Publishing Setting in WebCenter Content. |
Install Guide
Install the WebLogic Kubernetes Operator, then prepare and deploy Oracle WebCenter Content domains.
Requirements and Limitations
Understand the system requirements and limitations for deploying and running Oracle WebCenter Content domains with the WebLogic Kubernetes Operator, including the WebCenter Content domain cluster sizing recommendations.
Contents
Introduction
This document describes the special considerations for deploying and running a WebCenter Content domain with the WebLogic Kubernetes Operator. Other than those considerations listed here, WebCenter Content domains work in the same way as Fusion Middleware Infrastructure domains and WebLogic Server domains.
In this release, WebCenter Content domains are supported using the “domain on a persistent volume” model only, where the WebCenter Content domain home is located in a persistent volume (PV).
System Requirements
- Container images based on Oracle Linux 8 are now supported. My Oracle Support and the Oracle Container Registry host container images based on Oracle Linux 8.
- Kubernetes 1.24.0+, 1.25.0+, 1.26.2+, 1.27.2+, 1.28.2+, and 1.29.1+ (check with `kubectl version`).
- Docker 19.03.11+ (check with `docker version`) or CRI-O 1.20.2+ (check with `crictl version | grep RuntimeVersion`).
- Flannel networking v0.13.0-amd64 or later (check with `docker images | grep flannel`).
- Helm 3.10.2+ (check with `helm version --client --short`).
- Oracle WebLogic Kubernetes Operator 4.2.9 (see the operator releases page).
- Oracle WebCenter Content 14.1.2.0.0 Docker image (built either using imagetool or the `buildDockerImage` script).
- You must have the `cluster-admin` role to install the operator. The operator does not need the `cluster-admin` role at runtime.
- We do not currently support running WebCenter Content in non-Linux containers.
- This proxy setup is used for pulling the required binaries and source code from the respective repositories:
  - `export NO_PROXY="localhost,127.0.0.0/8,$(hostname -i),.your-company.com,/var/run/docker.sock"`
  - `export no_proxy="localhost,127.0.0.0/8,$(hostname -i),.your-company.com,/var/run/docker.sock"`
  - `export http_proxy=http://www-proxy-your-company.com:80`
  - `export https_proxy=http://www-proxy-your-company.com:80`
  - `export HTTP_PROXY=http://www-proxy-your-company.com:80`
  - `export HTTPS_PROXY=http://www-proxy-your-company.com:80`

NOTE: Add your host IP (obtained with `hostname -i`) and also the `nslookup` IP addresses to the `no_proxy` and `NO_PROXY` lists above.
Limitations
Compared to running a WebLogic Server domain in Kubernetes using the WebLogic Kubernetes Operator, the following limitations currently exist for Oracle WebCenter Content domains:
- In this release, Oracle WebCenter Content domains are supported using the domain on a persistent volume model only, where the domain home is located in a persistent volume (PV).
- The “domain in image” and “model in image” models are not supported. Also, “WebLogic Deploy Tooling (WDT)” based deployments are currently not supported.
- Only configured clusters are supported. Dynamic clusters are not supported for Oracle WebCenter Content domains. Note that you can still use all of the scaling features, but you need to define the maximum size of your cluster at domain creation time. Mixed clusters (configured servers targeted to a dynamic cluster) are not supported.
- The WebLogic Logging Exporter project has been archived. Users are encouraged to use Fluentd or Logstash.
- The WebLogic Monitoring Exporter currently supports WebLogic MBean trees only. Support for JRF and Oracle WebCenter Content MBeans is not available. Also, a metrics dashboard specific to Oracle WebCenter Content is not available. Instead, use the WebLogic Server dashboard to monitor the Oracle WebCenter Content server metrics in Grafana.
- Some features such as multicast, multitenancy, production redeployment, and Node Manager (although it is used internally for the liveness probe and to start WebLogic Server instances) are not supported in this release.
- Features such as Java Messaging Service whole server migration, consensus leasing, and maximum availability architecture (Oracle WebCenter Content setup) are not supported in this release.
- You can have multiple UCM servers in your domain but only one IBR server is supported.
- There is a generic limitation with all load-balancers in end-to-end SSL configuration - accessing multiple types of servers (different Managed Servers and/or Administration Server) at the same time, is currently not supported.
- Enabling or disabling the memory resiliency for Oracle Service Bus using the Enterprise Manager Console is not supported in this release.
For up-to-date information about the features of WebLogic Server that are supported in Kubernetes environments, see My Oracle Support Doc ID 2349228.1.
WebCenter Content Cluster Sizing Recommendations
WebCenter Content | Normal Usage | Moderate Usage | High Usage |
---|---|---|---|
Admin Server | No. of CPUs: 1, Memory: 4GB | No. of CPUs: 1, Memory: 4GB | No. of CPUs: 1, Memory: 4GB |
Managed Server | No. of servers: 2, No. of CPUs: 2, Memory: 16GB | No. of servers: 2, No. of CPUs: 4, Memory: 16GB | No. of servers: 3, No. of CPUs: 6, Memory: 16-32GB |
PV Storage | Minimum 250GB | Minimum 250GB | Minimum 500GB |
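These sizings map onto the domain resource's `serverPod` compute settings (or the equivalent `serverPodCpuRequest`/`serverPodMemoryRequest` inputs described later in this guide). As a sketch, the "Normal Usage" Managed Server sizing from the table above could be expressed like this in the domain YAML; adjust the values to your own load profile:

```yaml
serverPod:
  resources:
    requests:
      cpu: "2"        # Normal Usage: 2 CPUs per Managed Server
      memory: "16Gi"  # Normal Usage: 16GB per Managed Server
    limits:
      cpu: "2"
      memory: "16Gi"
```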
Prepare your environment
To prepare your Kubernetes environment for Oracle WebCenter Content, complete the following steps:
Set up the code repository to deploy Oracle WebCenter Content domain
Set up your Kubernetes cluster
If you need help setting up a Kubernetes environment, check the documentation.
Install Helm
The WebLogic Kubernetes Operator uses Helm to create and deploy the necessary resources and to run the operator in a Kubernetes cluster. For Helm installation and usage information, see here.
Pull dependent images
Obtain the dependent images and add them to your local registry. Dependent images include the WebLogic Kubernetes Operator and Traefik images. Pull these Docker images and re-tag them as shown below.
To pull an image from the Oracle Container Registry, in a web browser, navigate to https://container-registry.oracle.com and log in using the Oracle Single Sign-On authentication service. If you do not already have SSO credentials, click the Sign In link at the top of the page to create them.
Use the web interface to accept the Oracle Standard Terms and Restrictions for the Oracle software images that you intend to deploy. Your acceptance of these terms is stored in a database that links the software images to your Oracle Single Sign-On login credentials.
Then, pull these docker images and re-tag them:
docker login https://container-registry.oracle.com (enter your Oracle email ID and password)
This step is required once at every node to get access to the Oracle Container Registry.
WebLogic Kubernetes Operator image:
$ docker pull container-registry.oracle.com/middleware/weblogic-kubernetes-operator:4.2.9
$ docker tag container-registry.oracle.com/middleware/weblogic-kubernetes-operator:4.2.9 oracle/weblogic-kubernetes-operator:4.2.9
Pull Traefik Image
$ docker pull traefik:2.6.0
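After pulling and re-tagging, you can confirm that the images are present in the local registry (a quick sanity check):

```bash
$ docker images | grep -E 'weblogic-kubernetes-operator|traefik'
```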
Set up the code repository to deploy Oracle WebCenter Content domain
Oracle WebCenter Content domain deployment on Kubernetes leverages the WebLogic Kubernetes Operator infrastructure. To deploy an Oracle WebCenter Content domain, you must set up the deployment scripts.
Create a working directory to set up the source code:
$ mkdir $HOME/wcc_4.2.9
$ cd $HOME/wcc_4.2.9
Download the WebLogic Kubernetes Operator source code and the Oracle WebCenter Content Suite Kubernetes deployment scripts from the WebCenter Content repository. The required artifacts are available at `OracleWebCenterContent/kubernetes`.

$ git clone https://github.com/oracle/fmw-kubernetes.git
$ export WORKDIR=$HOME/wcc_4.2.9/fmw-kubernetes/OracleWebCenterContent/kubernetes
Obtain the Oracle WebCenter Content Docker image
Obtain the Oracle WebCenter Content image using one of the following options:
1. Get the Oracle WebCenter Content image from the Oracle Container Registry (OCR)
2. Build the Oracle WebCenter Content container image

1. Get the Oracle WebCenter Content image from the Oracle Container Registry (OCR):
For first time users, to pull an image from the Oracle Container Registry, navigate to https://container-registry.oracle.com and log in using the Oracle Single Sign-On (SSO) authentication service. If you do not already have SSO credentials, you can create an Oracle Account using: https://profile.oracle.com/myprofile/account/create-account.jspx.
Use the web interface to accept the Oracle Standard Terms and Restrictions for the Oracle software images that you intend to deploy. Your acceptance of these terms is stored in a database that links the software images to your Oracle Single Sign-On login credentials.
To obtain the image, log in to the Oracle Container Registry:
$ docker login container-registry.oracle.com
Find and then pull the prebuilt Oracle WebCenter Content Suite image:
$ docker pull container-registry.oracle.com/middleware/webcenter-content_cpu:14.1.2.0.0-<TAG>
2. Build the Oracle WebCenter Content container image:
Alternatively, if you want to build and use an Oracle WebCenter Content container image with any additional bundle patch or interim patches applied, use the WebLogic Image Tool and follow these steps to create the image.
Note: The default Oracle WebCenter Content image name used for Oracle WebCenter Content domain deployment is `oracle/wccontent:14.1.2.0.0`. The image created must be tagged as `oracle/wccontent:14.1.2.0.0` using the `docker tag` command. If you want to use a different name for the image, make sure to update the new image tag name in the `create-domain-inputs.yaml` file and also in other instances where the `oracle/wccontent:14.1.2.0.0` image name is used.
Install the WebLogic Kubernetes Operator
The WebLogic Kubernetes Operator supports the deployment of Oracle WebCenter Content domains in a Kubernetes environment. Follow the steps in this document to install the WebLogic Kubernetes Operator.
> Note: Optionally, you can execute these steps to send the contents of the operator’s logs to Elasticsearch.
In the following example commands to install the WebLogic Kubernetes Operator, `opns` is the namespace and `op-sa` is the service account created for the WebLogic Kubernetes Operator:
Creating namespace and service account for WebLogic Kubernetes Operator
$ kubectl create namespace opns
$ kubectl create serviceaccount -n opns op-sa
Install WebLogic Kubernetes Operator
$ cd ${WORKDIR}
$ helm install weblogic-kubernetes-operator charts/weblogic-operator --namespace opns --set image=oracle/weblogic-kubernetes-operator:4.2.9 --set serviceAccount=op-sa --set "domainNamespaces={}" --set "javaLoggingLevel=FINE" --wait
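Once the Helm release is installed, the operator pod should reach the Running state. One way to verify, using the release and namespace names from the commands above:

```bash
$ kubectl get pods -n opns
$ helm list -n opns
```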
Prepare the environment for Oracle WebCenter Content domain
Create a namespace for the Oracle WebCenter Content domain
Create a Kubernetes namespace (for example, `wccns`) for the domain unless you intend to use the default namespace. Use the new namespace in the remaining steps in this section. For details, see Prepare to run a domain.
$ kubectl create namespace wccns
$ cd ${WORKDIR}
$ helm upgrade --reuse-values --namespace opns --set "domainNamespaces={wccns}" --wait weblogic-kubernetes-operator charts/weblogic-operator
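To confirm that the operator now manages the `wccns` namespace, you can inspect the user-supplied values of the release (one way to check):

```bash
$ helm get values weblogic-kubernetes-operator -n opns
```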
Create persistent storage for the Oracle WebCenter Content domain
In the Kubernetes namespace you created, create the PV and PVC for the domain by running the create-pv-pvc.sh script. Follow the instructions for using the script to create a dedicated PV and PVC for the Oracle WebCenter Content domain.
Review the configuration parameters for PV creation here. Based on your requirements, update the values in the `create-pv-pvc-inputs.yaml` file located at `${WORKDIR}/create-weblogic-domain-pv-pvc/`. Sample configuration parameter values for the Oracle WebCenter Content domain are:

- `baseName`: domain
- `domainUID`: wccinfra
- `namespace`: wccns
- `weblogicDomainStorageType`: HOST_PATH
- `weblogicDomainStoragePath`: /net//scratch/k8s_dir/wcc

> Note: Alternatively, you can use `NFS` as the value of `weblogicDomainStorageType` if you choose to use an NFS server for the persistent storage.

Ensure that the path for the `weblogicDomainStoragePath` property exists and has ownership 1000:0. If not, create it as follows:

$ sudo mkdir /scratch/k8s_dir/wcc
$ sudo chown -R 1000:0 /scratch/k8s_dir/wcc
Run the `create-pv-pvc.sh` script:

$ cd ${WORKDIR}/create-weblogic-domain-pv-pvc
$ rm -rf output/
$ ./create-pv-pvc.sh -i create-pv-pvc-inputs.yaml -o output
The `create-pv-pvc.sh` script creates a subdirectory `pv-pvcs` under the given `/path/to/output-directory` directory and generates two YAML configuration files for the PV and PVC. Apply these two YAML files to create the PV and PVC Kubernetes resources using the `kubectl create -f` command:

$ kubectl create -f output/pv-pvcs/wccinfra-domain-pv.yaml -n wccns
$ kubectl create -f output/pv-pvcs/wccinfra-domain-pvc.yaml -n wccns
Get the details of the PV and PVC:

$ kubectl describe pv wccinfra-domain-pv
$ kubectl describe pvc wccinfra-domain-pvc -n wccns
Create a Kubernetes secret with domain credentials
Create a Kubernetes secret containing the `username` and `password` of the administrative account, in the same Kubernetes namespace as the domain:
$ cd ${WORKDIR}/create-weblogic-domain-credentials
$ ./create-weblogic-credentials.sh -u weblogic -p welcome1 -n wccns -d wccinfra -s wccinfra-domain-credentials
For more details, see this document.
You can check the secret with the `kubectl get secret` command.
For example:
$ kubectl get secret wccinfra-domain-credentials -o yaml -n wccns
apiVersion: v1
data:
password: d2VsY29tZTE=
username: d2VibG9naWM=
kind: Secret
metadata:
creationTimestamp: "2020-09-16T08:22:50Z"
labels:
weblogic.domainName: wccinfra
weblogic.domainUID: wccinfra
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:data:
.: {}
f:password: {}
f:username: {}
f:metadata:
f:labels:
.: {}
f:weblogic.domainName: {}
f:weblogic.domainUID: {}
f:type: {}
manager: kubectl
operation: Update
time: "2020-09-16T08:22:50Z"
name: wccinfra-domain-credentials
namespace: wccns
resourceVersion: "3277100"
selfLink: /api/v1/namespaces/wccns/secrets/wccinfra-domain-credentials
uid: 35a8313f-1ec2-44b0-a2bf-fee381eed57f
type: Opaque
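The `data` values are base64-encoded. If you want to verify what is stored, you can decode them (for example, `d2VibG9naWM=` decodes to `weblogic`):

```bash
$ kubectl get secret wccinfra-domain-credentials -n wccns -o jsonpath='{.data.username}' | base64 --decode
$ kubectl get secret wccinfra-domain-credentials -n wccns -o jsonpath='{.data.password}' | base64 --decode
```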
Create a Kubernetes secret with the RCU credentials
You also need to create a Kubernetes secret containing the credentials for the database schemas. When you create your domain, it will obtain the RCU credentials from this secret.
Use the provided sample script to create the secret:
$ cd ${WORKDIR}/create-rcu-credentials
$ ./create-rcu-credentials.sh -u weblogic -p welcome1 -a sys -q welcome1 -d wccinfra -n wccns -s wccinfra-rcu-credentials
The parameter values are:
- `-u` username for schema owner (regular user), required.
- `-p` password for schema owner (regular user), required.
- `-a` username for SYSDBA user, required.
- `-q` password for SYSDBA user, required.
- `-d` domainUID. Example: `wccinfra`
- `-n` namespace. Example: `wccns`
- `-s` secretName. Example: `wccinfra-rcu-credentials`
You can confirm the secret was created as expected with the `kubectl get secret` command.
For example:
$ kubectl get secret wccinfra-rcu-credentials -o yaml -n wccns
apiVersion: v1
data:
password: d2VsY29tZTE=
sys_password: d2VsY29tZTE=
sys_username: c3lz
username: d2VibG9naWM=
kind: Secret
metadata:
creationTimestamp: "2020-09-16T08:23:04Z"
labels:
weblogic.domainName: wccinfra
weblogic.domainUID: wccinfra
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:data:
.: {}
f:password: {}
f:sys_password: {}
f:sys_username: {}
f:username: {}
f:metadata:
f:labels:
.: {}
f:weblogic.domainName: {}
f:weblogic.domainUID: {}
f:type: {}
manager: kubectl
operation: Update
time: "2020-09-16T08:23:04Z"
name: wccinfra-rcu-credentials
namespace: wccns
resourceVersion: "3277132"
selfLink: /api/v1/namespaces/wccns/secrets/wccinfra-rcu-credentials
uid: b75f4e13-84e6-40f5-84ba-0213d85bdf30
type: Opaque
Configure access to your database
Run a container to create the RCU pod
$ kubectl run rcu --image oracle/wccontent:14.1.2.0.0 -n wccns -- sleep infinity
# check the status of the rcu pod
$ kubectl get pods -n wccns
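Rather than polling `kubectl get pods`, you can block until the pod is ready (an optional convenience):

```bash
$ kubectl wait --for=condition=Ready pod/rcu -n wccns --timeout=300s
```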
Run the Repository Creation Utility to set up your database schemas
Create or drop schemas
To create the database schemas for Oracle WebCenter Content, run the `create-rcu-schema.sh` script.
For example:
# make sure the rcu pod status is Running before executing this
kubectl exec -n wccns -ti rcu -- /bin/bash
# DB details
export CONNECTION_STRING=your_db_host:1521/your_db_service
export RCUPREFIX=your_schema_prefix
echo -e welcome1"\n"welcome1 > /tmp/pwd.txt
# Create schemas
/u01/oracle/oracle_common/bin/rcu -silent -createRepository -databaseType ORACLE -connectString $CONNECTION_STRING -dbUser sys -dbRole sysdba -useSamePasswordForAllSchemaUsers true -selectDependentsForComponents true -schemaPrefix $RCUPREFIX -component CONTENT -component MDS -component STB -component OPSS -component IAU -component IAU_APPEND -component IAU_VIEWER -component WLS -tablespace USERS -tempTablespace TEMP -f < /tmp/pwd.txt
# Drop schemas
/u01/oracle/oracle_common/bin/rcu -silent -dropRepository -databaseType ORACLE -connectString $CONNECTION_STRING -dbUser sys -dbRole sysdba -selectDependentsForComponents true -schemaPrefix $RCUPREFIX -component CONTENT -component MDS -component STB -component OPSS -component IAU -component IAU_APPEND -component IAU_VIEWER -component WLS -f < /tmp/pwd.txt
#exit from the container
exit
Note: In the create and drop schema commands above, pass the additional components (`-component IPM -component CAPTURE`) if the IPM and CAPTURE applications are enabled, respectively.
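For example, with both IPM and CAPTURE enabled, the create command above becomes:

```bash
/u01/oracle/oracle_common/bin/rcu -silent -createRepository -databaseType ORACLE -connectString $CONNECTION_STRING -dbUser sys -dbRole sysdba -useSamePasswordForAllSchemaUsers true -selectDependentsForComponents true -schemaPrefix $RCUPREFIX -component CONTENT -component MDS -component STB -component OPSS -component IAU -component IAU_APPEND -component IAU_VIEWER -component WLS -component IPM -component CAPTURE -tablespace USERS -tempTablespace TEMP -f < /tmp/pwd.txt
```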
Now that you have the required Docker images and have created your RCU schemas, you are ready to create your domain. To continue, follow the instructions in Create Oracle WebCenter Content domain.
Create Oracle WebCenter Content domain
This section describes how to create an Oracle WebCenter Content domain home on an existing Kubernetes persistent volume (PV) and persistent volume claim (PVC), using the WebCenter Content deployment scripts. The scripts also generate the domain YAML file, which can then be used to start the Kubernetes artifacts of the corresponding domain.
Contents
- Prerequisites
- Prepare to use the create domain script
- Configuration parameters
- Run the create domain script
- Run the managed-server-wrapper script
- Verify the results
- Verify the domain
- Verify the pods
- Verify the services
- Scale-up/down Managed Server Counts
- Details required for configuring IBR provider on UCM
- Configure an additional mount or shared space to a domain for Imaging and Capture
Prerequisites
Before you begin, complete the following steps:
- Review the Domain resource documentation.
- Review the requirements and limitations.
- Ensure that you have executed all the preliminary steps in Prepare your environment.
- Ensure that the database schemas were created and that the WebLogic Kubernetes Operator is running.
Prepare to use the create domain script
The sample scripts for Oracle WebCenter Content domain deployment are available at `${WORKDIR}/create-wcc-domain`.

You must edit `create-domain-inputs.yaml` (or a copy of it), located under `${WORKDIR}/create-wcc-domain/domain-home-on-pv`, to provide the details for your domain. Refer to the configuration parameters below to understand the information that you must provide in this file.
Configuration parameters
The following parameters can be provided in the inputs file.
Parameter | Definition | Default |
---|---|---|
`sslEnabled` | Boolean indicating whether to enable SSL for each WebLogic Server instance. | `false` |
`adminPort` | Port number for the Administration Server inside the Kubernetes cluster. | `7001` |
`adminServerSSLPort` | SSL port number of the Administration Server inside the Kubernetes cluster. | `7002` |
`adminNodePort` | Port number of the Administration Server outside the Kubernetes cluster. | `30701` |
`adminServerName` | Name of the Administration Server. | `AdminServer` |
`clusterName` | Name of the WebLogic cluster instance to generate for the domain. By default the cluster names are `ucm_cluster` and `ibr_cluster` for the WebCenter Content domain. | `ucm_cluster` |
`configuredManagedServerCount` | Number of Managed Server instances to generate for the domain. | `5` |
`createDomainFilesDir` | Directory on the host machine to locate all the files to create a WebLogic domain, including the script that is specified in the `createDomainScriptName` property. By default, this directory is set to the relative path `wlst`, and the create script will use the built-in WLST offline scripts in the `wlst` directory to create the WebLogic domain. An absolute path is also supported to point to an arbitrary directory in the file system. The built-in scripts can be replaced by user-provided scripts as long as those files are in the specified directory. Files in this directory are put into a Kubernetes config map, which in turn is mounted to the `createDomainScriptsMountPath`, so that the Kubernetes pod can use the scripts and supporting files to create a domain home. | `wlst` |
`createDomainScriptsMountPath` | Mount path where the create domain scripts are located inside a pod. The `create-domain.sh` script creates a Kubernetes job to run the script (specified in the `createDomainScriptName` property) in a Kubernetes pod to create a domain home. Files in the `createDomainFilesDir` directory are mounted to this location in the pod, so that the Kubernetes pod can use the scripts and supporting files to create a domain home. | `/u01/weblogic` |
`createDomainScriptName` | Script that the create domain script uses to create a WebLogic domain. The `create-domain.sh` script creates a Kubernetes job to run this script to create a domain home. The script is located in the in-pod directory that is specified in the `createDomainScriptsMountPath` property. If you need to provide your own scripts to create the domain home, instead of using the built-in scripts, you must use this property to set the name of the script that you want the create domain job to run. | `create-domain-job.sh` |
`domainHome` | Home directory of the WebCenter Content domain. If not specified, the value is derived from the `domainUID` as `/shared/domains/<domainUID>`. | `/u01/oracle/user_projects/domains/wccinfra` |
`domainPVMountPath` | Mount path of the domain persistent volume. | `/u01/oracle/user_projects` |
`domainUID` | Unique ID that will be used to identify this particular domain. Used as the name of the generated WebLogic domain as well as the name of the Kubernetes domain resource. This ID must be unique across all domains in a Kubernetes cluster. This ID cannot contain any character that is not valid in a Kubernetes service name. | `wccinfra` |
`exposeAdminNodePort` | Boolean indicating if the Administration Server is exposed outside of the Kubernetes cluster. | `false` |
`exposeAdminT3Channel` | Boolean indicating if the T3 administrative channel is exposed outside the Kubernetes cluster. | `false` |
`image` | WebCenter Content Docker image. The WebLogic Kubernetes Operator requires Oracle WebCenter Content 14.1.2.0.0. Refer to Obtain the Oracle WebCenter Content Docker image for details on how to obtain or create the image. | `oracle/wccontent:14.1.2.0.0` |
`imagePullPolicy` | WebLogic Docker image pull policy. Legal values are `IfNotPresent`, `Always`, or `Never`. | `IfNotPresent` |
`imagePullSecretName` | Name of the Kubernetes secret to access the Docker Store to pull the WebLogic Server Docker image. The presence of the secret will be validated when this parameter is specified. | |
`includeServerOutInPodLog` | Boolean indicating whether to include the server `.out` in the pod’s stdout. | `true` |
`initialManagedServerReplicas` | Number of UCM Managed Servers to initially start for the domain. | `3` |
`javaOptions` | Java options for starting the Administration Server and Managed Servers. A Java option can have references to one or more of the following pre-defined variables to obtain WebLogic domain information: `$(DOMAIN_NAME)`, `$(DOMAIN_HOME)`, `$(ADMIN_NAME)`, `$(ADMIN_PORT)`, and `$(SERVER_NAME)`. If `sslEnabled` is set to `true` and the WebLogic demo certificate is used, add `-Dweblogic.security.SSL.ignoreHostnameVerification=true` to allow the Managed Servers to connect to the Administration Server while booting up. The WebLogic generated demo certificate in this environment typically contains a host name that is different from the runtime container’s host name. | `-Dweblogic.StdoutDebugEnabled=false` |
`logHome` | The in-pod location for the domain log, server logs, server out, and Node Manager log files. If not specified, the value is derived from the `domainUID` as `/shared/logs/<domainUID>`. | `/u01/oracle/user_projects/domains/logs/wccinfra` |
`managedServerNameBase` | Base string used to generate Managed Server names. | `ucm_server` |
`managedServerPort` | Port number for each Managed Server. By default the `managedServerPort` is `16200` for the `ucm_server` and `16250` for the `ibr_server`. | `16200` |
`managedServerSSLPort` | SSL port number for each Managed Server. By default the `managedServerSSLPort` is `16201` for the `ucm_server` and `16251` for the `ibr_server`. | `16201` |
`managedServerAdministrationPort` | Administration port number for each Managed Server. | `9200` |
`namespace` | Kubernetes namespace in which to create the domain. | `wccns` |
`persistentVolumeClaimName` | Name of the persistent volume claim created to host the domain home. If not specified, the value is derived from the `domainUID` as `<domainUID>-weblogic-sample-pvc`. | `wccinfra-domain-pvc` |
`productionModeEnabled` | Boolean indicating if production mode is enabled for the domain. | `true` |
`serverStartPolicy` | Determines which WebLogic Server instances will be started. Legal values are `NEVER`, `IF_NEEDED`, `ADMIN_ONLY`. | `IF_NEEDED` |
`t3ChannelPort` | Port for the T3 channel of the NetworkAccessPoint. | `30012` |
`t3PublicAddress` | Public address for the T3 channel. This should be set to the public address of the Kubernetes cluster. This would typically be a load balancer address. | If not provided, the script will attempt to set it to the IP address of the Kubernetes cluster |
`weblogicCredentialsSecretName` | Name of the Kubernetes secret for the Administration Server’s user name and password. If not specified, then the value is derived from the `domainUID` as `<domainUID>-weblogic-credentials`. | `wccinfra-domain-credentials` |
`weblogicImagePullSecretName` | Name of the Kubernetes secret for the Docker Store, used to pull the WebLogic Server image. | |
`serverPodCpuRequest`, `serverPodMemoryRequest`, `serverPodCpuLimit`, `serverPodMemoryLimit` | The maximum amount of compute resources allowed, and minimum amount of compute resources required, for each server pod. Please refer to the Kubernetes documentation on Managing Compute Resources for Containers for details. | Resource requests and resource limits are not specified. |
`rcuSchemaPrefix` | The schema prefix to use in the database, for example `WCC1`. You may wish to make this the same as the `domainUID` in order to simplify matching domains to their RCU schemas. | `WCC1` |
`rcuDatabaseURL` | The database URL. | `<YOUR DATABASE CONNECTION DETAILS>` |
`rcuCredentialsSecret` | The Kubernetes secret containing the database credentials. | `wccinfra-rcu-credentials` |
`ipmEnabled` | Boolean indicating whether to enable the WebCenter Imaging application. | `false` |
`captureEnabled` | Boolean indicating whether to enable the WebCenter Capture application. | `false` |
`adfuiEnabled` | Boolean indicating whether to enable the WebCenter ADF UI application. | `false` |
`initialIpmServerReplicas` | Number of IPM Managed Servers to initially start for the domain. | `0` |
`initialCaptureServerReplicas` | Number of CAPTURE Managed Servers to initially start for the domain. | `0` |
`initialAdfuiServerReplicas` | Number of ADFUI Managed Servers to initially start for the domain. | `0` |
Note that the names of the Kubernetes resources in the generated YAML files may be formed with the value of some of the properties specified in the `create-domain-inputs.yaml` file. Those properties include the `adminServerName`, `clusterName`, and `managedServerNameBase`. If those values contain any characters that are invalid in a Kubernetes service name, those characters are converted to valid values in the generated YAML files. For example, an uppercase letter is converted to a lowercase letter and an underscore (`_`) is converted to a hyphen (`-`).
Note: The properties `ipmEnabled`, `captureEnabled`, and `adfuiEnabled` are set to `false` by default and should be updated to `true` if you need to enable the respective applications. If any of these three applications (IPM, CAPTURE, and ADFUI) is enabled, the respective initial replica count must be a non-zero number.
The sample demonstrates how to create the Oracle WebCenter Content domain home and associated Kubernetes resources for that domain. In addition, the sample provides the capability for users to supply their own scripts to create the domain home for other use cases. The generated domain YAML file could also be modified to cover more use cases.
Run the create domain script
Run the create domain script, specifying your inputs file and an output directory to store the generated artifacts:
$ cd ${WORKDIR}/create-wcc-domain/domain-home-on-pv/
$ ./create-domain.sh \
-i create-domain-inputs.yaml \
-o <path to output-directory>
The script will perform the following steps:
- Create a directory for the generated Kubernetes YAML files for this domain if it does not already exist. The path name is `<path to output-directory>/weblogic-domains/<domainUID>`. If the directory already exists, its contents must be removed before using this script.
- Create a Kubernetes job that will start up a utility Oracle WebCenter Content container and run offline WLST scripts to create the domain on the shared storage.
- Run and wait for the job to finish.
- Create a Kubernetes domain YAML file, `domain.yaml`, in the “output” directory that was created above. This YAML file can be used to create the Kubernetes resource using the `kubectl create -f` or `kubectl apply -f` command.
- Create a convenient utility script, `delete-domain-job.yaml`, to clean up the domain home created by the create script.
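If the script reports a failure, the domain-creation job's pod log is the first place to look. The pod name carries a generated suffix (compare the sample pod listing later in this section):

```bash
$ kubectl get pods -n wccns
# example pod name: wccinfra-create-fmw-infra-sample-domain-job-l8r9d
$ kubectl logs wccinfra-create-fmw-infra-sample-domain-job-l8r9d -n wccns
```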
Run the managed-server-wrapper script
Run the `start-managed-servers-wrapper.sh` script, which internally applies the domain YAML. This script also applies initial configurations for the Managed Server containers and readies the Managed Servers for future inter-container communications.
$ cd ${WORKDIR}/create-wcc-domain/domain-home-on-pv/
$ ./start-managed-servers-wrapper.sh -o <path_to_output_directory> -p <load_balancer_port> -n <ibr_node_port> -m <ucm_node_port> -s <ssl_termination>
Note: In the above command, the parameters `-n` and `-m` refer to the node ports to be used for exposing the `IBR intradoc port` and the `UCM intradoc port` respectively. Suggested values for both of these node ports are within the range 30000-32767. Keep in mind that the `<ibr_node_port>` value must be specified at all times, whereas the `<ucm_node_port>` value is required only when the IPM and ADFUI Managed Servers are enabled.

A value for the parameter `-s` needs to be provided only if SSL termination at the load balancer is being used; the acceptable value is either `true` or `false`. If this parameter value is not supplied, the script assumes that SSL termination at the load balancer is not being used and the value defaults to `false`.
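For example, a run with SSL termination disabled might look like the following (the port values here are placeholders within the suggested 30000-32767 range; 30555 matches the IBR node port used in the samples later in this document):

```bash
$ ./start-managed-servers-wrapper.sh -o output -p 30305 -n 30555 -m 30712 -s false
```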
Run the startup configuration scripts for IPM and WCCADF applications as applicable
Run the `configure-ipm-connection.sh` script to perform startup configurations if IPM is enabled.
$ cd ${WORKDIR}/create-wcc-domain/domain-home-on-pv/
$ ./configure-ipm-connection.sh -l <load_balancer_external_ip> -p <load_balancer_port> -s <ssl_or_ssl_termination>
Run the `configure-wccadf-domain.sh` script to perform startup configurations if ADFUI is enabled.
$ cd ${WORKDIR}/create-wcc-domain/domain-home-on-pv/
$ ./configure-wccadf-domain.sh -n <node_ip> -m <ucm_node_port>
Patch the domain for the changes to take effect:
# Stop the servers
$ kubectl patch domain DOMAINUID -n NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/serverStartPolicy", "value": "NEVER" }]'
$ sleep 2m
# Start the servers
$ kubectl patch domain DOMAINUID -n NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/serverStartPolicy", "value": "IF_NEEDED" }]'
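You can watch the server pods shut down and come back up while the patch takes effect:

```bash
$ kubectl get pods -n wccns -w
```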
The default domain created by the script has the following characteristics:
- An Administration Server named `AdminServer` listening on port `7001`.
- A configured cluster named `ucm_cluster` of size 3.
- A configured cluster named `ibr_cluster` of size 1.
- A configured cluster named `ipm_cluster` of size 3.
- A configured cluster named `capture_cluster` of size 3.
- A configured cluster named `wccadf_cluster` of size 3.
- Managed Servers in `ucm_cluster`, listening on port `16200`.
- Managed Servers in `ibr_cluster`, listening on port `16250`.
- Managed Servers in `ipm_cluster`, listening on port `16000`.
- Managed Servers in `capture_cluster`, listening on port `16400`.
- Managed Servers in `wccadf_cluster`, listening on port `16225`.
- Log files located in `/shared/logs/<domainUID>`.
Verify the results
The create domain script will verify that the domain was created, and will report failure if there was any error. However, it may be desirable to manually verify the domain, even if just to gain familiarity with the various Kubernetes objects that were created by the script.
Generated YAML files with the default inputs
Sample content of the generated `domain.yaml`:
$ cat output/weblogic-domains/wccinfra/domain.yaml
# Copyright (c) 2021, Oracle and/or its affiliates.
# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.
#
# This is an example of how to define a Domain resource.
#
apiVersion: "weblogic.oracle/v8"
kind: Domain
metadata:
name: wccinfra
namespace: wccns
labels:
weblogic.domainUID: wccinfra
spec:
# The WebLogic Domain Home
domainHome: /u01/oracle/user_projects/domains/wccinfra
maxClusterConcurrentStartup: 1
# The domain home source type
# Set to PersistentVolume for domain-in-pv, Image for domain-in-image, or FromModel for model-in-image
domainHomeSourceType: PersistentVolume
# The WebLogic Server Docker image that WebLogic Kubernetes Operator uses to start the domain
image: "oracle/wccontent:14.1.2.0.0"
# imagePullPolicy defaults to "Always" if image version is :latest
imagePullPolicy: "IfNotPresent"
# Identify which Secret contains the credentials for pulling an image
#imagePullSecrets:
#- name:
# Identify which Secret contains the WebLogic Admin credentials (note that there is an example of
# how to create that Secret at the end of this file)
webLogicCredentialsSecret:
name: wccinfra-domain-credentials
# Whether to include the server out file into the pod's stdout, default is true
includeServerOutInPodLog: true
# Whether to enable log home
logHomeEnabled: true
# Whether to write HTTP access log file to log home
httpAccessLogInLogHome: true
# The in-pod location for domain log, server logs, server out, and Node Manager log files
logHome: /u01/oracle/user_projects/domains/logs/wccinfra
# An (optional) in-pod location for data storage of default and custom file stores.
# If not specified or the value is either not set or empty (e.g. dataHome: "") then the
# data storage directories are determined from the WebLogic domain home configuration.
dataHome: ""
# serverStartPolicy legal values are "NEVER", "IF_NEEDED", or "ADMIN_ONLY"
# This determines which WebLogic Servers the WebLogic Kubernetes Operator will start up when it discovers this Domain
# - "NEVER" will not start any server in the domain
# - "ADMIN_ONLY" will start up only the administration server (no managed servers will be started)
# - "IF_NEEDED" will start all non-clustered servers, including the administration server and clustered servers up to the replica count
serverStartPolicy: "IF_NEEDED"
serverPod:
# an (optional) list of environment variable to be set on the servers
env:
- name: JAVA_OPTIONS
value: "-Dweblogic.StdoutDebugEnabled=false"
- name: USER_MEM_ARGS
value: "-Djava.security.egd=file:/dev/./urandom -Xms256m -Xmx512m "
volumes:
- name: weblogic-domain-storage-volume
persistentVolumeClaim:
claimName: wccinfra-domain-pvc
volumeMounts:
- mountPath: /u01/oracle/user_projects/domains
name: weblogic-domain-storage-volume
# adminServer is used to configure the desired behavior for starting the administration server.
adminServer:
# serverStartState legal values are "RUNNING" or "ADMIN"
# "RUNNING" means the listed server will be started up to "RUNNING" mode
# "ADMIN" means the listed server will be start up to "ADMIN" mode
serverStartState: "RUNNING"
adminService:
channels:
# The Admin Server's NodePort
- channelName: default
nodePort: 30701
# Uncomment to export the T3Channel as a service
# - channelName: T3Channel
# clusters is used to configure the desired behavior for starting member servers of a cluster.
# If you use this entry, then the rules will be applied to ALL servers that are members of the named clusters.
clusters:
- clusterName: ibr_cluster
serverService:
precreateService: true
serverStartState: "RUNNING"
serverPod:
# Instructs Kubernetes scheduler to prefer nodes for new cluster members where there are not
# already members of the same cluster.
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: "weblogic.clusterName"
operator: In
values:
- $(CLUSTER_NAME)
topologyKey: "kubernetes.io/hostname"
replicas: 1
serverStartPolicy: "IF_NEEDED"
# The number of managed servers to start for unlisted clusters
# replicas: 1
# Istio
# configuration:
# istio:
# enabled:
# readinessPort:
- clusterName: ucm_cluster
clusterService:
annotations:
traefik.ingress.kubernetes.io/affinity: "true"
traefik.ingress.kubernetes.io/service.sticky.cookie: "true"
traefik.ingress.kubernetes.io/session-cookie-name: JSESSIONID
serverService:
precreateService: true
serverStartState: "RUNNING"
serverPod:
# Instructs Kubernetes scheduler to prefer nodes for new cluster members where there are not
# already members of the same cluster.
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: "weblogic.clusterName"
operator: In
values:
- $(CLUSTER_NAME)
topologyKey: "kubernetes.io/hostname"
replicas: 3
serverStartPolicy: "IF_NEEDED"
# The number of managed servers to start for unlisted clusters
# replicas: 1
- clusterName: ipm_cluster
clusterService:
annotations:
traefik.ingress.kubernetes.io/affinity: "true"
traefik.ingress.kubernetes.io/service.sticky.cookie: "true"
traefik.ingress.kubernetes.io/session-cookie-name: JSESSIONID
serverService:
precreateService: true
serverStartState: "RUNNING"
serverPod:
# Instructs Kubernetes scheduler to prefer nodes for new cluster members where there are not
# already members of the same cluster.
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: "weblogic.clusterName"
operator: In
values:
- $(CLUSTER_NAME)
topologyKey: "kubernetes.io/hostname"
replicas: 3
# The number of managed servers to start for unlisted clusters
# replicas: 1
- clusterName: capture_cluster
clusterService:
annotations:
traefik.ingress.kubernetes.io/affinity: "true"
traefik.ingress.kubernetes.io/service.sticky.cookie: "true"
traefik.ingress.kubernetes.io/session-cookie-name: JSESSIONID
serverService:
precreateService: true
serverStartState: "RUNNING"
serverPod:
# Instructs Kubernetes scheduler to prefer nodes for new cluster members where there are not
# already members of the same cluster.
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: "weblogic.clusterName"
operator: In
values:
- $(CLUSTER_NAME)
topologyKey: "kubernetes.io/hostname"
replicas: 3
# The number of managed servers to start for unlisted clusters
# replicas: 1
- clusterName: wccadf_cluster
clusterService:
annotations:
traefik.ingress.kubernetes.io/affinity: "true"
traefik.ingress.kubernetes.io/service.sticky.cookie: "true"
traefik.ingress.kubernetes.io/session-cookie-name: WCCSID
serverService:
precreateService: true
serverStartState: "RUNNING"
serverPod:
# Instructs Kubernetes scheduler to prefer nodes for new cluster members where there are not
# already members of the same cluster.
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: "weblogic.clusterName"
operator: In
values:
- $(CLUSTER_NAME)
topologyKey: "kubernetes.io/hostname"
replicas: 3
# The number of managed servers to start for unlisted clusters
# replicas: 1
Verify the domain
To confirm that the domain was created, enter the following command:
$ kubectl describe domain DOMAINUID -n NAMESPACE
Replace `DOMAINUID` with the `domainUID` and `NAMESPACE` with the actual namespace.
Sample domain description:
$ kubectl describe domain wccinfra -n wccns
Name: wccinfra
Namespace: wccns
Labels: weblogic.domainUID=wccinfra
Annotations: API Version: weblogic.oracle/v8
Kind: Domain
Metadata:
Creation Timestamp: 2020-11-23T12:48:13Z
Generation: 7
Managed Fields:
API Version: weblogic.oracle/v8
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:kubectl.kubernetes.io/last-applied-configuration:
f:labels:
.:
f:weblogic.domainUID:
Manager: kubectl
Operation: Update
Time: 2020-11-23T13:50:28Z
API Version: weblogic.oracle/v8
Fields Type: FieldsV1
fieldsV1:
f:status:
.:
f:clusters:
f:conditions:
f:servers:
f:startTime:
Manager: OpenAPI-Generator
Operation: Update
Time: 2020-12-03T10:20:52Z
Resource Version: 18267402
Self Link: /apis/weblogic.oracle/v8/namespaces/wccns/domains/wccinfra
UID: 1a866c30-9b29-4281-bd2b-df80914efdff
Spec:
Admin Server:
Admin Service:
Channels:
Channel Name: default
Node Port: 30701
Server Start State: RUNNING
Clusters:
Cluster Name: ibr_cluster
Replicas: 1
Server Pod:
Affinity:
Pod Anti Affinity:
Preferred During Scheduling Ignored During Execution:
Pod Affinity Term:
Label Selector:
Match Expressions:
Key: weblogic.clusterName
Operator: In
Values:
$(CLUSTER_NAME)
Topology Key: kubernetes.io/hostname
Weight: 100
Server Service:
Precreate Service: true
Server Start Policy: IF_NEEDED
Server Start State: RUNNING
Cluster Name: ucm_cluster
Cluster Service:
Annotations:
traefik.ingress.kubernetes.io/affinity: true
traefik.ingress.kubernetes.io/service.sticky.cookie: true
traefik.ingress.kubernetes.io/session-cookie-name: JSESSIONID
Replicas: 3
Server Pod:
Affinity:
Pod Anti Affinity:
Preferred During Scheduling Ignored During Execution:
Pod Affinity Term:
Label Selector:
Match Expressions:
Key: weblogic.clusterName
Operator: In
Values:
$(CLUSTER_NAME)
Topology Key: kubernetes.io/hostname
Weight: 100
Server Service:
Precreate Service: true
Server Start Policy: IF_NEEDED
Server Start State: RUNNING
Cluster Name: ipm_cluster
Cluster Service:
Annotations:
traefik.ingress.kubernetes.io/affinity: true
traefik.ingress.kubernetes.io/service.sticky.cookie: true
traefik.ingress.kubernetes.io/session-cookie-name: JSESSIONID
Replicas: 3
Server Pod:
Affinity:
Pod Anti Affinity:
Preferred During Scheduling Ignored During Execution:
Pod Affinity Term:
Label Selector:
Match Expressions:
Key: weblogic.clusterName
Operator: In
Values:
$(CLUSTER_NAME)
Topology Key: kubernetes.io/hostname
Weight: 100
Server Service:
Precreate Service: true
Server Start State: RUNNING
Cluster Name: capture_cluster
Cluster Service:
Annotations:
traefik.ingress.kubernetes.io/affinity: true
traefik.ingress.kubernetes.io/service.sticky.cookie: true
traefik.ingress.kubernetes.io/session-cookie-name: JSESSIONID
Replicas: 3
Server Pod:
Affinity:
Pod Anti Affinity:
Preferred During Scheduling Ignored During Execution:
Pod Affinity Term:
Label Selector:
Match Expressions:
Key: weblogic.clusterName
Operator: In
Values:
$(CLUSTER_NAME)
Topology Key: kubernetes.io/hostname
Weight: 100
Server Service:
Precreate Service: true
Server Start State: RUNNING
Cluster Name: wccadf_cluster
Cluster Service:
Annotations:
traefik.ingress.kubernetes.io/affinity: true
traefik.ingress.kubernetes.io/service.sticky.cookie: true
traefik.ingress.kubernetes.io/session-cookie-name: WCCSID
Replicas: 3
Server Pod:
Affinity:
Pod Anti Affinity:
Preferred During Scheduling Ignored During Execution:
Pod Affinity Term:
Label Selector:
Match Expressions:
Key: weblogic.clusterName
Operator: In
Values:
$(CLUSTER_NAME)
Topology Key: kubernetes.io/hostname
Weight: 100
Server Service:
Precreate Service: true
Server Start State: RUNNING
Data Home:
Domain Home: /u01/oracle/user_projects/domains/wccinfra
Domain Home Source Type: PersistentVolume
Http Access Log In Log Home: true
Image: oracle/wccontent:14.1.2.0.0
Image Pull Policy: IfNotPresent
Include Server Out In Pod Log: true
Log Home: /u01/oracle/user_projects/domains/logs/wccinfra
Log Home Enabled: true
Max Cluster Concurrent Startup: 1
Server Pod:
Env:
Name: JAVA_OPTIONS
Value: -Dweblogic.StdoutDebugEnabled=false
Name: USER_MEM_ARGS
Value: -Djava.security.egd=file:/dev/./urandom -Xms256m -Xmx512m
Volume Mounts:
Mount Path: /u01/oracle/user_projects/domains
Name: weblogic-domain-storage-volume
Volumes:
Name: weblogic-domain-storage-volume
Persistent Volume Claim:
Claim Name: wccinfra-domain-pvc
Server Start Policy: IF_NEEDED
Web Logic Credentials Secret:
Name: wccinfra-domain-credentials
Status:
Clusters:
Cluster Name: ibr_cluster
Maximum Replicas: 5
Minimum Replicas: 0
Ready Replicas: 1
Replicas: 1
Replicas Goal: 1
Cluster Name: ucm_cluster
Maximum Replicas: 5
Minimum Replicas: 0
Ready Replicas: 3
Replicas: 3
Replicas Goal: 3
Cluster Name: ipm_cluster
Maximum Replicas: 5
Minimum Replicas: 0
Ready Replicas: 3
Replicas: 3
Replicas Goal: 3
Cluster Name: capture_cluster
Maximum Replicas: 5
Minimum Replicas: 0
Ready Replicas: 3
Replicas: 3
Replicas Goal: 3
Cluster Name: wccadf_cluster
Maximum Replicas: 5
Minimum Replicas: 0
Ready Replicas: 3
Replicas: 3
Replicas Goal: 3
Conditions:
Last Transition Time: 2020-11-23T13:58:41.070Z
Reason: ServersReady
Status: True
Type: Available
Servers:
Desired State: RUNNING
Health:
Activation Time: 2020-11-25T16:55:24.930Z
Overall Health: ok
Subsystems:
Subsystem Name: ServerRuntime
Symptoms:
Node Name: MyNodeName
Server Name: AdminServer
State: RUNNING
Cluster Name: ibr_cluster
Desired State: RUNNING
Health:
Activation Time: 2020-11-30T12:23:27.603Z
Overall Health: ok
Subsystems:
Subsystem Name: ServerRuntime
Symptoms:
Node Name: MyNodeName
Server Name: ibr_server1
State: RUNNING
Cluster Name: ibr_cluster
Desired State: SHUTDOWN
Server Name: ibr_server2
Cluster Name: ibr_cluster
Desired State: SHUTDOWN
Server Name: ibr_server3
Cluster Name: ibr_cluster
Desired State: SHUTDOWN
Server Name: ibr_server4
Cluster Name: ibr_cluster
Desired State: SHUTDOWN
Server Name: ibr_server5
Cluster Name: ucm_cluster
Desired State: RUNNING
Health:
Activation Time: 2020-12-02T14:10:37.992Z
Overall Health: ok
Subsystems:
Subsystem Name: ServerRuntime
Symptoms:
Node Name: MyNodeName
Server Name: ucm_server1
State: RUNNING
Cluster Name: ucm_cluster
Desired State: RUNNING
Health:
Activation Time: 2020-12-01T04:51:19.886Z
Overall Health: ok
Subsystems:
Subsystem Name: ServerRuntime
Symptoms:
Node Name: MyNodeName
Server Name: ucm_server2
State: RUNNING
Cluster Name: ucm_cluster
Desired State: SHUTDOWN
Server Name: ucm_server3
Cluster Name: ucm_cluster
Desired State: SHUTDOWN
Server Name: ucm_server4
Cluster Name: ucm_cluster
Desired State: SHUTDOWN
Server Name: ucm_server5
Cluster Name: ipm_cluster
Desired State: RUNNING
Health:
Activation Time: 2020-12-01T04:51:19.886Z
Overall Health: ok
Subsystems:
Subsystem Name: ServerRuntime
Symptoms:
Node Name: MyNodeName
Server Name: ipm_server1
State: RUNNING
Cluster Name: ipm_cluster
Desired State: SHUTDOWN
Server Name: ipm_server2
Cluster Name: ipm_cluster
Desired State: SHUTDOWN
Server Name: ipm_server3
Cluster Name: ipm_cluster
Desired State: SHUTDOWN
Server Name: ipm_server4
Cluster Name: ipm_cluster
Desired State: SHUTDOWN
Server Name: ipm_server5
Cluster Name: capture_cluster
Desired State: RUNNING
Health:
Activation Time: 2020-12-01T04:51:19.886Z
Overall Health: ok
Subsystems:
Subsystem Name: ServerRuntime
Symptoms:
Node Name: MyNodeName
Server Name: capture_server1
State: RUNNING
Cluster Name: capture_cluster
Desired State: SHUTDOWN
Server Name: capture_server2
Cluster Name: capture_cluster
Desired State: SHUTDOWN
Server Name: capture_server3
Cluster Name: capture_cluster
Desired State: SHUTDOWN
Server Name: capture_server4
Cluster Name: capture_cluster
Desired State: SHUTDOWN
Server Name: capture_server5
Cluster Name: wccadf_cluster
Desired State: RUNNING
Health:
Activation Time: 2020-12-01T04:51:19.886Z
Overall Health: ok
Subsystems:
Subsystem Name: ServerRuntime
Symptoms:
Node Name: MyNodeName
Server Name: wccadf_server1
State: RUNNING
Cluster Name: wccadf_cluster
Desired State: SHUTDOWN
Server Name: wccadf_server2
Cluster Name: wccadf_cluster
Desired State: SHUTDOWN
Server Name: wccadf_server3
Cluster Name: wccadf_cluster
Desired State: SHUTDOWN
Server Name: wccadf_server4
Cluster Name: wccadf_cluster
Desired State: SHUTDOWN
Server Name: wccadf_server5
Start Time: 2020-11-23T12:48:13.756Z
Events: <none>
In the `Status` section of the output, the available servers and clusters are listed. Note that if this command is issued soon after the script finishes, there may be no servers available yet, or perhaps only the Administration Server but no Managed Servers. The WebLogic Kubernetes Operator starts the Administration Server first and waits for it to become ready before starting the Managed Servers.
Verify the pods
Enter the following command to see the pods running the servers:
$ kubectl get pods -n NAMESPACE
Here is an example of the output of this command. You can verify that an Administration Server and Managed Servers for the ucm, ibr, ipm, capture, and wccadf clusters are running.
$ kubectl get pod -n wccns
NAME READY STATUS RESTARTS AGE
rcu 1/1 Running 0 78d
wccinfra-adminserver 1/1 Running 0 9d
wccinfra-create-fmw-infra-sample-domain-job-l8r9d 0/1 Completed 0 9d
wccinfra-ibr-server1 1/1 Running 0 9d
wccinfra-ucm-server1 1/1 Running 0 9d
wccinfra-ucm-server2 1/1 Running 0 9d
wccinfra-ucm-server3 1/1 Running 0 9d
wccinfra-ipm-server1 1/1 Running 0 9d
wccinfra-ipm-server2 1/1 Running 0 9d
wccinfra-ipm-server3 1/1 Running 0 9d
wccinfra-capture-server1 1/1 Running 0 9d
wccinfra-capture-server2 1/1 Running 0 9d
wccinfra-capture-server3 1/1 Running 0 9d
wccinfra-wccadf-server1 1/1 Running 0 9d
wccinfra-wccadf-server2 1/1 Running 0 9d
wccinfra-wccadf-server3 1/1 Running 0 9d
Verify the services
Enter the following command to see the services for the domain:
$ kubectl get services -n NAMESPACE
Here is an example of the output of this command.
Sample list of services:
$ kubectl get services -n wccns
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
wccinfra-adminserver ClusterIP None <none> 7001/TCP 9d
wccinfra-adminserver-external NodePort 10.104.100.193 <none> 7001:30701/TCP 9d
wccinfra-cluster-ibr-cluster ClusterIP 10.98.100.212 <none> 16250/TCP 9d
wccinfra-cluster-ibr-cluster-ext NodePort 10.109.247.52 <none> 5555:30555/TCP 9d
wccinfra-cluster-ucm-cluster ClusterIP 10.108.47.178 <none> 16200/TCP 9d
wccinfra-cluster-ipm-cluster ClusterIP 10.108.217.111 <none> 16000/TCP 9d
wccinfra-cluster-capture-cluster ClusterIP 10.110.193.252 <none> 16400/TCP 9d
wccinfra-cluster-wccadf-cluster ClusterIP 10.109.191.247 <none> 16225/TCP 9d
wccinfra-ibr-server1 ClusterIP None <none> 16250/TCP 9d
wccinfra-ibr-server2 ClusterIP 10.97.253.44 <none> 16250/TCP 9d
wccinfra-ibr-server3 ClusterIP 10.110.183.48 <none> 16250/TCP 9d
wccinfra-ibr-server4 ClusterIP 10.108.228.158 <none> 16250/TCP 9d
wccinfra-ibr-server5 ClusterIP 10.101.29.140 <none> 16250/TCP 9d
wccinfra-ucm-server1 ClusterIP None <none> 16200/TCP 9d
wccinfra-ucm-server2 ClusterIP None <none> 16200/TCP 9d
wccinfra-ucm-server3 ClusterIP None <none> 16200/TCP 9d
wccinfra-ucm-server4 ClusterIP 10.109.25.242 <none> 16200/TCP 9d
wccinfra-ucm-server5 ClusterIP 10.109.193.26 <none> 16200/TCP 9d
wccinfra-ipm-server1 ClusterIP None <none> 16000/TCP 9d
wccinfra-ipm-server2 ClusterIP None <none> 16000/TCP 9d
wccinfra-ipm-server3 ClusterIP None <none> 16000/TCP 9d
wccinfra-ipm-server4 ClusterIP 10.111.215.108 <none> 16000/TCP 9d
wccinfra-ipm-server5 ClusterIP 10.109.220.10 <none> 16000/TCP 9d
wccinfra-capture-server1 ClusterIP None <none> 16400/TCP 9d
wccinfra-capture-server2 ClusterIP None <none> 16400/TCP 9d
wccinfra-capture-server3 ClusterIP None <none> 16400/TCP 9d
wccinfra-capture-server4 ClusterIP 10.109.72.216 <none> 16400/TCP 9d
wccinfra-capture-server5 ClusterIP 10.102.90.234 <none> 16400/TCP 9d
wccinfra-wccadf-server1 ClusterIP None <none> 16225/TCP 9d
wccinfra-wccadf-server2 ClusterIP None <none> 16225/TCP 9d
wccinfra-wccadf-server3 ClusterIP None <none> 16225/TCP 9d
wccinfra-wccadf-server4 ClusterIP 10.99.91.229 <none> 16225/TCP 9d
wccinfra-wccadf-server5 ClusterIP 10.105.114.38 <none> 16225/TCP 9d
Scale-up/down Managed Server Counts
For an existing domain, the Managed Server replica counts can be modified, independently of each other, by editing the domain.yaml file (to be handled by customers with sufficient access). To scale the Managed Server counts up or down in an existing domain, perform the following steps.
$ cd ${WORKDIR}/create-wcc-domain/domain-home-on-pv/output/weblogic-domains/wccinfra/
# Modify the respective Managed Server replica counts to scale up or down, and save the file.
$ vim domain.yaml
# Apply the updated domain.yaml configuration file
$ kubectl apply -f domain.yaml
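For illustration, here is a minimal sketch of the clusters section in domain.yaml that controls the replica counts. The cluster names and the exact schema depend on your domain configuration and operator version, so treat this as an assumption to verify against your generated domain.yaml:
spec:
  clusters:
  # Each entry sets the desired Managed Server count for one cluster.
  - clusterName: ucm_cluster
    replicas: 3
  - clusterName: ipm_cluster
    replicas: 2
After the updated file is applied, the operator starts or stops Managed Server pods until each cluster matches its replicas value.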
Details required for configuring IBR provider on UCM
Obtain the details of the service wccinfra-cluster-ibr-cluster-ext to find the NodePort mapped to the IBR intradoc port:
NAME                               TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)
wccinfra-cluster-ibr-cluster-ext   NodePort   10.109.247.52   <none>        5555:30555/TCP
Create the outgoing provider by providing the following details, and then restart the servers. Use the NodePort value (30555 in the sample above) as the Server Port:
Server Host Name: <hostname on which the IBR Server pod is deployed>
Server Port: 30555
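If you prefer to read the NodePort programmatically rather than from the table, kubectl's JSONPath output can return it directly; a minimal sketch (the [0] index assumes the service exposes a single port):
# Print only the NodePort of the IBR external service
$ kubectl get service wccinfra-cluster-ibr-cluster-ext -n wccns -o jsonpath='{.spec.ports[0].nodePort}'
30555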
Configure an additional mount or shared space to a domain for Imaging and Capture
Optionally, if you want to configure an additional mount or shared space to a domain for the WebCenter Imaging and WebCenter Capture applications for file imports, see [Configure an Additional Mount or Shared-Space to a Domain for Imaging and Capture]({{< relref "/wccontent-domains/adminguide/configure-mount-share.md" >}}).
Launch Oracle WebCenter Content Native Applications in Containers
This section provides the steps required to use product native binaries with user interfaces.
Issue with Launching Headful User Interfaces for Oracle WebCenter Content Native Binaries
Oracle WebCenter Content (UCM) provides a set of native binaries with headful UIs, which are delivered as part of the product container image. WebCenter Content container images are, by default, created with the Oracle Linux slim base image, which does not come with all the packages pre-installed for launching headful applications with UIs. UCM provides many such native binaries that use Java AWT for UI support. With the current Oracle WebCenter Content container images, running these native applications fails because the UIs cannot be launched.
The following sections document the solution by providing a set of instructions that enable users to run UCM native applications with UIs.
These instructions are divided into two parts:
1. Steps to update the existing container image
2. Steps to launch native apps using VNC sessions
Steps to Update the out-of-the-box Oracle WebCenter Content Container Image Using the WebLogic Image Tool
This section describes how to update the image with OS packages using the WebLogic Image Tool. Refer to this for setting up the WebLogic Image Tool.
Additional Build Commands
The required OS packages can be installed in the image by running the yum command through the additional build commands option of the WebLogic Image Tool. Here is a sample additionalBuildCmds.txt file to be used to install the required Linux packages (libXext.x86_64, libXrender.x86_64, and libXtst.x86_64).
[final-build-commands]
USER root
RUN yum -y --downloaddir=/tmp/imagetool install libXext libXrender libXtst \
&& yum -y --downloaddir=/tmp/imagetool clean all \
&& rm -rf /var/cache/yum/* \
&& rm -rf /tmp/imagetool
USER oracle
Note: It is important to change the user to oracle; otherwise, the user during container execution will be root.
Build arguments
The arguments required for updating the image can be passed as a file to the WebLogic Image Tool.
The arguments required for updating the image can be passed as file to the WebLogic Image Tool.
- 'update' is the sub-command to the Image Tool for updating an existing Docker image.
- '--fromImage' provides the existing Docker image that is to be updated.
- '--tag' should be provided with the new tag for the updated image.
- '--additionalBuildCommands' should be provided with the additional build commands file created above.
- '--chown oracle:root' should be provided to update file permissions.
Below is a sample build argument (buildArgs) file to be used for updating the image:
update
--fromImage <existing_WCContent_image_without_dependent_packages>
--tag <name_of_updated_WCContent_image_to_be_built>
--additionalBuildCommands ./additionalBuildCmds.txt
--chown oracle:root
Update Oracle WebCenter Content Container Image
Now we can execute the WebLogic Image Tool to update the out-of-the-box image, using the build-argument file described above:
$ imagetool @buildArgs
The WebLogic Image Tool provides multiple options for updating the image. For detailed information on the update options, refer to this document.
Updating the image does not modify the ‘CMD’ from the source image unless it is modified in the additional build commands. You can verify it as follows:
$ docker inspect -f '{{.Config.Cmd}}' <name_of_updated_Wccontent_image>
[/u01/oracle/container-scripts/createDomainandStartAdmin.sh]
Steps to Launch Oracle WebCenter Content Native Applications Using VNC Sessions
Once the updated image is successfully built and available on all required nodes, do the following:
- Update the domain.yaml file with the updated image name and apply it:
$ kubectl apply -f domain.yaml
- After applying the modified domain.yaml, the pods restart and run with the updated image containing the required packages:
$ kubectl get pods -n <namespace_being_used_for_wccontent_domain>
- Create VNC sessions on the master node to launch native apps. The following steps are to be performed in the VNC session.
- Run this command in each VNC session:
$ xhost + <HOST-IP or HOST-NAME of the node on which the pod is deployed>
Note: The above command works for multi-node clusters (in which the master node and worker nodes are deployed on different hosts and pods are distributed among the worker nodes). In the case of a single-node cluster (where there is only a master node and all pods are deployed on the host running the master node), use the container/pod's IP instead of the master node's host IP.
To obtain the container IP, run the hostname -i command (shown in a later step) from within that container's shell.
$ xhost + <IP of the container from which the binaries are to be run>
- Get into the pod's shell (for example, wccinfra-ucm-server1):
$ kubectl exec -n wccns -it wccinfra-ucm-server1 -- /bin/bash
- Traverse to the binaries location:
$ cd /u01/oracle/user_projects/domains/wccinfra/ucm/cs/bin
- Get the container IP:
$ hostname -i
- Set the DISPLAY variable within the container:
$ export DISPLAY=<HOST-IP/HOST-NAME of the master node, where the VNC session was created>:<vnc-session display-id>
- Launch any native UCM application, from within the container, like this:
$ ./SystemProperties
If the application has a UI, it will be launched now.
Administration Guide
Describes how to use some of the common utility tools and configurations to administer Oracle WebCenter Content domains.
Set up a load balancer
The Oracle WebLogic Server Kubernetes operator supports ingress-based load balancers such as Traefik and NGINX (kubernetes/ingress-nginx), as well as the Apache webtier load balancer.
Traefik
This section provides information about how to install and configure the ingress-based Traefik load balancer (version 2.6.0 or later for production deployments) to load balance Oracle WebCenter Content domain clusters. You can configure Traefik for non-SSL, SSL termination, and end-to-end SSL access of the application URL.
Follow these steps to set up Traefik as a load balancer for an Oracle WebCenter Content domain in a Kubernetes cluster:
Non-SSL and SSL termination
Install the Traefik (ingress-based) load balancer
Use Helm to install the Traefik (ingress-based) load balancer. For detailed information, see here. Use the values.yaml file in the sample, but set kubernetes.namespaces specifically.
$ cd ${WORKDIR}
$ kubectl create namespace traefik
$ helm repo add traefik https://helm.traefik.io/traefik --force-update
Sample output:
"traefik" has been added to your repositories
Install Traefik:
$ cd ${WORKDIR}
$ helm install traefik traefik/traefik \
    --namespace traefik \
    --values charts/traefik/values.yaml \
    --set "kubernetes.namespaces={traefik}" \
    --set "service.type=NodePort" \
    --wait
Sample output:
NAME: traefik
LAST DEPLOYED: Sun Jan 17 23:30:20 2021
NAMESPACE: traefik
STATUS: deployed
REVISION: 1
TEST SUITE: None
A sample values.yaml for deployment of Traefik 2.6.0:
image:
  name: traefik
  tag: 2.6.0
  pullPolicy: IfNotPresent
ingressRoute:
  dashboard:
    enabled: true
    # Additional ingressRoute annotations (e.g. for kubernetes.io/ingress.class)
    annotations: {}
    # Additional ingressRoute labels (e.g. for filtering IngressRoute by custom labels)
    labels: {}
providers:
  kubernetesCRD:
    enabled: true
  kubernetesIngress:
    enabled: true
    # IP used for Kubernetes Ingress endpoints
ports:
  traefik:
    port: 9000
    expose: true
    # The exposed port for this service
    exposedPort: 9000
    # The port protocol (TCP/UDP)
    protocol: TCP
  web:
    port: 8000
    # hostPort: 8000
    expose: true
    exposedPort: 30305
    nodePort: 30305
    # The port protocol (TCP/UDP)
    protocol: TCP
    # Use nodeport if set. This is useful if you have configured Traefik in a
    # LoadBalancer
    # nodePort: 32080
    # Port Redirections
    # Added in 2.2, you can make permanent redirects via entrypoints.
    # https://docs.traefik.io/routing/entrypoints/#redirection
    # redirectTo: websecure
  websecure:
    port: 8443
    # hostPort: 8443
    expose: true
    exposedPort: 30443
    # The port protocol (TCP/UDP)
    protocol: TCP
    nodePort: 30443
additionalArguments:
  - "--log.level=INFO"
Verify the Traefik status and find the port number of the SSL and non-SSL services:
$ kubectl get all -n traefik
Sample output:
NAME READY STATUS RESTARTS AGE
pod/traefik-f9cf58697-p57nt 1/1 Running 0 22d
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/traefik NodePort 10.96.95.253 <none> 9000:32306/TCP,30305:30305/TCP,30443:30443/TCP 22d
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/traefik 1/1 1 1 22d
NAME DESIRED CURRENT READY AGE
replicaset.apps/traefik-f9cf58697 1 1 1 22d
Access the Traefik dashboard through the URL http://$(hostname -f):32306, with the HTTP host traefik.example.com:
$ curl -H "host: $(hostname -f)" http://$(hostname -f):32306/dashboard/
Note: Make sure that you specify a fully qualified node name for $(hostname -f).
Configure Traefik to manage ingresses
Configure Traefik to manage ingresses created in this namespace, where traefik is the Traefik namespace and wccns is the namespace of the domain:
$ helm upgrade traefik traefik/traefik --namespace traefik --reuse-values \
    --set "kubernetes.namespaces={traefik,wccns}"
Sample output:
Release "traefik" has been upgraded. Happy Helming!
NAME: traefik
LAST DEPLOYED: Sun Jan 17 23:43:02 2021
NAMESPACE: traefik
STATUS: deployed
REVISION: 2
TEST SUITE: None
Create an ingress for the domain
Create an ingress for the domain in the domain namespace by using the sample Helm chart. Here path-based routing is used for ingress. Sample values for the default configuration are shown in the file ${WORKDIR}/charts/ingress-per-domain/values.yaml. By default, type is TRAEFIK, tls is Non-SSL, and domainType is wccinfra. These values can be overridden by passing values through the command line or can be edited in the sample file values.yaml based on the type of configuration (non-SSL or SSL). If needed, you can update the ingress YAML file to define more path rules (in section spec.rules.host.http.paths) based on the domain application URLs that need to be accessed. The template YAML file for the Traefik (ingress-based) load balancer is located at ${WORKDIR}/charts/ingress-per-domain/templates/traefik-ingress.yaml.
Install ingress-per-domain using Helm for non-SSL configuration:
$ cd ${WORKDIR}
$ helm install wcc-traefik-ingress charts/ingress-per-domain \
    --namespace wccns \
    --values charts/ingress-per-domain/values.yaml \
    --set "traefik.hostname=$(hostname -f)" \
    --set type=TRAEFIK \
    --set tls=NONSSL
Sample output:
NAME: wcc-traefik-ingress
LAST DEPLOYED: Sun Jan 17 23:49:09 2021
NAMESPACE: wccns
STATUS: deployed
REVISION: 1
TEST SUITE: None
For secured access (SSL) to the Oracle WebCenter Content application, create a certificate and generate a Kubernetes secret:
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/tls1.key -out /tmp/tls1.crt \
    -subj "/CN=<your_host_name>" \
    -extensions san -config \
    <(echo "[req]"; echo distinguished_name=req; echo "[san]"; echo subjectAltName=DNS:<your_host_name>)
$ kubectl -n wccns create secret tls domain1-tls-cert --key /tmp/tls1.key --cert /tmp/tls1.crt
Note: The value of CN and subjectAltName is the host on which this ingress is to be deployed.
Create Traefik Middleware custom resource
In the case of SSL termination, Traefik must pass a custom header WL-Proxy-SSL:true to the WebLogic Server endpoints. Create the Middleware using the following command:
$ cat <<EOF | kubectl apply -f -
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: wls-proxy-ssl
  namespace: wccns
spec:
  headers:
    customRequestHeaders:
      WL-Proxy-SSL: "true"
EOF
Create the Traefik TLSStore custom resource
In the case of SSL termination, Traefik should be configured to use the user-defined SSL certificate. If the user-defined SSL certificate is not configured, Traefik will create a default SSL certificate. To configure a user-defined SSL certificate for Traefik, use the TLSStore custom resource. The Kubernetes secret created with the SSL certificate should be referenced in the TLSStore object. Run the following command to create the TLSStore:
$ cat <<EOF | kubectl apply -f -
apiVersion: traefik.containo.us/v1alpha1
kind: TLSStore
metadata:
  name: default
  namespace: wccns
spec:
  defaultCertificate:
    secretName: domain1-tls-cert
EOF
Install ingress-per-domain using Helm for SSL configuration. The Kubernetes secret name should be updated in the template file.
The template file also contains the following annotations:
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
traefik.ingress.kubernetes.io/router.middlewares: wccns-wls-proxy-ssl@kubernetescrd
The entry point for SSL access and the Middleware name should be updated in the annotation. The Middleware name should be in the form <namespace>-<middleware name>@kubernetescrd.
$ cd ${WORKDIR}
$ helm install wcc-traefik-ingress charts/ingress-per-domain \
    --namespace wccns \
    --values charts/ingress-per-domain/values.yaml \
    --set "traefik.hostname=$(hostname -f)" \
    --set "traefik.hostnameorip=$(hostname -f)" \
    --set type=TRAEFIK \
    --set tls=SSL
Sample output:
NAME: wcc-traefik-ingress
LAST DEPLOYED: Mon Jul 20 11:44:13 2020
NAMESPACE: wccns
STATUS: deployed
REVISION: 1
TEST SUITE: None
For non-SSL access to the Oracle WebCenter Content application, get the details of the services exposed by the ingress:
$ kubectl describe ingress wccinfra-traefik -n wccns
These are all the services supported by the above deployed ingress:
Name: wccinfra-traefik
Namespace: wccns
Address:
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
domain1.org
/em wccinfra-adminserver:7001 (10.244.0.201:7001)
/wls-exporter wccinfra-adminserver:7001 (10.244.0.201:7001)
/cs wccinfra-cluster-ucm-cluster:16200 (10.244.0.202:16200,10.244.0.207:16200,10.244.0.211:16200)
/adfAuthentication wccinfra-cluster-ucm-cluster:16200 (10.244.0.202:16200,10.244.0.207:16200,10.244.0.211:16200)
/_ocsh wccinfra-cluster-ucm-cluster:16200 (10.244.0.202:16200,10.244.0.207:16200,10.244.0.211:16200)
/_dav wccinfra-cluster-ucm-cluster:16200 (10.244.0.202:16200,10.244.0.207:16200,10.244.0.211:16200)
/idcws wccinfra-cluster-ucm-cluster:16200 (10.244.0.202:16200,10.244.0.207:16200,10.244.0.211:16200)
/idcnativews wccinfra-cluster-ucm-cluster:16200 (10.244.0.202:16200,10.244.0.207:16200,10.244.0.211:16200)
/wsm-pm wccinfra-cluster-ucm-cluster:16200 (10.244.0.202:16200,10.244.0.207:16200,10.244.0.211:16200)
/ibr wccinfra-cluster-ibr-cluster:16250 (10.244.0.203:16250)
/ibr/adfAuthentication wccinfra-cluster-ibr-cluster:16250 (10.244.0.203:16250)
/weblogic/ready wccinfra-cluster-ucm-cluster:16200 (10.244.0.202:16200,10.244.0.207:16200,10.244.0.211:16200)
/imaging wccinfra-cluster-ipm-cluster:16000 (10.244.0.206:16000,10.244.0.209:16000,10.244.0.213:16000)
/dc-console wccinfra-cluster-capture-cluster:16400 (10.244.0.204:16400,10.244.0.208:16400,10.244.0.212:16400)
/dc-client wccinfra-cluster-capture-cluster:16400 (10.244.0.204:16400,10.244.0.208:16400,10.244.0.212:16400)
/wcc wccinfra-cluster-wccadf-cluster:16225 (10.244.0.205:16225,10.244.0.210:16225,10.244.0.214:16225)
Annotations: kubernetes.io/ingress.class: traefik
meta.helm.sh/release-name: wcc-traefik-ingress
meta.helm.sh/release-namespace: wccns
Events: <none>
For SSL access to the Oracle WebCenter Content application, get the details of the services exposed by the above deployed ingress:
$ kubectl describe ingress wccinfra-traefik -n wccns
All services supported by the above deployed ingress:
Name: wccinfra-traefik
Namespace: wccns
Address:
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
domain1.org
/em wccinfra-adminserver:7001 (10.244.0.201:7001)
/wls-exporter wccinfra-adminserver:7001 (10.244.0.201:7001)
/cs wccinfra-cluster-ucm-cluster:16200 (10.244.0.202:16200,10.244.0.207:16200,10.244.0.211:16200)
/adfAuthentication wccinfra-cluster-ucm-cluster:16200 (10.244.0.202:16200,10.244.0.207:16200,10.244.0.211:16200)
/_ocsh wccinfra-cluster-ucm-cluster:16200 (10.244.0.202:16200,10.244.0.207:16200,10.244.0.211:16200)
/_dav wccinfra-cluster-ucm-cluster:16200 (10.244.0.202:16200,10.244.0.207:16200,10.244.0.211:16200)
/idcws wccinfra-cluster-ucm-cluster:16200 (10.244.0.202:16200,10.244.0.207:16200,10.244.0.211:16200)
/idcnativews wccinfra-cluster-ucm-cluster:16200 (10.244.0.202:16200,10.244.0.207:16200,10.244.0.211:16200)
/wsm-pm wccinfra-cluster-ucm-cluster:16200 (10.244.0.202:16200,10.244.0.207:16200,10.244.0.211:16200)
/ibr wccinfra-cluster-ibr-cluster:16250 (10.244.0.203:16250)
/ibr/adfAuthentication wccinfra-cluster-ibr-cluster:16250 (10.244.0.203:16250)
/weblogic/ready wccinfra-cluster-ucm-cluster:16200 (10.244.0.202:16200,10.244.0.207:16200,10.244.0.211:16200)
/imaging wccinfra-cluster-ipm-cluster:16000 (10.244.0.206:16000,10.244.0.209:16000,10.244.0.213:16000)
/dc-console wccinfra-cluster-capture-cluster:16400 (10.244.0.204:16400,10.244.0.208:16400,10.244.0.212:16400)
/dc-client wccinfra-cluster-capture-cluster:16400 (10.244.0.204:16400,10.244.0.208:16400,10.244.0.212:16400)
/wcc wccinfra-cluster-wccadf-cluster:16225 (10.244.0.205:16225,10.244.0.210:16225,10.244.0.214:16225)
Annotations: kubernetes.io/ingress.class: traefik
meta.helm.sh/release-name: wcc-traefik-ingress
meta.helm.sh/release-namespace: wccns
Events: <none>
To confirm that the load balancer noticed the new ingress and is successfully routing to the domain server pods, you can send a request to the URL for the “WebLogic ReadyApp framework”, which should return an HTTP 200 status code, as follows:
$ curl -v http://${LOADBALANCER_HOSTNAME}:${LOADBALANCER_PORT}/weblogic/ready
* About to connect() to abc.com port 30305 (#0)
*   Trying 100.111.156.246...
* Connected to abc.com (100.111.156.246) port 30305 (#0)
> GET /weblogic/ready HTTP/1.1
> User-Agent: curl/7.29.0
> Host: domain1.org:30305
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Length: 0
< Date: Thu, 03 Dec 2020 13:16:19 GMT
< Vary: Accept-Encoding
<
* Connection #0 to host abc.com left intact
Verify domain application URL access
For non-SSL configuration
After setting up the Traefik (ingress-based) load balancer, verify that the domain application URLs are accessible through the non-SSL load balancer port 30305 for HTTP access. The sample URLs for an Oracle WebCenter Content domain of type wcc are:
http://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-Non-SSLPORT}/weblogic/ready
http://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-Non-SSLPORT}/cs
http://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-Non-SSLPORT}/ibr
http://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-Non-SSLPORT}/em
http://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-Non-SSLPORT}/imaging
http://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-Non-SSLPORT}/dc-console
http://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-Non-SSLPORT}/wcc
For SSL configuration
After setting up the Traefik (ingress-based) load balancer, verify that the domain applications are accessible through the SSL load balancer port 30443 for HTTPS access. The sample URLs for the Oracle WebCenter Content domain are:
https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/weblogic/ready
https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/cs
https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/ibr
https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/em
https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/imaging
https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/dc-console
https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/wcc
End-to-end SSL configuration
Install the Traefik load balancer for end-to-end SSL
Use Helm to install the Traefik (ingress-based) load balancer. For detailed information, see here. Use the values.yaml file in the sample, but set kubernetes.namespaces specifically.
$ cd ${WORKDIR}
$ kubectl create namespace traefik
$ helm repo add traefik https://helm.traefik.io/traefik --force-update
Sample output:
"traefik" has been added to your repositories
Install Traefik:
$ cd ${WORKDIR}
$ helm install traefik traefik/traefik \
    --namespace traefik \
    --values charts/traefik/values.yaml \
    --set "kubernetes.namespaces={traefik}" \
    --set "service.type=NodePort" \
    --wait
Sample output:
NAME: traefik
LAST DEPLOYED: Sun Jan 17 23:30:20 2021
NAMESPACE: traefik
STATUS: deployed
REVISION: 1
TEST SUITE: None
Verify the Traefik operator status and find the port number of the SSL and non-SSL services:
$ kubectl get all -n traefik
Sample output:
NAME READY STATUS RESTARTS AGE
pod/traefik-operator-676fc64d9c-skppn 1/1 Running 0 78d
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/traefik-operator NodePort 10.109.223.59 <none> 443:30443/TCP,80:30305/TCP 78d
service/traefik-operator-dashboard ClusterIP 10.110.85.194 <none> 80/TCP 78d
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/traefik-operator 1/1 1 1 78d
NAME DESIRED CURRENT READY AGE
replicaset.apps/traefik-operator-676fc64d9c 1 1 1 78d
replicaset.apps/traefik-operator-cb78c9dc9 0 0 0 78d
Access the Traefik dashboard through the URL http://$(hostname -f):32306, with the HTTP host traefik.example.com:
$ curl -H "host: $(hostname -f)" http://$(hostname -f):32306/dashboard/
Note: Make sure that you specify a fully qualified node name for $(hostname -f).
Configure Traefik to manage the domain
Configure Traefik to manage the domain application service created in this namespace, where traefik is the Traefik namespace and wccns is the namespace of the domain:
$ helm upgrade traefik traefik/traefik --namespace traefik --reuse-values \
    --set "kubernetes.namespaces={traefik,wccns}"
Sample output:
Release "traefik" has been upgraded. Happy Helming!
NAME: traefik
LAST DEPLOYED: Sun Jan 17 23:43:02 2021
NAMESPACE: traefik
STATUS: deployed
REVISION: 2
TEST SUITE: None
Create IngressRouteTCP
To enable SSL passthrough in Traefik, you can configure a TCP router. A sample YAML for IngressRouteTCP is available at ${WORKDIR}/charts/ingress-per-domain/tls/traefik-tls.yaml.
Note: There is a limitation with the load balancer in the end-to-end SSL configuration: accessing multiple types of servers (different Managed Servers and/or the Administration Server) at the same time is currently not supported. You can access only one Managed Server at a time.
The following should be updated in traefik-tls.yaml:
- The service name and the SSL port should be updated in the services.
- The load balancer hostname should be updated in the HostSNI rule.
Sample traefik-tls.yaml:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
name: wcc-ucm-routetcp
namespace: wccns
spec:
entryPoints:
- websecure
routes:
- match: HostSNI(`your_host_name`)
services:
- name: wccinfra-cluster-ucm-cluster
port: 16201
weight: 3
terminationDelay: 400
tls:
passthrough: true
- Create the IngressRouteTCP:
$ cd ${WORKDIR}/charts/ingress-per-domain/tls
$ kubectl apply -f traefik-tls.yaml
Verify end-to-end SSL access
Verify the access to application URLs exposed through the configured service. You should be able to access the following Oracle WebCenter Content domain URLs:
LOADBALANCER-SSLPORT is 30443
https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/cs
https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/ibr
https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/imaging
https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/dc-console
https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/wcc
Delete the IngressRouteTCP
$ cd ${WORKDIR}/charts/ingress-per-domain/tls
$ kubectl delete -f traefik-tls.yaml
Uninstall Traefik
$ helm delete wcc-traefik-ingress -n wccns
$ helm delete traefik -n traefik
$ kubectl delete namespace traefik
NGINX
This section provides information about how to install and configure the ingress-based NGINX load balancer to load balance Oracle WebCenter Content domain clusters. You can configure NGINX for non-SSL, SSL termination, and end-to-end SSL access of the application URL.
Follow these steps to set up NGINX as a load balancer for an Oracle WebCenter Content domain in a Kubernetes cluster:
See the official installation document for prerequisites.
To get repository information, enter the following Helm commands:
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ helm repo update
Non-SSL and SSL termination
Install the NGINX load balancer
Deploy the ingress-nginx controller by using Helm in the domain namespace:
$ helm install nginx-ingress -n wccns \
    --set controller.service.type=NodePort \
    --set controller.admissionWebhooks.enabled=false \
    ingress-nginx/ingress-nginx
Sample output:
NAME: nginx-ingress
LAST DEPLOYED: Fri Jul 29 00:14:19 2022
NAMESPACE: wccns
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
Get the application URL by running these commands:
export HTTP_NODE_PORT=$(kubectl --namespace wccns get services -o jsonpath="{.spec.ports[0].nodePort}" nginx-ingress-ingress-nginx-controller)
export HTTPS_NODE_PORT=$(kubectl --namespace wccns get services -o jsonpath="{.spec.ports[1].nodePort}" nginx-ingress-ingress-nginx-controller)
export NODE_IP=$(kubectl --namespace wccns get nodes -o jsonpath="{.items[0].status.addresses[1].address}")
echo "Visit http://$NODE_IP:$HTTP_NODE_PORT to access your application via HTTP."
echo "Visit https://$NODE_IP:$HTTPS_NODE_PORT to access your application via HTTPS."
An example Ingress that makes use of the controller:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: example
namespace: foo
spec:
ingressClassName: nginx
rules:
- host: www.example.com
http:
paths:
- pathType: Prefix
backend:
service:
name: exampleService
port:
number: 80
path: /
# This section is only required if TLS is to be enabled for the Ingress
tls:
- hosts:
- www.example.com
secretName: example-tls
If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:
apiVersion: v1
kind: Secret
metadata:
name: example-tls
namespace: foo
data:
tls.crt: <base64 encoded cert>
tls.key: <base64 encoded key>
type: kubernetes.io/tls
Check the status of the deployed ingress controller:
$ kubectl --namespace wccns get services | grep ingress-nginx-controller
Sample output:
nginx-ingress-ingress-nginx-controller NodePort 10.97.189.122 <none> 80:30993/TCP,443:30232/TCP 7d2h
Configure NGINX to manage ingresses
Create an ingress for the domain in the domain namespace by using the sample Helm chart. Here path-based routing is used for ingress. Sample values for the default configuration are shown in the file ${WORKDIR}/charts/ingress-per-domain/values.yaml. By default, type is TRAEFIK, tls is Non-SSL, and domainType is wccinfra. These values can be overridden by passing values through the command line or can be edited in the sample file values.yaml. If needed, you can update the ingress YAML file to define more path rules (in section spec.rules.host.http.paths) based on the domain application URLs that need to be accessed. Update the template YAML file for the NGINX load balancer located at ${WORKDIR}/charts/ingress-per-domain/templates/nginx-ingress.yaml.
$ cd ${WORKDIR}
$ helm install wccinfra-nginx-ingress charts/ingress-per-domain \
    --namespace wccns \
    --values charts/ingress-per-domain/values.yaml \
    --set "nginx.hostname=$(hostname -f)" \
    --set type=NGINX \
    --set tls=NONSSL
Sample output:
NAME: wccinfra-nginx-ingress
LAST DEPLOYED: Sun Feb 7 23:52:38 2021
NAMESPACE: wccns
STATUS: deployed
REVISION: 1
TEST SUITE: None
For secured access (SSL) to the Oracle WebCenter Content application, create a certificate and generate a Kubernetes secret:
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/tls1.key -out /tmp/tls1.crt \
    -subj "/CN=<your_host_name>" \
    -extensions san -config \
    <(echo "[req]"; echo distinguished_name=req; echo "[san]"; echo subjectAltName=DNS:<your_host_name>)
$ kubectl -n wccns create secret tls domain1-tls-cert --key /tmp/tls1.key --cert /tmp/tls1.crt
Note: The value of CN and subjectAltName is the host on which this ingress is to be deployed.
Install ingress-per-domain using Helm for SSL configuration:
$ cd ${WORKDIR}
$ helm install wccinfra-nginx-ingress charts/ingress-per-domain \
    --namespace wccns \
    --values charts/ingress-per-domain/values.yaml \
    --set "nginx.hostname=$(hostname -f)" \
    --set "nginx.hostnameorip=$(hostname -f)" \
    --set type=NGINX \
    --set tls=SSL
Sample output:
NAME: wccinfra-nginx-ingress
LAST DEPLOYED: Mon Feb 8 00:01:13 2021
NAMESPACE: wccns
STATUS: deployed
REVISION: 1
TEST SUITE: None
For non-SSL or SSL access to the Oracle WebCenter Content application, get the details of the services exposed by the ingress:
$ kubectl describe ingress wccinfra-nginx -n wccns
Sample output of the services supported by the above deployed ingress:
Name: wccinfra-nginx
Namespace: wccns
Address: 10.97.189.122
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
domain1-tls-cert terminates domain1.org
Rules:
Host Path Backends
---- ---- --------
domain1.org
/em wccinfra-adminserver:7001 (10.244.0.58:7001)
/servicebus wccinfra-adminserver:7001 (10.244.0.58:7001)
/cs wccinfra-cluster-ucm-cluster:16200 (10.244.0.60:16200,10.244.0.61:16200)
/adfAuthentication wccinfra-cluster-ucm-cluster:16200 (10.244.0.60:16200,10.244.0.61:16200)
/ibr wccinfra-cluster-ibr-cluster:16250 (10.244.0.59:16250)
/ibr/adfAuthentication wccinfra-cluster-ibr-cluster:16250 (10.244.0.59:16250)
/weblogic/ready wccinfra-cluster-ucm-cluster:16200 (10.244.0.60:16200,10.244.0.61:16200)
/imaging wccinfra-cluster-ipm-cluster:16000 (10.244.0.206:16000,10.244.0.209:16000,10.244.0.213:16000)
/dc-console wccinfra-cluster-capture-cluster:16400 (10.244.0.204:16400,10.244.0.208:16400,10.244.0.212:16400)
/dc-client wccinfra-cluster-capture-cluster:16400 (10.244.0.204:16400,10.244.0.208:16400,10.244.0.212:16400)
/wcc wccinfra-cluster-wccadf-cluster:16225 (10.244.0.205:16225,10.244.0.210:16225,10.244.0.214:16225)
Annotations: kubernetes.io/ingress.class: nginx
meta.helm.sh/release-name: wccinfra-nginx-ingress
meta.helm.sh/release-namespace: wccns
nginx.ingress.kubernetes.io/configuration-snippet:
more_set_input_headers "X-Forwarded-Proto: https";
more_set_input_headers "WL-Proxy-SSL: true";
nginx.ingress.kubernetes.io/ingress.allow-http: false
Events: <none>
Verify non-SSL and SSL termination access
Non-SSL configuration
Verify that the Oracle WebCenter Content domain application URLs are accessible through the LOADBALANCER-Non-SSLPORT:
http://${LOADBALANCER-HOSTNAME}:${LOADBALANCER-Non-SSLPORT}/weblogic/ready
http://${LOADBALANCER-HOSTNAME}:${LOADBALANCER-Non-SSLPORT}/em
http://${LOADBALANCER-HOSTNAME}:${LOADBALANCER-Non-SSLPORT}/cs
http://${LOADBALANCER-HOSTNAME}:${LOADBALANCER-Non-SSLPORT}/ibr
http://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-Non-SSLPORT}/imaging
http://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-Non-SSLPORT}/dc-console
http://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-Non-SSLPORT}/wcc
SSL configuration
Verify that the Oracle WebCenter Content domain application URLs are accessible through the LOADBALANCER-SSLPORT:
https://${LOADBALANCER-HOSTNAME}:${LOADBALANCER-SSLPORT}/weblogic/ready
https://${LOADBALANCER-HOSTNAME}:${LOADBALANCER-SSLPORT}/em
https://${LOADBALANCER-HOSTNAME}:${LOADBALANCER-SSLPORT}/cs
https://${LOADBALANCER-HOSTNAME}:${LOADBALANCER-SSLPORT}/ibr
https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/imaging
https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/dc-console
https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/wcc
Uninstall the ingress
Uninstall and delete the ingress-nginx deployment:
$ helm delete wccinfra-nginx-ingress -n wccns
End-to-end SSL configuration
Install the NGINX load balancer for End-to-end SSL
For secured access (SSL) to the Oracle WebCenter Content application, create a certificate and generate secrets:
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/tls1.key -out /tmp/tls1.crt -subj "/CN=*"
$ kubectl -n wccns create secret tls domain1-tls-cert --key /tmp/tls1.key --cert /tmp/tls1.crt
Deploy the ingress-nginx controller by using Helm in the domain namespace:
$ helm install nginx-ingress -n wccns \
    --set controller.extraArgs.default-ssl-certificate=wccns/domain1-tls-cert \
    --set controller.service.type=NodePort \
    --set controller.admissionWebhooks.enabled=false \
    --set controller.extraArgs.enable-ssl-passthrough=true \
    ingress-nginx/ingress-nginx
Sample output:
NAME: nginx-ingress
LAST DEPLOYED: Thu Sep 8 23:59:54 2022
NAMESPACE: wccns
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
Get the application URL by running these commands:
export HTTP_NODE_PORT=$(kubectl --namespace wccns get services -o jsonpath="{.spec.ports[0].nodePort}" nginx-ingress-ingress-nginx-controller)
export HTTPS_NODE_PORT=$(kubectl --namespace wccns get services -o jsonpath="{.spec.ports[1].nodePort}" nginx-ingress-ingress-nginx-controller)
export NODE_IP=$(kubectl --namespace wccns get nodes -o jsonpath="{.items[0].status.addresses[1].address}")
echo "Visit http://$NODE_IP:$HTTP_NODE_PORT to access your application via HTTP."
echo "Visit https://$NODE_IP:$HTTPS_NODE_PORT to access your application via HTTPS."
An example Ingress that makes use of the controller:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: example
namespace: foo
spec:
ingressClassName: nginx
rules:
- host: www.example.com
http:
paths:
- pathType: Prefix
backend:
service:
name: exampleService
port:
number: 80
path: /
# This section is only required if TLS is to be enabled for the Ingress
tls:
- hosts:
- www.example.com
secretName: example-tls
If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:
apiVersion: v1
kind: Secret
metadata:
name: example-tls
namespace: foo
data:
tls.crt: <base64 encoded cert>
tls.key: <base64 encoded key>
type: kubernetes.io/tls
Check the status of the deployed ingress controller:
$ kubectl --namespace wccns get services | grep ingress-nginx-controller
Sample output:
nginx-ingress-ingress-nginx-controller NodePort 10.97.189.122 <none> 80:30993/TCP,443:30232/TCP 168m
Deploy tls to access individual Managed Servers
Deploy TLS to securely access the services. Only one application can be configured with ssl-passthrough. A sample TLS file for NGINX is shown below for the service wccinfra-cluster-ucm-cluster and port 16201. All the applications running on port 16201 can be securely accessed through this ingress. Create a separate ingress for each backend service, because NGINX does not support multiple paths/rules with the ssl-passthrough annotation. That is, for wccinfra-cluster-ucm-cluster, wccinfra-cluster-ibr-cluster, wccinfra-cluster-ipm-cluster, wccinfra-cluster-capture-cluster, wccinfra-cluster-wccadf-cluster, and wccinfra-adminserver, different ingresses must be created.
Note: There is a limitation with the load balancer in the end-to-end SSL configuration: accessing multiple types of servers (different Managed Servers and/or the Administration Server) at the same time is currently not supported. You can access only one Managed Server at a time.
$ cd ${WORKDIR}/charts/ingress-per-domain/tls
Sample nginx-ucm-tls.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: wcc-ucm-ingress
namespace: wccns
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
tls:
- hosts:
- 'your_host_name'
secretName: domain1-tls-cert
rules:
- host: 'your_host_name'
http:
paths:
- path:
pathType: ImplementationSpecific
backend:
service:
name: wccinfra-cluster-ucm-cluster
port:
number: 16201
Note: The host is the server on which this ingress is deployed.
Deploy the secured ingress:
$ cd ${WORKDIR}/charts/ingress-per-domain/tls
$ kubectl create -f nginx-ucm-tls.yaml
Check the services supported by the ingress:
$ kubectl describe ingress wcc-ucm-ingress -n wccns
Services supported by the ingress:
Name: wcc-ucm-ingress
Namespace: wccns
Address: 10.102.97.237
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
domain1-tls-cert terminates domain1.org
Rules:
Host Path Backends
---- ---- --------
domain1.org
wccinfra-cluster-ucm-cluster:16201 (10.244.238.136:16201,10.244.253.132:16201)
Annotations: kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-passthrough: true
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 62s (x2 over 106s) nginx-ingress-controller Scheduled for sync
Verify end-to-end SSL access
Verify that the Oracle WebCenter Content domain application URLs are accessible through the LOADBALANCER-SSLPORT:
https://${LOADBALANCER-HOSTNAME}:${LOADBALANCER-SSLPORT}/cs
Uninstall ingress-nginx tls
$ cd ${WORKDIR}/charts/ingress-per-domain/tls
$ kubectl delete -f nginx-ucm-tls.yaml
Uninstall NGINX
# Uninstall and delete the ingress-nginx deployment
$ helm delete wccinfra-nginx-ingress -n wccns
# Uninstall NGINX
$ helm delete nginx-ingress -n wccns
Monitor an Oracle WebCenter Content domain
You can monitor a WebCenter Content domain using Prometheus and Grafana by exporting the metrics from the domain instance using the WebLogic Monitoring Exporter.
Set up monitoring for the Oracle WebCenter Content domain
Using the WebLogic Monitoring Exporter, you can scrape runtime information from a running Oracle WebCenter Content Suite instance and monitor it using Prometheus and Grafana. Follow these steps to set up monitoring for an Oracle WebCenter Content Suite instance. For more details on the WebLogic Monitoring Exporter, see here. A sketch of an exporter configuration follows.
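For orientation, here is a minimal sketch of a WebLogic Monitoring Exporter configuration in the format the exporter documents. The metric prefixes and attribute names below are illustrative; the configuration actually shipped with the WebCenter Content samples may differ:
metricsNameSnakeCase: true
queries:
# Walk the ServerRuntime MBean tree and expose server-level metrics
- key: name
  keyName: location
  prefix: wls_server_
  applicationRuntimes:
    key: name
    keyName: app
    componentRuntimes:
      # Expose web application session metrics per component
      prefix: wls_webapp_config_
      type: WebAppComponentRuntime
      key: name
      values: [deploymentState, openSessionsCurrentCount, openSessionsHighCount]
Each query walks the WebLogic runtime MBean tree and exposes the listed attributes as Prometheus metrics with the given prefixes.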
Verify monitoring using Grafana Dashboard
After setup is complete, to view the domain metrics, you can access the Grafana dashboard at http://mycompany.com:32100/.
This displays the WebLogic Server Dashboard.
Elasticsearch integration for logs
Monitor an Oracle WebCenter Content domain and publish the WebLogic Server logs to Elasticsearch.
1. Integrate Elasticsearch to WebLogic Kubernetes Operator
For reference information, see Elasticsearch integration for the WebLogic Kubernetes Operator.
To enable Elasticsearch integration, you must edit the file ${WORKDIR}/charts/weblogic-operator/values.yaml before deploying the WebLogic Kubernetes Operator.
# elkIntegrationEnabled specifies whether or not ELK integration is enabled.
elkIntegrationEnabled: true
# logStashImage specifies the docker image containing logstash.
# This parameter is ignored if 'elkIntegrationEnabled' is false.
logStashImage: "logstash:6.8.23"
# elasticSearchHost specifies the hostname of where Elasticsearch is running.
# This parameter is ignored if 'elkIntegrationEnabled' is false.
elasticSearchHost: "elasticsearch.default.svc.cluster.local"
# elasticSearchPort specifies the port number of where Elasticsearch is running.
# This parameter is ignored if 'elkIntegrationEnabled' is false.
elasticSearchPort: 9200
After you have deployed the WebLogic Kubernetes Operator with the above changes, the weblogic-operator pod will have an additional Logstash container. The Logstash container pushes the weblogic-operator logs to the configured Elasticsearch server.
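To confirm the extra container, you can list the container names in the operator pod; a minimal check, assuming the default app=weblogic-operator label from the operator Helm chart and your own operator namespace:
# List container names inside the operator pod(s)
$ kubectl get pods -n <operator-namespace> -l app=weblogic-operator \
    -o jsonpath='{.items[*].spec.containers[*].name}'
The output should list both the weblogic-operator and logstash containers.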
2. Publish WebLogic Server and WebCenter Content Logs using Logstash Pod
You can publish the WebLogic Server logs to the Elasticsearch server using a Logstash pod. This Logstash pod must have access to the shared domain home. For the WebCenter Content domain wccinfra, you can use the persistent volume of the domain home in the Logstash pod. The steps to create the Logstash pod are as follows:
Get the persistent volume details of the domain home of the WebLogic Servers. The following command lists the persistent volume details in the namespace wccns:
$ kubectl get pv -n wccns
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
wccinfra-domain-pv 10Gi RWX Retain Bound wccns/wccinfra-domain-pvc wccinfra-domain-storage-class 33d
Create the deployment YAML for the Logstash pod by updating logstash.yaml, located at $WORKDIR/logging-services/logstash/logstash.yaml, according to your configuration. The mounted persistent volume of the domain home gives the Logstash pod access to the WebLogic Server logs. Below is a sample Logstash deployment YAML.
apiVersion: apps/v1
kind: Deployment
metadata:
name: logstash
namespace: wccns
spec:
selector:
matchLabels:
app: logstash
template: # create pods using pod definition in this template
metadata:
labels:
app: logstash
spec:
volumes:
- name: weblogic-domain-storage-volume
persistentVolumeClaim:
claimName: wccinfra-domain-pvc
- name: shared-logs
emptyDir: {}
containers:
- name: logstash
image: logstash:6.8.23
command: ["/bin/sh"]
args: ["/usr/share/logstash/bin/logstash", "-f", "/u01/oracle/user_projects/domains/logstash.conf"]
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /u01/oracle/user_projects/domains
name: weblogic-domain-storage-volume
- name: shared-logs
mountPath: /shared-logs
ports:
- containerPort: 5044
name: logstash
A sample Logstash configuration file is located at $WORKDIR/logging-services/logstash/logstash.conf:
$ vi $WORKDIR/logging-services/logstash/logstash.conf
input {
file {
path => "/u01/oracle/user_projects/domains/wccinfra/servers/**/logs/*-diagnostic.log"
start_position => beginning
}
file {
path => "/u01/oracle/user_projects/domains/logs/wccinfra/*.log"
start_position => beginning
}
}
filter {
grok {
match => [ "message", "<%{DATA:log_timestamp}> <%{WORD:log_level}> <%{WORD:thread}> <%{HOSTNAME:hostname}> <%{HOSTNAME:servername}> <%{DATA:timer}> <<%{DATA:kernel}>> <> <%{DATA:uuid}> <%{NUMBER:timestamp}> <%{DATA:misc}> <%{DATA:log_number}> <%{DATA:log_message}>" ]
}
}
output {
elasticsearch {
hosts => ["elasticsearch.default.svc.cluster.local:9200"]
}
}
This sample configuration publishes all server and diagnostic logs under wccinfra to Logstash. Copy logstash.conf to the domain home directory on the shared persistent volume, using the Administration Server pod:
$ kubectl cp $WORKDIR/logging-services/logstash/logstash.conf wccns/wccinfra-adminserver:/u01/oracle/user_projects/domains/logstash.conf
Deploy Logstash pod
After you have created the Logstash deployment YAML and the Logstash configuration file, deploy Logstash using the following command:
$ kubectl create -f $WORKDIR/logging-services/logstash/logstash.yaml
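You can confirm that the Logstash pod has started by selecting it through the app=logstash label used in the sample deployment above:
# The pod should reach the Running state before logs start flowing
$ kubectl get pods -n wccns -l app=logstash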
3. Test the deployment of Elasticsearch and Kibana
The WebLogic Kubernetes Operator also provides a sample deployment of Elasticsearch and Kibana for testing purposes. You can deploy Elasticsearch and Kibana on the Kubernetes cluster as shown below:
$ cd ${WORKDIR}/elasticsearch-and-kibana/
$ kubectl create -f elasticsearch_and_kibana.yaml
Get the Kibana dashboard port information as shown below:
Wait for pods to start:
-bash-4.2$ kubectl get pods -w
NAME READY STATUS RESTARTS AGE
elasticsearch-8bdb7cf54-mjs6s 1/1 Running 0 4m3s
kibana-dbf8964b6-n8rcj 1/1 Running 0 4m3s
-bash-4.2$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch ClusterIP 10.105.205.157 <none> 9200/TCP,9300/TCP 10d
kibana NodePort 10.98.104.41 <none> 5601:30412/TCP 10d
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 42d
You can access the Kibana dashboard at http://<your_hostname>:30412/. In this example, the node port is 30412.
Create an Index Pattern in Kibana
Create an index pattern logstash-* in Kibana > Management. After the servers are started, you will see the log data in the Kibana dashboard.
Publish logs to Elasticsearch
The WebLogic Logging Exporter adds a log event handler to WebLogic Server. WebLogic Server logs can be pushed to Elasticsearch in Kubernetes directly by using the Elasticsearch REST API. For more details, see the WebLogic Logging Exporter project.
This sample shows you how to publish WebLogic Server logs to Elasticsearch and view them in Kibana. For publishing WebLogic Kubernetes Operator logs, see this sample.
Prerequisites
This document assumes that you have already set up Elasticsearch and Kibana for logs collection. If you have not, please see this document.
Download the WebLogic Logging Exporter binaries
The pre-built binaries are available on the WebLogic Logging Exporter Releases page.
Download:
- weblogic-logging-exporter-1.0.1.jar from the Releases page.
- snakeyaml-1.27.jar from Maven Central.
$ wget https://github.com/oracle/weblogic-logging-exporter/releases/download/v1.0.1/weblogic-logging-exporter.jar
$ wget -O snakeyaml-1.27.jar https://search.maven.org/remotecontent?filepath=org/yaml/snakeyaml/1.27/snakeyaml-1.27.jar
Note: These identifiers are used in the sample commands in this document:
- wccns: WebCenter Content domain namespace
- wccinfra: domainUID
- wccinfra-adminserver: Administration Server pod name
Copy the JAR Files to the WebLogic Domain Home
Copy the weblogic-logging-exporter.jar and snakeyaml-1.27.jar files to the domain home directory in the Administration Server pod.
$ kubectl cp <file-to-copy> <namespace>/<administration-server-pod>:<domainhome>
$ kubectl cp weblogic-logging-exporter.jar wccns/wccinfra-adminserver:/u01/oracle/user_projects/domains/wccinfra/
$ kubectl cp snakeyaml-1.27.jar wccns/wccinfra-adminserver:/u01/oracle/user_projects/domains/wccinfra/
Add a Startup Class to the Domain Configuration
In this step, we configure weblogic-logging-exporter JAR as a startup class in the WebLogic servers where we intend to collect the logs.
In the WebLogic Remote Console, in the left navigation pane, expand Environment, and then select Startup and Shutdown Classes.
Add a new startup class. You may choose any descriptive name; however, the class name must be weblogic.logging.exporter.Startup.
Target the startup class to each server from which you want to export logs.
You can verify this by checking for the update in your config.xml file (/u01/oracle/user_projects/domains/wccinfra/config/config.xml), which should be similar to this example:
$ kubectl exec -n wccns -it wccinfra-adminserver -- cat /u01/oracle/user_projects/domains/wccinfra/config/config.xml
<startup-class>
  <name>weblogic-logging-exporter</name>
  <target>adminServer,ucm_cluster,ibr_cluster,ipm_cluster,capture_cluster,wccadf_cluster</target>
  <class-name>weblogic.logging.exporter.Startup</class-name>
</startup-class>
Update the WebLogic Server CLASSPATH
Copy the setDomainEnv.sh file from the pod to a local folder:
$ kubectl cp wccns/wccinfra-adminserver:/u01/oracle/user_projects/domains/wccinfra/bin/setDomainEnv.sh $PWD/setDomainEnv.sh
If the following message appears in the output, it can be ignored:
tar: Removing leading '/' from member names
Modify setDomainEnv.sh to update the server classpath by adding the following lines at the end of the file:
CLASSPATH=/u01/oracle/user_projects/domains/wccinfra/weblogic-logging-exporter.jar:/u01/oracle/user_projects/domains/wccinfra/snakeyaml-1.27.jar:${CLASSPATH}
export CLASSPATH
Copy the modified setDomainEnv.sh file back to the pod:
$ kubectl cp setDomainEnv.sh wccns/wccinfra-adminserver:/u01/oracle/user_projects/domains/wccinfra/bin/setDomainEnv.sh
Create a Configuration File for the WebLogic Logging Exporter
In this step, we will be creating the configuration file for weblogic-logging-exporter.
Specify the Elasticsearch server host and port number in the file $WORKDIR/logging-services/weblogic-logging-exporter/WebLogicLoggingExporter.yaml.
Sample:
weblogicLoggingIndexName: wls
publishHost: elasticsearch.default.svc.cluster.local
publishPort: 9200
domainUID: wccinfra
weblogicLoggingExporterEnabled: true
weblogicLoggingExporterSeverity: Notice
weblogicLoggingExporterBulkSize: 1
weblogicLoggingExporterFilters:
- FilterExpression: NOT(MSGID = 'BEA-000449')
Copy the WebLogicLoggingExporter.yaml file to the domain home directory in the WebLogic Administration Server pod:
$ kubectl cp ${WORKDIR}/logging-services/weblogic-logging-exporter/WebLogicLoggingExporter.yaml wccns/wccinfra-adminserver:/u01/oracle/user_projects/domains/wccinfra/config/
Restart All the Servers in the Domain
To restart the servers, stop and then start them using the following commands:
To STOP the servers:
$ kubectl patch domain wccinfra -n wccns --type='json' -p='[{"op": "replace", "path": "/spec/serverStartPolicy", "value": "NEVER" }]'
To START the servers:
$ kubectl patch domain wccinfra -n wccns --type='json' -p='[{"op": "replace", "path": "/spec/serverStartPolicy", "value": "IF_NEEDED" }]'
After all the servers are restarted, check their server logs to verify that the weblogic-logging-exporter class is called, as shown below:
======================= Weblogic Logging Exporter Startup class called
================== Reading configuration from file name: /u01/oracle/user_projects/domains/wccinfra/config/WebLogicLoggingExporter.yaml
Config{weblogicLoggingIndexName='wls', publishHost='elasticsearch.default.svc.cluster.local', publishPort=9200, weblogicLoggingExporterSeverity='Notice', weblogicLoggingExporterBulkSize='1', enabled=true, weblogicLoggingExporterFilters=[
FilterConfig{expression='NOT(MSGID = 'BEA-000449')', servers=[]}], domainUID='wccinfra'}
====================== WebLogic Logging Exporter is enabled
================= publishHost in initialize: elasticsearch.default.svc.cluster.local
================= publishPort in initialize: 9200
================= url in executePutOrPostOnUrl: http://elasticsearch.default.svc.cluster.local:9200/wls
Create an Index Pattern in Kibana
Create an appropriate index pattern in Kibana > Management. After the servers are started, you will see the log data in the Kibana dashboard. You can also confirm that the index is being populated, as shown below.
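For example, you can count the documents in the index from inside the cluster; a minimal check, assuming curl is available in the Administration Server pod (wls is the index name configured in WebLogicLoggingExporter.yaml):
# Query the Elasticsearch _count API for the wls index
$ kubectl exec -n wccns wccinfra-adminserver -- \
    curl -s "http://elasticsearch.default.svc.cluster.local:9200/wls/_count?pretty"
A growing count value indicates that log records are being published.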
Publish logs to Elasticsearch Using Fluentd
Introduction
This page describes how to configure a WebLogic domain to use Fluentd to send log information to Elasticsearch. Here is the general mechanism for how this works:
- fluentd runs as a separate container in the Administration Server and Managed Server pods
- The log files reside on a volume that is shared between the weblogic-server and fluentd containers
- fluentd tails the domain log files and exports them to Elasticsearch
- A ConfigMap contains the filter and format rules for exporting log records.
Create fluentd configuration
Create a ConfigMap named fluentd-config in the namespace of the domain. The ConfigMap contains the parsing rules and the Elasticsearch configuration. Here is an explanation of some elements defined in the ConfigMap:
- The @type tail indicates that tail will be used to obtain updates to the log file
- The path of the log file is obtained from the LOG_PATH environment variable that is defined in the fluentd container
- The tag value of log records is obtained from the DOMAIN_UID environment variable that is defined in the fluentd container
- The parse section defines how to interpret and tag each element of a log record
- The match section contains the configuration information for connecting to Elasticsearch and defines the index name of each record to be the domainUID
Here is a sample ConfigMap for the fluentd configuration, fluentd_configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
labels:
weblogic.domainUID: wccinfra
weblogic.resourceVersion: domain-v2
name: fluentd-config
namespace: wccns
data:
fluentd.conf: |
<match fluent.**>
@type null
</match>
<source>
@type tail
path "#{ENV['LOG_PATH']}"
pos_file /tmp/server.log.pos
read_from_head true
tag "#{ENV['DOMAIN_UID']}"
# multiline_flush_interval 20s
<parse>
@type multiline
format_firstline /^####/
format1 /^####<(?<timestamp>(.*?))>/
format2 / <(?<level>(.*?))>/
format3 / <(?<subSystem>(.*?))>/
format4 / <(?<serverName>(.*?))>/
format5 / <(?<serverName2>(.*?))>/
format6 / <(?<threadName>(.*?))>/
format7 / <(?<info1>(.*?))>/
format8 / <(?<info2>(.*?))>/
format9 / <(?<info3>(.*?))>/
format10 / <(?<sequenceNumber>(.*?))>/
format11 / <(?<severity>(.*?))>/
format12 / <(?<messageID>(.*?))>/
format13 / <(?<message>(.*?))>/
</parse>
</source>
<match **>
@type elasticsearch
host "#{ENV['ELASTICSEARCH_HOST']}"
port "#{ENV['ELASTICSEARCH_PORT']}"
user "#{ENV['ELASTICSEARCH_USER']}"
password "#{ENV['ELASTICSEARCH_PASSWORD']}"
index_name "#{ENV['DOMAIN_UID']}"
</match>
Create the ConfigMap using the following command:
$ kubectl create -f fluentd_configmap.yaml
Mount the fluentd configuration ConfigMap as a volume in the WebLogic container
Edit the domain definition and configure a volume for the ConfigMap containing the fluentd configuration:
$ kubectl edit domain wccinfra -n wccns
The sample YAML below adds the ConfigMap as a volume:
volumes:
- name: weblogic-domain-storage-volume
persistentVolumeClaim:
claimName: wccinfra-domain-pvc
- configMap:
defaultMode: 420
name: fluentd-config
name: fluentd-config-volume
Add fluentd container to WebLogic Server pods
Add a fluentd container to the domain under the serverPod: section; this container will run fluentd in the Administration Server and Managed Server pods.
Note that the container definition:
- Defines a LOG_PATH environment variable that points to the log location of the WebLogic Servers.
- Defines ELASTICSEARCH_HOST, ELASTICSEARCH_PORT, ELASTICSEARCH_USER, and ELASTICSEARCH_PASSWORD environment variables.
- Has volume mounts for the fluentd-config ConfigMap and the volume containing the domain logs.
$ kubectl edit domain wccinfra -n wccns
Sample fluentd container YAML:
containers:
- args:
- -c
- /etc/fluent.conf
env:
- name: DOMAIN_UID
valueFrom:
fieldRef:
fieldPath: metadata.labels['weblogic.domainUID']
- name: SERVER_NAME
valueFrom:
fieldRef:
fieldPath: metadata.labels['weblogic.serverName']
- name: LOG_PATH
value: /u01/oracle/user_projects/domains/logs/wccinfra/$(SERVER_NAME).log
- name: FLUENTD_CONF
value: fluentd.conf
- name: FLUENT_ELASTICSEARCH_SED_DISABLE
value: "true"
- name: ELASTICSEARCH_HOST
value: elasticsearch.default.svc.cluster.local
- name: ELASTICSEARCH_PORT
value: "9200"
- name: ELASTICSEARCH_USER
value: elastic
- name: ELASTICSEARCH_PASSWORD
value: changeme
image: fluent/fluentd-kubernetes-daemonset:v1.3.3-debian-elasticsearch-1.3
imagePullPolicy: IfNotPresent
name: fluentd
resources: {}
volumeMounts:
- mountPath: /fluentd/etc/fluentd.conf
name: fluentd-config-volume
subPath: fluentd.conf
- mountPath: /u01/oracle/user_projects/domains
name: weblogic-domain-storage-volume
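After the pods restart with the new configuration, each server pod should run two containers. A minimal check against the Administration Server pod:
# List the container names in the Administration Server pod
$ kubectl get pod wccinfra-adminserver -n wccns -o jsonpath='{.spec.containers[*].name}'
weblogic-server fluentd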
Restart WebLogic Servers
To restart the servers, edit the domain and change serverStartPolicy to NEVER so that the WebLogic Servers shut down:
$ kubectl edit domain wccinfra -n wccns
After all the servers are shut down, edit the domain again and set serverStartPolicy to IF_NEEDED so that the servers start again. Equivalently, you can patch the domain resource, as shown below.
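For instance, the same stop/start cycle can be done non-interactively with kubectl patch, mirroring the restart commands used earlier in this document:
$ kubectl patch domain wccinfra -n wccns --type='json' -p='[{"op": "replace", "path": "/spec/serverStartPolicy", "value": "NEVER" }]'
# After all the server pods have terminated, start the servers again:
$ kubectl patch domain wccinfra -n wccns --type='json' -p='[{"op": "replace", "path": "/spec/serverStartPolicy", "value": "IF_NEEDED" }]'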
Create index pattern in Kibana
Create an index pattern "wccinfra*" in Kibana > Management. After the servers start, you will be able to see the log data in the Kibana dashboard.
Configure an additional mount or shared space to a domain for Imaging and Capture
A volume can be mounted to a server pod and made directly accessible from outside the Kubernetes cluster, so that an external application can write new files to it.
This is useful specifically for the WebCenter Imaging and WebCenter Capture applications for file imports.
Kubernetes supports several types of volumes, as described in Volumes | Kubernetes. In the remainder of this section, an nfs volume is used as an example.
Mount “nfs” as volume
To use a volume, specify the volumes to provide for the pod in .spec.volumes and declare where to mount those volumes into containers in .spec.containers[*].volumeMounts in the domain.yaml file.
Update domain.yaml and apply the changes as shown in the sample below for mounting an NFS server (for example, 100.XXX.XXX.X with the shared export path at /sharedir) to all the server pods at /u01/sharedir.
The path /u01/sharedir can be configured as the file import path in the WebCenter Imaging and WebCenter Capture applications, and the files placed in /sharedir will be processed by the applications.
Sample entry of domain.yaml with the nfs-volume configuration:
...
serverPod:
# an (optional) list of environment variable to be set on the servers
env:
- name: JAVA_OPTIONS
value: "-Dweblogic.StdoutDebugEnabled=false"
- name: USER_MEM_ARGS
value: "-Djava.security.egd=file:/dev/./urandom -Xms256m -Xmx1024m "
volumes:
- name: weblogic-domain-storage-volume
persistentVolumeClaim:
claimName: wccinfra-domain-pvc
- name: nfs-volume
nfs:
server: 100.XXX.XXX.XXX
path: /sharedir
volumeMounts:
- mountPath: /u01/oracle/user_projects/domains
name: weblogic-domain-storage-volume
- mountPath: /u01/sharedir
name: nfs-volume
...
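After the servers restart, you can optionally confirm the NFS volume is mounted inside a server pod. A minimal sketch; the pod name wccinfra-adminserver is an assumption based on the domainUID used in this guide:
# Verify the share is mounted and readable inside the pod (assumed pod name)
$ kubectl exec -n wccns wccinfra-adminserver -- df -h /u01/sharedir
$ kubectl exec -n wccns wccinfra-adminserver -- ls -l /u01/sharedir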
Patch and Upgrade
Patch an existing Oracle WebCenter Content image or upgrade the infrastructure, such as upgrading the underlying Kubernetes cluster to a new release and upgrading the WebLogic Kubernetes Operator release.
Patch an Oracle WebCenter Content product Docker image
Upgrade the underlying Oracle WebCenter Content product image in a running Oracle WebCenter Content Kubernetes environment.
These instructions describe how to upgrade a new release of Oracle WebCenter Content product Docker image in a running Oracle WebCenter Content Kubernetes environment. A rolling upgrade approach is used to upgrade managed server pods of the domain.
Note: Zero downtime is expected because a rolling upgrade approach is used.
Prerequisites
- Make sure the Oracle WebCenter Content domain is created and that all the Administration Server and Managed Server pods are up and running.
- Make sure the database used for the Oracle WebCenter Content domain deployment is up and running during the upgrade process.
Recommendations:
- Use the WebLogic Image Tool create feature for patching the Oracle WebCenter Content Docker image with a bundle patch and multiple interim patches. This is the recommended approach because it optimizes the size of the image.
- Use the WebLogic Image Tool update feature for patching the Oracle WebCenter Content Docker image with a single interim patch. Note that the patched image size may increase considerably due to additional image layers introduced by the patch application tool.
Apply the patched image
Update the image: field in the domain.yaml configuration file with the patched image.
Apply the updated domain.yaml configuration file:
$ kubectl apply -f domain.yaml
Note: The server pods will be automatically restarted (rolling restart).
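Alternatively, the image can be updated in place with a patch instead of editing domain.yaml. A minimal sketch, assuming the domain wccinfra in the wccns namespace and the sample patched image tag used later in this guide:
# Point the domain at the patched image; the operator performs a rolling restart
$ kubectl patch domain wccinfra -n wccns --type='json' -p='[{"op": "replace", "path": "/spec/image", "value": "oracle/wccontent_update_1015:14.1.2.0.0" }]'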
Upgrade an operator release
Upgrade the WebLogic Kubernetes Operator release to a newer version.
These instructions apply to upgrading operators within the 4.x release family as additional versions are released.
To upgrade the Kubernetes operator, use the helm upgrade
command. When upgrading the operator, the helm upgrade
command requires that you supply a new Helm chart and image. For example:
$ helm upgrade \
--reuse-values \
--set image=oracle/weblogic-kubernetes-operator:4.2.9 \
--namespace weblogic-operator-namespace \
--wait \
weblogic-operator \
kubernetes/charts/weblogic-operator
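After the upgrade completes, you can confirm the operator release and pod status; a minimal check, assuming the operator namespace used above:
$ helm list -n weblogic-operator-namespace
$ kubectl get pods -n weblogic-operator-namespace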
Upgrade a Kubernetes cluster
Upgrade the underlying Kubernetes cluster version in a running Oracle WebCenter Content Kubernetes environment.
These instructions describe how to upgrade a Kubernetes cluster created using kubeadm
on which an Oracle WebCenter Content domain is deployed. A rolling upgrade approach is used to upgrade nodes (master and worker) of the Kubernetes cluster.
Warning: Downtime is expected during the upgrade of the Kubernetes cluster, because the nodes need to be drained as part of the upgrade process.
Prerequisites
- Review Prerequisites and ensure that your Kubernetes cluster is ready for upgrade. Make sure your environment meets all prerequisites.
- Make sure the database used for the Oracle WebCenter Content domain deployment is up and running during the upgrade process.
Upgrade the Kubernetes version
An upgrade of Kubernetes is supported from one MINOR version to the next MINOR version, or between PATCH versions of the same MINOR. For example, you can upgrade from 1.x to 1.x+1, but not from 1.x to 1.x+2. To upgrade a Kubernetes version, first all the master nodes of the Kubernetes cluster must be upgraded sequentially, followed by the sequential upgrade of each worker node.
- See here for Kubernetes official documentation to upgrade from 1.28 to 1.29
- See here for Kubernetes official documentation to upgrade from 1.27 to 1.28
- See here for Kubernetes official documentation to upgrade from 1.26 to 1.27
- See here for Kubernetes official documentation to upgrade from 1.25 to 1.26
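As a high-level sketch of the per-node flow (the exact package upgrade steps and target version are in the official documentation linked above; v1.29.x is a placeholder):
# On the first control plane (master) node
$ kubeadm upgrade plan
$ kubeadm upgrade apply v1.29.x
# For each worker node: drain it from a control plane node,
# upgrade the kubeadm/kubelet packages on the node, then uncordon it
$ kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
$ kubectl uncordon <node-name>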
Create or update an image
This section describes how to create or update an Oracle WebCenter Content Docker image used for deploying Oracle WebCenter Content domains. An Oracle WebCenter Content Docker image can be created using the WebLogic Image Tool.
If you have access to My Oracle Support (MOS) and need to build a new image with a patch (bundle or interim), it is recommended to use the WebLogic Image Tool to build an Oracle WebCenter Content image for production deployments.
Create or update an Oracle WebCenter Content Docker image using the WebLogic Image Tool
Using the WebLogic Image Tool, you can create a new Oracle WebCenter Content Docker image (can include patches as well) or update an existing image with one or more patches (bundle patch and interim patches).
Recommendations:
- Use create for creating a new Oracle WebCenter Content Docker image either:
  - without any patches, or
  - containing the Oracle WebCenter Content binaries, bundle patch, and interim patches. This is the recommended approach if you have access to the Oracle WebCenter Content patches, because it optimizes the size of the image.
- Use update for patching an existing Oracle WebCenter Content Docker image with a single interim patch. Note that the patched image size may increase considerably due to additional image layers introduced by the patch application tool.
Set up the WebLogic Image Tool
- Prerequisites
- Set up the WebLogic Image Tool
- Validate setup
- WebLogic Image Tool build directory
- WebLogic Image Tool cache
- Set up additional build scripts
Prerequisites
Verify that your environment meets the following prerequisites:
- Docker client and daemon on the build machine, with minimum Docker version 19.03.1.
- Bash version 4.0 or later, to enable the command completion feature.
- JAVA_HOME environment variable set to the appropriate JDK location.
Set up the WebLogic Image Tool
To set up the WebLogic Image Tool:
Create a working directory and change to it. In these steps, this directory is imagetool-setup.
$ mkdir imagetool-setup
$ cd imagetool-setup
Download the latest version of the WebLogic Image Tool from the releases page.
Unzip the release ZIP file to the imagetool-setup directory.
Execute the following commands to set up the WebLogic Image Tool on a Linux environment:
$ cd imagetool-setup/imagetool/bin
$ source setup.sh
Validate setup
To validate the setup of the WebLogic Image Tool:
Enter the following command to retrieve the version of the WebLogic Image Tool:
$ imagetool --version
Enter imagetool then press the Tab key to display the available imagetool commands:
$ imagetool <TAB>
cache   create   help   rebase   update
WebLogic Image Tool build directory
The WebLogic Image Tool creates a temporary Docker context directory, prefixed by wlsimgbuilder_temp, every time the tool runs. Under normal circumstances, this context directory will be deleted. However, if the process is aborted or the tool is unable to remove the directory, it is safe for you to delete it manually. By default, the WebLogic Image Tool creates the Docker context directory under the user’s home directory. If you prefer to use a different directory for the temporary context, set the environment variable WLSIMG_BLDDIR:
$ export WLSIMG_BLDDIR="/path/to/build/dir"
WebLogic Image Tool cache
The WebLogic Image Tool maintains a local file cache store. This store is used to look up where the Java and WebLogic Server installers, and the WebLogic Server patches, reside in the local file system. By default, the cache store is located in the user’s $HOME/cache directory. Under this directory, the lookup information is stored in the .metadata file. All automatically downloaded patches also reside in this directory. You can change the default cache store location by setting the environment variable WLSIMG_CACHEDIR:
$ export WLSIMG_CACHEDIR="/path/to/cachedir"
Set up additional build scripts
Creating an Oracle WebCenter Content Docker image using the WebLogic Image Tool requires additional container scripts for Oracle WebCenter Content domains.
Clone the docker-images repository to set up those scripts. In these steps, this directory is DOCKER_REPO:
$ cd imagetool-setup
$ git clone https://github.com/oracle/docker-images.git
Copy the additional WebLogic Image Tool build files from the WebLogic Kubernetes Operator source repository to the imagetool-setup location:
$ mkdir -p imagetool-setup/docker-images/OracleWebCenterContent/imagetool/14.1.2.0.0
$ cd imagetool-setup/docker-images/OracleWebCenterContent/imagetool/14.1.2.0.0
$ cp -rf ${WORKDIR}/weblogic-kubernetes-operator/kubernetes/samples/scripts/imagetool-scripts/* .
Create an image
After setting up the WebLogic Image Tool and required build scripts, follow these steps to use the WebLogic Image Tool to create
a new Oracle WebCenter Content Docker image.
Download the Oracle WebCenter Content installation binaries and patches
You must download the required Oracle WebCenter Content installation binaries and patches, as listed below, from the Oracle Software Delivery Cloud and save them in a directory of your choice. In these steps, this directory is download location.
Sample list of installation binaries and patches:
- JDK:
  - jdk-17.0.9+10_linux-x64_bin.tar.gz
- Fusion Middleware Infrastructure installer:
  - fmw_14.1.2.0.0_infrastructure_generic.jar
- WebCenter Content installer:
  - fmw_14.1.2.0.0_wccontent_generic.jar
- Fusion Middleware Infrastructure patches:
  - if any (something similar to p28186abc_139428_Generic-23574493.zip (OPatch))
- WebCenter Content patches:
  - if any (something similar to p33578xyz_141200_Generic.zip (wcc))
Note: This is a sample list of patches. You must get the appropriate list of patches for your Oracle WebCenter Content image.
Update required build files
The following files, available in the code repository location <imagetool-setup-location>/docker-images/OracleWebCenterContent/imagetool/14.1.2.0.0, are used for creating the image:
- additionalBuildCmds.txt
- buildArgs
In the buildArgs file, update all the occurrences of %DOCKER_REPO% with the docker-images repository location, which is the complete path of imagetool-setup/docker-images.
For example, update:
%DOCKER_REPO%/OracleWebCenterContent/imagetool/14.1.2.0.0/
to:
<imagetool-setup-location>/docker-images/OracleWebCenterContent/imagetool/14.1.2.0.0/
Similarly, update the placeholders %JDK_VERSION% and %BUILDTAG% with appropriate values.
Create the image
Add a JDK package to the WebLogic Image Tool cache:
$ imagetool cache addInstaller --type jdk --version 17.0.9-10 --path <download location>/jdk-17.0.9+10_linux-x64_bin.tar.gz
Add the downloaded installation binaries to the WebLogic Image Tool cache:
$ imagetool cache addInstaller --type fmw --version 14.1.2.0.0 --path <download location>/fmw_14.1.2.0.0_infrastructure_generic.jar $ imagetool cache addInstaller --type wcc --version 14.1.2.0.0 --path <download location>/fmw_14.1.2.0.0_wccontent_generic.jar
Add the downloaded patches to the WebLogic Image Tool cache:
Commands to add patches into the cache:
$ imagetool cache addEntry --key p33578xyz_141200_Generic --path <download location>/p33578xyz_141200_Generic.zip
$ imagetool cache addEntry --key 28186abc_13.9.4.2.8 --path <download location>/p28186abc_139428_Generic-24497645.zip
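You can confirm the installers and patches were registered by listing the cache contents with the standard cache listItems command:
$ imagetool cache listItems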
Update the patches list in buildArgs.
To the create command in the buildArgs file, append the Oracle WebCenter Content patches list using the --patches flag, and the OPatch patch using the --opatchBugNumber flag. Sample options for the list of patches above are:
--patches 33578xyz_14.1.2.0.0 --opatchBugNumber=28186abc_13.9.4.2.8
Example buildArgs file after appending the product’s list of patches and the OPatch patch:
create
--jdkVersion=17.0.9-10
--type WCC
--version=14.1.2.0.0
--tag=oracle/wccontent_create_1015:14.1.2.0.0
--pull
--chown oracle:root
--additionalBuildCommands <imagetool-setup-location>/docker-images/OracleWebCenterContent/imagetool/14.1.2.0.0/additionalBuildCmds.txt
--additionalBuildFiles <imagetool-setup-location>/docker-images/OracleWebCenterContent/dockerfiles/14.1.2.0.0/container-scripts
--patches 33578xyz_14.1.2.0.0
--opatchBugNumber=28186abc_13.9.4.2.8
Refer to this page for the complete list of options available with the WebLogic Image Tool create command.
Enter the following command to create the Oracle WebCenter Content image:
$ imagetool @<absolute path to buildArgs file>
Sample Dockerfile generated with the imagetool command:
########## BEGIN DOCKERFILE ##########
#
# Copyright (c) 2023, Oracle and/or its affiliates.
#
# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.
#
#
FROM ghcr.io/oracle/oraclelinux:8-slim as os_update
LABEL com.oracle.weblogic.imagetool.buildid="f46ab190-077e-4ed7-b747-7bb170fe592c"
USER root
RUN yum -y --downloaddir=/tmp/imagetool install gzip tar unzip libaio jq hostname \
&& yum -y --downloaddir=/tmp/imagetool clean all \
&& rm -rf /var/cache/yum/* \
&& rm -rf /tmp/imagetool
## Create user and group
RUN if [ -z "$(getent group root)" ]; then hash groupadd &> /dev/null && groupadd root || exit -1 ; fi \
&& if [ -z "$(getent passwd oracle)" ]; then hash useradd &> /dev/null && useradd -g root oracle || exit -1; fi \
&& mkdir -p /u01 \
&& chown oracle:root /u01 \
&& chmod 775 /u01
# Install Java
FROM os_update as jdk_build
LABEL com.oracle.weblogic.imagetool.buildid="f46ab190-077e-4ed7-b747-7bb170fe592c"
ENV JAVA_HOME=/u01/jdk
COPY --chown=oracle:root jdk-17.0.9-10-linux-x64.tar.gz /tmp/imagetool/
USER oracle
RUN tar xzf /tmp/imagetool/jdk-17.0.9-10-linux-x64.tar.gz -C /u01 \
&& $(test -d /u01/jdk* && mv /u01/jdk* /u01/jdk || mv /u01/graal* /u01/jdk) \
&& rm -rf /tmp/imagetool \
&& rm -f /u01/jdk/javafx-src.zip /u01/jdk/src.zip
# Install Middleware
FROM os_update as wls_build
LABEL com.oracle.weblogic.imagetool.buildid="f46ab190-077e-4ed7-b747-7bb170fe592c"
ENV JAVA_HOME=/u01/jdk \
    ORACLE_HOME=/u01/oracle \
    OPATCH_NO_FUSER=true
RUN mkdir -p /u01/oracle \
&& mkdir -p /u01/oracle/oraInventory \
&& chown oracle:root /u01/oracle/oraInventory \
&& chown oracle:root /u01/oracle
COPY --from=jdk_build --chown=oracle:root /u01/jdk /u01/jdk/
COPY --chown=oracle:root fmw_14.1.2.0.0_infrastructure_generic.jar fmw.rsp /tmp/imagetool/
COPY --chown=oracle:root fmw_14.1.2.0.0_wccontent.jar wcc.rsp /tmp/imagetool/
COPY --chown=oracle:root oraInst.loc /u01/oracle/
USER oracle
RUN echo "INSTALLING MIDDLEWARE" \
&& echo "INSTALLING fmw" \
&& \
/u01/jdk/bin/java -Xmx1024m -jar /tmp/imagetool/fmw_14.1.2.0.0_infrastructure_generic.jar -silent ORACLE_HOME=/u01/oracle \
-responseFile /tmp/imagetool/fmw.rsp -invPtrLoc /u01/oracle/oraInst.loc -ignoreSysPrereqs -force -novalidation \
&& echo "INSTALLING wcc" \
&& \
/u01/jdk/bin/java -Xmx1024m -jar /tmp/imagetool/fmw_14.1.2.0.0_wccontent.jar -silent ORACLE_HOME=/u01/oracle \
-responseFile /tmp/imagetool/wcc.rsp -invPtrLoc /u01/oracle/oraInst.loc -ignoreSysPrereqs -force -novalidation \
&& chmod -R g+r /u01/oracle
FROM os_update as final_build
ARG ADMIN_NAME
ARG ADMIN_HOST
ARG ADMIN_PORT
ARG MANAGED_SERVER_PORT
ENV ORACLE_HOME=/u01/oracle \
    JAVA_HOME=/u01/jdk \
    PATH=${PATH}:/u01/jdk/bin:/u01/oracle/oracle_common/common/bin:/u01/oracle/wlserver/common/bin:/u01/oracle
LABEL com.oracle.weblogic.imagetool.buildid="f46ab190-077e-4ed7-b747-7bb170fe592c"
COPY --from=jdk_build --chown=oracle:root /u01/jdk /u01/jdk/
COPY --from=wls_build --chown=oracle:root /u01/oracle /u01/oracle/
USER oracle
WORKDIR /u01/oracle
#ENTRYPOINT /bin/bash
ENV ORACLE_HOME=/u01/oracle \
    VOLUME_DIR=/u01/oracle/user_projects \
    SCRIPT_FILE=/u01/oracle/container-scripts/* \
    USER_MEM_ARGS="-Djava.security.egd=file:/dev/./urandom" \
    PATH=$PATH:$JAVA_HOME/bin:$ORACLE_HOME/oracle_common/common/bin:/u01/oracle/wlserver/common/bin:/u01/oracle/container-scripts
USER root
RUN mkdir -p $VOLUME_DIR && \
mkdir -p /u01/oracle/container-scripts && \
mkdir -p /u01/oracle/silent-install-files-tmp/config && \
mkdir -p /u01/oracle/logs && \
chown oracle:root -R /u01 $VOLUME_DIR && \
chmod a+xr /u01
COPY --chown=oracle:root files/container-scripts/ /u01/oracle/container-scripts/
RUN chmod +xr $SCRIPT_FILE
USER oracle
EXPOSE $UCM_PORT $UCM_INTRADOC_PORT $IBR_INTRADOC_PORT $IBR_PORT $ADMIN_PORT
WORKDIR ${ORACLE_HOME}
CMD ["/u01/oracle/container-scripts/createDomainandStartAdmin.sh"]
########## END DOCKERFILE ##########
Check the created image using the docker images command:
$ docker images | grep wcc
Update an image
After setting up the WebLogic Image Tool and required build scripts, use the WebLogic Image Tool to update
an existing Oracle WebCenter Content Docker image:
Enter the following command for each patch to add the required patch(es) to the WebLogic Image Tool cache:
$ cd <imagetool-setup>
$ imagetool cache addEntry --key=33578xyz_14.1.2.0.0 --value <downloaded-patches-location>/p33578xyz_141200_Generic.zip
[INFO ] Added entry 33578xyz_14.1.2.0.0=<downloaded-patches-location>/p33578xyz_141200_Generic.zip
Provide the following arguments to the WebLogic Image Tool update command:
- --fromImage - Identify the image that needs to be updated. In the example below, the image to be updated is wccontent:14.1.2.0.0.
- --patches - Multiple patches can be specified as a comma-separated list.
- --tag - Specify the new tag to be applied for the image being built.
Refer here for the complete list of options available with the WebLogic Image Tool update command.
Note: The WebLogic Image Tool cache should have the latest OPatch zip. The WebLogic Image Tool will update OPatch if it is not already updated in the image.
Examples
Sample update command:
# If you are using a pre-built Oracle WebCenter Content image obtained from My Oracle Support, use this command:
$ imagetool update --fromImage oracle/wccontent:14.1.2.0.0 --tag=oracle/wccontent_update_1015:14.1.2.0.0 --patches=33578xyz_14.1.2.0.0 --opatchBugNumber=28186abc_13.9.4.2.8
# If you chose to build an Oracle WebCenter Content image yourself, use the command below:
$ imagetool update --chown oracle:root --fromImage oracle/wccontent:14.1.2.0.0 --tag=oracle/wccontent_update_1015:14.1.2.0.0 --patches=33578xyz_14.1.2.0.0 --opatchBugNumber=28186abc_13.9.4.2.8
Check the built image using the docker images command:
$ docker images | grep wcc
Uninstall
This section describes the process to clean up the Oracle WebCenter Content domain setup.
Stop all Administration and Managed server pods
First, stop all the pods related to the domain. This can be done by patching the domain’s serverStartPolicy to NEVER. Here is a sample command:
$ kubectl patch domain wcc-domain-name -n wcc-namespace --type='json' -p='[{"op": "replace", "path": "/spec/serverStartPolicy", "value": "NEVER" }]'
For example:
$ kubectl patch domain wccinfra -n wccns --type='json' -p='[{"op": "replace", "path": "/spec/serverStartPolicy", "value": "NEVER" }]'
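Once the patch is applied, wait for the server pods to terminate before proceeding; for example:
$ kubectl get pods -n wccns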
Remove the domain
Remove the domain’s ingress (for example, Traefik ingress) using Helm:
$ helm uninstall wcc-domain-ingress -n sample-domain1-ns
For example:
$ helm uninstall wccinfra-traefik -n wccns
Remove the domain resources by using the sample delete-weblogic-domain-resources.sh script present at ${WORKDIR}/weblogic-kubernetes-operator/kubernetes/samples/scripts/delete-domain:
$ cd ${WORKDIR}/weblogic-kubernetes-operator/kubernetes/samples/scripts/delete-domain
$ ./delete-weblogic-domain-resources.sh -d sample-domain1
For example:
$ cd ${WORKDIR}/weblogic-kubernetes-operator/kubernetes/samples/scripts/delete-domain $ ./delete-weblogic-domain-resources.sh -d wccinfra
Use kubectl to confirm that the server pods and domain resource are deleted:
$ kubectl get pods -n sample-domain1-ns
$ kubectl get domains -n sample-domain1-ns
For example:
$ kubectl get pods -n wccns
$ kubectl get domains -n wccns
Drop the RCU schemas
Follow the steps in Create or drop schemas to drop the RCU schemas created for the Oracle WebCenter Content domain.
Remove the domain namespace
Configure the installed ingress load balancer (for example, Traefik) to stop managing the ingresses in the domain namespace:
$ helm upgrade traefik-operator traefik/traefik \
  --namespace traefik \
  --reuse-values \
  --set "kubernetes.namespaces={traefik}" \
  --wait
Configure the WebLogic Kubernetes Operator to stop managing the domain:
$ helm upgrade sample-weblogic-operator \
  kubernetes/charts/weblogic-operator \
  --namespace sample-weblogic-operator-ns \
  --reuse-values \
  --set "domainNamespaces={}" \
  --wait
For example:
$ cd ${WORKDIR}/weblogic-kubernetes-operator
$ helm upgrade weblogic-kubernetes-operator \
  kubernetes/charts/weblogic-operator \
  --namespace opns \
  --reuse-values \
  --set "domainNamespaces={}" \
  --wait
Delete the domain namespace:
$ kubectl delete namespace sample-domain1-ns
For example:
$ kubectl delete namespace wccns
Remove the WebLogic Kubernetes Operator
Remove the WebLogic Kubernetes Operator:
$ helm uninstall sample-weblogic-operator -n sample-weblogic-operator-ns
For example:
$ helm uninstall weblogic-kubernetes-operator -n opns
Remove WebLogic Kubernetes Operator’s namespace:
$ kubectl delete namespace sample-weblogic-operator-ns
For example:
$ kubectl delete namespace opns
Remove the load balancer
Remove the installed ingress-based load balancer (for example, Traefik):
$ helm uninstall traefik -n traefik
Remove the Traefik namespace:
$ kubectl delete namespace traefik
Delete the domain home
To remove the domain home that was generated using the create-domain.sh script, manually delete (with appropriate privileges) the contents of the storage attached to the domain home persistent volume (PV).
For example, for the domain’s persistent volume of type host_path:
$ rm -rf /scratch/k8s_dir/WCC
Oracle Cloud Infrastructure
Setting up WebCenter Content domains with WebLogic Kubernetes Operator
This is a guide to running WebLogic Kubernetes Operator managed WebCenter Content domains on Oracle Cloud Infrastructure.
Preparing an OKE environment
Contents
- Create Public SSH Key to access all the Bastion and Worker nodes
- Create a compartment for OKE
- Create Container Clusters (OKE)
- Create Bastion Node to access Cluster
- Setup OCI CLI to download kubeconfig and access OKE Cluster
Create Public SSH Key to access all the Bastion and Worker nodes
Create an SSH key using ssh-keygen on a Linux terminal, to access (via ssh) the Compute instances (worker/bastion) in OCI.
ssh-keygen -t rsa -N "" -b 2048 -C demokey -f id_rsa
Create a compartment for OKE
Within your tenancy, there must be a compartment to contain the necessary network resources (VCN, subnets, internet gateway, route table, security lists).
1. Go to the OCI console, and use the top-left Menu to select the Identity > Compartments option.
2. Click the Create Compartment button.
3. Enter the compartment name (for example, WCCStorage) and description (OKE compartment), then click the Create Compartment button.
Create Container Clusters (OKE)
- In the Console, open the navigation menu. Go to Developer Services and click Kubernetes Clusters (OKE).
- Choose a Compartment you have permission to work in. Here we will use the WCCStorage compartment.
- On the Cluster List page, select your Compartment and click Create Cluster.
- In the Create Cluster dialog, select Quick Create and click Launch Workflow.
- On the Create Cluster page specify the values as per your environment (like the sample values shown below)
- NAME: WCCOKEPHASE1
- COMPARTMENT: WCCStorage
- KUBERNETES VERSION: v1.26.2
- CHOOSE VISIBILITY TYPE: Private
- SHAPE: VM.Standard.E3.Flex (Choose the available shape for worker node pool. The list shows only those shapes available in your tenancy that are supported by Container Engine for Kubernetes. See Supported Images and Shapes for Worker Nodes.)
- NUMBER OF NODES: 3 (The number of worker nodes to create in the node pool, placed in the regional subnet created for the ‘quick cluster’).
- Click Show Advanced Options and enter PUBLIC SSH KEY: ssh-rsa AA……bmVnWgX/ demokey (the public key id_rsa.pub created in Step 1)
- Click Next to review the details you entered for the new cluster.
- Click Create Cluster to create the new network resources and the new cluster.
- Container Engine for Kubernetes starts creating resources (as shown in the Creating cluster and associated network resources dialog). Click Close to return to the Console.
- Initially, the new cluster appears in the Console with a status of Creating. When the cluster has been created, it has a status of Active.
- Click Node Pools under Resources and then View to see the node pool and worker node status.
- Verify the worker node status: make sure each Node State is Active and the Kubernetes Node Condition is Ready. A worker node is listed in kubectl output once its Kubernetes Node Condition is Ready.
- To access the cluster, click Access Cluster on the cluster WCCOKEPHASE1 page.
- We will create the bastion node first and then access the cluster.
Create Bastion Node to access Cluster
Set up a bastion node for accessing internal resources. We will create the bastion node in the same VCN, following the steps below, so that we can ssh into the worker nodes. Here we choose CIDR Block 10.0.22.0/24; you can choose a different block if you want.
Click the VCN name on the cluster page.
Next, click Security List and then Create Security List.
Create a bastion-private-sec-list security list with the required ingress and egress rules.
Create a bastion-public-sec-list security list with the required ingress and egress rules.
Create the bastion-route-table with an Internet Gateway, so that it can be added to the bastion instance for internet access.
Next, create a regional public subnet for the bastion instance named bastion-subnet, with the below details:
- CIDR BLOCK: 10.0.22.0/24
- ROUTE TABLE: oke-bastion-routetables
- SUBNET ACCESS: PUBLIC SUBNET
- Security List: bastion-public-sec-list
- DHCP OPTIONS: Select the Default DHCP Options
Next, click the private subnet which has the worker nodes.
Then add the bastion-private-sec-list to the worker private subnet, so that the bastion instance can access the worker nodes.
Next, create a compute instance oke-bastion with the below details:
- Name: BastionHost
- Image: Oracle Linux 8.X
- Availability Domain: Choose any AD which has the limit for creating an instance
- VIRTUAL CLOUD NETWORK COMPARTMENT: WCCStorage (i.e., the OKE compartment)
- SELECT A VIRTUAL CLOUD NETWORK: Select the VCN created by the Quick Cluster
- SUBNET COMPARTMENT: WCCStorage (i.e., the OKE compartment)
- SUBNET: bastion-subnet (created above)
- SELECT ASSIGN A PUBLIC IP ADDRESS
- SSH KEYS: Copy the content of id_rsa.pub created in Step 1
Once the bastion instance BastionHost is created, get its public IP to ssh into the bastion instance.
Log in to the bastion host as below:
ssh -i <your_ssh_bastion.key> opc@123.456.xxx.xxx
Setup OCI CLI
Install OCI CLI
bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"
Respond to the Installation Script Prompts.
To download the kubeconfig later after setup, we need to set up the OCI config file. Run the below command and enter the details when prompted:
$ oci setup config
Sample output:
$ oci setup config
This command provides a walkthrough of creating a valid CLI config file.
The following links explain where to find the information required by this script:
User API Signing Key, OCID and Tenancy OCID:
    https://docs.cloud.oracle.com/Content/API/Concepts/apisigningkey.htm#Other
Region:
    https://docs.cloud.oracle.com/Content/General/Concepts/regions.htm
General config documentation:
    https://docs.cloud.oracle.com/Content/API/Concepts/sdkconfig.htm
Enter a location for your config [/home/opc/.oci/config]:
Enter a user OCID: ocid1.user.oc1..aaaaaaaao3qji52eu4ulgqvg3k4yf7xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Enter a tenancy OCID: ocid1.tenancy.oc1..aaaaaaaaf33wodv3uhljnn5etiuafoxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Enter a region (e.g. ap-hyderabad-1, ap-melbourne-1, ap-mumbai-1, ap-osaka-1, ap-seoul-1, ap-sydney-1, ap-tokyo-1, ca-montreal-1, ca-toronto-1, eu-amsterdam-1, eu-frankfurt-1, eu-zurich-1, me-jeddah-1, sa-saopaulo-1, uk-gov-london-1, uk-london-1, us-ashburn-1, us-gov-ashburn-1, us-gov-chicago-1, us-gov-phoenix-1, us-langley-1, us-luke-1, us-phoenix-1): us-phoenix-1
Do you want to generate a new API Signing RSA key pair? (If you decline you will be asked to supply the path to an existing key.) [Y/n]: Y
Enter a directory for your keys to be created [/home/opc/.oci]:
Enter a name for your key [oci_api_key]:
Public key written to: /home/opc/.oci/oci_api_key_public.pem
Enter a passphrase for your private key (empty for no passphrase):
Private key written to: /home/opc/.oci/oci_api_key.pem
Fingerprint: 74:d2:f2:db:62:a9:c4:bd:9b:4f:6c:d8:31:1d:a1:d8
Config written to /home/opc/.oci/config
If you haven't already uploaded your API Signing public key through the console, follow the instructions on the page linked below in the section 'How to upload the public key':
    https://docs.cloud.oracle.com/Content/API/Concepts/apisigningkey.htm#How2
Now you need to upload the created public key in $HOME/.oci (oci_api_key_public.pem) to the OCI console.
Log in to the OCI Console and navigate to User Settings, which is in the drop-down under your OCI user profile, located at the top-right corner of the page.
On the User Details page, click the API Keys link, located near the bottom-left corner of the page, and then click the Add API Key button. Copy the content of oci_api_key_public.pem and click Add.
Now you can use the OCI CLI to access the OCI resources.
To access the cluster, click Access Cluster on the cluster WCCOKEPHASE1 page.
To access the cluster from the bastion node, perform the steps given under Local Access:
$ oci -v
$ mkdir -p $HOME/.kube
$ oci ce cluster create-kubeconfig --cluster-id ocid1.cluster.oc1.phx.aaaaaaaaae4xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxrqgjtd --file $HOME/.kube/config --region us-phoenix-1 --token-version 2.0.0
$ export KUBECONFIG=$HOME/.kube/config
Install kubectl Client to access the Cluster
$ curl -LO https://dl.k8s.io/release/v1.15.7/bin/linux/amd64/kubectl
$ sudo mv kubectl /bin/
$ sudo chmod +x /bin/kubectl
Access the Cluster from bastion node
$ kubectl get nodes
NAME          STATUS   ROLES   AGE   VERSION
10.0.10.197   Ready    node    14d   v1.26.2
10.0.10.206   Ready    node    14d   v1.26.2
10.0.10.50    Ready    node    14d   v1.26.2
Install required add-ons for Oracle WebCenter Content Cluster setup
Install helm v3.10.*
$ wget https://get.helm.sh/helm-v3.10.3-linux-amd64.tar.gz
$ tar -zxvf helm-v3.10.3-linux-amd64.tar.gz
$ sudo mv linux-amd64/helm /bin/helm
$ helm version
version.BuildInfo{Version:"v3.10.3", GitCommit:"835b7334cfe2e5e27870ab3ed4135f136eecc704", GitTreeState:"clean", GoVersion:"go1.18.9"}
Install git
sudo yum install git -y
Preparing a file system
Create Filesystem and security list for FSS
Note: Make sure you create the file system and security list in the VCN created by OKE.
Log in to the OCI Console, go to Storage, and click File System.
Click Create File System.
You can create the file system and mount targets with the default values. But in case you want to rename the file system and mount targets, follow the below steps.
Note: Make sure the Virtual Cloud Network in the Mount Target refers to the one where your OKE cluster is created and from which you will be accessing this file system.
Edit and change the File System name. You can choose any name; the following instructions assume the File System name chosen is WCCFS.
Edit and change the Mount Target name to WCCFS, and make sure the Virtual Cloud Network selected is the one where all the instances are created. Select Public Subnet and click Create.
Once the File System is created, click the WCCFS link.
Click Mount Commands, which gives details on how to mount this file system on your instances.
The Mount Commands pop-up gives details on what must be configured on the security list to access the mount targets from instances. Note down the mount command which needs to be executed on the instance.
Note down the mount path and NFS server from the COMMAND TO MOUNT THE FILE SYSTEM. We will use this as NFS for the domain home, with the below details (sample from the above mount command):
- NFSServer: 10.0.20.xxx
- Mount Path: /WCCFS
Create the security list fss_seclist with the ingress rules given in the Mount Commands pop-up.
Create the egress rules as given in the Mount Commands pop-up.
Make sure to add the created security list fss_seclist to each subnet; otherwise the created security list rules will not apply to the instances.
Once the security list fss_seclist is added to the subnet, log in to the instances and mount the file system on the Bastion Node.
Note: Please make sure to replace the sample NFS server address (10.0.20.235, as shown in the example below) according to your environment.
# Run the below commands in the same order (sequence) as the root user.
# login as root
sudo su
# Install NFS Utils
yum install nfs-utils
# Create the directory where you want to mount the file system
sudo mkdir -p /mnt/WCCFS
# Mount Command
sudo mount 10.0.20.235:/WCCFS /mnt/WCCFS
# Alternatively you can use: "mount 10.0.20.235:/WCCFS /mnt/WCCFS". To persist on reboot, add into /etc/fstab:
echo "10.0.20.235:/WCCFS /mnt/WCCFS nfs nfsvers=3 0 0" >> /etc/fstab
mount -a
# Change permissions so that all users can access the shared volume
sudo chown -R 1000:0 /mnt/WCCFS
Confirm that /mnt/WCCFS is now pointing to the created File System:
[root@bastionhost WCCFS]# cd /mnt/WCCFS/
[root@bastionhost WCCFS]# df -h .
Filesystem           Size  Used Avail Use% Mounted on
10.0.20.235:/WCCFS   8.0E     0  8.0E   0% /mnt/WCCFS
Creating an OCIR
Publish images to OCIR
Push all the required images to OCIR and subsequently use them from there. Follow the below steps to push the images to OCIR.
Create an “Auth token”
Create an “Auth token”, which will be used as the docker password to push and pull images from OCIR.
Log in to the OCI Console and navigate to User Settings, which is in the drop-down under your OCI user profile, located at the top-right corner of the OCI console page.
- On the User Details page, click the Auth Tokens link located near the bottom-left corner of the page, and then click the Generate Token button. Enter a name and click Generate Token.
- The token will get generated.
- Copy the generated token.
NOTE: It will only be displayed this one time, and you will need to copy it to a secure place for further use.
Using the OCIR
Use the Docker CLI to log in to OCIR (for Phoenix: phx.ocir.io, for Ashburn: iad.ocir.io, and so on):
1. docker login phx.ocir.io
2. When prompted for the username, enter the docker username as OCIR RepoName/oci username (for example, axcmmdmzqtqb/oracleidentitycloudservice/myemailid@oracle.com).
3. When prompted for your password, enter the generated Auth Token.
4. Now you can tag the WCC Docker image and push it to OCIR. Sample steps below:
$ docker login phx.ocir.io
$ username - axcmmdmzqtqb/oracleidentitycloudservice/myemailid@oracle.com
$ password - abCXYz942,vcde (Token Generated for OCIR using user setting)
$ docker tag oracle/wccontent:14.1.2.0.0-<tag> phx.ocir.io/axcmmdmzqtqb/oracle/wccontent:14.1.2.0.0-<tag>
$ docker push phx.ocir.io/axcmmdmzqtqb/oracle/wccontent:14.1.2.0.0-<tag>
This has to be done on the Bastion Node for all the images.
Verify the OCIR Images
Get the OCIR repository name by logging in to the Oracle Cloud Infrastructure Console. In the OCI Console, open the navigation menu. Under Solutions and Platform, go to Developer Services, click Container Registry (OCIR), and select your Compartment.
Prepare environment for WCC domain
To create your Oracle WebCenter Content domain in a Kubernetes OKE environment, complete the following steps:
Contents
Set up code repository to deploy Oracle WebCenter Content domain
Set up code repository to deploy Oracle WebCenter Content domain
Oracle WebCenter Content domain deployment on Kubernetes leverages the WebLogic Kubernetes Operator infrastructure. To deploy an Oracle WebCenter Content domain, you must set up the deployment scripts.
Create a working directory to set up the source code:
$ mkdir $HOME/wcc_4.2.9
$ cd $HOME/wcc_4.2.9
Download the WebLogic Kubernetes Operator source code and Oracle WebCenter Content Suite Kubernetes deployment scripts from the WCContent repository. Required artifacts are available at OracleWebCenterContent/kubernetes.
$ git clone https://github.com/oracle/fmw-kubernetes.git
$ export WORKDIR=$HOME/wcc_4.2.9/fmw-kubernetes/OracleWebCenterContent/kubernetes
Create namespace for the Oracle WebCenter Content domain
Create a Kubernetes namespace (for example, wccns
) for the domain unless you intend to use the default namespace. Use the new namespace in the remaining steps in this section. For details, see Prepare to run a domain.
$ kubectl create namespace wccns
Create the imagePullSecrets
Create the imagePullSecrets (in wccns namespace) so that Kubernetes Deployment can pull the image automatically from OCIR.
Note: Create the imagePullSecret as per your environment, using a sample command like this:
$ kubectl create secret docker-registry image-secret -n wccns --docker-server=phx.ocir.io --docker-username=axxxxxxxxxxx/oracleidentitycloudservice/<your_user_name> --docker-password='vUv+xxxxxxxxxxx<KN7z' --docker-email=me@oracle.com
The parameter values are:
- phx.ocir.io: OCI region (Phoenix)
- axxxxxxxxxxx: OCI tenancy name
- image-secret: imagePullSecret name
- me@oracle.com: username and email address
- vUv+xxxxxxxxxxx<KN7z: Auth Token password
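Before referencing the secret, you can confirm it was created; for example:
$ kubectl get secret image-secret -n wccns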
Install WebLogic Kubernetes Operator in OKE
The WebLogic Kubernetes Operator supports the deployment of Oracle WebCenter Content domain in the Kubernetes environment.
In the following example commands to install the WebLogic Kubernetes Operator, opns
is the namespace and op-sa
is the service account created for the WebLogic Kubernetes Operator:
Creating namespace and service account for WebLogic Kubernetes Operator
$ kubectl create namespace opns
$ kubectl create serviceaccount -n opns op-sa
Install the WebLogic Kubernetes Operator in OKE
$ cd ${WORKDIR}
$ helm install weblogic-kubernetes-operator charts/weblogic-operator \
  --namespace opns \
  --set image=phx.ocir.io/xxxxxxxxxxx/oracle/weblogic-kubernetes-operator:4.2.9 \
  --set imagePullSecret=image-secret \
  --set serviceAccount=op-sa \
  --set "domainNamespaces={}" \
  --set "javaLoggingLevel=FINE" \
  --wait
Verify the WebLogic Kubernetes Operator pod
$ kubectl get pods -n opns
NAME READY STATUS RESTARTS AGE
weblogic-operator-779965b66c-d8265 1/1 Running 0 11d
# Verify the Operator helm Charts
$ helm list -n opns
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
weblogic-kubernetes-operator opns 3 2022-02-24 06:50:29.810106777 +0000 UTC deployed weblogic-operator-4.2.9 4.2.9
Prepare environment for Oracle WebCenter Content domain
Upgrade WebLogic Kubernetes Operator with the Oracle WebCenter Content domain-namespace
$ cd ${WORKDIR}
$ helm upgrade --reuse-values --namespace opns --set "domainNamespaces={wccns}" --wait weblogic-kubernetes-operator charts/weblogic-operator
Create persistent storage for the Oracle WebCenter Content domain
In the Kubernetes namespace you created, create the PV and PVC for the domain by running the create-pv-pvc.sh script. Follow the instructions for using the script to create a dedicated PV and PVC for the Oracle WebCenter Content domain.
Here we will use the NFS server and mount path created earlier.
Review the configuration parameters for PV creation here. Based on your requirements, update the values in the create-pv-pvc-inputs.yaml file located at ${WORKDIR}/create-weblogic-domain-pv-pvc/. Sample configuration parameter values for the Oracle WebCenter Content domain are:
- baseName: domain
- domainUID: wccinfra
- namespace: wccns
- weblogicDomainStorageType: NFS
- weblogicDomainStorageNFSServer:
- weblogicDomainStoragePath: /
Note: Make sure to update weblogicDomainStorageNFSServer with the NFS server IP as per your environment.
Ensure that the path for the weblogicDomainStoragePath property exists (if not, please refer to this), has the correct access permissions, and that the folder is empty.
Run the create-pv-pvc.sh script:
$ cd ${WORKDIR}/create-weblogic-domain-pv-pvc
$ rm -rf output/
$ ./create-pv-pvc.sh -i create-pv-pvc-inputs.yaml -o output
The create-pv-pvc.sh script will create a subdirectory pv-pvcs under the given /path/to/output-directory directory, and create two YAML configuration files for the PV and PVC. Apply these two YAML files to create the PV and PVC Kubernetes resources using the kubectl create -f command:
$ kubectl create -f output/pv-pvcs/wccinfra-domain-pv.yaml -n wccns
$ kubectl create -f output/pv-pvcs/wccinfra-domain-pvc.yaml -n wccns
Get the details of the PV and PVC:
$ kubectl describe pv wccinfra-domain-pv
$ kubectl describe pvc wccinfra-domain-pvc -n wccns
Create Kubernetes secret with domain credentials
Create a Kubernetes secret containing the username and password of the administrative account, in the same Kubernetes namespace as the domain:
$ cd ${WORKDIR}/create-weblogic-domain-credentials
$ ./create-weblogic-credentials.sh -u weblogic -p welcome1 -n wccns -d wccinfra -s wccinfra-domain-credentials
For more details, see this document.
You can check the secret with the kubectl get secret
command.
For example:
$ kubectl get secret wccinfra-domain-credentials -o yaml -n wccns
apiVersion: v1
data:
password: d2VsY29tZTE=
username: d2VibG9naWM=
kind: Secret
metadata:
creationTimestamp: "2021-07-30T06:04:33Z"
labels:
weblogic.domainName: wccinfra
weblogic.domainUID: wccinfra
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:data:
.: {}
f:password: {}
f:username: {}
f:metadata:
f:labels:
.: {}
f:weblogic.domainName: {}
f:weblogic.domainUID: {}
f:type: {}
manager: kubectl
operation: Update
time: "2021-07-30T06:04:36Z"
name: wccinfra-domain-credentials
namespace: wccns
resourceVersion: "90770768"
selfLink: /api/v1/namespaces/wccns/secrets/wccinfra-domain-credentials
uid: 9c5dab09-15f3-4e1f-a40d-457904ddf96b
type: Opaque
Create Kubernetes secret with the RCU credentials
You also need to create a Kubernetes secret containing the credentials for the database schemas. When you create your domain, it will obtain the RCU credentials from this secret.
Use the provided sample script to create the secret:
$ cd ${WORKDIR}/create-rcu-credentials
$ ./create-rcu-credentials.sh -u weblogic -p welcome1 -a sys -q welcome1 -d wccinfra -n wccns -s wccinfra-rcu-credentials
The parameter values are:
- -u: username for schema owner (regular user), required.
- -p: password for schema owner (regular user), required.
- -a: username for SYSDBA user, required.
- -q: password for SYSDBA user, required.
- -d: domainUID. Example: wccinfra
- -n: namespace. Example: wccns
- -s: secretName. Example: wccinfra-rcu-credentials
You can confirm the secret was created as expected with the kubectl get secret command. For example, a sample secret description:
$ kubectl get secret wccinfra-rcu-credentials -o yaml -n wccns
apiVersion: v1
data:
password: d2VsY29tZTE=
sys_password: d2VsY29tZTE=
sys_username: c3lz
username: d2VibG9naWM=
kind: Secret
metadata:
creationTimestamp: "2020-09-16T08:23:04Z"
labels:
weblogic.domainName: wccinfra
weblogic.domainUID: wccinfra
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:data:
.: {}
f:password: {}
f:sys_password: {}
f:sys_username: {}
f:username: {}
f:metadata:
f:labels:
.: {}
f:weblogic.domainName: {}
f:weblogic.domainUID: {}
f:type: {}
manager: kubectl
operation: Update
time: "2020-09-16T08:23:04Z"
name: wccinfra-rcu-credentials
namespace: wccns
resourceVersion: "3277132"
selfLink: /api/v1/namespaces/wccns/secrets/wccinfra-rcu-credentials
uid: b75f4e13-84e6-40f5-84ba-0213d85bdf30
type: Opaque
Install and start the Database
This step is required only if a standalone database has not already been set up and you want to run the database in a container. The Oracle Database Docker images are supported only for non-production use. For more details, see My Oracle Support note: Oracle Support for Database Running on Docker (Doc ID 2216342.1). For production use cases, it is suggested to use a standalone database. The sample provides steps to create the database in a container.
The database in a container can be created with a PV attached for persisting the data, or without attaching a PV. In this setup, we will create the database in a container without a PV attached.
$ cd ${WORKDIR}/create-oracle-db-service
$ ./start-db-service.sh -i phx.ocir.io/xxxxxxxxxxxx/oracle/database/enterprise:x.x.x.x -s image-secret -n wccns
Sample output:
$ ./start-db-service.sh -i phx.ocir.io/xxxxxxxxxxxx/oracle/database/enterprise:x.x.x.x -s image-secret -n wccns
Checking Status for NameSpace [wccns]
Skipping the NameSpace[wccns] Creation ...
NodePort[30011] ImagePullSecret[docker-store] Image[phx.ocir.io/xxxxxxxxxxxx/oracle/database/enterprise:x.x.x.x] NameSpace[wccns]
service/oracle-db created
deployment.apps/oracle-db created
[oracle-db-8598b475c5-cx5nk] already initialized ..
Checking Pod READY column for State [1/1]
NAME READY STATUS RESTARTS AGE
oracle-db-8598b475c5-cx5nk 1/1 Running 0 20s
Service [oracle-db] found
NAME READY STATUS RESTARTS AGE
oracle-db-8598b475c5-cx5nk 1/1 Running 0 25s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
oracle-db LoadBalancer 10.96.74.187 <pending> 1521:30011/TCP 28s
[1/30] Retrying for Oracle Database Availability...
[2/30] Retrying for Oracle Database Availability...
[3/30] Retrying for Oracle Database Availability...
[4/30] Retrying for Oracle Database Availability...
[5/30] Retrying for Oracle Database Availability...
[6/30] Retrying for Oracle Database Availability...
[7/30] Retrying for Oracle Database Availability...
[8/30] Retrying for Oracle Database Availability...
[9/30] Retrying for Oracle Database Availability...
[10/30] Retrying for Oracle Database Availability...
[11/30] Retrying for Oracle Database Availability...
[12/30] Retrying for Oracle Database Availability...
[13/30] Retrying for Oracle Database Availability...
Done ! The database is ready for use .
Oracle DB Service is RUNNING with NodePort [30011]
Oracle DB Service URL [oracle-db.wccns.svc.cluster.local:1521/devpdb.k8s]
Once the database is created successfully, you can use the database connection string as the rcuDatabaseURL parameter in the create-domain-inputs.yaml file.
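For example, a minimal excerpt of the create-domain-inputs.yaml entry, using the service URL printed in the sample output above (the rest of the file is unchanged):
# create-domain-inputs.yaml (excerpt)
rcuDatabaseURL: oracle-db.wccns.svc.cluster.local:1521/devpdb.k8s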
Configure access to Database
Run a container to create the rcu pod:
kubectl run rcu --generator=run-pod/v1 \
--image phx.ocir.io/xxxxxxxxxxx/oracle/wccontent:x.x.x.x \
--namespace wccns \
--overrides='{ "apiVersion": "v1", "spec": { "imagePullSecrets": [{"name": "image-secret"}] } }' \
-- sleep infinity
# Check the status of rcu pod
kubectl get pods -n wccns
Run Repository Creation Utility to set up your database schemas
Create or Drop schemas
To create the database schemas for Oracle WebCenter Content, run the create-rcu-schema.sh
script.
For example:
# Make sure rcu pod status is running before executing this
kubectl exec -n wccns -ti rcu /bin/bash
# DB details
export CONNECTION_STRING=your_db_host:1521/your_db_service
export RCUPREFIX=your_schema_prefix
echo -e welcome1"\n"welcome1> /tmp/pwd.txt
# Create schemas
/u01/oracle/oracle_common/bin/rcu -silent -createRepository -databaseType ORACLE -connectString $CONNECTION_STRING -dbUser sys -dbRole sysdba -useSamePasswordForAllSchemaUsers true -selectDependentsForComponents true -schemaPrefix $RCUPREFIX -component CONTENT -component MDS -component STB -component OPSS -component IAU -component IAU_APPEND -component IAU_VIEWER -component WLS -tablespace USERS -tempTablespace TEMP -f < /tmp/pwd.txt
# Drop schemas
/u01/oracle/oracle_common/bin/rcu -silent -dropRepository -databaseType ORACLE -connectString $CONNECTION_STRING -dbUser sys -dbRole sysdba -selectDependentsForComponents true -schemaPrefix $RCUPREFIX -component CONTENT -component MDS -component STB -component OPSS -component IAU -component IAU_APPEND -component IAU_VIEWER -component WLS -f < /tmp/pwd.txt
# Exit from the container
exit
Note: In the create and drop schema commands above, pass additional components (-component IPM -component CAPTURE) if the IPM and CAPTURE applications are enabled, respectively.
Now that you have your Docker images and have created the RCU schemas, you are ready to create your domain, after setting up a load balancer.
Set up a load balancer
WebLogic Kubernetes Operator managed Oracle WebCenter Content domain on Oracle Cloud Infrastructure supports ingress-based load balancers such as Traefik and NGINX.
Traefik
This section provides information about how to install and configure the ingress-based Traefik load balancer (version 2.6.0 or later for production deployments) to load balance Oracle WebCenter Content domain clusters.
Follow these steps to set up Traefik as a load balancer for an Oracle WebCenter Content domain in a Kubernetes cluster:
Contents
Non-SSL and SSL termination
Install the Traefik (ingress-based) load balancer
Use Helm to install the Traefik (ingress-based) load balancer. For detailed information, see here. Use the values.yaml file in the sample, but set kubernetes.namespaces specifically.
$ cd ${WORKDIR}
$ kubectl create namespace traefik
$ helm repo add traefik https://helm.traefik.io/traefik --force-update
Sample output:
"traefik" has been added to your repositories
Install Traefik:
$ cd ${WORKDIR}
$ helm install traefik traefik/traefik \
  --namespace traefik \
  --values charts/traefik/values.yaml \
  --set "kubernetes.namespaces={traefik}" \
  --set "service.type=LoadBalancer" \
  --wait
Sample output:
NAME: traefik-operator
LAST DEPLOYED: Mon Jun 1 19:31:20 2020
NAMESPACE: traefik
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get Traefik load balancer IP or hostname:
NOTE: It may take a few minutes for this to become available.
You can watch the status by running:
$ kubectl get svc traefik-operator --namespace traefik -w
Once 'EXTERNAL-IP' is no longer '<pending>':
$ kubectl describe svc traefik-operator --namespace traefik | grep Ingress | awk '{print $3}'
2. Configure DNS records corresponding to Kubernetes ingress resources to point to the load balancer IP or hostname found in step 1
A sample values.yaml
for deployment of Traefik 2.6.0:
image:
name: traefik
tag: 2.6.0
pullPolicy: IfNotPresent
ingressRoute:
dashboard:
enabled: true
# Additional ingressRoute annotations (e.g. for kubernetes.io/ingress.class)
annotations: {}
# Additional ingressRoute labels (e.g. for filtering IngressRoute by custom labels)
labels: {}
providers:
kubernetesCRD:
enabled: true
kubernetesIngress:
enabled: true
# IP used for Kubernetes Ingress endpoints
ports:
traefik:
port: 9000
expose: true
# The exposed port for this service
exposedPort: 9000
# The port protocol (TCP/UDP)
protocol: TCP
web:
port: 8000
# hostPort: 8000
expose: true
exposedPort: 30305
nodePort: 30305
# The port protocol (TCP/UDP)
protocol: TCP
# Use nodeport if set. This is useful if you have configured Traefik in a
# LoadBalancer
# nodePort: 32080
# Port Redirections
# Added in 2.2, you can make permanent redirects via entrypoints.
# https://docs.traefik.io/routing/entrypoints/#redirection
# redirectTo: websecure
websecure:
port: 8443
# # hostPort: 8443
expose: true
exposedPort: 30443
# The port protocol (TCP/UDP)
protocol: TCP
nodePort: 30443
additionalArguments:
- "--log.level=INFO"
Verify the Traefik (load balancer) services:
Note the EXTERNAL-IP of the traefik service. This is the public IP address of the load balancer that you will use to access the WebLogic Server Administration Console and WebCenter Content URLs.
$ kubectl get service -n traefik
NAME      TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)                                          AGE
traefik   LoadBalancer   10.96.8.30   123.456.xx.xx   9000:30734/TCP,30305:30305/TCP,30443:30443/TCP   6d23h
To print only the Traefik EXTERNAL-IP, execute this command:
$ TRAEFIK_PUBLIC_IP=`kubectl describe svc traefik --namespace traefik | grep Ingress | awk '{print $3}'`
$ echo $TRAEFIK_PUBLIC_IP
123.456.xx.xx
Verify the helm charts:
$ helm list -n traefik
NAME      NAMESPACE   REVISION   UPDATED                                    STATUS     CHART             APP VERSION
traefik   traefik     2          2022-09-11 12:22:41.122310912 +0000 UTC    deployed   traefik-10.24.3   2.8.5
Verify the Traefik status and find the port number
$ kubectl get all -n traefik
Sample output:
NAME                          READY   STATUS    RESTARTS   AGE
pod/traefik-f9cf58697-xjhpl   1/1     Running   0          7d

NAME              TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)                                          AGE
service/traefik   LoadBalancer   10.96.8.30   123.456.xx.xx   9000:30734/TCP,30305:30305/TCP,30443:30443/TCP   7d

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/traefik   1/1     1            1           7d

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/traefik-f9cf58697   1         1         1       7d
Configure Traefik to manage ingresses
Configure Traefik to manage ingresses created in this namespace, where traefik
is the Traefik namespace and wccns
is the namespace of the domain:
$ helm upgrade traefik traefik/traefik --namespace traefik --reuse-values \
  --set "kubernetes.namespaces={traefik,wccns}"
Sample output:
Release "traefik" has been upgraded. Happy Helming!
NAME: traefik
LAST DEPLOYED: Sun Jan 17 23:43:02 2021
NAMESPACE: traefik
STATUS: deployed
REVISION: 2
TEST SUITE: None
Create an ingress for the domain
Create an ingress for the domain in the domain namespace by using the sample Helm chart. Here, path-based routing is used for the ingress. Sample values for the default configuration are shown in the file ${WORKDIR}/charts/ingress-per-domain/values.yaml. By default, type is TRAEFIK, tls is Non-SSL, and domainType is wccinfra. These values can be overridden by passing values through the command line, or can be edited in the sample file values.yaml, based on the type of configuration (non-SSL or SSL).
If needed, you can update the ingress YAML file to define more path rules (in section spec.rules.host.http.paths) based on the domain application URLs that need to be accessed. The template YAML file for the Traefik (ingress-based) load balancer is located at ${WORKDIR}/charts/ingress-per-domain/templates/traefik-ingress.yaml.
Install ingress-per-domain using Helm for the non-SSL configuration:
$ export LB_HOSTNAME=<Traefik load balancer DNS name>
# OR leave it empty to point to the Traefik load-balancer IP, by default
$ export LB_HOSTNAME=''
Note: Make sure that you specify a DNS name to point to the Traefik load balancer hostname, or leave it empty to point to the Traefik load-balancer IP.
$ cd ${WORKDIR}
$ helm install wcc-traefik-ingress charts/ingress-per-domain \
  --set type=TRAEFIK \
  --namespace wccns \
  --values charts/ingress-per-domain/values.yaml \
  --set "traefik.hostname=$LB_HOSTNAME" \
  --set tls=NONSSL
Sample output:
NAME: wcc-traefik-ingress
LAST DEPLOYED: Sun Jan 17 23:49:09 2021
NAMESPACE: wccns
STATUS: deployed
REVISION: 1
TEST SUITE: None
Create a certificate and generate a Kubernetes secret
For secured access (SSL) to the Oracle WebCenter Content application, create a certificate:
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/tls1.key -out /tmp/tls1.crt \
  -subj "/CN=<Traefik load balancer DNS name>" \
  -extensions san -config \
  <(echo "[req]"; echo distinguished_name=req; echo "[san]"; echo subjectAltName=IP:$TRAEFIK_PUBLIC_IP )
# OR use the following command if you chose to leave LB_HOSTNAME empty in the previous step
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/tls1.key -out /tmp/tls1.crt \
  -subj "/CN=*" \
  -extensions san -config \
  <(echo "[req]"; echo distinguished_name=req; echo "[san]"; echo subjectAltName=IP:$TRAEFIK_PUBLIC_IP )
Note: Make sure that you specify DNS name to point to the Traefik load balancer hostname.
Generate a Kubernetes secret:
$ kubectl -n wccns create secret tls domain1-tls-cert --key /tmp/tls1.key --cert /tmp/tls1.crt
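Optionally, verify the generated certificate and the secret with standard openssl and kubectl commands:
$ openssl x509 -in /tmp/tls1.crt -noout -subject -dates
$ kubectl get secret domain1-tls-cert -n wccns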
Create Traefik custom resource
Create Traefik Middleware custom resource
In case of SSL termination, Traefik must pass a custom header WL-Proxy-SSL:true to the WebLogic Server endpoints. Create the Middleware using the following command:
$ cat <<EOF | kubectl apply -f -
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: wls-proxy-ssl
  namespace: wccns
spec:
  headers:
    customRequestHeaders:
      WL-Proxy-SSL: "true"
EOF
Create the Traefik TLSStore custom resource.
In case of SSL termination, Traefik should be configured to use the user-defined SSL certificate. If the user-defined SSL certificate is not configured, Traefik will create a default SSL certificate. To configure a user-defined SSL certificate for Traefik, use the TLSStore custom resource. The Kubernetes secret created with the SSL certificate should be referenced in the TLSStore object. Run the following command to create the TLSStore:
$ cat <<EOF | kubectl apply -f -
apiVersion: traefik.containo.us/v1alpha1
kind: TLSStore
metadata:
  name: default
  namespace: wccns
spec:
  defaultCertificate:
    secretName: domain1-tls-cert
EOF
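You can confirm that both custom resources were accepted by the API server (a quick check, assuming the Traefik CRDs are installed in the cluster):

```bash
# List the Middleware and TLSStore objects created above
$ kubectl -n wccns get middleware wls-proxy-ssl
$ kubectl -n wccns get tlsstore default
```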
Install Ingress for SSL termination configuration
Install ingress-per-domain using Helm for SSL configuration. The Kubernetes secret name should be updated in the template file.
The template file also contains the following annotations:
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
traefik.ingress.kubernetes.io/router.middlewares: wccns-wls-proxy-ssl@kubernetescrd
The entry point for SSL access and the Middleware name should be updated in the annotation. The Middleware name should be in the form <namespace>-<middleware name>@kubernetescrd.

$ cd ${WORKDIR}
$ helm install wcc-traefik-ingress charts/ingress-per-domain \
    --set type=TRAEFIK \
    --namespace wccns \
    --values charts/ingress-per-domain/values.yaml \
    --set "traefik.hostname=$LB_HOSTNAME" \
    --set "traefik.hostnameorip=$TRAEFIK_PUBLIC_IP" \
    --set tls=SSL
Sample output:
NAME: wcc-traefik-ingress
LAST DEPLOYED: Mon Jul 20 11:44:13 2020
NAMESPACE: wccns
STATUS: deployed
REVISION: 1
TEST SUITE: None
Get the details of the services exposed by the above deployed ingress:
$ kubectl describe ingress wccinfra-traefik -n wccns
To confirm that the load balancer noticed the new ingress and is successfully routing to the domain server pods, send a request to the URL of the WebLogic ReadyApp framework, which should return an HTTP 200 status code, as follows:
$ curl -v http://${LOADBALANCER_HOSTNAME}:${LOADBALANCER_PORT}/weblogic/ready
* About to connect() to abc.com port 30305 (#0)
*   Trying 100.111.156.246...
* Connected to abc.com (100.111.156.246) port 30305 (#0)
> GET /weblogic/ready HTTP/1.1
> User-Agent: curl/7.29.0
> Host: domain1.org:30305
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Length: 0
< Date: Thu, 03 Dec 2020 13:16:19 GMT
< Vary: Accept-Encoding
<
* Connection #0 to host abc.com left intact
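A similar readiness check works for the SSL-termination ingress over HTTPS (a sketch; 30443 is the sample websecure NodePort shown later in this section, and -k skips verification of the self-signed certificate):

```bash
$ curl -k https://${LOADBALANCER_HOSTNAME}:30443/weblogic/ready
# Expect an HTTP 200 response with an empty body; add -v to see the status line
```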
End-to-End SSL configuration
Install the Traefik load balancer for end-to-end SSL
Use Helm to install the Traefik (ingress-based) load balancer. For detailed information, see here. Use the values.yaml file in the sample but set kubernetes.namespaces specifically.

$ cd ${WORKDIR}
$ kubectl create namespace traefik
$ helm repo add traefik https://helm.traefik.io/traefik --force-update
Sample output:
"traefik" has been added to your repositories
Install Traefik:
$ cd ${WORKDIR}
$ helm install traefik traefik/traefik \
    --namespace traefik \
    --values charts/traefik/values.yaml \
    --set "kubernetes.namespaces={traefik}" \
    --set "service.type=LoadBalancer" \
    --wait
Sample output:
NAME: traefik
LAST DEPLOYED: Sun Jan 17 23:30:20 2021
NAMESPACE: traefik
STATUS: deployed
REVISION: 1
TEST SUITE: None
Verify the Traefik operator status and find the port number of the SSL and non-SSL services:
$ kubectl get all -n traefik
Sample output:
NAME                                    READY   STATUS    RESTARTS   AGE
pod/traefik-operator-676fc64d9c-skppn   1/1     Running   0          78d

NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/traefik-operator             NodePort    10.109.223.59   <none>        443:30443/TCP,80:30305/TCP   78d
service/traefik-operator-dashboard   ClusterIP   10.110.85.194   <none>        80/TCP                       78d

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/traefik-operator   1/1     1            1           78d

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/traefik-operator-676fc64d9c   1         1         1       78d
replicaset.apps/traefik-operator-cb78c9dc9    0         0         0       78d
Configure Traefik to manage the domain
Configure Traefik to manage the domain application service created in this namespace, where traefik is the Traefik namespace and wccns is the namespace of the domain:
$ helm upgrade traefik traefik/traefik --namespace traefik --reuse-values \
    --set "kubernetes.namespaces={traefik,wccns}"
Sample output:
Release "traefik" has been upgraded. Happy Helming!
NAME: traefik
LAST DEPLOYED: Sun Jan 17 23:43:02 2021
NAMESPACE: traefik
STATUS: deployed
REVISION: 2
TEST SUITE: None
Create IngressRouteTCP
To enable SSL passthrough in Traefik, you can configure a TCP router. A sample YAML for IngressRouteTCP is available at ${WORKDIR}/charts/ingress-per-domain/tls/traefik-tls.yaml.

Note: There is a limitation with the load balancer in end-to-end SSL configuration: accessing multiple types of servers (different Managed Servers and/or the Administration Server) at the same time is currently not supported. You can access only one Managed Server at a time.
The following should be updated in traefik-tls.yaml:
- The service name and the SSL port should be updated in the services.
- The load balancer hostname (DNS name) should be updated in the HostSNI rule.
Sample traefik-tls.yaml:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
name: wcc-ucm-routetcp
namespace: wccns
spec:
entryPoints:
- websecure
routes:
- match: HostSNI(`<Traefik load balancer DNS name>`)
services:
- name: wccinfra-cluster-ucm-cluster
port: 16201
weight: 3
terminationDelay: 400
tls:
passthrough: true
Note: Make sure that you specify the DNS name to point to the Traefik load balancer hostname, or specify ‘*’ to point to the Traefik load balancer IP.
- Create the IngressRouteTCP:
$ cd ${WORKDIR}/charts/ingress-per-domain/tls
$ kubectl apply -f traefik-tls.yaml
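To confirm the TCP router was created and is routing, you can list the resource and probe the endpoint (a sketch; --resolve pins the SNI host to the Traefik load-balancer IP so no DNS change is needed, and the DNS name below is a placeholder):

```bash
# Confirm the IngressRouteTCP object exists
$ kubectl -n wccns get ingressroutetcp wcc-ucm-routetcp

# Probe the UCM endpoint through SSL passthrough
$ curl -vk --resolve <Traefik load balancer DNS name>:443:${TRAEFIK_PUBLIC_IP} \
    https://<Traefik load balancer DNS name>/cs
```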
Create Oracle WebCenter Content domain
With the load balancer configured, create your domain by following the instructions documented in [Create Oracle WebCenter Content domains]({{< relref "/wccontent-domains/oracle-cloud/create-wccontent-domains" >}}), before verifying domain application URL access.
Verify domain application URL access
Verify Non-SSL access
After setting up the Traefik (ingress-based) load balancer, verify that the domain application URLs are accessible through the load balancer port 30305 for HTTP access. The sample URLs for an Oracle WebCenter Content domain of type wcc are:
http://${TRAEFIK_PUBLIC_IP}:30305/weblogic/ready
http://${TRAEFIK_PUBLIC_IP}:30305/cs
http://${TRAEFIK_PUBLIC_IP}:30305/ibr
http://${TRAEFIK_PUBLIC_IP}:30305/imaging
http://${TRAEFIK_PUBLIC_IP}:30305/dc-console
http://${TRAEFIK_PUBLIC_IP}:30305/wcc
Verify SSL termination and end-to-end SSL access
After setting up the Traefik (ingress-based) load balancer, verify that the domain applications are accessible through the SSL load balancer port 30443 for HTTPS access. Here, LOADBALANCER-SSLPORT is 30443. The sample URLs for the Oracle WebCenter Content domain are:
https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/cs
https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/ibr
https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/imaging
https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/dc-console
https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/wcc
Uninstall Traefik
$ helm delete wcc-traefik-ingress -n wccns
$ helm delete traefik -n traefik
$ kubectl delete namespace traefik
NGINX
This section provides information about how to install and configure the ingress-based NGINX load balancer to load balance Oracle WebCenter Content domain clusters. You can configure NGINX for non-SSL, SSL termination, and end-to-end SSL access of the application URL.
Follow these steps to set up NGINX as a load balancer for an Oracle WebCenter Content domain in a Kubernetes cluster:
See the official installation document for prerequisites.
Contents
To get repository information, enter the following Helm commands:
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ helm repo update
Non-SSL and SSL termination
Install the NGINX load balancer
Deploy the ingress-nginx controller by using Helm on the domain namespace.

For non-SSL, use the following command:
$ helm install nginx-ingress -n wccns \
    --set controller.service.type=LoadBalancer \
    --set controller.admissionWebhooks.enabled=false \
    ingress-nginx/ingress-nginx
For SSL termination at the load balancer, use the following command:
$ helm install nginx-ingress -n wccns \
    --set controller.service.type=LoadBalancer \
    --set controller.admissionWebhooks.enabled=false \
    --set controller.extraArgs.default-ssl-certificate="wccns/domain1-tls-cert" \
    ingress-nginx/ingress-nginx
Sample output:
NAME: nginx-ingress
LAST DEPLOYED: Fri Jul 29 00:14:19 2022
NAMESPACE: wccns
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
Get the application URL by running these commands:
export HTTP_NODE_PORT=$(kubectl --namespace wccns get services -o jsonpath="{.spec.ports[0].nodePort}" nginx-ingress-ingress-nginx-controller)
export HTTPS_NODE_PORT=$(kubectl --namespace wccns get services -o jsonpath="{.spec.ports[1].nodePort}" nginx-ingress-ingress-nginx-controller)
export NODE_IP=$(kubectl --namespace wccns get nodes -o jsonpath="{.items[0].status.addresses[1].address}")
echo "Visit http://$NODE_IP:$HTTP_NODE_PORT to access your application via HTTP."
echo "Visit https://$NODE_IP:$HTTPS_NODE_PORT to access your application via HTTPS."
An example Ingress that makes use of the controller:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: example
namespace: foo
spec:
ingressClassName: nginx
rules:
- host: www.example.com
http:
paths:
- pathType: Prefix
backend:
service:
name: exampleService
port:
number: 80
path: /
# This section is only required if TLS is to be enabled for the Ingress
tls:
- hosts:
- www.example.com
secretName: example-tls
If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:
apiVersion: v1
kind: Secret
metadata:
name: example-tls
namespace: foo
data:
tls.crt: <base64 encoded cert>
tls.key: <base64 encoded key>
type: kubernetes.io/tls
Check the status of the deployed ingress controller:
Note the EXTERNAL-IP of the nginx-ingress-controller service. This is the public IP address of the load balancer that you will use to access the WebLogic Server Administration Console and WebCenter Content URLs.

Note: It may take a few minutes for the LoadBalancer IP (EXTERNAL-IP) to be available.
$ kubectl --namespace wccns get services | grep ingress-nginx-controller
Sample output:
NAME                                      TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)
nginx-ingress-ingress-nginx-controller   LoadBalancer   10.96.180.215   144.24.xx.xx   80:31339/TCP,443:32278/TCP
To print only the NGINX EXTERNAL-IP, execute this command:
$ NGINX_PUBLIC_IP=`kubectl describe svc nginx-ingress-ingress-nginx-controller --namespace wccns | grep Ingress | awk '{print $3}'`
$ echo $NGINX_PUBLIC_IP
144.24.xx.xx
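Equivalently, a jsonpath query avoids the grep/awk parsing (a minimal sketch):

```bash
$ NGINX_PUBLIC_IP=$(kubectl -n wccns get svc nginx-ingress-ingress-nginx-controller \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ echo $NGINX_PUBLIC_IP
```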
Verify the helm charts:
$ helm list -A
NAME            NAMESPACE   REVISION   UPDATED      STATUS     CHART                 APP VERSION
nginx-ingress   wccns       1          2022-05-13   deployed   ingress-nginx-4.2.5   1.3.1
Configure NGINX to manage ingresses
Create an ingress for the domain in the domain namespace by using the sample Helm chart. Here path-based routing is used for ingress. Sample values for the default configuration are shown in the file ${WORKDIR}/charts/ingress-per-domain/values.yaml. By default, type is TRAEFIK, tls is Non-SSL, and domainType is wccinfra. These values can be overridden by passing values through the command line or can be edited in the sample file values.yaml. If needed, you can update the ingress YAML file to define more path rules (in section spec.rules.host.http.paths) based on the domain application URLs that need to be accessed. Update the template YAML file for the NGINX load balancer located at ${WORKDIR}/charts/ingress-per-domain/templates/nginx-ingress.yaml
Install ingress-per-domain using Helm for non-SSL configuration:

$ export LB_HOSTNAME=<NGINX load balancer DNS name>
# OR leave it empty to point to the NGINX load-balancer IP, by default
$ export LB_HOSTNAME=''
Note: Make sure that you specify the DNS name to point to the NGINX load balancer hostname, or leave it empty to point to the NGINX load balancer IP.
$ cd ${WORKDIR}
$ helm install wccinfra-nginx-ingress charts/ingress-per-domain \
    --namespace wccns \
    --values charts/ingress-per-domain/values.yaml \
    --set "nginx.hostname=$LB_HOSTNAME" \
    --set type=NGINX \
    --set tls=NONSSL
Sample output:
NAME: wccinfra-nginx-ingress
LAST DEPLOYED: Tue May 10 10:37:12 2022
NAMESPACE: wccns
STATUS: deployed
REVISION: 1
TEST SUITE: None
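Once the domain is created (later in this guide), you can confirm routing through this ingress with the ReadyApp probe (illustrative):

```bash
$ curl http://${NGINX_PUBLIC_IP}/weblogic/ready
# Expect an HTTP 200 response with an empty body; add -v to see the status line
```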
Create a certificate and generate a Kubernetes secret
For secured access (SSL) to the Oracle WebCenter Content application, create a certificate:
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/tls1.key -out /tmp/tls1.crt \
    -subj "/CN=<NGINX load balancer DNS name>" \
    -extensions san -config \
    <(echo "[req]"; echo distinguished_name=req; echo "[san]"; echo subjectAltName=IP:$NGINX_PUBLIC_IP )

# OR use the following command if you chose to leave LB_HOSTNAME empty in the previous step
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/tls1.key -out /tmp/tls1.crt \
    -subj "/CN=*" \
    -extensions san -config \
    <(echo "[req]"; echo distinguished_name=req; echo "[san]"; echo subjectAltName=IP:$NGINX_PUBLIC_IP )
Note: Make sure that you specify the DNS name to point to the NGINX load balancer hostname.
Generate a Kubernetes secret:
$ kubectl -n wccns create secret tls domain1-tls-cert --key /tmp/tls1.key --cert /tmp/tls1.crt
Install Ingress for SSL termination configuration
Install ingress-per-domain using Helm for SSL configuration:

$ cd ${WORKDIR}
$ helm install wccinfra-nginx-ingress charts/ingress-per-domain \
    --namespace wccns \
    --values charts/ingress-per-domain/values.yaml \
    --set "nginx.hostname=$LB_HOSTNAME" \
    --set "nginx.hostnameorip=$NGINX_PUBLIC_IP" \
    --set type=NGINX \
    --set tls=SSL
Sample output:
NAME: wccinfra-nginx-ingress
LAST DEPLOYED: Tue May 10 10:37:12 2022
NAMESPACE: wccns
STATUS: deployed
REVISION: 1
TEST SUITE: None
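As with the non-SSL ingress, you can confirm SSL-terminated routing once the domain is up (illustrative; -k accepts the self-signed certificate created above):

```bash
$ curl -k https://${NGINX_PUBLIC_IP}/weblogic/ready
```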
For non-SSL or SSL access to the Oracle WebCenter Content application, get the details of the services exposed by the above deployed ingress:
$ kubectl describe ingress wccinfra-nginx -n wccns
Sample output of the services supported by the above deployed ingress:
Name: wccinfra-nginx
Namespace: wccns
Address: 144.24.xx.xx
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
*
/em wccinfra-adminserver:7001 (10.244.2.117:7001)
/wls-exporter wccinfra-adminserver:7001 (10.244.2.117:7001)
/cs wccinfra-cluster-ucm-cluster:16200 (10.244.2.118:16200,10.244.2.120:16200)
/adfAuthentication wccinfra-cluster-ucm-cluster:16200 (10.244.2.118:16200,10.244.2.120:16200)
/_ocsh wccinfra-cluster-ucm-cluster:16200 (10.244.2.118:16200,10.244.2.120:16200)
/_dav wccinfra-cluster-ucm-cluster:16200 (10.244.2.118:16200,10.244.2.120:16200)
/idcws wccinfra-cluster-ucm-cluster:16200 (10.244.2.118:16200,10.244.2.120:16200)
/idcnativews wccinfra-cluster-ucm-cluster:16200 (10.244.2.118:16200,10.244.2.120:16200)
/wsm-pm wccinfra-cluster-ucm-cluster:16200 (10.244.2.118:16200,10.244.2.120:16200)
/ibr wccinfra-cluster-ibr-cluster:16250 (10.244.2.119:16250)
/ibr/adfAuthentication wccinfra-cluster-ibr-cluster:16250 (10.244.2.119:16250)
/weblogic/ready wccinfra-cluster-ucm-cluster:16200 (10.244.2.118:16200,10.244.2.120:16200)
Annotations:
nginx.ingress.kubernetes.io/affinity-mode: persistent
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/affinity: cookie
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 8m3s (x2 over 8m5s) nginx-ingress-controller Scheduled for sync
End-to-End SSL configuration
Install the NGINX load balancer for end-to-end SSL
For secured access (SSL) to the Oracle WebCenter Content application, create a certificate and generate a Kubernetes secret, as described in Create a certificate and generate a Kubernetes secret above.
Deploy the ingress-nginx controller by using Helm on the domain namespace:
$ helm install nginx-ingress -n wccns \
    --set controller.extraArgs.default-ssl-certificate=wccns/domain1-tls-cert \
    --set controller.service.type=LoadBalancer \
    --set controller.admissionWebhooks.enabled=false \
    --set controller.extraArgs.enable-ssl-passthrough=true \
    ingress-nginx/ingress-nginx
Sample output:
NAME: nginx-ingress
LAST DEPLOYED: Mon Sep 19 11:08:16 2022
NAMESPACE: wccns
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace wccns get services -o wide -w nginx-ingress-ingress-nginx-controller'
An example Ingress that makes use of the controller:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: example
namespace: foo
spec:
ingressClassName: nginx
rules:
- host: www.example.com
http:
paths:
- pathType: Prefix
backend:
service:
name: exampleService
port:
number: 80
path: /
# This section is only required if TLS is to be enabled for the Ingress
tls:
- hosts:
- www.example.com
secretName: example-tls
If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:
apiVersion: v1
kind: Secret
metadata:
name: example-tls
namespace: foo
data:
tls.crt: <base64 encoded cert>
tls.key: <base64 encoded key>
type: kubernetes.io/tls
Check the status of the deployed ingress controller:
$ kubectl --namespace wccns get services | grep ingress-nginx-controller
Sample output:
NAME                                      TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)
nginx-ingress-ingress-nginx-controller   LoadBalancer   10.96.180.215   144.24.xx.xx   80:31339/TCP,443:32278/TCP
To print only the NGINX EXTERNAL-IP, execute this command:
$ NGINX_PUBLIC_IP=`kubectl describe svc nginx-ingress-ingress-nginx-controller --namespace wccns | grep Ingress | awk '{print $3}'`
$ echo $NGINX_PUBLIC_IP
144.24.xx.xx
Deploy tls to access individual Managed Servers
Deploy tls to securely access the services. Only one application can be configured with ssl-passthrough. A sample tls file for NGINX is shown below for the service wccinfra-cluster-ucm-cluster and port 16201. All the applications running on port 16201 can be securely accessed through this ingress. For each backend service, create different ingresses, as NGINX does not support multiple paths/rules with the annotation ssl-passthrough. That is, different ingresses must be created for wccinfra-cluster-ucm-cluster, wccinfra-cluster-ibr-cluster, wccinfra-cluster-ipm-cluster, wccinfra-cluster-capture-cluster, wccinfra-cluster-wccadf-cluster and wccinfra-adminserver.

Note: There is a limitation with the load balancer in end-to-end SSL configuration: accessing multiple types of servers (different Managed Servers and/or the Administration Server) at the same time is currently not supported. You can access only one Managed Server at a time.
$ cd ${WORKDIR}/charts/ingress-per-domain/tls
Sample nginx-ucm-tls.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: wcc-ucm-ingress
namespace: wccns
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
tls:
- hosts:
- '$NGINX_PUBLIC_IP'
secretName: domain1-tls-cert
rules:
- host: '<NGINX load balancer DNS name>'
http:
paths:
- path:
pathType: ImplementationSpecific
backend:
service:
name: wccinfra-cluster-ucm-cluster
port:
number: 16201
Note: Make sure that you specify the DNS name to point to the NGINX load balancer hostname.
Deploy the secured ingress:
$ cd ${WORKDIR}/charts/ingress-per-domain/tls
$ kubectl create -f nginx-ucm-tls.yaml
Check the services supported by the ingress:
$ kubectl describe ingress wcc-ucm-ingress -n wccns
Services supported by the ingress:
Name: wcc-ucm-ingress
Namespace: wccns
Address: 10.102.97.237
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
domain1-tls-cert terminates domain1.org
Rules:
Host Path Backends
---- ---- --------
domain1.org
wccinfra-cluster-ucm-cluster:16201 (10.244.238.136:16201,10.244.253.132:16201)
Annotations: kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-passthrough: true
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 62s (x2 over 106s) nginx-ingress-controller Scheduled for sync
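To confirm end-to-end SSL routing through this ingress, you can probe the UCM endpoint with the SNI host pinned to the NGINX public IP (a sketch; domain1.org stands in for the DNS name used in the ingress rule, as in the sample output above):

```bash
$ curl -vk --resolve domain1.org:443:${NGINX_PUBLIC_IP} https://domain1.org/cs
```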
Deploy tls to access Administration Server
As ssl-passthrough in NGINX works on the clusterIP of the backing service instead of individual endpoints, you must expose the adminserver service created by the WebLogic Kubernetes Operator with a clusterIP. For example:
- Get the name of the Administration Server service:
$ kubectl get svc -n wccns | grep wccinfra-adminserver
Sample output:
wccinfra-adminserver   ClusterIP   None   <none>   7001/TCP,7002/TCP   7
- Expose the Administration Server service wccinfra-adminserver and use the new service name wccinfra-adminserver-nginx-ssl:
$ kubectl expose svc wccinfra-adminserver -n wccns --name=wccinfra-adminserver-nginx-ssl --port=7002
- Deploy the secured ingress:
Sample nginx-admin-tls.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: wcc-admin-ingress
namespace: wccns
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
tls:
- hosts:
- '$NGINX_PUBLIC_IP'
secretName: domain1-tls-cert
rules:
- host: '<NGINX load balancer DNS name>'
http:
paths:
- path:
pathType: ImplementationSpecific
backend:
service:
name: wccinfra-adminserver-nginx-ssl
port:
number: 7002
Note: Make sure that you specify DNS name to point to the NGINX load balancer hostname.
$ cd ${WORKDIR}/charts/ingress-per-domain/tls
$ kubectl create -f nginx-admin-tls.yaml
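A quick check that both the exposed service and the new ingress exist (illustrative):

```bash
$ kubectl get svc wccinfra-adminserver-nginx-ssl -n wccns
$ kubectl get ingress wcc-admin-ingress -n wccns
```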
Uninstall ingress-nginx tls
$ cd ${WORKDIR}/charts/ingress-per-domain/tls
$ kubectl delete -f nginx-ucm-tls.yaml
Create Oracle WebCenter Content domain
With the load balancer configured, create your domain by following the instructions documented in [Create Oracle WebCenter Content domains]({{< relref "/wccontent-domains/oracle-cloud/create-wccontent-domains" >}}), before verifying domain application URL access.
Verify domain application URL access
Verify Non-SSL access
Verify that the Oracle WebCenter Content domain application URLs are accessible through the LOADBALANCER_HOSTNAME:

http://${LOADBALANCER_HOSTNAME}/weblogic/ready
http://${LOADBALANCER_HOSTNAME}/em
http://${LOADBALANCER_HOSTNAME}/cs
http://${LOADBALANCER_HOSTNAME}/ibr
http://${LOADBALANCER_HOSTNAME}/imaging
http://${LOADBALANCER_HOSTNAME}/dc-console
http://${LOADBALANCER_HOSTNAME}/wcc
Verify SSL termination and end-to-end SSL access
Verify that the Oracle WebCenter Content domain application URLs are accessible through the LOADBALANCER_HOSTNAME:

https://${LOADBALANCER_HOSTNAME}/weblogic/ready
https://${LOADBALANCER_HOSTNAME}/em
https://${LOADBALANCER_HOSTNAME}/cs
https://${LOADBALANCER_HOSTNAME}/ibr
https://${LOADBALANCER_HOSTNAME}/imaging
https://${LOADBALANCER_HOSTNAME}/dc-console
https://${LOADBALANCER_HOSTNAME}/wcc
Uninstall NGINX
Uninstall and delete the ingress-nginx deployment:

# Uninstall and delete the ingress-per-domain deployment
$ helm delete wccinfra-nginx-ingress -n wccns

# Uninstall NGINX
$ helm delete nginx-ingress -n wccns
Create Oracle WebCenter Content domain
Contents
- Run the create domain script
- Run the managed-server-wrapper script
- Verify the results
- Verify the pods
- Verify the services
- Expose service for IBR intradoc port
- Expose service for UCM intradoc port
Run the create domain script
Run the create domain script, specifying your inputs file and an output directory to store the generated artifacts:
$ cd ${WORKDIR}/create-wcc-domain/domain-home-on-pv/
$ ./create-domain.sh \
-i create-domain-inputs.yaml \
-o <path to output-directory>
The script will perform the following steps:
- Create a directory for the generated Kubernetes YAML files for this domain if it does not already exist. The path name is <path to output-directory>/weblogic-domains/<domainUID>. If the directory already exists, its contents must be removed before using this script.
- Create a Kubernetes job that will start up a utility Oracle WebCenter Content container and run offline WLST scripts to create the domain on the shared storage.
- Run and wait for the job to finish.
- Create a Kubernetes domain YAML file, domain.yaml, in the “output” directory that was created above. This YAML file can be used to create the Kubernetes resource using the kubectl create -f or kubectl apply -f command, as shown after this list.
Run the managed-server-wrapper script
Run the oke-start-managed-servers-wrapper.sh script, which internally applies the domain YAML. This script also applies initial configurations for Managed Server containers and readies Managed Servers for future inter-container communications.
$ cd ${WORKDIR}/create-wcc-domain/domain-home-on-pv/
$ ./oke-start-managed-servers-wrapper.sh -o <path_to_output_directory> -l <load_balancer_external_ip> -p <load_balancer_port> -s <ssl_termination>
Note: A value for the parameter -s needs to be provided only if SSL termination at the load balancer is being used; the acceptable value is either true or false. If this parameter value is not supplied, the script assumes that SSL termination at the load balancer is not being used and the value defaults to false.
Run the startup configuration scripts for IPM and WCCADF applications as applicable
Run the script configure-ipm-connection.sh to do startup configurations if IPM is enabled.
$ cd ${WORKDIR}/create-wcc-domain/domain-home-on-pv/
$ ./configure-ipm-connection.sh -l <load_balancer_external_ip> -p <load_balancer_port> -s <ssl_or_ssl_termination>
Run the script configure-wccadf-domain.sh to do startup configurations if ADFUI is enabled.
$ cd ${WORKDIR}/create-wcc-domain/domain-home-on-pv/
$ ./configure-wccadf-domain.sh -n <node_ip> -m <ucm_node_port>
Patch the domain for the changes to take effect:
#STOP
$ kubectl patch domain DOMAINUID -n NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/serverStartPolicy", "value": "NEVER" }]'
sleep 2m
#START
$ kubectl patch domain DOMAINUID -n NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/serverStartPolicy", "value": "IF_NEEDED" }]'
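For example, with the sample domainUID wccinfra and namespace wccns used throughout this guide, the stop/start cycle looks like this:

```bash
$ kubectl patch domain wccinfra -n wccns --type='json' \
    -p='[{"op": "replace", "path": "/spec/serverStartPolicy", "value": "NEVER" }]'
$ sleep 2m
$ kubectl patch domain wccinfra -n wccns --type='json' \
    -p='[{"op": "replace", "path": "/spec/serverStartPolicy", "value": "IF_NEEDED" }]'
```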
Verify the results
The create domain script will verify that the domain was created, and will report failure if there was any error. However, it may be desirable to manually verify the domain, even if just to gain familiarity with the various Kubernetes objects that were created by the script.
Generated YAML files with the default inputs
Sample content of the generated domain.yaml:
$ cat output/weblogic-domains/wccinfra/domain.yaml
# Copyright (c) 2017, 2021, Oracle and/or its affiliates.
# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.
#
# This is an example of how to define a Domain resource.
#
apiVersion: "weblogic.oracle/v8"
kind: Domain
metadata:
name: wccinfra
namespace: wccns
labels:
weblogic.domainUID: wccinfra
spec:
# The WebLogic Domain Home
domainHome: /u01/oracle/user_projects/domains/wccinfra
maxClusterConcurrentStartup: 1
# The domain home source type
# Set to PersistentVolume for domain-in-pv, Image for domain-in-image, or FromModel for model-in-image
domainHomeSourceType: PersistentVolume
# The WebLogic Server image that the WebLogic Kubernetes Operator uses to start the domain
  image: "phx.ocir.io/xxxxxxxxxx/oracle/wccontent:x.x.x.x"
# imagePullPolicy defaults to "Always" if image version is :latest
imagePullPolicy: "IfNotPresent"
# Identify which Secret contains the credentials for pulling an image
imagePullSecrets:
- name: image-secret
# Identify which Secret contains the WebLogic Admin credentials (note that there is an example of
# how to create that Secret at the end of this file)
webLogicCredentialsSecret:
name: wccinfra-domain-credentials
# Whether to include the server out file into the pod's stdout, default is true
includeServerOutInPodLog: true
# Whether to enable log home
logHomeEnabled: true
# Whether to write HTTP access log file to log home
httpAccessLogInLogHome: true
# The in-pod location for domain log, server logs, server out, introspector out, and Node Manager log files
logHome: /u01/oracle/user_projects/domains/logs/wccinfra
# An (optional) in-pod location for data storage of default and custom file stores.
# If not specified or the value is either not set or empty (e.g. dataHome: "") then the
# data storage directories are determined from the WebLogic domain home configuration.
dataHome: ""
# serverStartPolicy legal values are "NEVER", "IF_NEEDED", or "ADMIN_ONLY"
# This determines which WebLogic Servers the WebLogic Kubernetes Operator will start up when it discovers this Domain
# - "NEVER" will not start any server in the domain
# - "ADMIN_ONLY" will start up only the administration server (no managed servers will be started)
# - "IF_NEEDED" will start all non-clustered servers, including the administration server and clustered servers up to the replica count
serverStartPolicy: "IF_NEEDED"
serverPod:
# an (optional) list of environment variable to be set on the servers
env:
- name: JAVA_OPTIONS
value: "-Dweblogic.StdoutDebugEnabled=false"
- name: USER_MEM_ARGS
value: "-Djava.security.egd=file:/dev/./urandom -Xms256m -Xmx1024m "
volumes:
- name: weblogic-domain-storage-volume
persistentVolumeClaim:
claimName: wccinfra-domain-pvc
volumeMounts:
- mountPath: /u01/oracle/user_projects/domains
name: weblogic-domain-storage-volume
# adminServer is used to configure the desired behavior for starting the administration server.
adminServer:
# serverStartState legal values are "RUNNING" or "ADMIN"
# "RUNNING" means the listed server will be started up to "RUNNING" mode
# "ADMIN" means the listed server will be start up to "ADMIN" mode
serverStartState: "RUNNING"
# adminService:
# channels:
# The Admin Server's NodePort
# - channelName: default
# nodePort: 30701
# Uncomment to export the T3Channel as a service
# - channelName: T3Channel
# clusters is used to configure the desired behavior for starting member servers of a cluster.
# If you use this entry, then the rules will be applied to ALL servers that are members of the named clusters.
clusters:
- clusterName: ibr_cluster
serverService:
precreateService: true
serverStartState: "RUNNING"
serverPod:
# Instructs Kubernetes scheduler to prefer nodes for new cluster members where there are not
# already members of the same cluster.
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: "weblogic.clusterName"
operator: In
values:
- $(CLUSTER_NAME)
topologyKey: "kubernetes.io/hostname"
replicas: 1
# The number of managed servers to start for unlisted clusters
# replicas: 1
# Istio
# configuration:
# istio:
# enabled:
# readinessPort:
- clusterName: ucm_cluster
clusterService:
annotations:
traefik.ingress.kubernetes.io/affinity: "true"
traefik.ingress.kubernetes.io/service.sticky.cookie: "true"
traefik.ingress.kubernetes.io/session-cookie-name: JSESSIONID
serverService:
precreateService: true
serverStartState: "RUNNING"
serverPod:
# Instructs Kubernetes scheduler to prefer nodes for new cluster members where there are not
# already members of the same cluster.
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: "weblogic.clusterName"
operator: In
values:
- $(CLUSTER_NAME)
topologyKey: "kubernetes.io/hostname"
replicas: 3
# The number of managed servers to start for unlisted clusters
# replicas: 1
- clusterName: ipm_cluster
clusterService:
annotations:
traefik.ingress.kubernetes.io/affinity: "true"
traefik.ingress.kubernetes.io/service.sticky.cookie: "true"
traefik.ingress.kubernetes.io/session-cookie-name: JSESSIONID
serverService:
precreateService: true
serverStartState: "RUNNING"
serverPod:
# Instructs Kubernetes scheduler to prefer nodes for new cluster members where there are not
# already members of the same cluster.
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: "weblogic.clusterName"
operator: In
values:
- $(CLUSTER_NAME)
topologyKey: "kubernetes.io/hostname"
replicas: 3
# The number of managed servers to start for unlisted clusters
# replicas: 1
- clusterName: capture_cluster
clusterService:
annotations:
traefik.ingress.kubernetes.io/affinity: "true"
traefik.ingress.kubernetes.io/service.sticky.cookie: "true"
traefik.ingress.kubernetes.io/session-cookie-name: JSESSIONID
serverService:
precreateService: true
serverStartState: "RUNNING"
serverPod:
# Instructs Kubernetes scheduler to prefer nodes for new cluster members where there are not
# already members of the same cluster.
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: "weblogic.clusterName"
operator: In
values:
- $(CLUSTER_NAME)
topologyKey: "kubernetes.io/hostname"
replicas: 3
# The number of managed servers to start for unlisted clusters
# replicas: 1
- clusterName: wccadf_cluster
clusterService:
annotations:
traefik.ingress.kubernetes.io/affinity: "true"
traefik.ingress.kubernetes.io/service.sticky.cookie: "true"
traefik.ingress.kubernetes.io/session-cookie-name: WCCSID
serverService:
precreateService: true
serverStartState: "RUNNING"
serverPod:
# Instructs Kubernetes scheduler to prefer nodes for new cluster members where there are not
# already members of the same cluster.
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: "weblogic.clusterName"
operator: In
values:
- $(CLUSTER_NAME)
topologyKey: "kubernetes.io/hostname"
replicas: 3
# The number of managed servers to start for unlisted clusters
# replicas: 1
Verify the domain
To confirm that the domain was created, enter the following command:
$ kubectl describe domain DOMAINUID -n NAMESPACE
Replace DOMAINUID with the domainUID and NAMESPACE with the actual namespace.
Sample domain description:
[opc@bastionhost domain-home-on-pv]$ kubectl describe domain wccinfra -n wccns
Name: wccinfra
Namespace: wccns
Labels: weblogic.domainUID=wccinfra
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"weblogic.oracle/v8","kind":"Domain","metadata":{"annotations":{},"labels":{"weblogic.domainUID":"wccinfra"},"name":"wccinfr...
API Version: weblogic.oracle/v8
Kind: Domain
Metadata:
Creation Timestamp: 2021-08-24T12:26:19Z
Generation: 33
Managed Fields:
API Version: weblogic.oracle/v8
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:kubectl.kubernetes.io/last-applied-configuration:
f:labels:
.:
f:weblogic.domainUID:
Manager: kubectl
Operation: Update
Time: 2021-09-30T10:56:07Z
API Version: weblogic.oracle/v8
Fields Type: FieldsV1
fieldsV1:
f:status:
.:
f:clusters:
f:conditions:
f:introspectJobFailureCount:
f:servers:
f:startTime:
Manager: Kubernetes Java Client
Operation: Update
Time: 2021-10-04T20:06:17Z
Resource Version: 115422662
Self Link: /apis/weblogic.oracle/v8/namespaces/wccns/domains/wccinfra
UID: e283c968-b80b-404b-aa1e-711080d7cc38
Spec:
Admin Server:
Server Start State: RUNNING
Clusters:
Cluster Name: ibr_cluster
Replicas: 1
Server Pod:
Affinity:
Pod Anti Affinity:
Preferred During Scheduling Ignored During Execution:
Pod Affinity Term:
Label Selector:
Match Expressions:
Key: weblogic.clusterName
Operator: In
Values:
$(CLUSTER_NAME)
Topology Key: kubernetes.io/hostname
Weight: 100
Server Service:
Precreate Service: true
Server Start State: RUNNING
Cluster Name: ucm_cluster
Cluster Service:
Annotations:
traefik.ingress.kubernetes.io/affinity: true
traefik.ingress.kubernetes.io/service.sticky.cookie: true
traefik.ingress.kubernetes.io/session-cookie-name: JSESSIONID
Replicas: 3
Server Pod:
Affinity:
Pod Anti Affinity:
Preferred During Scheduling Ignored During Execution:
Pod Affinity Term:
Label Selector:
Match Expressions:
Key: weblogic.clusterName
Operator: In
Values:
$(CLUSTER_NAME)
Topology Key: kubernetes.io/hostname
Weight: 100
Server Service:
Precreate Service: true
Server Start State: RUNNING
Cluster Name: ipm_cluster
Cluster Service:
Annotations:
traefik.ingress.kubernetes.io/affinity: true
traefik.ingress.kubernetes.io/service.sticky.cookie: true
traefik.ingress.kubernetes.io/session-cookie-name: JSESSIONID
Replicas: 3
Server Pod:
Affinity:
Pod Anti Affinity:
Preferred During Scheduling Ignored During Execution:
Pod Affinity Term:
Label Selector:
Match Expressions:
Key: weblogic.clusterName
Operator: In
Values:
$(CLUSTER_NAME)
Topology Key: kubernetes.io/hostname
Weight: 100
Server Service:
Precreate Service: true
Server Start State: RUNNING
Cluster Name: capture_cluster
Cluster Service:
Annotations:
traefik.ingress.kubernetes.io/affinity: true
traefik.ingress.kubernetes.io/service.sticky.cookie: true
traefik.ingress.kubernetes.io/session-cookie-name: JSESSIONID
Replicas: 3
Server Pod:
Affinity:
Pod Anti Affinity:
Preferred During Scheduling Ignored During Execution:
Pod Affinity Term:
Label Selector:
Match Expressions:
Key: weblogic.clusterName
Operator: In
Values:
$(CLUSTER_NAME)
Topology Key: kubernetes.io/hostname
Weight: 100
Server Service:
Precreate Service: true
Server Start State: RUNNING
Cluster Name: wccadf_cluster
Cluster Service:
Annotations:
traefik.ingress.kubernetes.io/affinity: true
traefik.ingress.kubernetes.io/service.sticky.cookie: true
traefik.ingress.kubernetes.io/session-cookie-name: WCCSID
Replicas: 3
Server Pod:
Affinity:
Pod Anti Affinity:
Preferred During Scheduling Ignored During Execution:
Pod Affinity Term:
Label Selector:
Match Expressions:
Key: weblogic.clusterName
Operator: In
Values:
$(CLUSTER_NAME)
Topology Key: kubernetes.io/hostname
Weight: 100
Server Service:
Precreate Service: true
Server Start State: RUNNING
Data Home:
Domain Home: /u01/oracle/user_projects/domains/wccinfra
Domain Home Source Type: PersistentVolume
Http Access Log In Log Home: true
Image: phx.ocir.io/xxxxxxxxxx/oracle/wccontent:x.x.x.x
Image Pull Policy: IfNotPresent
Image Pull Secrets:
Name: image-secret
Include Server Out In Pod Log: true
Log Home: /u01/oracle/user_projects/domains/logs/wccinfra
Log Home Enabled: true
Max Cluster Concurrent Startup: 1
Server Pod:
Env:
Name: JAVA_OPTIONS
Value: -Dweblogic.StdoutDebugEnabled=false
Name: USER_MEM_ARGS
Value: -Djava.security.egd=file:/dev/./urandom -Xms256m -Xmx1024m
Volume Mounts:
Mount Path: /u01/oracle/user_projects/domains
Name: weblogic-domain-storage-volume
Volumes:
Name: weblogic-domain-storage-volume
Persistent Volume Claim:
Claim Name: wccinfra-domain-pvc
Server Start Policy: IF_NEEDED
Web Logic Credentials Secret:
Name: wccinfra-domain-credentials
Status:
Clusters:
Cluster Name: ibr_cluster
Maximum Replicas: 5
Minimum Replicas: 0
Ready Replicas: 1
Replicas: 1
Replicas Goal: 1
Cluster Name: ucm_cluster
Maximum Replicas: 5
Minimum Replicas: 0
Ready Replicas: 3
Replicas: 3
Replicas Goal: 3
Cluster Name: ipm_cluster
Maximum Replicas: 5
Minimum Replicas: 0
Ready Replicas: 3
Replicas: 3
Replicas Goal: 3
Cluster Name: capture_cluster
Maximum Replicas: 5
Minimum Replicas: 0
Ready Replicas: 3
Replicas: 3
Replicas Goal: 3
Cluster Name: wccadf_cluster
Maximum Replicas: 5
Minimum Replicas: 0
Ready Replicas: 3
Replicas: 3
Replicas Goal: 3
Conditions:
Last Transition Time: 2021-09-30T11:04:35.889547Z
Reason: ServersReady
Status: True
Type: Available
Introspect Job Failure Count: 0
Servers:
Desired State: RUNNING
Health:
Activation Time: 2021-09-30T10:58:38.381000Z
Overall Health: ok
Subsystems:
Subsystem Name: ServerRuntime
Symptoms:
Node Name: 10.0.10.135
Server Name: adminserver
State: RUNNING
Cluster Name: ibr_cluster
Desired State: RUNNING
Health:
Activation Time: 2021-09-30T11:01:09.987000Z
Overall Health: ok
Subsystems:
Subsystem Name: ServerRuntime
Symptoms:
Node Name: 10.0.10.135
Server Name: ibr_server1
State: RUNNING
Cluster Name: ibr_cluster
Desired State: SHUTDOWN
Server Name: ibr_server2
Cluster Name: ibr_cluster
Desired State: SHUTDOWN
Server Name: ibr_server3
Cluster Name: ibr_cluster
Desired State: SHUTDOWN
Server Name: ibr_server4
Cluster Name: ibr_cluster
Desired State: SHUTDOWN
Server Name: ibr_server5
Cluster Name: ucm_cluster
Desired State: RUNNING
Health:
Activation Time: 2021-09-30T11:00:36.369000Z
Overall Health: ok
Subsystems:
Subsystem Name: ServerRuntime
Symptoms:
Node Name: 10.0.10.142
Server Name: ucm-server1
State: RUNNING
Cluster Name: ucm_cluster
Desired State: RUNNING
Health:
Activation Time: 2021-09-30T11:02:35.448000Z
Overall Health: ok
Subsystems:
Subsystem Name: ServerRuntime
Symptoms:
Node Name: 10.0.10.135
Server Name: ucm-server2
State: RUNNING
Cluster Name: ucm_cluster
Desired State: RUNNING
Health:
Activation Time: 2021-09-30T11:04:32.314000Z
Overall Health: ok
Subsystems:
Subsystem Name: ServerRuntime
Symptoms:
Node Name: 10.0.10.142
Server Name: ucm-server3
State: RUNNING
Cluster Name: ucm_cluster
Desired State: SHUTDOWN
Server Name: ucm-server4
Cluster Name: ucm_cluster
Desired State: SHUTDOWN
Server Name: ucm-server5
Cluster Name: ipm_cluster
Desired State: RUNNING
Health:
Activation Time: 2021-09-30T11:04:32.314000Z
Overall Health: ok
Subsystems:
Subsystem Name: ServerRuntime
Symptoms:
Node Name: MyNodeName
Server Name: ipm_server1
State: RUNNING
Cluster Name: ipm_cluster
Desired State: SHUTDOWN
Server Name: ipm_server2
Cluster Name: ipm_cluster
Desired State: SHUTDOWN
Server Name: ipm_server3
Cluster Name: ipm_cluster
Desired State: SHUTDOWN
Server Name: ipm_server4
Cluster Name: ipm_cluster
Desired State: SHUTDOWN
Server Name: ipm_server5
Cluster Name: capture_cluster
Desired State: RUNNING
Health:
Activation Time: 2021-09-30T11:04:32.314000Z
Overall Health: ok
Subsystems:
Subsystem Name: ServerRuntime
Symptoms:
Node Name: MyNodeName
Server Name: capture_server1
State: RUNNING
Cluster Name: capture_cluster
Desired State: SHUTDOWN
Server Name: capture_server2
Cluster Name: capture_cluster
Desired State: SHUTDOWN
Server Name: capture_server3
Cluster Name: capture_cluster
Desired State: SHUTDOWN
Server Name: capture_server4
Cluster Name: capture_cluster
Desired State: SHUTDOWN
Server Name: capture_server5
Cluster Name: wccadf_cluster
Desired State: RUNNING
Health:
Activation Time: 2021-09-30T11:04:32.314000Z
Overall Health: ok
Subsystems:
Subsystem Name: ServerRuntime
Symptoms:
Node Name: MyNodeName
Server Name: wccadf_server1
State: RUNNING
Cluster Name: wccadf_cluster
Desired State: SHUTDOWN
Server Name: wccadf_server2
Cluster Name: wccadf_cluster
Desired State: SHUTDOWN
Server Name: wccadf_server3
Cluster Name: wccadf_cluster
Desired State: SHUTDOWN
Server Name: wccadf_server4
Cluster Name: wccadf_cluster
Desired State: SHUTDOWN
Server Name: wccadf_server5
Start Time: 2021-08-24T12:26:20.033714Z
Events: <none>
In the Status section of the output, the available servers and clusters are listed. Note that if this command is issued soon after the script finishes, there may be no servers available yet, or perhaps only the Administration Server but no Managed Servers. The WebLogic Kubernetes Operator will start up the Administration Server first and wait for it to become ready before starting the Managed Servers.
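To follow the startup as it happens, you can watch the pods until all expected servers report Running (a convenience, not required):

```bash
$ kubectl get pods -n wccns -w
```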
Verify the pods
Enter the following command to see the pods running the servers:
$ kubectl get pods -n NAMESPACE
Here is an example of the output of this command. You can verify that the Administration Server and the Managed Servers for the ucm, ibr, ipm, capture, and wccadf clusters are running.
$ kubectl get pod -n wccns
NAME READY STATUS RESTARTS AGE
rcu 1/1 Running 0 54d
wccinfra-adminserver 1/1 Running 0 18d
wccinfra-create-fmw-infra-sample-domain-job-xqnn4 0/1 Completed 0 54d
wccinfra-ibr-server1 1/1 Running 0 18d
wccinfra-ucm-server1 1/1 Running 0 18d
wccinfra-ucm-server2 1/1 Running 0 18d
wccinfra-ucm-server3 1/1 Running 0 18d
wccinfra-ipm-server1 1/1 Running 0 18d
wccinfra-ipm-server2 1/1 Running 0 18d
wccinfra-ipm-server3 1/1 Running 0 18d
wccinfra-capture-server1 1/1 Running 0 18d
wccinfra-capture-server2 1/1 Running 0 18d
wccinfra-capture-server3 1/1 Running 0 18d
wccinfra-wccadf-server1 1/1 Running 0 18d
wccinfra-wccadf-server2 1/1 Running 0 18d
wccinfra-wccadf-server3 1/1 Running 0 18d
Verify the services
Enter the following command to see the services for the domain:
$ kubectl get services -n NAMESPACE
Here is an example of the output of this command.
Sample list of services:
$ kubectl get services -n wccns
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
oracle-db LoadBalancer 10.96.4.194 141.148.xxx.xxx 1521:30011/TCP 15d
wccinfra-adminserver ClusterIP None <none> 7001/TCP 43h
wccinfra-capture-server1 ClusterIP None <none> 16400/TCP 43h
wccinfra-capture-server2 ClusterIP None <none> 16400/TCP 43h
wccinfra-capture-server3 ClusterIP None <none> 16400/TCP 43h
wccinfra-capture-server4 ClusterIP 10.96.162.97 <none> 16400/TCP 43h
wccinfra-capture-server5 ClusterIP 10.96.86.213 <none> 16400/TCP 43h
wccinfra-cluster-capture-cluster ClusterIP 10.96.107.96 <none> 16400/TCP 2d13h
wccinfra-cluster-ibr-cluster ClusterIP 10.96.123.229 <none> 16250/TCP 2d13h
wccinfra-cluster-ipm-cluster ClusterIP 10.96.130.117 <none> 16000/TCP 2d13h
wccinfra-cluster-ucm-cluster ClusterIP 10.96.24.88 <none> 16200/TCP 119s
wccinfra-cluster-wccadf-cluster ClusterIP 10.96.11.113 <none> 16225/TCP 2d13h
wccinfra-ibr-server1 ClusterIP None <none> 16250/TCP 43h
wccinfra-ibr-server2 ClusterIP 10.96.57.47 <none> 16250/TCP 43h
wccinfra-ibr-server3 ClusterIP 10.96.75.252 <none> 16250/TCP 43h
wccinfra-ibr-server4 ClusterIP 10.96.120.224 <none> 16250/TCP 43h
wccinfra-ibr-server5 ClusterIP 10.96.34.58 <none> 16250/TCP 43h
wccinfra-ipm-server1 ClusterIP None <none> 16000/TCP 43h
wccinfra-ipm-server2 ClusterIP None <none> 16000/TCP 43h
wccinfra-ipm-server3 ClusterIP None <none> 16000/TCP 43h
wccinfra-ipm-server4 ClusterIP 10.96.44.8 <none> 16000/TCP 43h
wccinfra-ipm-server5 ClusterIP 10.96.77.81 <none> 16000/TCP 43h
wccinfra-ucm-server1 ClusterIP None <none> 16200/TCP 43h
wccinfra-ucm-server2 ClusterIP None <none> 16200/TCP 43h
wccinfra-ucm-server3 ClusterIP None <none> 16200/TCP 43h
wccinfra-ucm-server4 ClusterIP 10.96.132.1 <none> 16200/TCP 43h
wccinfra-ucm-server5 ClusterIP 10.96.199.161 <none> 16200/TCP 43h
wccinfra-wccadf-server1 ClusterIP None <none> 16225/TCP 43h
wccinfra-wccadf-server2 ClusterIP None <none> 16225/TCP 43h
wccinfra-wccadf-server3 ClusterIP None <none> 16225/TCP 43h
wccinfra-wccadf-server4 ClusterIP 10.96.156.42 <none> 16225/TCP 43h
wccinfra-wccadf-server5 ClusterIP 10.96.194.175 <none> 16225/TCP 43h
Expose service for IBR intradoc port
Get the IP address of the node hosting the ibr Managed Server pod. In this sample, the node running the wccinfra-ibr-server1 pod has IP ‘10.0.10.xx’.
$ kubectl get pods -n wccns -o wide

#output
NAME                                                READY   STATUS      RESTARTS   AGE     IP             NODE          NOMINATED NODE   READINESS GATES
wccinfra-adminserver                                1/1     Running     0          4h50m   10.244.0.150   10.0.10.xxx   <none>           <none>
wccinfra-create-fmw-infra-sample-domain-job-zbsxr   0/1     Completed   0          7d22h   10.244.1.25    10.0.10.xx    <none>           <none>
wccinfra-ibr-server1                                1/1     Running     0          4h48m   10.244.1.38    10.0.10.xx    <none>           <none>
wccinfra-ucm-server1                                1/1     Running     0          4h48m   10.244.1.39    10.0.10.xx    <none>           <none>
wccinfra-ucm-server2                                1/1     Running     0          4h46m   10.244.0.151   10.0.10.xxx   <none>           <none>
wccinfra-ucm-server3                                1/1     Running     0          4h44m   10.244.1.40    10.0.10.xx    <none>           <none>
Expose the IBR intradoc port as a NodePort.

Note: Choose the NodePort value from a range (default: 30000-32767). In this sample, we have chosen the nodePort value 30555.

$ cd ${WORKDIR}/create-wcc-domain/domain-home-on-pv/
$ kubectl expose service/wccinfra-cluster-ibr-cluster --name wccinfra-cluster-ibr-cluster-ext --port=5555 --type=NodePort -n wccns --dry-run=true -o yaml > wccinfra-cluster-ibr-cluster-ext.yaml
$ sed -i -e '/targetPort:*/a\ \ \ \ nodePort: 30555' wccinfra-cluster-ibr-cluster-ext.yaml
$ kubectl -n wccns apply -f wccinfra-cluster-ibr-cluster-ext.yaml
Verify the ibr service ‘wccinfra-cluster-ibr-cluster-ext’:
$ kubectl get svc -n wccns
NAME                               TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)
wccinfra-cluster-ibr-cluster-ext   NodePort   10.109.247.52   <none>        5555:30555/TCP
Create the outgoing provider by providing the following details, and restart the servers. Provide the NodePort value (30555 in the above sample) as the Server Port:

Server Host Name: <your-ibr-managed-server-node-ip>
Server Port: 30555
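Before configuring the outgoing provider, you can verify that the NodePort is reachable from wherever Content Server will connect (a sketch; nc is one option for a simple TCP probe):

```bash
$ nc -zv <your-ibr-managed-server-node-ip> 30555
```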
Expose service for UCM intradoc port
Get the IP address of the node hosting the ucm Managed Server pod. In this sample, the node running the wccinfra-ucm-server1 pod has IP ‘10.0.10.xx’.
$ kubectl get pods -n wccns -o wide

#output
NAME                                                READY   STATUS      RESTARTS   AGE     IP             NODE          NOMINATED NODE   READINESS GATES
wccinfra-adminserver                                1/1     Running     0          4h50m   10.244.0.150   10.0.10.xxx   <none>           <none>
wccinfra-create-fmw-infra-sample-domain-job-zbsxr   0/1     Completed   0          7d22h   10.244.1.25    10.0.10.xx    <none>           <none>
wccinfra-ibr-server1                                1/1     Running     0          4h48m   10.244.1.38    10.0.10.xx    <none>           <none>
wccinfra-ucm-server1                                1/1     Running     0          4h48m   10.244.1.39    10.0.10.xx    <none>           <none>
wccinfra-ucm-server2                                1/1     Running     0          4h46m   10.244.0.151   10.0.10.xxx   <none>           <none>
wccinfra-ucm-server3                                1/1     Running     0          4h44m   10.244.1.40    10.0.10.xx    <none>           <none>
Expose the UCM intradoc port as a NodePort.

Note: Choose the NodePort value from a range (default: 30000-32767). In this sample, we have chosen the nodePort value 30444.

$ cd ${WORKDIR}/create-wcc-domain/domain-home-on-pv/
$ kubectl expose service/wccinfra-cluster-ucm-cluster --name wccinfra-cluster-ucm-cluster-ext --port=4444 --type=NodePort -n wccns --dry-run=true -o yaml > wccinfra-cluster-ucm-cluster-ext.yaml
$ sed -i -e '/targetPort:*/a\ \ \ \ nodePort: 30444' wccinfra-cluster-ucm-cluster-ext.yaml
$ kubectl -n wccns apply -f wccinfra-cluster-ucm-cluster-ext.yaml
Verify the ucm service ‘wccinfra-cluster-ucm-cluster-ext’:

$ kubectl get svc -n wccns
NAME                               TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)
wccinfra-cluster-ucm-cluster-ext   NodePort   10.109.247.52   <none>        4444:30444/TCP
Configuring Oracle WebCenter Content for Oracle Identity Cloud Service (IDCS)
Contents
- Introduction
- Updating SSL.hostnameVerifier Property
- Configuring IDCS Security Provider
- Configuring Oracle Identity Cloud Integrator Provider
- Setting Up Trust between IDCS and WebLogic
- Creating Admin User in IDCS Admin Console for WebCenter Content
- Managing Group Memberships, Roles, and Accounts
- Configuring WebCenter Content for User Logout
Introduction
This section describes how to configure WebCenter Content for Oracle Identity Cloud Service (IDCS) on OKE. Configuration information is provided in the following sections:
- Updating SSL.hostnameVerifier Property
- Configuring IDCS Security Provider
- Configuring WebCenter Content for User Logout
Updating SSL.hostnameVerifier Property
Updating the SSL.hostnameVerifier property is necessary for the IDCS provider to access IDCS. To update it, do the following:
Stop all the servers in the domain, including the Administration Server and all Managed WebLogic Servers.
Update the SSL.hostnameVerifier property:

Edit the file <DOMAIN_HOME>/bin/setDomainEnv.sh: go to the PV location on the file system and modify the file setDomainEnv.sh (sample: /WCCFS/wccinfra/bin/setDomainEnv.sh), OR

Alternatively, create or modify the file <DOMAIN_HOME>/<domain_name>/bin/setUserOverrides.sh (sample: /WCCFS/wccinfra/bin/setUserOverrides.sh). Add the SSL.hostnameVerifier property for the IDCS Authenticator:

EXTRA_JAVA_PROPERTIES="${EXTRA_JAVA_PROPERTIES} -Dweblogic.security.SSL.hostnameVerifier=weblogic.security.utils.SSLWLSWildcardHostnameVerifier"
export EXTRA_JAVA_PROPERTIES
Start the Administration server and all Managed WebLogic servers.
Configuring IDCS Security Provider
Log in to the IDCS administration console.
Create a trusted application. In the Add Confidential Application wizard:
- Enter the client name and the description (optional).
- Select the Configure this application as a client now option. To configure this application, expand the Client Configuration in the Configuration tab.
- In the Allowed Grant Types field, select the Client Credentials check box.
- In the Grant the client access to Identity Cloud Service Admin APIs section, click Add to add the APP Roles (application roles). You can add the Identity Domain Administrator role.
- Keep the default settings for the pages and click Finish.
- Record/copy the Client ID and Client Secret. These are needed when you create the IDCS provider.
- Activate the application.
Configuring Oracle Identity Cloud Integrator Provider
To configure the Oracle Identity Cloud Integrator Provider:
- Log in to the WebLogic Server Administration Console.
- Select Security Realms in the Domain Structure pane.
- On the Summary of Security Realms page, select the name of the realm (for example, myrealm). Click myrealm. The Settings for myrealm page appears.
- On the Settings for Realm Name page, select Providers and then Authentication. To create a new Authentication Provider, in the Authentication Providers table, click New.
- In the Create a New Authentication Provider page, enter the name of the authentication provider (for example, IDCSIntegrator), select the OracleIdentityCloudIntegrator type of authentication provider from the drop-down list, and click OK.
- In the Authentication Providers table, click the newly created Oracle Identity Cloud Integrator link, IDCSIntegrator.
- In the Settings for IDCSIntegrator page, for the Control Flag field, select the Sufficient option from the drop-down list. Click Save.
- Go to the Provider Specific page to configure the additional attributes for the security provider. Enter the values for the following fields and click Save:
  - Host
  - Port: 443 (default)
  - Select SSLEnabled
  - Tenant
  - Client Id
  - Client Secret

  NOTE: If the IDCS URL is idcs-abcde.identity.example.com, then the IDCS host would be identity.example.com and the tenant name would be idcs-abcde. Keep the default settings for other sections of the page.
- Select Security Realms, then myrealm, and then Providers. In the Authentication Providers table, click Reorder.
- In the Reorder Authentication Providers page, move IDCSIntegrator to the top and click OK.
- In the Authentication Providers table, click the DefaultAuthenticator link. In the Settings for DefaultAuthenticator page, for the Control Flag field, select the Sufficient option from the drop-down list. Click Save.
- All changes will be activated. Restart the Administration Server.
Setting Up Trust between IDCS and WebLogic
To set up trust between IDCS and WebLogic:

1. Import the certificate into the KSS store. Run this from the Administration Server node.

Get the IDCS certificate:

```bash
echo -n | openssl s_client -showcerts -servername <IDCS host> -connect <IDCS host>:443 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /tmp/idcs_cert_chain.crt

#sample
echo -n | openssl s_client -showcerts -servername xyz.identity.oraclecloud.com -connect idcs-xyz.identity.oraclecloud.com:443 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /tmp/idcs_cert_chain.crt

#copy the certificate inside the admin_pod
kubectl cp /tmp/idcs_cert_chain.crt wccns/xyz-adminserver:/u01/idcs_cert_chain.crt
```
Import the certificate. Run <MIDDLEWARE_HOME>/oracle_common/common/bin/wlst.sh:

connect('weblogic','Welcome_1','t3://<WEBLOGIC_HOST>:7001')
svc=getOpssService(name='KeyStoreService')
svc.importKeyStoreCertificate(appStripe='system',name='trust',password='',alias='idcs_cert_chain',type='TrustedCertificate',filepath='/tmp/idcs_cert_chain.crt',keypassword='')
syncKeyStores(appStripe='system',keystoreFormat='KSS')

#sample
$ ./wlst.sh
wls:/offline> connect('weblogic','welcome','t3://xyz-adminserver:7001')
wls:/wccinfra/serverConfig/> svc=getOpssService(name='KeyStoreService')
wls:/wccinfra/serverConfig/> svc.importKeyStoreCertificate(appStripe='system',name='trust',password='',alias='idcs_cert_chain',type='TrustedCertificate',filepath='/u01/idcs_cert_chain.crt',keypassword='')
wls:/wccinfra/domainRuntime/> syncKeyStores(appStripe='system',keystoreFormat='KSS')
exit()
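To confirm the import before restarting, you can list the trusted-certificate aliases in the same WLST session (a sketch; listKeyStoreAliases is part of the OPSS KeyStoreService commands and should show the idcs_cert_chain alias):

```bash
wls:/wccinfra/domainRuntime/> svc.listKeyStoreAliases(appStripe='system',name='trust',password='',type='TrustedCertificate')
```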
- Restart the Administration server and Managed servers
Creating Admin User in IDCS Administration Console for WebCenter Content
It is important to create the Admin user in IDCS because once the Managed servers are configured for SAML, the domain admin user (typically the weblogic user) will not be able to log in to the Managed servers.
To create WebLogic Admin user in IDCS for WebCenter Content JaxWS connection:
- Go to the Groups tab and create Administrators and sysmanager roles in IDCS.
- Go to the Users tab and create a wls admin user, for example, weblogic and assign it to Administrators and sysmanager groups.
- Restart all the Managed servers.
Managing Group Memberships, Roles, and Accounts
This requires modifying OPSS and libOVD to access IDCS. The following steps are required only if you are using IDCS for user authorization; do not run them if you are using IDCS only for user authentication. Ensure that all the servers (including the Administration Server) are stopped before proceeding with the following steps.

NOTE: Shut down all the servers using the WebLogic Server Administration Console. Keep in mind that the kubectl patch domain command is the recommended way of starting/stopping pods; refrain from using the WebLogic Server Administration Console for this purpose anywhere else.
Run the following script:
#exec into the Administration Server pod
kubectl exec -n wccns -it wccinfra-adminserver -- /bin/bash

#Run wlst.sh
cd /u01/oracle/oracle_common/common/bin/
./wlst.sh
NOTE: It’s not required to connect to WebLogic Administration Server.
Read the domain:
readDomain('<DOMAIN_HOME>')

#sample
wls:/offline> readDomain('/u01/oracle/user_projects/domains/wccinfra')
Add the template:
```bash
addTemplate('<MIDDLEWARE_HOME>/oracle_common/common/templates/wls/oracle.opss_scim_template.jar')

#sample
wls:/offline/wccinfra> addTemplate('/u01/oracle/oracle_common/common/templates/wls/oracle.opss_scim_template.jar')
```
NOTE: This step may throw a deprecation warning, which can be ignored: `addTemplate` is deprecated; use `selectTemplate` followed by `loadTemplates` in place of `addTemplate`.
Update the domain:
```bash
updateDomain()

#sample
wls:/offline/wccinfra> updateDomain()
```
Close the domain:
```bash
closeDomain()

#sample
wls:/offline/wccinfra> closeDomain()
```
Exit from the Administration Server container:
```bash
exit
```
Start the servers (Administration and Managed).
Configuring WebCenter Content for User Logout
By default, if the Logout link is selected, you will simply be re-authenticated by SAML. To make the Logout link work as expected:
Log in to WebCenter Content Server as an administrator. Select Administration, then Admin Server, and then General Configuration.
In the Additional Configuration Variables pane, add the following parameter:
```
EXTRA_JAVA_PROPERTIES="${EXTRA_JAVA_PROPERTIES} -Dweblogic.security.SSL.hostnameVerifier=weblogic.security.utils.SSLWLSWildcardHostnameVerifier"
```
Click Save.
Restart the Administration and Managed servers.
Configure an additional mount or shared space to a domain for Imaging and Capture
A volume can be mounted to a server pod and made directly accessible from outside the Kubernetes cluster, so that an external application can write new files to it.
This is specifically useful for File Imports in the WebCenter Imaging and WebCenter Capture applications.
Kubernetes supports several types of volumes, as described in Volumes | Kubernetes.
In the rest of this section, we take an `nfs` volume as an example.
Mount “nfs” as a volume
Create an NFS file system as described in the section Preparing a file system, or use an existing NFS server.
To use a volume, specify the volumes to provide for the pod in `.spec.volumes`, and declare where to mount those volumes into containers in `.spec.containers[*].volumeMounts`, in the `domain.yaml` file.
Update `domain.yaml` and apply the changes as shown in the sample below, which mounts an NFS server (for example, 100.XXX.XXX.X with shared export path `/sharedir`) into all the server pods at `/u01/sharedir`.
The path `/u01/sharedir` can be configured as the file import path in the WebCenter Imaging and WebCenter Capture applications; files placed in `/sharedir` will then be processed by those applications.
Sample entry of `domain.yaml` with nfs-volume configuration:
```yaml
...
  serverPod:
    # an (optional) list of environment variables to be set on the servers
    env:
    - name: JAVA_OPTIONS
      value: "-Dweblogic.StdoutDebugEnabled=false"
    - name: USER_MEM_ARGS
      value: "-Djava.security.egd=file:/dev/./urandom -Xms256m -Xmx1024m "
    volumes:
    - name: weblogic-domain-storage-volume
      persistentVolumeClaim:
        claimName: wccinfra-domain-pvc
    - name: nfs-volume
      nfs:
        server: 100.XXX.XXX.XXX
        path: /sharedir
    volumeMounts:
    - mountPath: /u01/oracle/user_projects/domains
      name: weblogic-domain-storage-volume
    - mountPath: /u01/sharedir
      name: nfs-volume
...
```
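After applying the updated `domain.yaml`, you can confirm the mount from inside a server pod. A minimal check, assuming a Managed Server pod named `wccinfra-ucm-server1` in the `wccns` namespace:

```bash
# Apply the domain changes; the operator restarts the server pods with the new volume
$ kubectl apply -f domain.yaml

# Verify that the NFS export is mounted at the expected path inside the pod
$ kubectl exec -n wccns wccinfra-ucm-server1 -- sh -c "mount | grep /u01/sharedir && ls -ld /u01/sharedir"
```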
Launch Oracle WebCenter Content Native Applications in Containers deployed in Oracle Cloud Infrastructure
This section provides the steps required to use Oracle WebCenter Content native binaries with user interfaces, from containerized Managed Servers deployed in OCI.
Issue with Launching Headful User Interfaces for Oracle WebCenter Content Native Binaries
Oracle WebCenter Content (UCM) provides a set of native binaries with headful UIs, which are delivered as part of the product container image. WebCenter Content container images are, by default, built on the Oracle slim Linux image, which does not come with all the packages pre-installed to support launching headful applications with UIs. UCM provides many such native binaries that use Java AWT for UI support. With the current Oracle WebCenter Content container images, these native applications fail to run because they are unable to launch their UIs.
The following sections document the solution by providing a set of instructions that enable users to run UCM native applications with UIs.
These instructions are divided into two parts:
Steps to Update out-of-the-box Oracle WebCenter Content Container Image Using WebLogic Image Tool
This section describes how to update the image with the required OS packages using the WebLogic Image Tool. Please refer to this for setting up the WebLogic Image Tool.

#### Additional Build Commands
The required OS packages can be installed in the image by using the `yum` command in the additional build commands option available in the WebLogic Image Tool. Here is a sample `additionalBuildCmds.txt` file to be used to install the required Linux packages (libXext.x86_64, libXrender.x86_64, and libXtst.x86_64):
```
[final-build-commands]
USER root
RUN yum -y --downloaddir=/tmp/imagetool install libXext libXrender libXtst \
 && yum -y --downloaddir=/tmp/imagetool clean all \
 && rm -rf /var/cache/yum/* \
 && rm -rf /tmp/imagetool
USER oracle
```
Note: It is important to change the user to `oracle`, otherwise the user during the container execution will be `root`.

#### Build arguments
The arguments required for updating the image can be passed as a file to the WebLogic Image Tool.

- `update` is the sub-command to the Image Tool for updating an existing docker image.
- `--fromImage` provides the existing docker image that is to be updated.
- `--tag` should be provided with the new tag for the updated image.
- `--additionalBuildCommands` should be provided with the additional build commands file created above.
- `--chown oracle:root` should be provided to update the file permissions.

Below is a sample build-argument (buildArgs) file to be used for updating the image:
```
update
--fromImage <existing_WCContent_image_without_dependent_packages>
--tag <name_of_updated_WCContent_image_to_be_built>
--additionalBuildCommands ./additionalBuildCmds.txt
--chown oracle:root
```
Update Oracle WebCenter Content Container Image
Now we can execute the WebLogic Image Tool to update the out-of-the-box image, using the build-argument file described above:
```bash
$ imagetool @buildArgs
```
WebLogic Image Tool provides multiple options for updating the image. For detailed information on the update options, please refer to this document.
Updating the image does not modify the ‘CMD’ from the source image unless it is modified in the additional build commands.
```bash
$ docker inspect -f '{{.Config.Cmd}}' <name_of_updated_Wccontent_image>
[/u01/oracle/container-scripts/createDomainandStartAdmin.sh]
```
Steps to launch Oracle WebCenter Content native applications using VNC sessions
Once the updated image is successfully built and available on all the required nodes, do the following:
- Update the `domain.yaml` file with the updated image name and apply it:
  ```bash
  $ kubectl apply -f domain.yaml
  ```
- After applying the modified `domain.yaml`, the pods are restarted and run the updated image with the required packages; you can confirm the image in use as shown below:
  ```bash
  $ kubectl get pods -n <namespace_being_used_for_wccontent_domain>
  ```
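  To confirm that a pod is actually running the updated image, you can inspect its image field; a quick sketch, assuming the `wccns` namespace and a UCM server pod named `wccinfra-ucm-server1` (adjust names to your environment):
  ```bash
  # Print the container image used by the UCM server pod
  $ kubectl get pod wccinfra-ucm-server1 -n wccns -o jsonpath='{.spec.containers[0].image}'
  ```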
- Install a VNC server on any one worker node on which a UCM server pod is deployed.
- After starting the vncserver service on the worker node, execute the following command from the Bastion Host to the Private Subnet instance (worker node):
```bash
# The default VNC port is 5900, but that number is incremented according to the configured display number. Thus, display 1 corresponds to 5901, display 2 to 5902, and so on.
$ ssh -i <Workernode_private.key> -L 590<display_number>:localhost:590<display_number> -p 22 -N -f <user>@<Workernode_privateIPAddress>

# Sample command
$ ssh -i <Workernode_private.key> -L 5901:localhost:5901 -p 22 -N -f opc@10.0.10.xx
```
- From your personal client, with the above session open, execute the following command:
```bash
# Use any Linux emulator (like Windows PowerShell on Windows) to run the following command
$ ssh -i <Bastionnode_private.key> -L 590<display_number>:localhost:590<display_number> -p 22 -N -f <user>@<BastionHost_publicIPAddress>

# Sample command
$ ssh -i <Bastionnode_private.key> -L 5901:localhost:5901 -p 22 -N -f opc@129.xxx.249.xxx
```
- Open VNC Client software on your personal client and connect to the worker node's VNC server using `localhost:590<display_number>`.
- Once the VNC session to the worker node is connected, open a terminal and run:
```bash
$ xhost +
```
- Run the following commands from the Bastion Host terminal:
```bash
# Get into the pod's (for example, wccinfra-ucm-server1) shell:
$ kubectl exec -n wccns -it wccinfra-ucm-server1 -- /bin/bash

# Traverse to the Native Binaries' location
$ cd /u01/oracle/user_projects/domains/wccinfra/ucm/cs/bin

# Set the DISPLAY variable within the container
$ export DISPLAY=<Workernode_privateIPAddress where the VNC session was created>:<display_number>

# Sample command
$ export DISPLAY=10.0.10.xx:1

# Launch any native UCM application from within the container, like this:
$ ./SystemProperties
```
- If the application has a UI, it will now be launched in the VNC session connected from your personal client.
Appendix
This section provides information on miscellaneous tasks related to Oracle WebCenter Content domains deployment on Kubernetes.
Domain resource sizing
Describes the resource sizing information for Oracle WebCenter Content domains set up on a Kubernetes cluster.
Oracle WebCenter Content cluster sizing recommendations
Oracle WebCenter Content | Normal Usage | Moderate Usage | High Usage |
---|---|---|---|
Administration Server | No of CPU core(s) : 1, Memory : 4GB | No of CPU core(s) : 1, Memory : 4GB | No of CPU core(s) : 1, Memory : 4GB |
Managed Server | No of Servers : 2, No of CPU core(s) : 2, Memory : 16GB | No of Servers : 2, No of CPU core(s) : 4, Memory : 16GB | No of Servers : 3, No of CPU core(s) : 6, Memory : 16-32GB |
PV Storage | Minimum 250GB | Minimum 250GB | Minimum 500GB |
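These sizing figures map naturally onto Kubernetes resource requests and limits. Below is a hypothetical sketch for the normal-usage Managed Server profile, using the domain resource’s `serverPod.resources` setting in `domain.yaml` (values are illustrative, and per-cluster or per-server overrides are equally possible):

```yaml
spec:
  serverPod:
    resources:
      requests:
        cpu: "2"        # Normal usage: 2 CPU cores per Managed Server
        memory: "16Gi"  # Normal usage: 16GB per Managed Server
      limits:
        cpu: "2"
        memory: "16Gi"
```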
Security hardening
Review resources for Docker and Kubernetes cluster hardening.
Securing a Kubernetes cluster involves hardening on multiple fronts: securing the API server, etcd, nodes, container images, the container runtime, and the cluster network. Apply the principles of defense in depth and least privilege, and minimize the attack surface. Use security tools such as Kube-Bench to verify the cluster’s security posture. Because Kubernetes is evolving rapidly, refer to the Kubernetes Security Overview for the latest information on securing a Kubernetes cluster. Also ensure that the deployed Docker containers follow the Docker Security guidance.
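As an illustration, Kube-Bench can be run as a one-off Kubernetes Job; this sketch assumes your nodes can pull the `aquasec/kube-bench` image and uses the Job manifest published by the kube-bench project (check the project for the current manifest location):

```bash
# Run the CIS Kubernetes Benchmark checks as a Job
$ kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml

# Review the findings once the Job completes
$ kubectl logs job/kube-bench
```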
This section provides references on how to securely configure Docker and Kubernetes.
References
- Docker hardening
- Kubernetes hardening
- Security best practices for Oracle WebLogic Server Running in Docker and Kubernetes
Quick start deployment on-premise
Use this Quick Start to create an Oracle WebCenter Content domain deployment in a Kubernetes cluster (on-premise environments) with WebLogic Kubernetes Operator. Note that this walkthrough is for demonstration purposes only, not for use in production. These instructions assume that you are already familiar with Kubernetes. If you need more detailed instructions, refer to the Install Guide.
Hardware requirements
The supported Linux distributions for deploying and running Oracle WebCenter Content domains with the WebLogic Kubernetes Operator are Oracle Linux 8 and Red Hat Enterprise Linux 8. Refer to the prerequisites for more details.
For this exercise, the minimum hardware requirements to create a single-node Kubernetes cluster and deploy an Oracle WebCenter Content domain with one UCM cluster and one IBR cluster are:
Hardware | Size |
---|---|
RAM | 32GB |
Disk Space | 250GB+ |
CPU core(s) | 6 |
See here for resource sizing information for an Oracle WebCenter Content domain setup on a Kubernetes cluster.
Set up Oracle WebCenter Content in an on-premise environment
Perform the steps in this topic to create a single instance on-premise Kubernetes cluster and create an Oracle WebCenter Content domain which deploys Oracle WebCenter Content Server and Oracle WebCenter Inbound Refinery Server.
- Step 1 - Prepare a virtual machine for the Kubernetes cluster
- Step 2 - Set up a single instance Kubernetes cluster
- Step 3 - Get scripts and images
- Step 4 - Install the WebLogic Kubernetes Operator
- Step 5 - Install the Traefik (ingress-based) load balancer
- Step 6 - Create and configure an Oracle WebCenter Content Domain
1. Prepare a virtual machine for the Kubernetes cluster
For illustration purposes, these instructions are for Oracle Linux 8. If you are using a different flavor of Linux, you will need to adjust the steps accordingly.
Note: These steps must be run with the `root` user, unless specified otherwise. Any time you see `YOUR_USERID` in a command, replace it with your actual `userid`.
1.1 Prerequisites
Choose the directories where your Docker and Kubernetes files will be stored. The Docker directory should be on a disk with a lot of free space (more than 100GB) because it will be used for the Docker file system, which contains all of your images and containers. The Kubernetes directory is used for the `/var/lib/kubelet` file system and persistent volume storage.
```bash
$ export docker_dir=/u01/docker
$ export kubelet_dir=/u01/kubelet
$ mkdir -p $docker_dir $kubelet_dir
$ ln -s $kubelet_dir /var/lib/kubelet
```
Verify that IPv4 forwarding is enabled on your host.
Note: Replace eth0 with the ethernet interface name of your compute resource if it is different.
```bash
$ /sbin/sysctl -a 2>&1 | grep -s 'net.ipv4.conf.docker0.forwarding'
$ /sbin/sysctl -a 2>&1 | grep -s 'net.ipv4.conf.eth0.forwarding'
$ /sbin/sysctl -a 2>&1 | grep -s 'net.ipv4.conf.lo.forwarding'
$ /sbin/sysctl -a 2>&1 | grep -s 'net.ipv4.ip_nonlocal_bind'
```
For example, verify that all are set to 1:
```
net.ipv4.conf.docker0.forwarding = 1
net.ipv4.conf.eth0.forwarding = 1
net.ipv4.conf.lo.forwarding = 1
net.ipv4.ip_nonlocal_bind = 1
```
If any value is not set to 1, set it immediately with the following commands:
```bash
$ /sbin/sysctl net.ipv4.conf.docker0.forwarding=1
$ /sbin/sysctl net.ipv4.conf.eth0.forwarding=1
$ /sbin/sysctl net.ipv4.conf.lo.forwarding=1
$ /sbin/sysctl net.ipv4.ip_nonlocal_bind=1
```
To preserve the settings after a reboot, update the above values to 1 in configuration files under `/usr/lib/sysctl.d/`, `/run/sysctl.d/`, and `/etc/sysctl.d/`, as in the sketch below.
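For example, a small drop-in file can persist these settings (a sketch; the file name is arbitrary, and `docker0`/`eth0` should match your actual interface names):

```bash
# Persist the forwarding settings across reboots
$ cat <<EOF > /etc/sysctl.d/98-ip-forwarding.conf
net.ipv4.conf.docker0.forwarding = 1
net.ipv4.conf.eth0.forwarding = 1
net.ipv4.conf.lo.forwarding = 1
net.ipv4.ip_nonlocal_bind = 1
EOF

# Reload all sysctl configuration files
$ /sbin/sysctl --system
```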
Verify the iptables rule for forwarding.
Kubernetes uses iptables to handle many networking and port forwarding rules. A standard Docker installation may create a firewall rule that prevents forwarding.
Verify if the iptables rule to accept forwarding traffic is set:
```bash
$ /sbin/iptables -L -n | awk '/Chain FORWARD / {print $4}' | tr -d ")"
```
If the output is “DROP”, then run the following command:
```bash
$ /sbin/iptables -P FORWARD ACCEPT
```
Verify that the iptables rule is now set to “ACCEPT”:
```bash
$ /sbin/iptables -L -n | awk '/Chain FORWARD / {print $4}' | tr -d ")"
```
Disable and stop firewalld:
```bash
$ systemctl disable firewalld
$ systemctl stop firewalld
```
1.2 Install CRI-O and Podman
Note: If you have already configured CRI-O and Podman, continue to Install and configure Kubernetes.

Make sure that you have the right operating system version:
```bash
$ uname -a
$ more /etc/oracle-release
```
For example:
```
Linux xxxxxx 5.15.0-100.96.32.el8uek.x86_64 #2 SMP Tue Feb 27 18:08:15 PDT 2024 x86_64 x86_64 x86_64 GNU/Linux
Oracle Linux Server release 8.6
```
Installing CRI-O:
```bash
### Add the OLCNE (Oracle Cloud Native Environment) repository to dnf config-manager. This allows dnf to install the additional packages required for CRI-O installation.
$ dnf config-manager --add-repo https://yum.oracle.com/repo/OracleLinux/OL8/olcne18/x86_64

### Installing cri-o
$ dnf install -y cri-o
```
Note: To install a different version of CRI-O, or to install it on a different operating system, see the CRI-O Installation Instructions.
Start the CRI-O service:
Set up kernel modules and proxies:
```bash
### Enable the kernel modules overlay and br_netfilter, which are required for Kubernetes Container Network Interface (CNI) plugins
$ modprobe overlay
$ modprobe br_netfilter

### To automatically load these modules at system start up, create the config as below
$ cat <<EOF > /etc/modules-load.d/crio.conf
overlay
br_netfilter
EOF
$ sysctl --system

### Set the environment variable CONTAINER_RUNTIME_ENDPOINT to crio.sock to use crio as the container runtime
$ export CONTAINER_RUNTIME_ENDPOINT=unix:///var/run/crio/crio.sock

### Set up the proxy for the CRI-O service
$ cat <<EOF > /etc/sysconfig/crio
http_proxy=http://REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT
https_proxy=http://REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT
HTTPS_PROXY=http://REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT
HTTP_PROXY=http://REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT
no_proxy=localhost,127.0.0.0/8,ADD-YOUR-INTERNAL-NO-PROXY-LIST,/var/run/crio/crio.sock
NO_PROXY=localhost,127.0.0.0/8,ADD-YOUR-INTERNAL-NO-PROXY-LIST,/var/run/crio/crio.sock
EOF
```
Set the runtime for CRI-O:
```bash
### Setting the runtime for crio
## Update crio.conf
$ vi /etc/crio/crio.conf

## Append the following under [crio.runtime]
conmon_cgroup = "kubepods.slice"
cgroup_manager = "systemd"

## Uncomment the following under [crio.network]
network_dir="/etc/cni/net.d"
plugin_dirs=[
    "/opt/cni/bin",
    "/usr/libexec/cni",
]
```
Start the CRI-O Service
```bash
## Restart the crio service
$ systemctl restart crio.service
$ systemctl enable --now crio
```
Installing Podman:
On Oracle Linux 8, if Podman is not available, install Podman and related tools with the following command:
```bash
$ sudo dnf module install container-tools:ol8
```
On Oracle Linux 9, if Podman is not available, install Podman and related tools with the following command:
```bash
$ sudo dnf install container-tools
```
Since the setup uses “docker” CLI commands, on Oracle Linux 8/9 install the podman-docker package (if not already available), which effectively aliases the `docker` command to `podman`:
```bash
$ sudo dnf install podman-docker
```
Configure Podman rootless:
To use Podman with your User ID (rootless environment), Podman requires the user running it to have a range of UIDs listed in the files `/etc/subuid` and `/etc/subgid`. Rather than updating the files directly, the `usermod` program can be used to assign UIDs and GIDs to a user with the following commands:
```bash
$ sudo /sbin/usermod --add-subuids 100000-165535 --add-subgids 100000-165535 <REPLACE_USER_ID>
$ podman system migrate
```
Note: The above `podman system migrate` command needs to be executed with your User ID, not root.

Verify the user-id addition:
```bash
$ cat /etc/subuid
$ cat /etc/subgid
```
Expected similar output:
```
opc:100000:65536
<user-id>:100000:65536
```
1.3 Install and configure Kubernetes
Add the external Kubernetes repository:
```bash
$ cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
```
Set SELinux in permissive mode (effectively disabling it):
```bash
$ export PATH=/sbin:$PATH
$ setenforce 0
$ sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
```
Export the proxies and install `kubeadm`, `kubelet`, and `kubectl`:
```bash
### Get the nslookup IP address of the master node to use with apiserver-advertise-address when setting up the Kubernetes master,
### as the host may have a different internal IP (hostname -i) than nslookup $HOSTNAME
$ ip_addr=`nslookup $(hostname -f) | grep -m2 Address | tail -n1 | awk -F: '{print $2}' | tr -d " "`
$ echo $ip_addr

### Set the proxies
$ export NO_PROXY=localhost,127.0.0.0/8,ADD-YOUR-INTERNAL-NO-PROXY-LIST,/var/run/docker.sock,$ip_addr
$ export no_proxy=localhost,127.0.0.0/8,ADD-YOUR-INTERNAL-NO-PROXY-LIST,/var/run/docker.sock,$ip_addr
$ export http_proxy=http://REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT
$ export https_proxy=http://REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT
$ export HTTPS_PROXY=http://REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT
$ export HTTP_PROXY=http://REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT

### Install Kubernetes 1.26.2-0
$ VERSION=1.26.2-0
$ yum install -y kubelet-$VERSION kubeadm-$VERSION kubectl-$VERSION --disableexcludes=kubernetes

### Enable the kubelet service so that it auto-restarts on reboot
$ systemctl enable --now kubelet
```
Ensure `net.bridge.bridge-nf-call-iptables` is set to 1 in your `sysctl` config to avoid traffic routing issues:
```bash
$ cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sysctl --system
```
Disable swap check:
```bash
$ sed -i 's/KUBELET_EXTRA_ARGS=/KUBELET_EXTRA_ARGS="--fail-swap-on=false"/' /etc/sysconfig/kubelet
$ cat /etc/sysconfig/kubelet

### Reload and restart kubelet
$ systemctl daemon-reload
$ systemctl restart kubelet
```
Pull the images using crio:
```bash
$ kubeadm config images pull --cri-socket unix:///var/run/crio/crio.sock
```
1.4 Set up Helm
Install Helm v3.10.x
Download Helm from https://github.com/helm/helm/releases. For example, to download Helm v3.10.3:
```bash
$ wget https://get.helm.sh/helm-v3.10.3-linux-amd64.tar.gz
```
Unpack the `tar.gz`:
```bash
$ tar -zxvf helm-v3.10.3-linux-amd64.tar.gz
```
Find the Helm binary in the unpacked directory, and move it to its desired destination:
```bash
$ mv linux-amd64/helm /usr/bin/helm
```
Run `helm version` to verify the installation:
```bash
$ helm version
version.BuildInfo{Version:"v3.10.3", GitCommit:"835b7334cfe2e5e27870ab3ed4135f136eecc704", GitTreeState:"clean", GoVersion:"go1.18.9"}
```
2. Set up a single instance Kubernetes cluster
Notes:
- These steps must be run with the `root` user, unless specified otherwise!
- If you choose to use a different CIDR block (that is, other than `10.244.0.0/16` for `--pod-network-cidr=` in the `kubeadm init` command), then also update `NO_PROXY` and `no_proxy` with the appropriate value, and make sure to update `kube-flannel.yaml` with the new value before deploying.
- Replace the following with appropriate values:
  - `ADD-YOUR-INTERNAL-NO-PROXY-LIST`
  - `REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT`
2.1 Set up the master node
Create a shell script that sets up the necessary environment variables. You can append this to the user’s `.bashrc` so that it will run at login. You must also configure your proxy settings here if you are behind an HTTP proxy:
```bash
## grab my IP address to pass into kubeadm init, and to add to no_proxy vars
ip_addr=`nslookup $(hostname -f) | grep -m2 Address | tail -n1 | awk -F: '{print $2}' | tr -d " "`
export pod_network_cidr="10.244.0.0/16"
export service_cidr="10.96.0.0/12"
export PATH=$PATH:/sbin:/usr/sbin

### Set the proxies
export NO_PROXY=localhost,127.0.0.0/8,ADD-YOUR-INTERNAL-NO-PROXY-LIST,/var/run/docker.sock,$ip_addr,$pod_network_cidr,$service_cidr
export no_proxy=localhost,127.0.0.0/8,ADD-YOUR-INTERNAL-NO-PROXY-LIST,/var/run/docker.sock,$ip_addr,$pod_network_cidr,$service_cidr
export http_proxy=http://REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT
export https_proxy=http://REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT
export HTTPS_PROXY=http://REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT
export HTTP_PROXY=http://REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT
```
Source the script to set up your environment variables:
```bash
$ . ~/.bashrc
```
To implement command completion, add the following to the script:
```bash
$ [ -f /usr/share/bash-completion/bash_completion ] && . /usr/share/bash-completion/bash_completion
$ source <(kubectl completion bash)
```
Run `kubeadm init` to create the master node:
```bash
$ kubeadm init \
    --pod-network-cidr=$pod_network_cidr \
    --apiserver-advertise-address=$ip_addr \
    --ignore-preflight-errors=Swap > /tmp/kubeadm-init.out 2>&1
```
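Because the output is redirected, check `/tmp/kubeadm-init.out` to confirm that the control plane came up; a successful run ends with a message similar to:

```bash
$ tail /tmp/kubeadm-init.out
...
Your Kubernetes control-plane has initialized successfully!
...
```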
Log in to the terminal with `YOUR_USERID:YOUR_GROUP`. Then set up the `~/.bashrc` similar to steps 1 to 3 with `YOUR_USERID:YOUR_GROUP`.

Note that from now on we will be using `YOUR_USERID:YOUR_GROUP` to execute any `kubectl` commands, not `root`.

Set up `YOUR_USERID:YOUR_GROUP` to access the Kubernetes cluster:
```bash
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
Verify that `YOUR_USERID:YOUR_GROUP` is set up to access the Kubernetes cluster using the `kubectl` command:
```bash
$ kubectl get nodes
```
Note: At this step, the node is not in ready state as we have not yet installed the pod network add-on. After the next step, the node will show status as Ready.
Install a pod network add-on (`flannel`) so that your pods can communicate with each other.

Note: If you are using a different CIDR block than `10.244.0.0/16`, then download and update `kube-flannel.yml` with the correct CIDR address before deploying into the cluster:
```bash
$ wget https://github.com/flannel-io/flannel/releases/download/v0.25.1/kube-flannel.yml
### Update the CIDR address if you are using a CIDR block other than the default 10.244.0.0/16
$ kubectl apply -f kube-flannel.yml
```
Verify that the master node is in Ready status:
```bash
$ kubectl get nodes
```
For example:
```
NAME           STATUS   ROLES    AGE     VERSION
mymasternode   Ready    master   8m26s   v1.27.2
```
or:
```bash
$ kubectl get pods -n kube-system
```
For example:
```
NAME                                       READY   STATUS    RESTARTS   AGE
pod/coredns-86c58d9df4-58p9f               1/1     Running   0          3m59s
pod/coredns-86c58d9df4-mzrr5               1/1     Running   0          3m59s
pod/etcd-mymasternode                      1/1     Running   0          3m4s
pod/kube-apiserver-node                    1/1     Running   0          3m21s
pod/kube-controller-manager-mymasternode   1/1     Running   0          3m25s
pod/kube-flannel-ds-amd64-6npx4            1/1     Running   0          49s
pod/kube-proxy-4vsgm                       1/1     Running   0          3m59s
pod/kube-scheduler-mymasternode            1/1     Running   0          2m58s
```
To schedule pods on the master node, `taint` the node:
```bash
$ kubectl taint nodes --all node-role.kubernetes.io/master-
```
Congratulations! Your Kubernetes cluster environment is ready to deploy your Oracle WebCenter Content domain.
For additional references on Kubernetes cluster setup, check the documentation to set up a Kubernetes cluster.
3. Get scripts and images
3.1 Set up the code repository to deploy Oracle WebCenter Content domains
Follow these steps to set up the source code repository required to deploy Oracle WebCenter Content domains.
3.2 Get dependent images and add them to your local registry
Follow these steps to pull dependent Docker images required to deploy Oracle WebCenter Content domains.
3.3 Get Oracle WebCenter Content Docker image and add it to your local registry
Follow these steps to obtain the Oracle WebCenter Content image.
4. Install WebLogic Kubernetes Operator
4.1 Prepare for the WebLogic Kubernetes Operator
Create a namespace `opns` for the WebLogic Kubernetes Operator:
```bash
$ kubectl create namespace opns
```
Create a service account `op-sa` for the WebLogic Kubernetes Operator in the operator’s namespace:
```bash
$ kubectl create serviceaccount -n opns op-sa
```
4.2 Install the WebLogic Kubernetes Operator
Use Helm to install and start WebLogic Kubernetes Operator from the directory you just cloned:
```bash
$ cd ${WORKDIR}
$ helm install weblogic-kubernetes-operator charts/weblogic-operator \
  --namespace opns \
  --set image=oracle/weblogic-kubernetes-operator:4.2.9 \
  --set serviceAccount=op-sa \
  --set "domainNamespaces={}" \
  --wait
```
4.3 Verify the WebLogic Kubernetes Operator
Verify that the WebLogic Kubernetes Operator’s pod is running by listing the pods in the operator’s namespace. You should see one for the WebLogic Kubernetes Operator:
```bash
$ kubectl get pods -n opns
```
Verify that the WebLogic Kubernetes Operator is up and running by viewing the operator pod’s logs:
```bash
$ kubectl logs -n opns -c weblogic-operator deployments/weblogic-operator
```
The WebLogic Kubernetes Operator v4.2.9 has been installed. Continue with the load balancer and Oracle WebCenter Content domain setup.
5. Install the Traefik (ingress-based) load balancer
WebLogic Kubernetes Operator supports these load balancers: Traefik, NGINX and Apache. Samples are provided in the documentation.
This Quick Start demonstrates how to install the Traefik ingress controller to provide load balancing for an Oracle WebCenter Content domain.
Create a namespace for Traefik:
```bash
$ kubectl create namespace traefik
```
Set up Helm for third-party services:
```bash
$ helm repo add traefik https://containous.github.io/traefik-helm-chart
```
Install the Traefik operator in the `traefik` namespace with the provided sample values:
```bash
$ cd ${WORKDIR}
$ helm install traefik traefik/traefik \
  --namespace traefik \
  --values charts/traefik/values.yaml \
  --set "kubernetes.namespaces={traefik}" \
  --set "service.type=NodePort" \
  --wait
```
6. Create and configure an Oracle WebCenter Content domain
6.1 Prepare for an Oracle WebCenter Content domain
Create a namespace that can host the Oracle WebCenter Content domain:
```bash
$ kubectl create namespace wccns
```
Use Helm to configure the WebLogic Kubernetes Operator to manage Oracle WebCenter Content domains in this namespace:
```bash
$ cd ${WORKDIR}
$ helm upgrade weblogic-kubernetes-operator charts/weblogic-operator \
  --reuse-values \
  --namespace opns \
  --set "domainNamespaces={wccns}" \
  --wait
```
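You can confirm that the operator picked up the `wccns` namespace by inspecting the release values; a quick check:

```bash
# Show the values currently applied to the operator release
$ helm get values weblogic-kubernetes-operator -n opns
```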
Create Kubernetes secrets.
Create a Kubernetes secret for the domain in the same Kubernetes namespace as the domain. In this example, the username is `weblogic`, the password is `welcome1`, and the namespace is `wccns`:
```bash
$ cd ${WORKDIR}/create-weblogic-domain-credentials
$ ./create-weblogic-credentials.sh \
  -u weblogic \
  -p welcome1 \
  -n wccns \
  -d wccinfra \
  -s wccinfra-domain-credentials
```
Create a Kubernetes secret for the RCU in the same Kubernetes namespace as the domain:
- Schema user : WCC1
- Schema password : Oradoc_db1
- DB sys user password : Oradoc_db1
- Domain name : wccinfra
- Domain Namespace : wccns
- Secret name : wccinfra-rcu-credentials
```bash
$ cd ${WORKDIR}/create-rcu-credentials
$ ./create-rcu-credentials.sh \
  -u WCC1 \
  -p Oradoc_db1 \
  -a sys \
  -q Oradoc_db1 \
  -d wccinfra \
  -n wccns \
  -s wccinfra-rcu-credentials
```
Create the Kubernetes persistent volume (PV) and persistent volume claim (PVC).
- Create the Oracle WebCenter Content domain home directory. Determine if a user already exists on your host system with `uid:gid` of `1000:0`:
  ```bash
  $ sudo getent passwd 1000
  ```
  If this command returns a username (the first field), you can skip the following `useradd` command. If not, create the oracle user with `useradd`:
  ```bash
  $ sudo useradd -u 1000 -g 0 oracle
  ```
  Create the directory that will be used for the Oracle WebCenter Content domain home:
  ```bash
  $ sudo mkdir /scratch/k8s_dir
  $ sudo chown -R 1000:0 /scratch/k8s_dir
  ```
- Update `create-pv-pvc-inputs.yaml` with the following values:
  - baseName: domain
  - domainUID: wccinfra
  - namespace: wccns
  - weblogicDomainStoragePath: /scratch/k8s_dir

  Review, and update if any other changes are required.
  ```bash
  $ cd ${WORKDIR}/create-weblogic-domain-pv-pvc
  $ vim create-pv-pvc-inputs.yaml
  ```
- Run the `create-pv-pvc.sh` script to create the PV and PVC configuration files:
  ```bash
  $ ./create-pv-pvc.sh -i create-pv-pvc-inputs.yaml -o output
  ```
- Create the PV and PVC using the configuration files created in the previous step:
  ```bash
  $ kubectl create -f output/pv-pvcs/wccinfra-domain-pv.yaml
  $ kubectl create -f output/pv-pvcs/wccinfra-domain-pvc.yaml
  ```
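  Before proceeding, you can verify that the PVC is bound to the PV (names assume the inputs above):
  ```bash
  # The PVC should report STATUS "Bound" against the wccinfra-domain-pv volume
  $ kubectl get pv wccinfra-domain-pv
  $ kubectl get pvc wccinfra-domain-pvc -n wccns
  ```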
Configure the database and create schemas for the Oracle WebCenter Content domain.
Follow the configure-database-access step and the run-RCU step to set up the database connection and configure the product schemas required to deploy the Oracle WebCenter Content domain.
Now the environment is ready to start the Oracle WebCenter Content domain creation.
6.2 Create an Oracle WebCenter Content domain
The sample scripts for Oracle WebCenter Content domain deployment are available at `${WORKDIR}/create-wcc-domain/domain-home-on-pv`. You must edit `create-domain-inputs.yaml` (or a copy of it) to provide the details for your domain.

Run the `create-domain.sh` script to create a domain:
```bash
$ cd ${WORKDIR}/create-wcc-domain/domain-home-on-pv/
$ ./create-domain.sh -i create-domain-inputs.yaml -o output
```
Create a Kubernetes domain object:
Once `create-domain.sh` completes successfully, it generates `output/weblogic-domains/wccinfra/domain.yaml`, which you can use to create the Kubernetes domain resource that starts the domain and servers:
```bash
$ cd ${WORKDIR}/create-wcc-domain/domain-home-on-pv
$ kubectl create -f output/weblogic-domains/wccinfra/domain.yaml
```
Verify that the Kubernetes domain object named `wccinfra` is created:
```bash
$ kubectl get domain -n wccns
NAME       AGE
wccinfra   3m18s
```
Once you create the domain, the introspector pod is created. It inspects the domain home and then starts the `wccinfra-adminserver` pod. Once the `wccinfra-adminserver` pod starts successfully, the Managed Server pods are started in parallel. Watch the `wccns` namespace for the status of the domain creation:
```bash
$ kubectl get pods -n wccns
```
Verify that the Oracle WebCenter Content domain server pods and services are created and in Ready state:
```bash
$ kubectl get all -n wccns
```
6.3 Configure Traefik to access Oracle WebCenter Content domain services
Configure Traefik to manage ingresses created in the Oracle WebCenter Content domain namespace (`wccns`):
```bash
$ helm upgrade traefik traefik/traefik \
  --reuse-values \
  --namespace traefik \
  --set "kubernetes.namespaces={traefik,wccns}" \
  --wait
```
Create an ingress for the domain in the domain namespace by using the sample Helm chart:
```bash
$ cd ${WORKDIR}
$ helm install wcc-traefik-ingress charts/ingress-per-domain \
  --namespace wccns \
  --values charts/ingress-per-domain/values.yaml \
  --set "traefik.hostname=$(hostname -f)" \
  --set tls=NONSSL
```
Verify the created ingress per domain details:
```bash
$ kubectl describe ingress wccinfra-traefik -n wccns
```
6.4 Verify that you can access the Oracle WebCenter Content domain URL
Get the `LOADBALANCER_HOSTNAME` for your environment:
```bash
$ export LOADBALANCER_HOSTNAME=$(hostname -f)
```
The following URLs are available for the Oracle WebCenter Content domain:

Credentials: username: `weblogic`, password: `welcome1`

```
http://${LOADBALANCER_HOSTNAME}:30305/em
http://${LOADBALANCER_HOSTNAME}:30305/cs
http://${LOADBALANCER_HOSTNAME}:30305/ibr
http://${LOADBALANCER_HOSTNAME}:30305/imaging
http://${LOADBALANCER_HOSTNAME}:30305/dc-console
http://${LOADBALANCER_HOSTNAME}:30305/wcc
```
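To verify connectivity without a browser, you can probe one of the endpoints; a minimal sketch (expect an HTTP 200 or a redirect status):

```bash
$ curl -s -o /dev/null -w "%{http_code}\n" http://${LOADBALANCER_HOSTNAME}:30305/cs/
```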
Deploying and Managing Oracle WebCenter Content on Kubernetes
G18346-01
Last updated: December 2024
Copyright © 2024, Oracle and/or its affiliates.
Primary Author: Oracle Corporation