Oracle WebCenter Content on Kubernetes

The WebLogic Kubernetes Operator supports deployment of Oracle WebCenter Content. Follow the instructions in this document to set up Oracle WebCenter Content domains on Kubernetes.

In this release, Oracle WebCenter Content domains are supported using the “domain on a persistent volume” model only, where the domain home is located in a persistent volume (PV).

The operator provides several key features to assist you with deploying and managing Oracle WebCenter Content domains in a Kubernetes environment.

Current production release

The currently supported production release of the WebLogic Kubernetes Operator for Oracle WebCenter Content domain deployment is 4.2.9.

Recent changes and known issues

See the Release Notes for recent changes and known issues for Oracle WebCenter Content domains deployment on Kubernetes.

Limitations in WebCenter Content Domain

See here for limitations in this release.

About this documentation

This documentation includes sections targeted to different audiences. To help you find what you are looking for more easily, consult the table of contents.

Additional reading

Oracle WebCenter Content domains deployment on Kubernetes leverages the Oracle WebLogic Server Kubernetes operator framework.

Release Notes

Review the latest changes for Oracle WebCenter Content on Kubernetes.

Recent changes

Date            Version      Change
December 2024   14.1.2.0.0   GitHub release version 24.4.3. First release of Oracle WebCenter Content 14.1.2.0.0 on Kubernetes.

Known issues

Issue                                  Description
Publishing via LoadBalancer endpoint   Currently, publishing is supported only via NodePort, as described in the section For Publishing Setting in WebCenter Content.

Install Guide

Install the WebLogic Kubernetes Operator and prepare and deploy Oracle WebCenter Content domains.

Requirements and Limitations

Understand the system requirements and limitations for deploying and running Oracle WebCenter Content domains with the WebLogic Kubernetes Operator, including the WebCenter Content domain cluster sizing recommendations.

Contents

Introduction

This document describes the special considerations for deploying and running a WebCenter Content domain with the WebLogic Kubernetes Operator. Other than those considerations listed here, WebCenter Content domains work in the same way as Fusion Middleware Infrastructure domains and WebLogic Server domains.

In this release, WebCenter Content domains are supported using the domain on a persistent volume model only, where the domain home is located in a persistent volume (PV).

System Requirements

NOTE: Add your host IP address (obtained by running hostname -i), as well as the IP addresses returned by nslookup, to the no_proxy and NO_PROXY environment variable lists.

Limitations

Compared to running a WebLogic Server domain in Kubernetes using the WebLogic Kubernetes Operator, the following limitations currently exist for Oracle WebCenter Content domains:

For up-to-date information about the features of WebLogic Server that are supported in Kubernetes environments, see My Oracle Support Doc ID 2349228.1.

WebCenter Content Cluster Sizing Recommendations

WebCenter Content   Normal Usage                            Moderate Usage                          High Usage
Admin Server        CPU(s): 1, Memory: 4GB                  CPU(s): 1, Memory: 4GB                  CPU(s): 1, Memory: 4GB
Managed Server      Servers: 2, CPU(s): 2, Memory: 16GB     Servers: 2, CPU(s): 4, Memory: 16GB     Servers: 3, CPU(s): 6, Memory: 16-32GB
PV Storage          Minimum 250GB                           Minimum 250GB                           Minimum 500GB
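As an illustration only, the Moderate Usage recommendation for Managed Servers could be expressed through the server pod resource parameters supported by the create-domain-inputs.yaml file (described later under Configuration parameters). The values below are assumptions, not prescriptions:

# Illustrative resource settings in create-domain-inputs.yaml (Moderate Usage)
serverPodCpuRequest: "2"
serverPodCpuLimit: "4"
serverPodMemoryRequest: "16Gi"
serverPodMemoryLimit: "16Gi"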

Prepare your environment

To prepare your Oracle WebCenter Content in Kubernetes environment, complete the following steps:

  1. Set up your Kubernetes cluster

  2. Install Helm

  3. Pull dependent images

  4. Set up the code repository to deploy Oracle WebCenter Content domain

  5. Obtain the Oracle WebCenter Content Docker image

  6. Install the WebLogic Kubernetes Operator

  7. Prepare the environment for Oracle WebCenter Content domain

    1. Create a namespace for the Oracle WebCenter Content domain

    2. Create persistent storage for the Oracle WebCenter Content domain

    3. Create a Kubernetes secret with domain credentials

    4. Create a Kubernetes secret with the RCU credentials

    5. Configure access to your database

    6. Run the Repository Creation Utility to set up your database schemas

  8. Create Oracle WebCenter Content domain

Set up your Kubernetes cluster

If you need help setting up a Kubernetes environment, check the documentation.

Install Helm

The WebLogic Kubernetes Operator uses Helm to create and deploy the necessary resources and run the operator in a Kubernetes cluster. For Helm installation and usage information, see here.
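You can verify the Helm installation before proceeding:

$ helm version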

Pull dependent images

Obtain the dependent images and add them to your local registry. Dependent images include the WebLogic Kubernetes Operator and Traefik.

  1. Pull the following Docker images and re-tag them as shown:

To pull an image from the Oracle Container Registry, in a web browser, navigate to https://container-registry.oracle.com and log in using the Oracle Single Sign-On authentication service. If you do not already have SSO credentials, at the top of the page, click the Sign In link to create them.

Use the web interface to accept the Oracle Standard Terms and Restrictions for the Oracle software images that you intend to deploy. Your acceptance of these terms is stored in a database that links the software images to your Oracle Single Sign-On login credentials.

Then, pull these docker images and re-tag them:

docker login container-registry.oracle.com (enter your Oracle email ID and password)
This step is required once on every node to get access to the Oracle Container Registry.

WebLogic Kubernetes Operator image:

$ docker pull container-registry.oracle.com/middleware/weblogic-kubernetes-operator:4.2.9
$ docker tag container-registry.oracle.com/middleware/weblogic-kubernetes-operator:4.2.9 oracle/weblogic-kubernetes-operator:4.2.9

Pull Traefik Image

$ docker pull traefik:2.6.0

Set up the code repository to deploy Oracle WebCenter Content domain

Oracle WebCenter Content domain deployment on Kubernetes leverages the WebLogic Kubernetes Operator infrastructure. To deploy an Oracle WebCenter Content domain, you must set up the deployment scripts.

  1. Create a working directory to set up the source code:

    $ mkdir $HOME/wcc_4.2.9
    $ cd $HOME/wcc_4.2.9
  2. Download the WebLogic Kubernetes Operator source code and Oracle WebCenter Content Suite Kubernetes deployment scripts from the WebCenter Content repository. Required artifacts are available at OracleWebCenterContent/kubernetes.

    $ git clone https://github.com/oracle/fmw-kubernetes.git
    $ export WORKDIR=$HOME/wcc_4.2.9/fmw-kubernetes/OracleWebCenterContent/kubernetes

Obtain the Oracle WebCenter Content Docker image

Obtain the Oracle WebCenter Content image using one of the following options:

  1. Get the Oracle WebCenter Content image from the Oracle Container Registry (OCR)

  2. Build an Oracle WebCenter Content container image

1. Get Oracle WebCenter Content image from the Oracle Container Registry (OCR):

For first time users, to pull an image from the Oracle Container Registry, navigate to https://container-registry.oracle.com and log in using the Oracle Single Sign-On (SSO) authentication service. If you do not already have SSO credentials, you can create an Oracle Account using: https://profile.oracle.com/myprofile/account/create-account.jspx.

Use the web interface to accept the Oracle Standard Terms and Restrictions for the Oracle software images that you intend to deploy. Your acceptance of these terms is stored in a database that links the software images to your Oracle Single Sign-On login credentials.

To obtain the image, log in to the Oracle Container Registry:

$ docker login container-registry.oracle.com

Find and then pull the prebuilt Oracle WebCenter Content Suite image:

$ docker pull container-registry.oracle.com/middleware/webcenter-content_cpu:14.1.2.0.0-<TAG>
2. Build an Oracle WebCenter Content container image:

Alternatively, if you want to build an Oracle WebCenter Content container image with any additional bundle patch or interim patches, use the WebLogic Image Tool and follow these steps to create the image.

Note: The default Oracle WebCenter Content image name used for Oracle WebCenter Content domain deployment is oracle/wccontent:14.1.2.0.0. The image created must be tagged as oracle/wccontent:14.1.2.0.0 using the docker tag command. If you want to use a different name for the image, make sure to update the new image tag name in the create-domain-inputs.yaml file and also in other instances where the oracle/wccontent:14.1.2.0.0 image name is used.
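For example, after pulling the image from OCR, it can be re-tagged with the docker tag command (keep whatever <TAG> you pulled):

$ docker tag container-registry.oracle.com/middleware/webcenter-content_cpu:14.1.2.0.0-<TAG> oracle/wccontent:14.1.2.0.0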

Install the WebLogic Kubernetes Operator

The WebLogic Kubernetes Operator supports the deployment of Oracle WebCenter Content domains in the Kubernetes environment. Follow the steps in this document to install the WebLogic Kubernetes Operator.

Note: Optionally, you can execute these steps to send the contents of the operator's logs to Elasticsearch.

In the following example commands to install the WebLogic Kubernetes Operator, opns is the namespace and op-sa is the service account created for WebLogic Kubernetes Operator:

Creating namespace and service account for WebLogic Kubernetes Operator

$ kubectl create namespace opns
$ kubectl create serviceaccount -n opns  op-sa  

Install WebLogic Kubernetes Operator

$ cd ${WORKDIR}

$ helm install weblogic-kubernetes-operator charts/weblogic-operator --namespace opns  --set image=oracle/weblogic-kubernetes-operator:4.2.9 --set serviceAccount=op-sa --set "domainNamespaces={}" --set "javaLoggingLevel=FINE" --wait  
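To verify the installation, check the Helm release and the operator pod; the exact pod name will differ in your environment:

$ helm list -n opns
$ kubectl get pods -n opns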

Prepare the environment for Oracle WebCenter Content domain

Create a namespace for the Oracle WebCenter Content domain

Create a Kubernetes namespace (for example, wccns) for the domain unless you intend to use the default namespace. Use the new namespace in the remaining steps in this section. For details, see Prepare to run a domain.

 $ kubectl create namespace wccns
 
 $ cd ${WORKDIR}
 $ helm upgrade --reuse-values --namespace opns --set "domainNamespaces={wccns}" --wait weblogic-kubernetes-operator charts/weblogic-operator
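To confirm that the operator now manages the wccns namespace, you can inspect the release values (assuming the release name weblogic-kubernetes-operator used above):

$ helm get values weblogic-kubernetes-operator -n opns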

Create persistent storage for the Oracle WebCenter Content domain

In the Kubernetes namespace you created, create the PV and PVC for the domain by running the create-pv-pvc.sh script. Follow the instructions for using the script to create a dedicated PV and PVC for the Oracle WebCenter Content domain.
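As a sketch of a typical invocation, assuming the sample's create-weblogic-domain-pv-pvc directory and its default input file name:

$ cd ${WORKDIR}/create-weblogic-domain-pv-pvc
$ ./create-pv-pvc.sh -i create-pv-pvc-inputs.yaml -o <path to output-directory>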

Create a Kubernetes secret with domain credentials

Create a Kubernetes secret containing the username and password of the administrative account, in the same Kubernetes namespace as the domain:

  $ cd ${WORKDIR}/create-weblogic-domain-credentials
  
  $ ./create-weblogic-credentials.sh -u weblogic -p welcome1 -n wccns -d wccinfra -s wccinfra-domain-credentials

For more details, see this document.

You can check the secret with the kubectl get secret command.

For example:

$ kubectl get secret wccinfra-domain-credentials -o yaml -n wccns
apiVersion: v1
data:
  password: d2VsY29tZTE=
  username: d2VibG9naWM=
kind: Secret
metadata:
  creationTimestamp: "2020-09-16T08:22:50Z"
  labels:
    weblogic.domainName: wccinfra
    weblogic.domainUID: wccinfra
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:password: {}
        f:username: {}
      f:metadata:
        f:labels:
          .: {}
          f:weblogic.domainName: {}
          f:weblogic.domainUID: {}
      f:type: {}
    manager: kubectl
    operation: Update
    time: "2020-09-16T08:22:50Z"
  name: wccinfra-domain-credentials
  namespace: wccns
  resourceVersion: "3277100"
  selfLink: /api/v1/namespaces/wccns/secrets/wccinfra-domain-credentials
  uid: 35a8313f-1ec2-44b0-a2bf-fee381eed57f
type: Opaque
Create a Kubernetes secret with the RCU credentials

You also need to create a Kubernetes secret containing the credentials for the database schemas. When you create your domain, it will obtain the RCU credentials from this secret.

Use the provided sample script to create the secret:

$ cd ${WORKDIR}/create-rcu-credentials

$ ./create-rcu-credentials.sh -u weblogic -p welcome1 -a sys -q welcome1 -d wccinfra -n wccns -s wccinfra-rcu-credentials 

The parameter values are:

-u username for schema owner (regular user), required.
-p password for schema owner (regular user), required.
-a username for SYSDBA user, required.
-q password for SYSDBA user, required.
-d domainUID. Example: wccinfra
-n namespace. Example: wccns
-s secretName. Example: wccinfra-rcu-credentials

You can confirm the secret was created as expected with the kubectl get secret command.

For example:

$ kubectl get secret wccinfra-rcu-credentials -o yaml -n wccns
  apiVersion: v1
data:
  password: d2VsY29tZTE=
  sys_password: d2VsY29tZTE=
  sys_username: c3lz
  username: d2VibG9naWM=
kind: Secret
metadata:
  creationTimestamp: "2020-09-16T08:23:04Z"
  labels:
    weblogic.domainName: wccinfra
    weblogic.domainUID: wccinfra
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:password: {}
        f:sys_password: {}
        f:sys_username: {}
        f:username: {}
      f:metadata:
        f:labels:
          .: {}
          f:weblogic.domainName: {}
          f:weblogic.domainUID: {}
      f:type: {}
    manager: kubectl
    operation: Update
    time: "2020-09-16T08:23:04Z"
  name: wccinfra-rcu-credentials
  namespace: wccns
  resourceVersion: "3277132"
  selfLink: /api/v1/namespaces/wccns/secrets/wccinfra-rcu-credentials
  uid: b75f4e13-84e6-40f5-84ba-0213d85bdf30
type: Opaque
Configure access to your database

Run a container to create the RCU pod

$ kubectl run rcu --image oracle/wccontent:14.1.2.0.0 -n wccns -- sleep infinity
   
# Check the status of the rcu pod
$ kubectl get pods -n wccns

Run the Repository Creation Utility to set up your database schemas

Create or drop schemas

To create the database schemas for Oracle WebCenter Content, run the create-rcu-schema.sh script.

For example:

# Make sure the rcu pod status is Running before executing this
kubectl exec -n wccns -ti rcu -- /bin/bash

# DB details 
export CONNECTION_STRING=your_db_host:1521/your_db_service
export RCUPREFIX=your_schema_prefix
echo -e welcome1"\n"welcome1> /tmp/pwd.txt
   
# Create schemas
/u01/oracle/oracle_common/bin/rcu -silent -createRepository -databaseType ORACLE -connectString $CONNECTION_STRING -dbUser sys -dbRole sysdba -useSamePasswordForAllSchemaUsers true -selectDependentsForComponents true -schemaPrefix $RCUPREFIX -component CONTENT -component MDS   -component STB -component OPSS  -component IAU -component IAU_APPEND -component IAU_VIEWER -component WLS  -tablespace USERS -tempTablespace TEMP -f < /tmp/pwd.txt
   
# Drop schemas
/u01/oracle/oracle_common/bin/rcu -silent -dropRepository -databaseType ORACLE -connectString $CONNECTION_STRING -dbUser sys -dbRole sysdba -selectDependentsForComponents true -schemaPrefix $RCUPREFIX -component CONTENT -component MDS  -component STB -component OPSS  -component IAU -component IAU_APPEND -component IAU_VIEWER -component WLS -f < /tmp/pwd.txt 

#exit from the container
exit
   

Note: In the create and drop schema commands above, pass the additional components (-component IPM -component CAPTURE) if the IPM and Capture applications are enabled, respectively, as shown in the example below.
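For example, the create command with both applications enabled appends the two extra component flags to the command shown earlier:

# Create schemas with the IPM and CAPTURE applications enabled
/u01/oracle/oracle_common/bin/rcu -silent -createRepository -databaseType ORACLE -connectString $CONNECTION_STRING -dbUser sys -dbRole sysdba -useSamePasswordForAllSchemaUsers true -selectDependentsForComponents true -schemaPrefix $RCUPREFIX -component CONTENT -component MDS -component STB -component OPSS -component IAU -component IAU_APPEND -component IAU_VIEWER -component WLS -component IPM -component CAPTURE -tablespace USERS -tempTablespace TEMP -f < /tmp/pwd.txt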

Now that you have the required Docker images and have created your RCU schemas, you are ready to create your domain. To continue, follow the instructions in Create Oracle WebCenter Content domain.

Create Oracle WebCenter Content domain

This section describes the creation of an Oracle WebCenter Content domain home on an existing Kubernetes persistent volume (PV) and persistent volume claim (PVC), using the WebCenter Content deployment scripts. The scripts also generate the domain YAML file, which can then be used to start the Kubernetes artifacts of the corresponding domain.

Contents

Prerequisites

Before you begin, complete the following steps:

  1. Review the Domain resource documentation.
  2. Review the requirements and limitations.
  3. Ensure that you have executed all the preliminary steps in Prepare your environment.
  4. Ensure that the database schemas were created and that the WebLogic Kubernetes Operator is running.

Prepare to use the create domain script

The sample scripts for Oracle WebCenter Content domain deployment are available at ${WORKDIR}/create-wcc-domain.

You must edit create-domain-inputs.yaml (or a copy of it) located under ${WORKDIR}/create-wcc-domain/domain-home-on-pv to provide the details for your domain. Refer to the configuration parameters below to understand the information that you must provide in this file.

Configuration parameters

The following parameters can be provided in the inputs file.

Parameter Definition Default
sslEnabled Boolean indicating whether to enable SSL for each WebLogic Server instance. false
adminPort Port number for the Administration Server inside the Kubernetes cluster. 7001
adminServerSSLPort SSL port number of the Administration Server inside the Kubernetes cluster. 7002
adminNodePort Port number of the Administration Server outside the Kubernetes cluster. 30701
adminServerName Name of the Administration Server. AdminServer
clusterName Name of the WebLogic cluster instance to generate for the domain. By default, the cluster names are ucm_cluster and ibr_cluster for the WebCenter Content domain. ucm_cluster
configuredManagedServerCount Number of Managed Server instances to generate for the domain. 5
createDomainFilesDir Directory on the host machine to locate all the files to create a WebLogic domain, including the script that is specified in the createDomainScriptName property. By default, this directory is set to the relative path wlst, and the create script will use the built-in WLST offline scripts in the wlst directory to create the WebLogic domain. An absolute path is also supported to point to an arbitrary directory in the file system. The built-in scripts can be replaced by the user-provided scripts as long as those files are in the specified directory. Files in this directory are put into a Kubernetes config map, which in turn is mounted to the createDomainScriptsMountPath, so that the Kubernetes pod can use the scripts and supporting files to create a domain home. wlst
createDomainScriptsMountPath Mount path where the create domain scripts are located inside a pod. The create-domain.sh script creates a Kubernetes job to run the script (specified in the createDomainScriptName property) in a Kubernetes pod to create a domain home. Files in the createDomainFilesDir directory are mounted to this location in the pod, so that the Kubernetes pod can use the scripts and supporting files to create a domain home. /u01/weblogic
createDomainScriptName Script that the create domain script uses to create a WebLogic domain. The create-domain.sh script creates a Kubernetes job to run this script to create a domain home. The script is located in the in-pod directory that is specified in the createDomainScriptsMountPath property. If you need to provide your own scripts to create the domain home, instead of using the built-in scripts, you must use this property to set the name of the script that you want the create domain job to run. create-domain-job.sh
domainHome Home directory of the WebCenter Content domain. If not specified, the value is derived from the domainUID as /shared/domains/<domainUID>. /u01/oracle/user_projects/domains/wccinfra
domainPVMountPath Mount path of the domain persistent volume. /u01/oracle/user_projects
domainUID Unique ID that will be used to identify this particular domain. Used as the name of the generated WebLogic domain as well as the name of the Kubernetes domain resource. This ID must be unique across all domains in a Kubernetes cluster. This ID cannot contain any character that is not valid in a Kubernetes service name. wccinfra
exposeAdminNodePort Boolean indicating if the Administration Server is exposed outside of the Kubernetes cluster. false
exposeAdminT3Channel Boolean indicating if the T3 administrative channel is exposed outside the Kubernetes cluster. false
image WebCenter Content Docker image. The WebLogic Kubernetes Operator requires Oracle WebCenter Content 14.1.2.0.0. Refer to Obtain the Oracle WebCenter Content Docker image for details on how to obtain or create the image. oracle/wccontent:14.1.2.0.0
imagePullPolicy WebLogic Docker image pull policy. Legal values are IfNotPresent, Always, or Never. IfNotPresent
imagePullSecretName Name of the Kubernetes secret to access the Docker Store to pull the WebLogic Server Docker image. The presence of the secret will be validated when this parameter is specified.
includeServerOutInPodLog Boolean indicating whether to include the server .out to the pod’s stdout. true
initialManagedServerReplicas Number of UCM Managed Servers to initially start for the domain. 3
javaOptions Java options for starting the Administration Server and Managed Servers. A Java option can have references to one or more of the following pre-defined variables to obtain WebLogic domain information: $(DOMAIN_NAME), $(DOMAIN_HOME), $(ADMIN_NAME), $(ADMIN_PORT), and $(SERVER_NAME). If sslEnabled is set to true and the WebLogic demo certificate is used, add -Dweblogic.security.SSL.ignoreHostnameVerification=true to allow the Managed Servers to connect to the Administration Server while booting up. The WebLogic generated demo certificate in this environment typically contains a host name that is different from the runtime container’s host name. -Dweblogic.StdoutDebugEnabled=false
logHome The in-pod location for the domain log, server logs, server out, and Node Manager log files. If not specified, the value is derived from the domainUID as /shared/logs/<domainUID>. /u01/oracle/user_projects/domains/logs/wccinfra
managedServerNameBase Base string used to generate Managed Server names. ucm_server
managedServerPort Port number for each Managed Server. By default, the port is 16200 for ucm_server and 16250 for ibr_server. 16200
managedServerSSLPort SSL port number for each Managed Server. By default, the SSL port is 16201 for ucm_server and 16251 for ibr_server. 16201
managedServerAdministrationPort Administration port number for each Managed Server. 9200
namespace Kubernetes namespace in which to create the domain. wccns
persistentVolumeClaimName Name of the persistent volume claim created to host the domain home. If not specified, the value is derived from the domainUID as <domainUID>-weblogic-sample-pvc. wccinfra-domain-pvc
productionModeEnabled Boolean indicating if production mode is enabled for the domain. true
serverStartPolicy Determines which WebLogic Server instances will be started. Legal values are NEVER, IF_NEEDED, ADMIN_ONLY. IF_NEEDED
t3ChannelPort Port for the t3 channel of the NetworkAccessPoint. 30012
t3PublicAddress Public address for the T3 channel. This should be set to the public address of the Kubernetes cluster. This would typically be a load balancer address. If not provided, the script will attempt to set it to the IP address of the Kubernetes cluster.
weblogicCredentialsSecretName Name of the Kubernetes secret for the Administration Server’s user name and password. If not specified, then the value is derived from the domainUID as <domainUID>-weblogic-credentials. wccinfra-domain-credentials
weblogicImagePullSecretName Name of the Kubernetes secret for the Docker Store, used to pull the WebLogic Server image.
serverPodCpuRequest, serverPodMemoryRequest, serverPodCpuLimit, serverPodMemoryLimit The maximum amount of compute resources allowed, and the minimum amount of compute resources required, for each server pod. Refer to the Kubernetes documentation on Managing Compute Resources for Containers for details. Resource requests and resource limits are not specified.
rcuSchemaPrefix The schema prefix to use in the database, for example WCC1. You may wish to make this the same as the domainUID in order to simplify matching domains to their RCU schemas. WCC1
rcuDatabaseURL The database URL. <YOUR DATABASE CONNECTION DETAILS>
rcuCredentialsSecret The Kubernetes secret containing the database credentials. wccinfra-rcu-credentials
ipmEnabled Boolean indicating whether to enable the WebCenter Imaging application. false
captureEnabled Boolean indicating whether to enable the WebCenter Capture application. false
adfuiEnabled Boolean indicating whether to enable the WebCenter ADF UI application. false
initialIpmServerReplicas Number of IPM Managed Servers to initially start for the domain. 0
initialCaptureServerReplicas Number of CAPTURE Managed Servers to initially start for the domain. 0
initialAdfuiServerReplicas Number of ADFUI Managed Servers to initially start for the domain. 0

Note that the names of the Kubernetes resources in the generated YAML files may be formed with the value of some of the properties specified in the create-inputs.yaml file. Those properties include the adminServerName, clusterName and managedServerNameBase. If those values contain any characters that are invalid in a Kubernetes service name, those characters are converted to valid values in the generated YAML files. For example, an uppercase letter is converted to a lowercase letter and an underscore ("_") is converted to a hyphen ("-").

Note: The properties ipmEnabled, captureEnabled, and adfuiEnabled are set to false by default and should be updated to true if you need to enable the respective applications. If any of these three applications (IPM, CAPTURE, or ADFUI) is enabled, the respective initial replica count must be a non-zero number.

The sample demonstrates how to create the Oracle WebCenter Content domain home and associated Kubernetes resources for that domain. In addition, the sample provides the capability for users to supply their own scripts to create the domain home for other use cases. The generated domain YAML file could also be modified to cover more use cases.

Run the create domain script

Run the create domain script, specifying your inputs file and an output directory to store the generated artifacts:

$ cd ${WORKDIR}/create-wcc-domain/domain-home-on-pv/

$ ./create-domain.sh \
  -i create-domain-inputs.yaml \
  -o <path to output-directory>

The script will perform the following steps:

Run the managed-server-wrapper script

Run the managed-server-wrapper script, which internally applies the domain YAML. This script also applies initial configurations for the Managed Server containers and readies the Managed Servers for future inter-container communications.

$ cd ${WORKDIR}/create-wcc-domain/domain-home-on-pv/

$ ./start-managed-servers-wrapper.sh -o <path_to_output_directory> -p <load_balancer_port> -n <ibr_node_port> -m <ucm_node_port> -s <ssl_termination>

Note: In the above command, the parameters -n and -m refer to the node ports used to expose the IBR intradoc port and the UCM intradoc port, respectively. Suggested values for both node ports are within the range 30000-32767. Keep in mind that the <ibr_node_port> value must always be specified, whereas the <ucm_node_port> value is required only when the IPM and ADFUI Managed Servers are enabled.

A value for the -s parameter needs to be provided only if SSL termination at the load balancer is being used; the acceptable values are true and false. If this parameter is not supplied, the script assumes that SSL termination at the load balancer is not being used and defaults the value to false.
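For example, a non-SSL invocation might look like the following; all port values and the output path are illustrative placeholders to be replaced with your own:

$ ./start-managed-servers-wrapper.sh -o /path/to/output-directory -p 30305 -n 30555 -m 30655 -s false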

Run the startup configuration scripts for IPM and WCCADF applications as applicable

Run the configure-ipm-connection.sh script to perform startup configuration if IPM is enabled.

$ cd ${WORKDIR}/create-wcc-domain/domain-home-on-pv/
$ ./configure-ipm-connection.sh -l <load_balancer_external_ip> -p <load_balancer_port> -s <ssl_or_ssl_termination>

Run the configure-wccadf-domain.sh script to perform startup configuration if ADFUI is enabled.

$ cd ${WORKDIR}/create-wcc-domain/domain-home-on-pv/
$ ./configure-wccadf-domain.sh -n <node_ip> -m <ucm_node_port>

Patch the domain for the changes to be applied:

#STOP
$ kubectl patch domain DOMAINUID -n NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/serverStartPolicy", "value": "NEVER" }]'

$ sleep 2m

#START
$ kubectl patch domain DOMAINUID -n NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/serverStartPolicy", "value": "IF_NEEDED" }]'
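While the stop and start patches take effect, you can watch the server pods shut down and come back up:

$ kubectl get pods -n wccns -w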

The default domain created by the script has the following characteristics:

Verify the results

The create domain script will verify that the domain was created, and will report failure if there was any error. However, it may be desirable to manually verify the domain, even if just to gain familiarity with the various Kubernetes objects that were created by the script.

Generated YAML files with the default inputs

Sample content of the generated domain.yaml:

$ cat output/weblogic-domains/wccinfra/domain.yaml
# Copyright (c) 2021, Oracle and/or its affiliates.
# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.
#
# This is an example of how to define a Domain resource.
#
apiVersion: "weblogic.oracle/v8"
kind: Domain
metadata:
  name: wccinfra
  namespace: wccns
  labels:
    weblogic.domainUID: wccinfra
spec:
  # The WebLogic Domain Home
  domainHome: /u01/oracle/user_projects/domains/wccinfra
  maxClusterConcurrentStartup: 1

  # The domain home source type
  # Set to PersistentVolume for domain-in-pv, Image for domain-in-image, or FromModel for model-in-image
  domainHomeSourceType: PersistentVolume

  # The WebLogic Server Docker image that WebLogic Kubernetes Operator uses to start the domain
  image: "oracle/wccontent:14.1.2.0.0"

  # imagePullPolicy defaults to "Always" if image version is :latest
  imagePullPolicy: "IfNotPresent"

  # Identify which Secret contains the credentials for pulling an image
  #imagePullSecrets:
  #- name: 

  # Identify which Secret contains the WebLogic Admin credentials (note that there is an example of
  # how to create that Secret at the end of this file)
  webLogicCredentialsSecret: 
    name: wccinfra-domain-credentials

  # Whether to include the server out file into the pod's stdout, default is true
  includeServerOutInPodLog: true

  # Whether to enable log home
  logHomeEnabled: true

  # Whether to write HTTP access log file to log home
  httpAccessLogInLogHome: true

  # The in-pod location for domain log, server logs, server out, and Node Manager log files
  logHome: /u01/oracle/user_projects/domains/logs/wccinfra
  # An (optional) in-pod location for data storage of default and custom file stores.
  # If not specified or the value is either not set or empty (e.g. dataHome: "") then the
  # data storage directories are determined from the WebLogic domain home configuration.
  dataHome: ""


  # serverStartPolicy legal values are "NEVER", "IF_NEEDED", or "ADMIN_ONLY"
  # This determines which WebLogic Servers the WebLogic Kubernetes Operator will start up when it discovers this Domain
  # - "NEVER" will not start any server in the domain
  # - "ADMIN_ONLY" will start up only the administration server (no managed servers will be started)
  # - "IF_NEEDED" will start all non-clustered servers, including the administration server and clustered servers up to the replica count
  serverStartPolicy: "IF_NEEDED"

  serverPod:
    # an (optional) list of environment variable to be set on the servers
    env:
    - name: JAVA_OPTIONS
      value: "-Dweblogic.StdoutDebugEnabled=false"
    - name: USER_MEM_ARGS
      value: "-Djava.security.egd=file:/dev/./urandom -Xms256m -Xmx512m "
    volumes:
    - name: weblogic-domain-storage-volume
      persistentVolumeClaim:
        claimName: wccinfra-domain-pvc
    volumeMounts:
    - mountPath: /u01/oracle/user_projects/domains
      name: weblogic-domain-storage-volume

  # adminServer is used to configure the desired behavior for starting the administration server.
  adminServer:
    # serverStartState legal values are "RUNNING" or "ADMIN"
    # "RUNNING" means the listed server will be started up to "RUNNING" mode
    # "ADMIN" means the listed server will be start up to "ADMIN" mode
    serverStartState: "RUNNING"
    adminService:
      channels:
    # The Admin Server's NodePort
       - channelName: default
         nodePort: 30701
    # Uncomment to export the T3Channel as a service
    #    - channelName: T3Channel

  # clusters is used to configure the desired behavior for starting member servers of a cluster.  
  # If you use this entry, then the rules will be applied to ALL servers that are members of the named clusters.
  clusters:
  - clusterName: ibr_cluster
    serverService:
      precreateService: true
    serverStartState: "RUNNING"
    serverPod:
      # Instructs Kubernetes scheduler to prefer nodes for new cluster members where there are not
      # already members of the same cluster.
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: "weblogic.clusterName"
                      operator: In
                      values:
                        - $(CLUSTER_NAME)
                topologyKey: "kubernetes.io/hostname"
    replicas: 1
    serverStartPolicy: "IF_NEEDED"
  # The number of managed servers to start for unlisted clusters
  # replicas: 1

  # Istio
  # configuration:
  #   istio:
  #     enabled: 
  #     readinessPort: 

  - clusterName: ucm_cluster
    clusterService:
         annotations:
            traefik.ingress.kubernetes.io/affinity: "true"
            traefik.ingress.kubernetes.io/service.sticky.cookie: "true"
            traefik.ingress.kubernetes.io/session-cookie-name: JSESSIONID
    serverService:
      precreateService: true
    serverStartState: "RUNNING"
    serverPod:
      # Instructs Kubernetes scheduler to prefer nodes for new cluster members where there are not
      # already members of the same cluster.
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: "weblogic.clusterName"
                      operator: In
                      values:
                        - $(CLUSTER_NAME)
                topologyKey: "kubernetes.io/hostname"
    replicas: 3
    serverStartPolicy: "IF_NEEDED"
  # The number of managed servers to start for unlisted clusters
  # replicas: 1
  - clusterName: ipm_cluster
    clusterService:
         annotations: 
            traefik.ingress.kubernetes.io/affinity: "true"
            traefik.ingress.kubernetes.io/service.sticky.cookie: "true"
            traefik.ingress.kubernetes.io/session-cookie-name: JSESSIONID
    serverService:
      precreateService: true
    serverStartState: "RUNNING"
    serverPod:
      # Instructs Kubernetes scheduler to prefer nodes for new cluster members where there are not
      # already members of the same cluster.
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: "weblogic.clusterName"
                      operator: In
                      values:
                        - $(CLUSTER_NAME)
                topologyKey: "kubernetes.io/hostname"
    replicas: 3
  # The number of managed servers to start for unlisted clusters
  # replicas: 1
  - clusterName: capture_cluster
    clusterService:
         annotations: 
            traefik.ingress.kubernetes.io/affinity: "true"
            traefik.ingress.kubernetes.io/service.sticky.cookie: "true"
            traefik.ingress.kubernetes.io/session-cookie-name: JSESSIONID
    serverService:
      precreateService: true
    serverStartState: "RUNNING"
    serverPod:
      # Instructs Kubernetes scheduler to prefer nodes for new cluster members where there are not
      # already members of the same cluster.
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: "weblogic.clusterName"
                      operator: In
                      values:
                        - $(CLUSTER_NAME)
                topologyKey: "kubernetes.io/hostname"
    replicas: 3
  # The number of managed servers to start for unlisted clusters
  # replicas: 1
  - clusterName: wccadf_cluster
    clusterService:
         annotations: 
            traefik.ingress.kubernetes.io/affinity: "true"
            traefik.ingress.kubernetes.io/service.sticky.cookie: "true"
            traefik.ingress.kubernetes.io/session-cookie-name: WCCSID
    serverService:
      precreateService: true
    serverStartState: "RUNNING"
    serverPod:
      # Instructs Kubernetes scheduler to prefer nodes for new cluster members where there are not
      # already members of the same cluster.
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: "weblogic.clusterName"
                      operator: In
                      values:
                        - $(CLUSTER_NAME)
                topologyKey: "kubernetes.io/hostname"
    replicas: 3
  # The number of managed servers to start for unlisted clusters
  # replicas: 1

Verify the domain

To confirm that the domain was created, enter the following command:

$ kubectl describe domain DOMAINUID -n NAMESPACE

Replace DOMAINUID with the domainUID and NAMESPACE with the actual namespace.

Sample domain description:

$ kubectl describe domain wccinfra -n wccns
Name:         wccinfra
Namespace:    wccns
Labels:       weblogic.domainUID=wccinfra
Annotations:  API Version:  weblogic.oracle/v8
Kind:         Domain
Metadata:
  Creation Timestamp:  2020-11-23T12:48:13Z
  Generation:          7
  Managed Fields:
    API Version:  weblogic.oracle/v8
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
        f:labels:
          .:
          f:weblogic.domainUID:
    Manager:      kubectl
    Operation:    Update
    Time:         2020-11-23T13:50:28Z
    API Version:  weblogic.oracle/v8
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:clusters:
        f:conditions:
        f:servers:
        f:startTime:
    Manager:         OpenAPI-Generator
    Operation:       Update
    Time:            2020-12-03T10:20:52Z
  Resource Version:  18267402
  Self Link:         /apis/weblogic.oracle/v8/namespaces/wccns/domains/wccinfra
  UID:               1a866c30-9b29-4281-bd2b-df80914efdff
Spec:
  Admin Server:
    Admin Service:
      Channels:
        Channel Name:    default
        Node Port:       30701
    Server Start State:  RUNNING
  Clusters:
    Cluster Name:  ibr_cluster
    Replicas:      1
    Server Pod:
      Affinity:
        Pod Anti Affinity:
          Preferred During Scheduling Ignored During Execution:
            Pod Affinity Term:
              Label Selector:
                Match Expressions:
                  Key:       weblogic.clusterName
                  Operator:  In
                  Values:
                    $(CLUSTER_NAME)
              Topology Key:  kubernetes.io/hostname
            Weight:          100
    Server Service:
      Precreate Service:  true
    Server Start Policy:  IF_NEEDED
    Server Start State:   RUNNING
    Cluster Name:         ucm_cluster
    Cluster Service:
      Annotations:
        traefik.ingress.kubernetes.io/affinity:               true
        traefik.ingress.kubernetes.io/service.sticky.cookie:  true
        traefik.ingress.kubernetes.io/session-cookie-name:    JSESSIONID
    Replicas:                                                 3
    Server Pod:
      Affinity:
        Pod Anti Affinity:
          Preferred During Scheduling Ignored During Execution:
            Pod Affinity Term:
              Label Selector:
                Match Expressions:
                  Key:       weblogic.clusterName
                  Operator:  In
                  Values:
                    $(CLUSTER_NAME)
              Topology Key:  kubernetes.io/hostname
            Weight:          100
    Server Service:
      Precreate Service:           true
    Server Start Policy:           IF_NEEDED
    Server Start State:            RUNNING
    Cluster Name:         ipm_cluster
    Cluster Service:
      Annotations:
        traefik.ingress.kubernetes.io/affinity:               true
        traefik.ingress.kubernetes.io/service.sticky.cookie:  true
        traefik.ingress.kubernetes.io/session-cookie-name:    JSESSIONID
    Replicas:                                                 3
    Server Pod:
      Affinity:
        Pod Anti Affinity:
          Preferred During Scheduling Ignored During Execution:
            Pod Affinity Term:
              Label Selector:
                Match Expressions:
                  Key:       weblogic.clusterName
                  Operator:  In
                  Values:
                    $(CLUSTER_NAME)
              Topology Key:  kubernetes.io/hostname
            Weight:          100
    Server Service:
      Precreate Service:  true
    Server Start State:   RUNNING
    Cluster Name:         capture_cluster
    Cluster Service:
      Annotations:
        traefik.ingress.kubernetes.io/affinity:               true
        traefik.ingress.kubernetes.io/service.sticky.cookie:  true
        traefik.ingress.kubernetes.io/session-cookie-name:    JSESSIONID
    Replicas:                                                 3
    Server Pod:
      Affinity:
        Pod Anti Affinity:
          Preferred During Scheduling Ignored During Execution:
            Pod Affinity Term:
              Label Selector:
                Match Expressions:
                  Key:       weblogic.clusterName
                  Operator:  In
                  Values:
                    $(CLUSTER_NAME)
              Topology Key:  kubernetes.io/hostname
            Weight:          100
    Server Service:
      Precreate Service:  true
    Server Start State:   RUNNING
    Cluster Name:         wccadf_cluster
    Cluster Service:
      Annotations:
        traefik.ingress.kubernetes.io/affinity:               true
        traefik.ingress.kubernetes.io/service.sticky.cookie:  true
        traefik.ingress.kubernetes.io/session-cookie-name:    WCCSID
    Replicas:                                                 3
    Server Pod:
      Affinity:
        Pod Anti Affinity:
          Preferred During Scheduling Ignored During Execution:
            Pod Affinity Term:
              Label Selector:
                Match Expressions:
                  Key:       weblogic.clusterName
                  Operator:  In
                  Values:
                    $(CLUSTER_NAME)
              Topology Key:  kubernetes.io/hostname
            Weight:          100
    Server Service:
      Precreate Service:  true
    Server Start State:   RUNNING
  Data Home:
  Domain Home:                     /u01/oracle/user_projects/domains/wccinfra
  Domain Home Source Type:         PersistentVolume
  Http Access Log In Log Home:     true
  Image:                           oracle/wccontent:14.1.2.0.0
  Image Pull Policy:               IfNotPresent
  Include Server Out In Pod Log:   true
  Log Home:                        /u01/oracle/user_projects/domains/logs/wccinfra
  Log Home Enabled:                true
  Max Cluster Concurrent Startup:  1
  Server Pod:
    Env:
      Name:   JAVA_OPTIONS
      Value:  -Dweblogic.StdoutDebugEnabled=false
      Name:   USER_MEM_ARGS
      Value:  -Djava.security.egd=file:/dev/./urandom -Xms256m -Xmx512m
    Volume Mounts:
      Mount Path:  /u01/oracle/user_projects/domains
      Name:        weblogic-domain-storage-volume
    Volumes:
      Name:  weblogic-domain-storage-volume
      Persistent Volume Claim:
        Claim Name:     wccinfra-domain-pvc
  Server Start Policy:  IF_NEEDED
  Web Logic Credentials Secret:
    Name:  wccinfra-domain-credentials
Status:
  Clusters:
    Cluster Name:      ibr_cluster
    Maximum Replicas:  5
    Minimum Replicas:  0
    Ready Replicas:    1
    Replicas:          1
    Replicas Goal:     1
    Cluster Name:      ucm_cluster
    Maximum Replicas:  5
    Minimum Replicas:  0
    Ready Replicas:    3
    Replicas:          3
    Replicas Goal:     3
    Cluster Name:      ipm_cluster
    Maximum Replicas:  5
    Minimum Replicas:  0
    Ready Replicas:    3
    Replicas:          3
    Replicas Goal:     3
    Cluster Name:      capture_cluster
    Maximum Replicas:  5
    Minimum Replicas:  0
    Ready Replicas:    3
    Replicas:          3
    Replicas Goal:     3
    Cluster Name:      wccadf_cluster
    Maximum Replicas:  5
    Minimum Replicas:  0
    Ready Replicas:    3
    Replicas:          3
    Replicas Goal:     3
  Conditions:
    Last Transition Time:  2020-11-23T13:58:41.070Z
    Reason:                ServersReady
    Status:                True
    Type:                  Available
  Servers:
    Desired State:  RUNNING
    Health:
      Activation Time:  2020-11-25T16:55:24.930Z
      Overall Health:   ok
      Subsystems:
        Subsystem Name:  ServerRuntime
        Symptoms:
    Node Name:      MyNodeName
    Server Name:    AdminServer
    State:          RUNNING
    Cluster Name:   ibr_cluster
    Desired State:  RUNNING
    Health:
      Activation Time:  2020-11-30T12:23:27.603Z
      Overall Health:   ok
      Subsystems:
        Subsystem Name:  ServerRuntime
        Symptoms:
    Node Name:      MyNodeName
    Server Name:    ibr_server1
    State:          RUNNING
    Cluster Name:   ibr_cluster
    Desired State:  SHUTDOWN
    Server Name:    ibr_server2
    Cluster Name:   ibr_cluster
    Desired State:  SHUTDOWN
    Server Name:    ibr_server3
    Cluster Name:   ibr_cluster
    Desired State:  SHUTDOWN
    Server Name:    ibr_server4
    Cluster Name:   ibr_cluster
    Desired State:  SHUTDOWN
    Server Name:    ibr_server5
    Cluster Name:   ucm_cluster
    Desired State:  RUNNING
    Health:
      Activation Time:  2020-12-02T14:10:37.992Z
      Overall Health:   ok
      Subsystems:
        Subsystem Name:  ServerRuntime
        Symptoms:
    Node Name:      MyNodeName
    Server Name:    ucm_server1
    State:          RUNNING
    Cluster Name:   ucm_cluster
    Desired State:  RUNNING
    Health:
      Activation Time:  2020-12-01T04:51:19.886Z
      Overall Health:   ok
      Subsystems:
        Subsystem Name:  ServerRuntime
        Symptoms:
    Node Name:      MyNodeName
    Server Name:    ucm_server2
    State:          RUNNING
    Cluster Name:   ucm_cluster
    Desired State:  SHUTDOWN
    Server Name:    ucm_server3
    Cluster Name:   ucm_cluster
    Desired State:  SHUTDOWN
    Server Name:    ucm_server4
    Cluster Name:   ucm_cluster
    Desired State:  SHUTDOWN
    Server Name:    ucm_server5
    Cluster Name:   ipm_cluster
    Desired State:  RUNNING
    Health:
      Activation Time:  2020-12-01T04:51:19.886Z
      Overall Health:   ok
      Subsystems:
        Subsystem Name:  ServerRuntime
        Symptoms:
    Node Name:      MyNodeName
    Server Name:    ipm_server1
    State:          RUNNING
    Cluster Name:   ipm_cluster
    Desired State:  SHUTDOWN
    Server Name:    ipm_server2
    Cluster Name:   ipm_cluster
    Desired State:  SHUTDOWN
    Server Name:    ipm_server3
    Cluster Name:   ipm_cluster
    Desired State:  SHUTDOWN
    Server Name:    ipm_server4
    Cluster Name:   ipm_cluster
    Desired State:  SHUTDOWN
    Server Name:    ipm_server5
    Cluster Name:   capture_cluster
    Desired State:  RUNNING
    Health:         
      Activation Time:  2020-12-01T04:51:19.886Z
      Overall Health:   ok
      Subsystems:
        Subsystem Name:  ServerRuntime 
        Symptoms:
    Node Name:      MyNodeName
    Server Name:    capture_server1
    State:          RUNNING
    Cluster Name:   capture_cluster
    Desired State:  SHUTDOWN
    Server Name:    capture_server2
    Cluster Name:   capture_cluster
    Desired State:  SHUTDOWN
    Server Name:    capture_server3
    Cluster Name:   capture_cluster
    Desired State:  SHUTDOWN
    Server Name:    capture_server4
    Cluster Name:   capture_cluster
    Desired State:  SHUTDOWN
    Server Name:    capture_server5
    Cluster Name:   wccadf_cluster
    Desired State:  RUNNING
    Health:         
      Activation Time:  2020-12-01T04:51:19.886Z
      Overall Health:   ok
      Subsystems:
        Subsystem Name:  ServerRuntime 
        Symptoms:
    Node Name:      MyNodeName
    Server Name:    wccadf_server1
    State:          RUNNING
    Cluster Name:   wccadf_cluster
    Desired State:  SHUTDOWN
    Server Name:    wccadf_server2
    Cluster Name:   wccadf_cluster
    Desired State:  SHUTDOWN
    Server Name:    wccadf_server3
    Cluster Name:   wccadf_cluster
    Desired State:  SHUTDOWN
    Server Name:    wccadf_server4
    Cluster Name:   wccadf_cluster
    Desired State:  SHUTDOWN
    Server Name:    wccadf_server5
  Start Time:       2020-11-23T12:48:13.756Z
Events:             <none>

In the Status section of the output, the available servers and clusters are listed. Note that if this command is issued soon after the script finishes, there may be no servers available yet, or perhaps only the Administration Server but no Managed Servers. WebLogic Kubernetes Operator will start up the Administration Server first and wait for it to become ready before starting the Managed Servers.

Verify the pods

Enter the following command to see the pods running the servers:

$ kubectl get pods -n NAMESPACE

Here is an example of the output of this command. You can verify that an Administration Server and the Managed Servers for the ucm, ibr, ipm, capture, and wccadf clusters are running.

$ kubectl get pod -n wccns
NAME                                                READY   STATUS      RESTARTS   AGE
rcu                                                 1/1     Running     0          78d
wccinfra-adminserver                                1/1     Running     0          9d
wccinfra-create-fmw-infra-sample-domain-job-l8r9d   0/1     Completed   0          9d
wccinfra-ibr-server1                                1/1     Running     0          9d
wccinfra-ucm-server1                                1/1     Running     0          9d
wccinfra-ucm-server2                                1/1     Running     0          9d
wccinfra-ucm-server3                                1/1     Running     0          9d
wccinfra-ipm-server1                                1/1     Running     0          9d
wccinfra-ipm-server2                                1/1     Running     0          9d
wccinfra-ipm-server3                                1/1     Running     0          9d
wccinfra-capture-server1                            1/1     Running     0          9d
wccinfra-capture-server2                            1/1     Running     0          9d
wccinfra-capture-server3                            1/1     Running     0          9d
wccinfra-wccadf-server1                             1/1     Running     0          9d
wccinfra-wccadf-server2                             1/1     Running     0          9d
wccinfra-wccadf-server3                             1/1     Running     0          9d

Verify the services

Enter the following command to see the services for the domain:

$ kubectl get services -n NAMESPACE

Here is an example of the output of this command.

Sample list of services:

$ kubectl get services -n wccns
NAME                               TYPE        CLUSTER-IP       EXTERNAL-IP       PORT(S)          AGE
wccinfra-adminserver               ClusterIP   None             <none>            7001/TCP         9d
wccinfra-adminserver-external      NodePort    10.104.100.193   <none>            7001:30701/TCP   9d
wccinfra-cluster-ibr-cluster       ClusterIP   10.98.100.212    <none>            16250/TCP        9d
wccinfra-cluster-ibr-cluster-ext   NodePort    10.109.247.52    <none>            5555:30555/TCP   9d
wccinfra-cluster-ucm-cluster       ClusterIP   10.108.47.178    <none>            16200/TCP        9d
wccinfra-cluster-ipm-cluster       ClusterIP   10.108.217.111   <none>            16000/TCP        9d
wccinfra-cluster-capture-cluster   ClusterIP   10.110.193.252   <none>            16400/TCP        9d
wccinfra-cluster-wccadf-cluster    ClusterIP   10.109.191.247   <none>            16225/TCP        9d
wccinfra-ibr-server1               ClusterIP   None             <none>            16250/TCP        9d
wccinfra-ibr-server2               ClusterIP   10.97.253.44     <none>            16250/TCP        9d
wccinfra-ibr-server3               ClusterIP   10.110.183.48    <none>            16250/TCP        9d
wccinfra-ibr-server4               ClusterIP   10.108.228.158   <none>            16250/TCP        9d
wccinfra-ibr-server5               ClusterIP   10.101.29.140    <none>            16250/TCP        9d
wccinfra-ucm-server1               ClusterIP   None             <none>            16200/TCP        9d
wccinfra-ucm-server2               ClusterIP   None             <none>            16200/TCP        9d
wccinfra-ucm-server3               ClusterIP   None             <none>            16200/TCP        9d
wccinfra-ucm-server4               ClusterIP   10.109.25.242    <none>            16200/TCP        9d
wccinfra-ucm-server5               ClusterIP   10.109.193.26    <none>            16200/TCP        9d
wccinfra-ipm-server1               ClusterIP   None             <none>            16000/TCP        9d
wccinfra-ipm-server2               ClusterIP   None             <none>            16000/TCP        9d
wccinfra-ipm-server3               ClusterIP   None             <none>            16000/TCP        9d
wccinfra-ipm-server4               ClusterIP   10.111.215.108   <none>            16000/TCP        9d
wccinfra-ipm-server5               ClusterIP   10.109.220.10    <none>            16000/TCP        9d
wccinfra-capture-server1           ClusterIP   None             <none>            16400/TCP        9d
wccinfra-capture-server2           ClusterIP   None             <none>            16400/TCP        9d
wccinfra-capture-server3           ClusterIP   None             <none>            16400/TCP        9d
wccinfra-capture-server4           ClusterIP   10.109.72.216    <none>            16400/TCP        9d
wccinfra-capture-server5           ClusterIP   10.102.90.234    <none>            16400/TCP        9d
wccinfra-wccadf-server1            ClusterIP   None             <none>            16225/TCP        9d
wccinfra-wccadf-server2            ClusterIP   None             <none>            16225/TCP        9d
wccinfra-wccadf-server3            ClusterIP   None             <none>            16225/TCP        9d
wccinfra-wccadf-server4            ClusterIP   10.99.91.229     <none>            16225/TCP        9d
wccinfra-wccadf-server5            ClusterIP   10.105.114.38    <none>            16225/TCP        9d

Scale-up/down Managed Server Counts

For an existing domain, these Managed Server replica counts can be modified, independently of each other, by modifying domain.yaml (to be handled by users with sufficient access). To scale the Managed Server counts up or down in an existing domain, perform the following steps.

$ cd ${WORKDIR}/create-wcc-domain/domain-home-on-pv/output/weblogic-domains/wccinfra/

# Modify the respective managed server replicas to scale up or scale down, and save the file.
$ vim domain.yaml

# Apply the updated domain.yaml configuration file
$ kubectl apply -f domain.yaml
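Alternatively, a specific cluster's replica count can be patched directly, without editing the file. This sketch assumes that ucm_cluster is the second entry (index 1) in the spec.clusters list of the generated domain.yaml; verify the index in your copy before patching:

# Scale ucm_cluster (index 1 in spec.clusters) to 2 Managed Servers
$ kubectl patch domain wccinfra -n wccns --type='json' -p='[{"op": "replace", "path": "/spec/clusters/1/replicas", "value": 2 }]'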

Details required for configuring IBR provider on UCM

  1. Obtain details for service wccinfra-cluster-ibr-cluster-ext for the NodePort mapped to IBR intradoc port

    NAME                            TYPE      CLUSTER-IP    EXTERNAL-IP PORT(S)                      
    wccinfra-cluster-ibr-cluster-ext NodePort 10.109.247.52 <none>     5555:30555/TCP               
  2. Create the outgoing provider by providing the following details, and then restart the servers.

    Provide the NodePort value (30555 in the sample above) as the Server Port.

    Server Host Name: <hostname on which the IBR server pod is deployed>

    Server Port: 30555

    wcc-provider-ucm-ibr
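
For step 1 above, the service details can be obtained with kubectl; a minimal sketch, assuming the domain namespace wccns used throughout this guide:

    $ kubectl get service wccinfra-cluster-ibr-cluster-ext -n wccns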

Configure an additional mount or shared space to a domain for Imaging and Capture

Optionally, if you want to configure an additional mount or shared space to a domain for the WebCenter Imaging and WebCenter Capture applications for file imports, refer to [Configure an Additional Mount or Shared-Space to a Domain for Imaging and Capture]({{< relref "/wccontent-domains/adminguide/configure-mount-share.md" >}}).

Launch Oracle WebCenter Content Native Applications in Containers

This section provides the steps required to use product native binaries with user interfaces.

Issue with Launching Headful User Interfaces for Oracle WebCenter Content Native Binaries

Oracle WebCenter Content (UCM) provides a set of native binaries with headful UIs, which are delivered as part of the product container image. WebCenter Content container images are, by default, created with the Oracle slim Linux image, which does not come with all the packages pre-installed to support launching headful applications with UIs. UCM provides many such native binaries that use Java AWT for UI support. With the current Oracle WebCenter Content container images, running these native applications fails because the UIs cannot be launched.

The following sections document the solution by providing a set of instructions that enable users to run UCM native applications with UIs.

These instructions are divided into two parts:

  1. Steps to update the existing container image
  2. Steps to launch native applications using VNC sessions

Steps to Update out-of-the-box Oracle WebCenter Content Container Image Using WebLogic Image Tool

This section describes how to update the image with the required OS packages using the WebLogic Image Tool. Please refer to this document for setting up the WebLogic Image Tool.

Additional Build Commands

The required OS packages can be installed in the image by running the yum command through the additional build commands option of the WebLogic Image Tool. Here is a sample additionalBuildCmds.txt file to be used to install the required Linux packages (libXext.x86_64, libXrender.x86_64, and libXtst.x86_64).

[final-build-commands]
USER root
RUN yum -y --downloaddir=/tmp/imagetool install libXext libXrender libXtst \
    && yum -y --downloaddir=/tmp/imagetool clean all \
    && rm -rf /var/cache/yum/* \
    && rm -rf /tmp/imagetool
USER oracle

Note: It is important to change the user back to oracle; otherwise, the user during container execution will be root.

Build arguments

The arguments required for updating the image can be passed as a file to the WebLogic Image Tool.

'update' is the sub-command to the Image Tool for updating an existing Docker image.
'--fromImage' option provides the existing Docker image that is to be updated.
'--tag' option should be provided with the new tag for the updated image.
'--additionalBuildCommands' option should be provided with the additional build commands file created above.
'--chown oracle:root' option should be provided to update file permissions.

Below is a sample build argument (buildArgs) file to be used for updating the image:

  update
  --fromImage <existing_WCContent_image_without_dependent_packages>
  --tag <name_of_updated_WCContent_image_to_be_built>
  --additionalBuildCommands ./additionalBuildCmds.txt
  --chown oracle:root 

Update Oracle WebCenter Content Container Image

Now we can execute the WebLogic Image Tool to update the out-of-the-box image, using the build argument file described above:

$ imagetool @buildArgs

WebLogic Image Tool provides multiple options for updating the image. For detailed information on the update options, please refer to this document.

Updating the image does not modify the CMD entry from the source image unless it is modified in the additional build commands.

$ docker inspect -f '{{.Config.Cmd}}' <name_of_updated_Wccontent_image>
[/u01/oracle/container-scripts/createDomainandStartAdmin.sh]
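
Similarly, you can confirm the effective user of the updated image; if the USER oracle directive in the additional build commands took effect, this should print oracle (a sanity check, not a required step):

    $ docker inspect -f '{{.Config.User}}' <name_of_updated_WCContent_image>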

Steps to launch Oracle WebCenter Content native applications using VNC sessions

Once the updated image is successfully built and available on all required nodes, do the following:

  a. Update the domain.yaml file with the updated image name and apply it:

    $ kubectl apply -f domain.yaml

  b. After applying the modified domain.yaml, the pods will restart and run with the updated image containing the required packages:

    $ kubectl get pods -n <namespace_being_used_for_wccontent_domain>

  c. Create VNC sessions on the master node to launch native apps. The following steps are to be performed using the VNC session.

  d. Run this command on each VNC session:

    $ xhost + <HOST-IP or HOST-NAME of the node, on which POD is deployed>

    Note: The above command works for multi-node clusters (in which the master node and worker nodes are deployed on different hosts and pods are distributed among worker nodes running on different hosts). In the case of single-node clusters (where there is only a master node, no worker nodes, and all pods are deployed on the host on which the master node is running), use the container/pod's IP instead of the master node's HOST-IP.

    To obtain the container IP, run the command mentioned in step g from within that container's shell.

    $ xhost + <IP of the container, from which binaries are to be run>

  e. Get into the pod's (for example, wccinfra-ucm-server1) shell:

    $ kubectl exec -n wccns -it wccinfra-ucm-server1 -- /bin/bash

  f. Traverse to the binaries location:

    $ cd /u01/oracle/user_projects/domains/wccinfra/ucm/cs/bin

  g. Get the container IP:

    $ hostname -i

  h. Set the DISPLAY variable within the container:

    $ export DISPLAY=<HOST-IP or HOST-NAME of the master node, where the VNC session was created>:<vnc-session display-id>

  i. Launch any native UCM application from within the container, like this:

    $ ./SystemProperties

If the application has a UI, it will be launched now.

Administration Guide

Describes how to use some of the common utility tools and configurations to administer Oracle WebCenter Content domains.

Set up a load balancer

The Oracle WebLogic Server Kubernetes operator supports ingress-based load balancers such as Traefik and NGINX (kubernetes/ingress-nginx). It also supports the Apache webtier load balancer.

Traefik

This section provides information about how to install and configure the ingress-based Traefik load balancer (version 2.6.0 or later for production deployments) to load balance Oracle WebCenter Content domain clusters. You can configure Traefik for non-SSL, SSL termination, and end-to-end SSL access of the application URLs.

Follow these steps to set up Traefik as a load balancer for an Oracle WebCenter Content domain in a Kubernetes cluster:

Non-SSL and SSL termination

Install the Traefik (ingress-based) load balancer

  1. Use Helm to install the Traefik (ingress-based) load balancer. For detailed information, see here. Use the values.yaml file in the sample but set kubernetes.namespaces specifically.

     $ cd ${WORKDIR}
     $ kubectl create namespace traefik
     $ helm repo add traefik https://helm.traefik.io/traefik --force-update

    Sample output:

     "traefik" has been added to your repositories
  2. Install Traefik:

     $ cd ${WORKDIR}
     $ helm install traefik  traefik/traefik \
          --namespace traefik \
          --values charts/traefik/values.yaml \
          --set  "kubernetes.namespaces={traefik}" \
          --set "service.type=NodePort" --wait    

    Sample output:

        NAME: traefik
        LAST DEPLOYED: Sun Jan 17 23:30:20 2021
        NAMESPACE: traefik
        STATUS: deployed
        REVISION: 1
        TEST SUITE: None   

    A sample values.yaml for deployment of Traefik 2.6.0:

    image:
      name: traefik
      tag: 2.6.0
      pullPolicy: IfNotPresent
    ingressRoute:
      dashboard:
        enabled: true
        # Additional ingressRoute annotations (e.g. for kubernetes.io/ingress.class)
        annotations: {}
        # Additional ingressRoute labels (e.g. for filtering IngressRoute by custom labels)
        labels: {}
    providers:
      kubernetesCRD:
        enabled: true
      kubernetesIngress:
        enabled: true
        # IP used for Kubernetes Ingress endpoints
    ports:
      traefik:
        port: 9000
        expose: true
        # The exposed port for this service
        exposedPort: 9000
        # The port protocol (TCP/UDP)
        protocol: TCP
      web:
        port: 8000
        # hostPort: 8000
        expose: true
        exposedPort: 30305
        nodePort: 30305
        # The port protocol (TCP/UDP)
        protocol: TCP
        # Use nodeport if set. This is useful if you have configured Traefik in a
        # LoadBalancer
        # nodePort: 32080
        # Port Redirections
        # Added in 2.2, you can make permanent redirects via entrypoints.
        # https://docs.traefik.io/routing/entrypoints/#redirection
        # redirectTo: websecure
      websecure:
        port: 8443
        # hostPort: 8443
        expose: true
        exposedPort: 30443
        # The port protocol (TCP/UDP)
        protocol: TCP
        nodePort: 30443
    additionalArguments:
      - "--log.level=INFO"
  3. Verify the Traefik status and find the port number of the SSL and non-SSL services:

     $ kubectl get all -n traefik

Sample output:

   NAME                                    READY   STATUS    RESTARTS   AGE
   pod/traefik-f9cf58697-p57nt             1/1     Running   0          22d
   
   NAME                                    TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                                          AGE
   service/traefik                         NodePort   10.96.95.253   <none>        9000:32306/TCP,30305:30305/TCP,30443:30443/TCP   22d
   
   NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
   deployment.apps/traefik                 1/1     1            1           22d
   
   NAME                                    DESIRED   CURRENT   READY   AGE
   replicaset.apps/traefik-f9cf58697       1         1         1       22d
  4. Access the Traefik dashboard through the URL http://$(hostname -f):32306, with the HTTP host traefik.example.com:

    $ curl -H "host: $(hostname -f)" http://$(hostname -f):32306/dashboard/

    Note: Make sure that you specify a fully qualified node name for $(hostname -f).

Configure Traefik to manage ingresses

Configure Traefik to manage ingresses created in this namespace, where traefik is the Traefik namespace and wccns is the namespace of the domain:

$ helm upgrade traefik traefik/traefik --namespace traefik --reuse-values \
--set "kubernetes.namespaces={traefik,wccns}"

Sample output:

Release "traefik" has been upgraded. Happy Helming!
NAME: traefik
LAST DEPLOYED: Sun Jan 17 23:43:02 2021
NAMESPACE: traefik
STATUS: deployed
REVISION: 2
TEST SUITE: None

Create an ingress for the domain

Create an ingress for the domain in the domain namespace by using the sample Helm chart. Here, path-based routing is used for the ingress. Sample values for the default configuration are shown in the file ${WORKDIR}/charts/ingress-per-domain/values.yaml. By default, type is TRAEFIK, tls is NONSSL, and domainType is wccinfra. These values can be overridden by passing values through the command line or can be edited in the sample file values.yaml based on the type of configuration (non-SSL or SSL). If needed, you can update the ingress YAML file to define more path rules (in section spec.rules.host.http.paths) based on the domain application URLs that need to be accessed. The template YAML file for the Traefik (ingress-based) load balancer is located at ${WORKDIR}/charts/ingress-per-domain/templates/traefik-ingress.yaml.
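
For orientation, here is a hypothetical sketch of the defaults described above; the authoritative keys are in ${WORKDIR}/charts/ingress-per-domain/values.yaml, so verify against your copy before editing:

    # Hypothetical excerpt of charts/ingress-per-domain/values.yaml
    type: TRAEFIK          # load balancer type: TRAEFIK or NGINX
    tls: NONSSL            # NONSSL or SSL
    domainType: wccinfra   # domain type, as described above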

  1. Install ingress-per-domain using Helm for non-SSL configuration:

     $ cd ${WORKDIR}
     $ helm install wcc-traefik-ingress  \
         charts/ingress-per-domain \
         --set type=TRAEFIK \
         --namespace wccns \
         --values charts/ingress-per-domain/values.yaml \
         --set "traefik.hostname=$(hostname -f)" \
         --set tls=NONSSL

    Sample output:

      NAME: wcc-traefik-ingress
      LAST DEPLOYED: Sun Jan 17 23:49:09 2021
      NAMESPACE: wccns
      STATUS: deployed
      REVISION: 1
      TEST SUITE: None
  2. For secured access (SSL) to the Oracle WebCenter Content application, create a certificate and generate a Kubernetes secret:

     $ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/tls1.key -out /tmp/tls1.crt \
     -subj "/CN=<your_host_name>" \
     -extensions san -config \
     <(echo "[req]";
     echo distinguished_name=req;
     echo "[san]";
     echo subjectAltName=DNS:<your_host_name>
     )
    
     $ kubectl -n wccns create secret tls domain1-tls-cert --key /tmp/tls1.key --cert /tmp/tls1.crt

    Note: The value of CN and subjectAltName is the host on which this ingress is to be deployed.

  3. Create Traefik Middleware custom resource

    In case of SSL termination, Traefik must pass a custom header WL-Proxy-SSL:true to the WebLogic Server endpoints. Create the Middleware using the following command:

    $ cat <<EOF | kubectl apply -f -
    apiVersion: traefik.containo.us/v1alpha1
    kind: Middleware
    metadata:
      name: wls-proxy-ssl
      namespace: wccns
    spec:
      headers:
        customRequestHeaders:
           WL-Proxy-SSL: "true"
    EOF
  4. Create the Traefik TLSStore custom resource.

    In case of SSL termination, Traefik should be configured to use the user-defined SSL certificate. If the user-defined SSL certificate is not configured, Traefik will create a default SSL certificate. To configure a user-defined SSL certificate for Traefik, use the TLSStore custom resource. The Kubernetes secret created with the SSL certificate should be referenced in the TLSStore object. Run the following command to create the TLSStore:

    $ cat <<EOF | kubectl apply -f -
    apiVersion: traefik.containo.us/v1alpha1
    kind: TLSStore
    metadata:
      name: default
      namespace: wccns
    spec:
      defaultCertificate:
        secretName:  domain1-tls-cert   
    EOF
  5. Install ingress-per-domain using Helm for SSL configuration.

    The Kubernetes secret name should be updated in the template file.

    The template file also contains the following annotations:

     traefik.ingress.kubernetes.io/router.entrypoints: websecure
     traefik.ingress.kubernetes.io/router.tls: "true"
     traefik.ingress.kubernetes.io/router.middlewares: wccns-wls-proxy-ssl@kubernetescrd

    The entry point for SSL access and the Middleware name should be updated in the annotation. The Middleware name should be in the form <namespace>-<middleware name>@kubernetescrd.

     $ cd ${WORKDIR}
     $ helm install wcc-traefik-ingress  \
         charts/ingress-per-domain \
         --set type=TRAEFIK \
         --namespace wccns \
         --values charts/ingress-per-domain/values.yaml \
         --set "traefik.hostname=$(hostname -f)" \
         --set "traefik.hostnameorip=$(hostname -f)" \
         --set tls=SSL

    Sample output:

      NAME: wcc-traefik-ingress
      LAST DEPLOYED: Mon Jul 20 11:44:13 2020
      NAMESPACE: wccns
      STATUS: deployed
      REVISION: 1
      TEST SUITE: None
  6. For non-SSL access to the Oracle WebCenter Content application, get the details of the services supported by the deployed ingress:

     $ kubectl describe ingress wccinfra-traefik  -n wccns

These are all the services supported by the above deployed ingress:

  Name:             wccinfra-traefik
  Namespace:        wccns
  Address:
  Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
  Rules:
  Host                                        Path  Backends
  ----                                        ----  --------
  domain1.org
                                             /em                      wccinfra-adminserver:7001 (10.244.0.201:7001)
                                             /wls-exporter            wccinfra-adminserver:7001 (10.244.0.201:7001)
                                             /cs                      wccinfra-cluster-ucm-cluster:16200 (10.244.0.202:16200,10.244.0.207:16200,10.244.0.211:16200)
                                             /adfAuthentication       wccinfra-cluster-ucm-cluster:16200 (10.244.0.202:16200,10.244.0.207:16200,10.244.0.211:16200)
                                             /_ocsh                   wccinfra-cluster-ucm-cluster:16200 (10.244.0.202:16200,10.244.0.207:16200,10.244.0.211:16200)
                                             /_dav                    wccinfra-cluster-ucm-cluster:16200 (10.244.0.202:16200,10.244.0.207:16200,10.244.0.211:16200)
                                             /idcws                   wccinfra-cluster-ucm-cluster:16200 (10.244.0.202:16200,10.244.0.207:16200,10.244.0.211:16200)
                                             /idcnativews             wccinfra-cluster-ucm-cluster:16200 (10.244.0.202:16200,10.244.0.207:16200,10.244.0.211:16200)
                                             /wsm-pm                  wccinfra-cluster-ucm-cluster:16200 (10.244.0.202:16200,10.244.0.207:16200,10.244.0.211:16200)
                                             /ibr                     wccinfra-cluster-ibr-cluster:16250 (10.244.0.203:16250)
                                             /ibr/adfAuthentication   wccinfra-cluster-ibr-cluster:16250 (10.244.0.203:16250)
                                             /weblogic/ready          wccinfra-cluster-ucm-cluster:16200 (10.244.0.202:16200,10.244.0.207:16200,10.244.0.211:16200)
                                             /imaging                 wccinfra-cluster-ipm-cluster:16000 (10.244.0.206:16000,10.244.0.209:16000,10.244.0.213:16000)
                                             /dc-console              wccinfra-cluster-capture-cluster:16400 (10.244.0.204:16400,10.244.0.208:16400,10.244.0.212:16400)
                                             /dc-client               wccinfra-cluster-capture-cluster:16400 (10.244.0.204:16400,10.244.0.208:16400,10.244.0.212:16400)
                                             /wcc                     wccinfra-cluster-wccadf-cluster:16225 (10.244.0.205:16225,10.244.0.210:16225,10.244.0.214:16225)
Annotations:                                    kubernetes.io/ingress.class: traefik
                                             meta.helm.sh/release-name: wcc-traefik-ingress
                                             meta.helm.sh/release-namespace: wccns
Events:                                         <none>
  7. For SSL access to the Oracle WebCenter Content application, get the details of the services supported by the above deployed ingress:

     $ kubectl describe  ingress wccinfra-traefik  -n wccns

The services supported by the ingress are the same as those shown in the non-SSL output above.
  8. To confirm that the load balancer noticed the new ingress and is successfully routing to the domain server pods, you can send a request to the URL for the "WebLogic ReadyApp framework", which should return an HTTP 200 status code, as follows:

     $ curl -v http://${LOADBALANCER_HOSTNAME}:${LOADBALANCER_PORT}/weblogic/ready
     * About to connect() to abc.com port 30305 (#0)
     *   Trying 100.111.156.246...
     * Connected to abc.com (100.111.156.246) port 30305 (#0)
     > GET /weblogic/ready HTTP/1.1
     > User-Agent: curl/7.29.0
     > Host: domain1.org:30305
     > Accept: */*
     >
     < HTTP/1.1 200 OK
     < Content-Length: 0
     < Date: Thu, 03 Dec 2020 13:16:19 GMT
     < Vary: Accept-Encoding
     <
     * Connection #0 to host abc.com left intact

Verify domain application URL access

For non-SSL configuration

After setting up the Traefik (ingress-based) load balancer, verify that the domain application URLs are accessible through the non-SSL load balancer port 30305 for HTTP access. The sample URLs for an Oracle WebCenter Content domain of type wcc are:

    http://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-Non-SSLPORT}/weblogic/ready
    http://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-Non-SSLPORT}/cs
    http://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-Non-SSLPORT}/ibr
    http://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-Non-SSLPORT}/em
    http://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-Non-SSLPORT}/imaging
    http://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-Non-SSLPORT}/dc-console
    http://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-Non-SSLPORT}/wcc

For SSL configuration

After setting up the Traefik (ingress-based) load balancer, verify that the domain applications are accessible through the SSL load balancer port 30443 for HTTPS access. The sample URLs for an Oracle WebCenter Content domain are:

    https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/weblogic/ready
    https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/cs
    https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/ibr
    https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/em
    https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/imaging
    https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/dc-console
    https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/wcc

End-to-end SSL configuration

Install the Traefik load balancer for end-to-end SSL

  1. Use Helm to install the Traefik (ingress-based) load balancer. For detailed information, see here. Use the values.yaml file in the sample but set kubernetes.namespaces specifically.

     $ cd ${WORKDIR}
     $ kubectl create namespace traefik
     $ helm repo add traefik https://helm.traefik.io/traefik --force-update

    Sample output:

     "traefik" has been added to your repositories
  2. Install Traefik:

     $ cd ${WORKDIR}
     $ helm install traefik  traefik/traefik \
          --namespace traefik \
          --values charts/traefik/values.yaml \
          --set  "kubernetes.namespaces={traefik}" \
          --set "service.type=NodePort" \
          --wait

    Sample output:

        NAME: traefik
        LAST DEPLOYED: Sun Jan 17 23:30:20 2021
        NAMESPACE: traefik
        STATUS: deployed
        REVISION: 1
        TEST SUITE: None
  3. Verify the Traefik operator status and find the port number of the SSL and non-SSL services:

     $ kubectl get all -n traefik

Sample output:


   NAME                                    READY   STATUS    RESTARTS   AGE
   pod/traefik-operator-676fc64d9c-skppn   1/1     Running   0          78d

   NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
   service/traefik-operator             NodePort    10.109.223.59   <none>        443:30443/TCP,80:30305/TCP   78d
   service/traefik-operator-dashboard   ClusterIP   10.110.85.194   <none>        80/TCP                       78d

   NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
   deployment.apps/traefik-operator   1/1     1            1           78d

   NAME                                          DESIRED   CURRENT   READY   AGE
   replicaset.apps/traefik-operator-676fc64d9c   1         1         1       78d
   replicaset.apps/traefik-operator-cb78c9dc9    0         0         0       78d
  4. Access the Traefik dashboard through the URL http://$(hostname -f):32306, with the HTTP host traefik.example.com:

    $ curl -H "host: $(hostname -f)" http://$(hostname -f):32306/dashboard/

    Note: Make sure that you specify a fully qualified node name for $(hostname -f).

Configure Traefik to manage the domain

Configure Traefik to manage the domain application service created in this namespace, where traefik is the Traefik namespace and wccns is the namespace of the domain:

$ helm upgrade traefik traefik/traefik --namespace traefik --reuse-values \
--set "kubernetes.namespaces={traefik,wccns}"

Sample output:

      Release "traefik" has been upgraded. Happy Helming!
      NAME: traefik
      LAST DEPLOYED: Sun Jan 17 23:43:02 2021
      NAMESPACE: traefik
      STATUS: deployed
      REVISION: 2
      TEST SUITE: None

Create IngressRouteTCP

  1. To enable SSL passthrough in Traefik, you can configure a TCP router. A sample YAML for IngressRouteTCP is available at ${WORKDIR}/charts/ingress-per-domain/tls/traefik-tls.yaml.

    Note: There is a limitation with the load balancer in end-to-end SSL configuration: accessing multiple types of servers (different Managed Servers and/or the Administration Server) at the same time is currently not supported. You can access only one Managed Server at a time.

    The following should be updated in traefik-tls.yaml:

    • The service name and the SSL port should be updated in the Services.
    • The load balancer hostname should be updated in the HostSNI rule.

    Sample traefik-tls.yaml:

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: wcc-ucm-routetcp
  namespace: wccns
spec:
  entryPoints:
    - websecure
  routes:
  - match: HostSNI(`your_host_name`)
    services:
    - name: wccinfra-cluster-ucm-cluster
      port: 16201
      weight: 3
      terminationDelay: 400
  tls:
    passthrough: true   
  2. Create the IngressRouteTCP:

    $ cd ${WORKDIR}/charts/ingress-per-domain/tls
    $ kubectl apply -f traefik-tls.yaml

Verify end-to-end SSL access

Verify the access to application URLs exposed through the configured service. You should be able to access the following Oracle WebCenter Content domain URLs:

LOADBALANCER-SSLPORT is 30443

https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/cs
https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/ibr
https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/imaging
https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/dc-console
https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/wcc
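
As a quick check, a curl sketch for one of the URLs above; -k is used because the certificate created earlier in this guide is self-signed, and the hostname must match the HostSNI rule in traefik-tls.yaml:

    $ curl -k https://${LOADBALANCER_HOSTNAME}:30443/cs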

Delete the IngressRouteTCP

$ cd ${WORKDIR}/charts/ingress-per-domain/tls

$ kubectl delete -f traefik-tls.yaml

Uninstall Traefik

$ helm delete wcc-traefik-ingress -n wccns

$ helm delete traefik -n traefik

$ kubectl delete namespace traefik

NGINX

This section provides information about how to install and configure the ingress-based NGINX load balancer to load balance Oracle WebCenter Content domain clusters. You can configure NGINX for non-SSL, SSL termination, and end-to-end SSL access of the application URL.

Follow these steps to set up NGINX as a load balancer for an Oracle WebCenter Content domain in a Kubernetes cluster:

See the official installation document for prerequisites.

To get repository information, enter the following Helm commands:

  $ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
  $ helm repo update

Non-SSL and SSL termination

Install the NGINX load balancer

  1. Deploy the ingress-nginx controller by using Helm on the domain namespace:

     $ helm install nginx-ingress -n wccns \
            --set controller.service.type=NodePort \
            --set controller.admissionWebhooks.enabled=false \
              ingress-nginx/ingress-nginx 

    Sample output:

NAME: nginx-ingress
LAST DEPLOYED: Fri Jul 29 00:14:19 2022
NAMESPACE: wccns
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
Get the application URL by running these commands:
  export HTTP_NODE_PORT=$(kubectl --namespace wccns get services -o jsonpath="{.spec.ports[0].nodePort}" nginx-ingress-ingress-nginx-controller)
  export HTTPS_NODE_PORT=$(kubectl --namespace wccns get services -o jsonpath="{.spec.ports[1].nodePort}" nginx-ingress-ingress-nginx-controller)
  export NODE_IP=$(kubectl --namespace wccns get nodes -o jsonpath="{.items[0].status.addresses[1].address}")
  echo "Visit http://$NODE_IP:$HTTP_NODE_PORT to access your application via HTTP."
  echo "Visit https://$NODE_IP:$HTTPS_NODE_PORT to access your application via HTTPS."
An example Ingress that makes use of the controller:
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: example
    namespace: foo
  spec:
    ingressClassName: nginx
    rules:
      - host: www.example.com
        http:
          paths:
            - pathType: Prefix
              backend:
                service:
                  name: exampleService
                  port:
                    number: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
      - hosts:
        - www.example.com
        secretName: example-tls
If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:
  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls
  2. Check the status of the deployed ingress controller:

    $ kubectl --namespace wccns get services | grep ingress-nginx-controller

    Sample output:

     nginx-ingress-ingress-nginx-controller    NodePort    10.97.189.122    <none>            80:30993/TCP,443:30232/TCP    7d2h

Configure NGINX to manage ingresses

  1. Create an ingress for the domain in the domain namespace by using the sample Helm chart. Here, path-based routing is used for the ingress. Sample values for the default configuration are shown in the file ${WORKDIR}/charts/ingress-per-domain/values.yaml. By default, type is TRAEFIK, tls is NONSSL, and domainType is wccinfra. These values can be overridden by passing values through the command line or can be edited in the sample file values.yaml. If needed, you can update the ingress YAML file to define more path rules (in section spec.rules.host.http.paths) based on the domain application URLs that need to be accessed. Update the template YAML file for the NGINX load balancer located at ${WORKDIR}/charts/ingress-per-domain/templates/nginx-ingress.yaml.

    $ cd ${WORKDIR}
    $ helm install wccinfra-nginx-ingress charts/ingress-per-domain \
    --namespace wccns \
    --values charts/ingress-per-domain/values.yaml \
    --set "nginx.hostname=$(hostname -f)" \
    --set type=NGINX \
    --set tls=NONSSL

    Sample output:

    NAME: wccinfra-nginx-ingress
    LAST DEPLOYED: Sun Feb  7 23:52:38 2021
    NAMESPACE: wccns
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
  2. For secured access (SSL) to the Oracle WebCenter Content application, create a certificate and generate a Kubernetes secret:

     $ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/tls1.key -out /tmp/tls1.crt \
     -subj "/CN=<your_host_name>" \
     -extensions san -config \
     <(echo "[req]";
     echo distinguished_name=req;
     echo "[san]";
     echo subjectAltName=DNS:<your_host_name>
     )
    
     $ kubectl -n wccns create secret tls domain1-tls-cert --key /tmp/tls1.key --cert /tmp/tls1.crt

    Note: The value of CN and subjectAltName is the host on which this ingress is to be deployed.

  3. Install ingress-per-domain using Helm for SSL configuration:

     $ cd ${WORKDIR}
     $ helm install wccinfra-nginx-ingress charts/ingress-per-domain \
         --namespace wccns \
         --values charts/ingress-per-domain/values.yaml \
         --set "nginx.hostname=$(hostname -f)" \
         --set "nginx.hostnameorip=$(hostname -f)" \
         --set type=NGINX --set tls=SSL

    Sample output:

     NAME: wccinfra-nginx-ingress
     LAST DEPLOYED: Mon Feb  8 00:01:13 2021
     NAMESPACE: wccns
     STATUS: deployed
     REVISION: 1
     TEST SUITE: None
  4. For non-SSL or SSL access to the Oracle WebCenter Content application, get the details of the services supported by the deployed ingress:

      $ kubectl describe ingress wccinfra-nginx  -n wccns

Sample output of the services supported by the above deployed ingress:

Name:             wccinfra-nginx
Namespace:        wccns
Address:          10.97.189.122
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
  domain1-tls-cert terminates domain1.org
Rules:
  Host                                        Path  Backends
  ----                                        ----  --------
  domain1.org
                                              /em                      wccinfra-adminserver:7001 (10.244.0.58:7001)
                                              /servicebus              wccinfra-adminserver:7001 (10.244.0.58:7001)
                                              /cs                      wccinfra-cluster-ucm-cluster:16200 (10.244.0.60:16200,10.244.0.61:16200)
                                              /adfAuthentication       wccinfra-cluster-ucm-cluster:16200 (10.244.0.60:16200,10.244.0.61:16200)
                                              /ibr                     wccinfra-cluster-ibr-cluster:16250 (10.244.0.59:16250)
                                              /ibr/adfAuthentication   wccinfra-cluster-ibr-cluster:16250 (10.244.0.59:16250)
                                              /weblogic/ready          wccinfra-cluster-ucm-cluster:16200 (10.244.0.60:16200,10.244.0.61:16200)
                                              /imaging                 wccinfra-cluster-ipm-cluster:16000 (10.244.0.206:16000,10.244.0.209:16000,10.244.0.213:16000)
                                              /dc-console              wccinfra-cluster-capture-cluster:16400 (10.244.0.204:16400,10.244.0.208:16400,10.244.0.212:16400)
                                              /dc-client               wccinfra-cluster-capture-cluster:16400 (10.244.0.204:16400,10.244.0.208:16400,10.244.0.212:16400)
                                              /wcc                     wccinfra-cluster-wccadf-cluster:16225 (10.244.0.205:16225,10.244.0.210:16225,10.244.0.214:16225)
Annotations:                                  kubernetes.io/ingress.class: nginx
                                              meta.helm.sh/release-name: wccinfra-nginx-ingress
                                              meta.helm.sh/release-namespace: wccns
                                              nginx.ingress.kubernetes.io/configuration-snippet:
                                                more_set_input_headers "X-Forwarded-Proto: https";
                                                more_set_input_headers "WL-Proxy-SSL: true";
                                              nginx.ingress.kubernetes.io/ingress.allow-http: false
Events:                                       <none>

Verify non-SSL and SSL termination access

Non-SSL configuration

Verify that the Oracle WebCenter Content domain application URLs are accessible through the LOADBALANCER-Non-SSLPORT:

  http://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-Non-SSLPORT}/weblogic/ready
  http://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-Non-SSLPORT}/em
  http://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-Non-SSLPORT}/cs
  http://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-Non-SSLPORT}/ibr
  http://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-Non-SSLPORT}/imaging
  http://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-Non-SSLPORT}/dc-console
  http://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-Non-SSLPORT}/wcc

SSL configuration

Verify that the Oracle WebCenter Content domain application URLs are accessible through the LOADBALANCER-SSLPORT:

  https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/weblogic/ready
  https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/em
  https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/cs
  https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/ibr
  https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/imaging
  https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/dc-console
  https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER-SSLPORT}/wcc

Uninstall the ingress

Uninstall and delete the ingress deployment:

  $ helm delete wccinfra-nginx-ingress -n wccns

End-to-end SSL configuration

Install the NGINX load balancer for End-to-end SSL

  1. For secured access (SSL) to the Oracle WebCenter Content application, create a certificate and generate secrets:

     $ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/tls1.key -out /tmp/tls1.crt -subj "/CN=*"
     $ kubectl -n wccns create secret tls domain1-tls-cert --key /tmp/tls1.key --cert /tmp/tls1.crt
  2. Deploy the ingress-nginx controller by using Helm on the domain namespace:

     $ helm install nginx-ingress -n wccns \     
     --set controller.extraArgs.default-ssl-certificate=wccns/domain1-tls-cert \
     --set controller.service.type=NodePort \
     --set controller.admissionWebhooks.enabled=false \
     --set controller.extraArgs.enable-ssl-passthrough=true \
     ingress-nginx/ingress-nginx

Sample output:


NAME: nginx-ingress
LAST DEPLOYED: Thu Sep  8 23:59:54 2022
NAMESPACE: wccns
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
Get the application URL by running these commands:
  export HTTP_NODE_PORT=$(kubectl --namespace wccns get services -o jsonpath="{.spec.ports[0].nodePort}" nginx-ingress-ingress-nginx-controller)
  export HTTPS_NODE_PORT=$(kubectl --namespace wccns get services -o jsonpath="{.spec.ports[1].nodePort}" nginx-ingress-ingress-nginx-controller)
  export NODE_IP=$(kubectl --namespace wccns get nodes -o jsonpath="{.items[0].status.addresses[1].address}")
  echo "Visit http://$NODE_IP:$HTTP_NODE_PORT to access your application via HTTP."
  echo "Visit https://$NODE_IP:$HTTPS_NODE_PORT to access your application via HTTPS."
An example Ingress that makes use of the controller:
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: example
    namespace: foo
  spec:
    ingressClassName: nginx
    rules:
      - host: www.example.com
        http:
          paths:
            - pathType: Prefix
              backend:
                service:
                  name: exampleService
                  port:
                    number: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
      - hosts:
        - www.example.com
        secretName: example-tls
If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:
  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls
  3. Check the status of the deployed ingress controller:

     $ kubectl --namespace wccns get services | grep ingress-nginx-controller

    Sample output:

      nginx-ingress-ingress-nginx-controller   NodePort    10.97.189.122    <none>            80:30993/TCP,443:30232/TCP    168m

Deploy TLS to access individual Managed Servers

  1. Deploy TLS to securely access the services. Only one application can be configured with ssl-passthrough. A sample TLS file for NGINX is shown below for the service wccinfra-cluster-ucm-cluster and port 16201. All the applications running on port 16201 can be securely accessed through this ingress. Create a different ingress for each backend service, because NGINX does not support multiple paths/rules with the ssl-passthrough annotation. That is, for wccinfra-cluster-ucm-cluster, wccinfra-cluster-ibr-cluster, wccinfra-cluster-ipm-cluster, wccinfra-cluster-capture-cluster, wccinfra-cluster-wccadf-cluster, and wccinfra-adminserver, different ingresses must be created.

    Note: There is a limitation with the load balancer in end-to-end SSL configuration: accessing multiple types of servers (different Managed Servers and/or the Administration Server) at the same time is currently not supported. You can access only one Managed Server at a time.

     $ cd ${WORKDIR}/charts/ingress-per-domain/tls

    Sample nginx-ucm-tls.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wcc-ucm-ingress
  namespace: wccns
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  tls:
  - hosts:
    - 'your_host_name'
    secretName: domain1-tls-cert
  rules:
  - host: 'your_host_name'
    http:
      paths:
      - path:
        pathType: ImplementationSpecific
        backend:
          service:
            name: wccinfra-cluster-ucm-cluster
            port: 
              number: 16201

Note: host is the server on which this ingress is deployed.
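
Because each backend service needs its own ingress, below is a hypothetical variant for the IBR cluster, modeled on the UCM sample above; the file name and the SSL port placeholder are assumptions, to be replaced with the actual values from your domain configuration:

    # Hypothetical nginx-ibr-tls.yaml (a sketch, not shipped with the samples)
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: wcc-ibr-ingress
      namespace: wccns
      annotations:
        kubernetes.io/ingress.class: nginx
        nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    spec:
      tls:
      - hosts:
        - 'your_host_name'
        secretName: domain1-tls-cert
      rules:
      - host: 'your_host_name'
        http:
          paths:
          - path:
            pathType: ImplementationSpecific
            backend:
              service:
                name: wccinfra-cluster-ibr-cluster
                port:
                  number: <IBR SSL port>   # replace with the IBR cluster's SSL port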

  2. Deploy the secured ingress:

    $ cd ${WORKDIR}/charts/ingress-per-domain/tls
    $ kubectl create -f nginx-ucm-tls.yaml
  3. Check the services supported by the ingress:

    $ kubectl describe ingress wcc-ucm-ingress -n wccns

Services supported by the ingress:

Name:             wcc-ucm-ingress
Namespace:        wccns
Address:          10.102.97.237
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
  domain1-tls-cert terminates domain1.org
Rules:
  Host                                         Path  Backends
  ----                                         ----  --------
  domain1.org
                                                  wccinfra-cluster-ucm-cluster:16201 (10.244.238.136:16201,10.244.253.132:16201)
Annotations:                                   kubernetes.io/ingress.class: nginx
                                               nginx.ingress.kubernetes.io/ssl-passthrough: true
Events:
  Type    Reason  Age                 From                      Message
  ----    ------  ----                ----                      -------
  Normal  Sync    62s (x2 over 106s)  nginx-ingress-controller  Scheduled for sync

Verify end-to-end SSL access

Verify that the Oracle WebCenter Content domain application URLs are accessible through the LOADBALANCER-SSLPORT:

   https://${LOADBALANCER-HOSTNAME}:${LOADBALANCER-SSLPORT}/cs
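
As a quick check, a curl sketch for the URL above; -k accepts the self-signed certificate created earlier, and the hostname must match the host rule in nginx-ucm-tls.yaml (the port 30232 is taken from the sample controller output above):

    $ curl -k https://<your_host_name>:30232/cs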

Uninstall ingress-nginx tls

$ cd ${WORKDIR}/charts/ingress-per-domain/tls
$ kubectl  delete -f nginx-ucm-tls.yaml

Uninstall NGINX

# Uninstall and delete the ingress deployment
$ helm delete wccinfra-nginx-ingress -n wccns

# Uninstall NGINX
$ helm delete nginx-ingress -n wccns

Monitor an Oracle WebCenter Content domain

You can monitor a WebCenter Content domain using Prometheus and Grafana by exporting the metrics from the domain instance using the WebLogic Monitoring Exporter.

Set up monitoring for the Oracle WebCenter Content domain

Using the WebLogic Monitoring Exporter, you can scrape runtime information from a running Oracle WebCenter Content Suite instance and monitor it using Prometheus and Grafana. Follow these steps to set up monitoring for an Oracle WebCenter Content Suite instance. For more details on the WebLogic Monitoring Exporter, see here.

Verify monitoring using Grafana Dashboard

After setup is complete, to view the domain metrics, you can access the Grafana dashboard at http://mycompany.com:32100/.

This displays the WebLogic Server Dashboard.

wcc-gp-dashboard

Elasticsearch integration for logs

Monitor an Oracle WebCenter Content domain and publish the WebLogic Server logs to Elasticsearch.

1. Integrate Elasticsearch with the WebLogic Kubernetes Operator

For reference information, see Elasticsearch integration for the WebLogic Kubernetes Operator.

To enable Elasticsearch integration, you must edit the file ${WORKDIR}/charts/weblogic-operator/values.yaml before deploying the WebLogic Kubernetes Operator.

# elkIntegrationEnabled specifies whether or not ELK integration is enabled.                                            
elkIntegrationEnabled: true                                                                                             
                                                                                                                        
# logStashImage specifies the docker image containing logstash.                                                         
# This parameter is ignored if 'elkIntegrationEnabled' is false.                                                        
logStashImage: "logstash:6.8.23"                                                                                         
                                                                                                                        
# elasticSearchHost specifies the hostname of where Elasticsearch is running.                                           
# This parameter is ignored if 'elkIntegrationEnabled' is false.                                                        
elasticSearchHost: "elasticsearch.default.svc.cluster.local"                                                            
                                                                                                                        
# elasticSearchPort specifies the port number of where Elasticsearch is running.                                        
# This parameter is ignored if 'elkIntegrationEnabled' is false.                                                        
elasticSearchPort: 9200

After you have deployed the WebLogic Kubernetes Operator with the above changes, the weblogic-operator pod will have an additional Logstash container. The Logstash container pushes the weblogic-operator logs to the configured Elasticsearch server.
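
To confirm the additional container, you can list the container names in the operator pod; a minimal sketch, assuming you substitute the namespace in which the operator is deployed (the output should include a Logstash container alongside the operator container):

    $ kubectl get pods -n <operator-namespace> \
        -o jsonpath='{.items[*].spec.containers[*].name}'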

2. Publish WebLogic Server and WebCenter Content Logs using Logstash Pod

You can publish the WebLogic Server logs to the Elasticsearch server using a Logstash pod. This Logstash pod must have access to the shared domain home. For the WebCenter Content domain wccinfra, you can use the persistent volume of the domain home in the Logstash pod. The steps to create the Logstash pod are as follows:

Get the persistent volume details of the domain home of the WebLogic Server(s). The following command lists the persistent volume details in the namespace "wccns":

$ kubectl get pv -n wccns
NAME                 CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                       STORAGECLASS                    REASON   AGE
wccinfra-domain-pv   10Gi       RWX            Retain           Bound    wccns/wccinfra-domain-pvc   wccinfra-domain-storage-class            33d

Create the deployment YAML for the Logstash pod by updating logstash.yaml, located at $WORKDIR/logging-services/logstash/logstash.yaml, according to your configuration. The mounted persistent volume of the domain home provides the Logstash pod access to the WebLogic Server logs. A sample Logstash deployment YAML is given below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash
  namespace: wccns
spec:
  selector:
    matchLabels:
      app: logstash
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: logstash
    spec:
      volumes:
      - name: weblogic-domain-storage-volume
        persistentVolumeClaim:
          claimName: wccinfra-domain-pvc
      - name: shared-logs
        emptyDir: {}
      containers:
      - name: logstash
        image: logstash:6.8.23
        command: ["/bin/sh"]
        args: ["/usr/share/logstash/bin/logstash", "-f", "/u01/oracle/user_projects/domains/logstash.conf"]
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /u01/oracle/user_projects/domains
          name: weblogic-domain-storage-volume
        - name: shared-logs
          mountPath: /shared-logs
        ports:
        - containerPort: 5044
          name: logstash

A sample Logstash configuration file is located at $WORKDIR/logging-services/logstash/logstash.conf

$ vi $WORKDIR/logging-services/logstash/logstash.conf
input {                                                                                                                
  file {                                                                                                               
    path => "/u01/oracle/user_projects/domains/wccinfra/servers/**/logs/*-diagnostic.log"                                          
    start_position => beginning                                                                                        
  }              
  file {                                                                                                               
    path => "/u01/oracle/user_projects/domains/logs/wccinfra/*.log"                                          
    start_position => beginning                                                                                        
  }                                                                                                                                                                                                                                       
}

filter {                                                                                                               
  grok {                                                                                                               
    match => [ "message", "<%{DATA:log_timestamp}> <%{WORD:log_level}> <%{WORD:thread}> <%{HOSTNAME:hostname}> <%{HOSTNAME:servername}> <%{DATA:timer}> <<%{DATA:kernel}>> <> <%{DATA:uuid}> <%{NUMBER:timestamp}> <%{DATA:misc}> <%{DATA:log_number}> <%{DATA:log_message}>" ]                                                                                        
  }                                                                                                                    
}                                                                                                                         
output {                                                                                                               
  elasticsearch {                                                                                                      
    hosts => ["elasticsearch.default.svc.cluster.local:9200"]                                                          
  }                                                                                                                    
}

This sample configuration will publish all server and diagnostic logs under wccinfra to Logstash.

Copy the logstash.conf file into the domain home directory (which is on the shared persistent volume mounted by the Logstash pod):

$ kubectl cp $WORKDIR/logging-services/logstash/logstash.conf wccns/wccinfra-adminserver:/u01/oracle/user_projects/domains/logstash.conf

Deploy Logstash pod

After you have created the Logstash deployment yaml and Logstash configuration file, deploy Logstash using following command:

$ kubectl create -f $WORKDIR/logging-services/logstash/logstash.yaml

3. Test the deployment of Elasticsearch and Kibana

The WebLogic Kubernetes Operator also provides a sample deployment of Elasticsearch and Kibana for testing purposes. You can deploy Elasticsearch and Kibana on the Kubernetes cluster as shown below:

$ cd ${WORKDIR}/elasticsearch-and-kibana/
$ kubectl create -f elasticsearch_and_kibana.yaml

Get the Kibana dashboard port information as shown below:

Wait for pods to start:

-bash-4.2$ kubectl get pods -w
NAME                            READY   STATUS    RESTARTS   AGE
elasticsearch-8bdb7cf54-mjs6s   1/1     Running   0          4m3s
kibana-dbf8964b6-n8rcj          1/1     Running   0          4m3s
-bash-4.2$ kubectl get svc
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
elasticsearch   ClusterIP   10.105.205.157   <none>        9200/TCP,9300/TCP   10d
kibana          NodePort    10.98.104.41     <none>        5601:30412/TCP      10d
kubernetes      ClusterIP   10.96.0.1        <none>        443/TCP             42d

You can access the Kibana dashboard at http://<your_hostname>:30412/. In this example, the node port is 30412.

Create an Index Pattern in Kibana

Create an index pattern logstash-* in Kibana > Management. After the servers are started, you will see the log data in the Kibana dashboard.

Publish logs to Elasticsearch

The WebLogic Logging Exporter adds a log event handler to WebLogic Server. WebLogic Server logs can be pushed to Elasticsearch in Kubernetes directly by using the Elasticsearch REST API. For more details, see the WebLogic Logging Exporter project.

This sample shows you how to publish WebLogic Server logs to Elasticsearch and view them in Kibana. For publishing WebLogic Kubernetes Operator logs, see this sample.

Prerequisites

This document assumes that you have already set up Elasticsearch and Kibana for logs collection. If you have not, please see this document.


Download the WebLogic Logging Exporter binaries

The pre-built binaries are available on the WebLogic Logging Exporter Releases page.

Download:

$ wget https://github.com/oracle/weblogic-logging-exporter/releases/download/v1.0.1/weblogic-logging-exporter.jar
$ wget -O snakeyaml-1.27.jar https://search.maven.org/remotecontent?filepath=org/yaml/snakeyaml/1.27/snakeyaml-1.27.jar

Note: These file names are used in the sample commands in this document.

Copy the JAR Files to the WebLogic Domain Home

Copy the weblogic-logging-exporter.jar and snakeyaml-1.27.jar files to the domain home directory in the Administration Server pod.

$ kubectl cp <file-to-copy>   <namespace>/<administration-server-pod>:<domainhome>
$ kubectl cp weblogic-logging-exporter.jar wccns/wccinfra-adminserver:/u01/oracle/user_projects/domains/wccinfra/

$ kubectl cp snakeyaml-1.27.jar wccns/wccinfra-adminserver:/u01/oracle/user_projects/domains/wccinfra/

Add a Startup Class to the Domain Configuration

In this step, we configure weblogic-logging-exporter JAR as a startup class in the WebLogic servers where we intend to collect the logs.

  1. In the WebLogic Remote Console, in the left navigation pane, expand Environment, and then select Startup and Shutdown Classes.

  2. Add a new startup class. You may choose any descriptive name; however, the class name must be weblogic.logging.exporter.Startup.

    wle-startup-class1

  3. Target the startup class to each server from which you want to export logs.

    wle-startup-class2

  4. You can verify this by checking for the update in your config.xml file(/u01/oracle/user_projects/domains/wccinfra/config/config.xml) which should be similar to this example:

    $ kubectl exec -n wccns -it wccinfra-adminserver -- cat /u01/oracle/user_projects/domains/wccinfra/config/config.xml
    <startup-class>
      <name>weblogic-logging-exporter</name>
      <target>adminServer,ucm_cluster,ibr_cluster,ipm_cluster,capture_cluster,wccadf_cluster</target>
      <class-name>weblogic.logging.exporter.Startup</class-name>
    </startup-class>

Update the WebLogic Server CLASSPATH

  1. Copy the setDomainEnv.sh file from the pod to a local folder:

    $  kubectl cp wccns/wccinfra-adminserver:/u01/oracle/user_projects/domains/wccinfra/bin/setDomainEnv.sh $PWD/setDomainEnv.sh
    

    Ignore this warning: tar: Removing leading '/' from member names

  2. Modify setDomainEnv.sh to update the server classpath, adding the following lines at the end of the file:

    CLASSPATH=/u01/oracle/user_projects/domains/wccinfra/weblogic-logging-exporter.jar:/u01/oracle/user_projects/domains/wccinfra/snakeyaml-1.27.jar:${CLASSPATH}
    export CLASSPATH
  3. Copy back the modified setDomainEnv.sh file to the pod:

    $ kubectl cp setDomainEnv.sh wccns/wccinfra-adminserver:/u01/oracle/user_projects/domains/wccinfra/bin/setDomainEnv.sh

Create a Configuration File for the WebLogic Logging Exporter

In this step, create the configuration file for the weblogic-logging-exporter.

  1. Specify the Elasticsearch server host and port number in file $WORKDIR/logging-services/weblogic-logging-exporter/WebLogicLoggingExporter.yaml:

    Sample:

    weblogicLoggingIndexName: wls
    publishHost: elasticsearch.default.svc.cluster.local
    publishPort: 9200
    domainUID: wccinfra
    weblogicLoggingExporterEnabled: true
    weblogicLoggingExporterSeverity: Notice
    weblogicLoggingExporterBulkSize: 1
    weblogicLoggingExporterFilters:
    - FilterExpression: NOT(MSGID = 'BEA-000449')
  2. Copy the WebLogicLoggingExporter.yaml file to the domain home directory in the WebLogic Administration Server pod:

    $ kubectl cp ${WORKDIR}/logging-services/weblogic-logging-exporter/WebLogicLoggingExporter.yaml  wccns/wccinfra-adminserver:/u01/oracle/user_projects/domains/wccinfra/config/

Restart All the Servers in the Domain

To restart the servers, stop and then start them using the following commands:

To STOP the servers:
$ kubectl patch domain wccinfra -n wccns --type='json' -p='[{"op": "replace", "path": "/spec/serverStartPolicy", "value": "NEVER" }]'
To START the servers:
$ kubectl patch domain wccinfra -n wccns --type='json' -p='[{"op": "replace", "path": "/spec/serverStartPolicy", "value": "IF_NEEDED" }]'

After all the servers are restarted, check the server logs to verify that the weblogic-logging-exporter startup class was called, as shown below:

======================= Weblogic Logging Exporter Startup class called 
================== Reading configuration from file name: /u01/oracle/user_projects/domains/wccinfra/config/WebLogicLoggingExporter.yaml 
  
Config{weblogicLoggingIndexName='wls', publishHost='elasticsearch.default.svc.cluster.local', publishPort=9200, weblogicLoggingExporterSeverity='Notice', weblogicLoggingExporterBulkSize='1', enabled=true, weblogicLoggingExporterFilters=[
FilterConfig{expression='NOT(MSGID = 'BEA-000449')', servers=[]}], domainUID='wccinfra'} 
====================== WebLogic Logging Exporter is enabled 
================= publishHost in initialize: elasticsearch.default.svc.cluster.local 
================= publishPort in initialize: 9200 
================= url in executePutOrPostOnUrl: http://elasticsearch.default.svc.cluster.local:9200/wls

Create an Index Pattern in Kibana

Create an index pattern (for example, wls*, matching the weblogicLoggingIndexName configured above) in Kibana > Management. After the servers are started, you will see the log data in the Kibana dashboard.

Publish logs to Elasticsearch Using Fluentd

Introduction

This page describes how to configure a WebLogic domain to use Fluentd to send log information to Elasticsearch. The general mechanism: a fluentd sidecar container runs in each WebLogic Server pod, tails the server log from the shared domain-home volume, and forwards the parsed entries to Elasticsearch.

Create fluentd configuration

Create a ConfigMap named fluentd-config in the namespace of the domain. The ConfigMap contains the parsing rules and the Elasticsearch configuration. In the configuration below, the <source> section tails the server log file given by the LOG_PATH environment variable, the <parse> section splits each multi-line WebLogic log entry into named fields, and the <match **> section sends the parsed records to the Elasticsearch host and port given by the corresponding environment variables.

Sample ConfigMap for the fluentd configuration, fluentd_configmap.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    weblogic.domainUID: wccinfra
    weblogic.resourceVersion: domain-v2
  name: fluentd-config
  namespace: wccns
data:
  fluentd.conf: |
    <match fluent.**>
      @type null
    </match>
    <source>
      @type tail
      path "#{ENV['LOG_PATH']}"
      pos_file /tmp/server.log.pos
      read_from_head true
      tag "#{ENV['DOMAIN_UID']}"
      # multiline_flush_interval 20s
      <parse>
        @type multiline
        format_firstline /^####/
        format1 /^####<(?<timestamp>(.*?))>/
        format2 / <(?<level>(.*?))>/
        format3 / <(?<subSystem>(.*?))>/
        format4 / <(?<serverName>(.*?))>/
        format5 / <(?<serverName2>(.*?))>/
        format6 / <(?<threadName>(.*?))>/
        format7 / <(?<info1>(.*?))>/
        format8 / <(?<info2>(.*?))>/
        format9 / <(?<info3>(.*?))>/
        format10 / <(?<sequenceNumber>(.*?))>/
        format11 / <(?<severity>(.*?))>/
        format12 / <(?<messageID>(.*?))>/
        format13 / <(?<message>(.*?))>/
      </parse>
    </source>
    <match **>
      @type elasticsearch
      host "#{ENV['ELASTICSEARCH_HOST']}"
      port "#{ENV['ELASTICSEARCH_PORT']}"
      user "#{ENV['ELASTICSEARCH_USER']}"
      password "#{ENV['ELASTICSEARCH_PASSWORD']}"
      index_name "#{ENV['DOMAIN_UID']}"
    </match>

Create the ConfigMap using the following command:

   $ kubectl create -f fluentd_configmap.yaml
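
To verify that the ConfigMap has been created, you can describe it (the namespace below matches this sample):

   $ kubectl describe configmap fluentd-config -n wccns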

Mount the fluentd ConfigMap as a volume in the WebLogic container.

Edit the domain definition and configure a volume for the ConfigMap containing the fluentd configuration.

   $ kubectl edit domain wccinfra -n wccns

The following sample YAML adds the ConfigMap as a volume:

  volumes:
  - name: weblogic-domain-storage-volume
    persistentVolumeClaim:
      claimName: wccinfra-domain-pvc
  - configMap:
      defaultMode: 420
      name: fluentd-config
    name: fluentd-config-volume
   

Add fluentd container to WebLogic Server pods

Add a fluentd container to the domain under the serverPod: section; it will run fluentd in the Administration Server and Managed Server pods.

Edit the domain and add the fluentd container definition:

   $ kubectl edit domain wccinfra -n wccns

Sample fluentd container definition:

containers:
   - args:
     - -c
     - /etc/fluent.conf
     env:
     - name: DOMAIN_UID
       valueFrom:
         fieldRef:
           fieldPath: metadata.labels['weblogic.domainUID']
     - name: SERVER_NAME
       valueFrom:
         fieldRef:
           fieldPath: metadata.labels['weblogic.serverName']
     - name: LOG_PATH
       value: /u01/oracle/user_projects/domains/logs/wccinfra/$(SERVER_NAME).log
     - name: FLUENTD_CONF
       value: fluentd.conf
     - name: FLUENT_ELASTICSEARCH_SED_DISABLE
       value: "true"
     - name: ELASTICSEARCH_HOST
       value: elasticsearch.default.svc.cluster.local
     - name: ELASTICSEARCH_PORT
       value: "9200"
     - name: ELASTICSEARCH_USER
       value: elastic
     - name: ELASTICSEARCH_PASSWORD
       value: changeme
     image: fluent/fluentd-kubernetes-daemonset:v1.3.3-debian-elasticsearch-1.3
     imagePullPolicy: IfNotPresent
     name: fluentd
     resources: {}
     volumeMounts:
     - mountPath: /fluentd/etc/fluentd.conf
       name: fluentd-config-volume
       subPath: fluentd.conf
     - mountPath: /u01/oracle/user_projects/domains
       name: weblogic-domain-storage-volume

Restart WebLogic Servers

To restart the servers, edit the domain and change serverStartPolicy to NEVER to shut down the WebLogic Servers:

   $ kubectl edit domain wccinfra -n wccns

After all the servers are shut down, edit the domain again and set serverStartPolicy to IF_NEEDED to start the servers again.
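
Alternatively, you can use kubectl patch instead of interactively editing the domain; these mirror the patch commands used earlier in this document:

$ kubectl patch domain wccinfra -n wccns --type='json' -p='[{"op": "replace", "path": "/spec/serverStartPolicy", "value": "NEVER" }]'
$ kubectl patch domain wccinfra -n wccns --type='json' -p='[{"op": "replace", "path": "/spec/serverStartPolicy", "value": "IF_NEEDED" }]'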

Create index pattern in Kibana

Create an index pattern “wccinfra*” in Kibana > Management. After the servers start, you will be able to see the log data in the Kibana dashboard.


Configure an additional mount or shared space to a domain for Imaging and Capture

A volume can be mounted to a server pod and made accessible directly from outside the Kubernetes cluster, so that an external application can write new files to it.

This can be used specifically in WebCenter Imaging and WebCenter Capture applications for File Imports.

Kubernetes supports several types of volumes as given in Volumes | Kubernetes.

Further in this section, we use an nfs volume as an example.

Mount “nfs” as volume

To use a volume, specify the volumes to provide for the Pod in .spec.volumes and declare where to mount those volumes into containers in .spec.containers[*].volumeMounts in the domain.yaml file.

Update domain.yaml and apply the changes, as shown in the sample below, for mounting the nfs server (for example, 100.XXX.XXX.X with shared export path at /sharedir) to all the server pods at /u01/sharedir.

The path /u01/sharedir can be configured as the file import path in the WebCenter Imaging and WebCenter Capture applications, and the files placed in /sharedir will be processed by the applications.

Sample entry of domain.yaml with nfs-volume configuration

...
serverPod:
    # an (optional) list of environment variables to be set on the servers
    env:
    - name: JAVA_OPTIONS
      value: "-Dweblogic.StdoutDebugEnabled=false"
    - name: USER_MEM_ARGS
      value: "-Djava.security.egd=file:/dev/./urandom -Xms256m -Xmx1024m "
    volumes:
    - name: weblogic-domain-storage-volume
      persistentVolumeClaim:
        claimName: wccinfra-domain-pvc
    - name: nfs-volume
      nfs:
        server: 100.XXX.XXX.XXX
        path: /sharedir
    volumeMounts:
    - mountPath: /u01/oracle/user_projects/domains
      name: weblogic-domain-storage-volume
    - mountPath: /u01/sharedir
      name: nfs-volume
...

Patch and Upgrade

Patch an existing Oracle WebCenter Content image or upgrade the infrastructure, such as upgrading the underlying Kubernetes cluster to a new release and upgrading the WebLogic Kubernetes Operator release.

Patch an Oracle WebCenter Content product Docker image

Upgrade the underlying Oracle WebCenter Content product image in a running Oracle WebCenter Content Kubernetes environment.

These instructions describe how to upgrade to a new release of the Oracle WebCenter Content product Docker image in a running Oracle WebCenter Content Kubernetes environment. A rolling upgrade approach is used to upgrade the managed server pods of the domain.

Note: Zero downtime is expected because a rolling upgrade approach is used.

Prerequisites

Recommendations:

  • Use the WebLogic Image Tool create feature for patching the Oracle WebCenter Content Docker image with a bundle patch and multiple interim patches. This is the recommended approach because it optimizes the size of the image.
  • Use the WebLogic Image Tool update feature for patching the Oracle WebCenter Content Docker image with a single interim patch. Note that the patched image size may increase considerably due to additional image layers introduced by the patch application tool.

Apply the patched image

  1. Update the image: field in the domain.yaml configuration file with the patched image.

  2. Apply the updated domain.yaml configuration file:

    $ kubectl apply -f domain.yaml

    Note: The server pods will be automatically restarted (rolling restart).
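
As an alternative to editing domain.yaml, the image can also be patched in place. A minimal sketch, assuming the patched image tag oracle/wccontent_update_1015:14.1.2.0.0 built later in this document:

$ kubectl patch domain wccinfra -n wccns --type='json' -p='[{"op": "replace", "path": "/spec/image", "value": "oracle/wccontent_update_1015:14.1.2.0.0" }]'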

Upgrade an operator release

Upgrade the WebLogic Kubernetes Operator release to a newer version.

These instructions apply to upgrading operators within the 4.x release family as additional versions are released.

To upgrade the Kubernetes operator, use the helm upgrade command. When upgrading the operator, the helm upgrade command requires that you supply a new Helm chart and image. For example:

$ helm upgrade \
  --reuse-values \
  --set image=oracle/weblogic-kubernetes-operator:4.2.9 \
  --namespace weblogic-operator-namespace \
  --wait \
  weblogic-operator \
  kubernetes/charts/weblogic-operator

Upgrade a Kubernetes cluster

Upgrade the underlying Kubernetes cluster version in a running Oracle WebCenter Content Kubernetes environment.

These instructions describe how to upgrade a Kubernetes cluster created using kubeadm on which an Oracle WebCenter Content domain is deployed. A rolling upgrade approach is used to upgrade nodes (master and worker) of the Kubernetes cluster.

Warning: Downtime is expected during the upgrade of the Kubernetes cluster because the nodes need to be drained as part of the upgrade process.

Prerequisites

Upgrade the Kubernetes version

An upgrade of Kubernetes is supported from one MINOR version to the next MINOR version, or between PATCH versions of the same MINOR. For example, you can upgrade from 1.x to 1.x+1, but not from 1.x to 1.x+2. To upgrade a Kubernetes version, first all the master nodes of the Kubernetes cluster must be upgraded sequentially, followed by the sequential upgrade of each worker node.
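
As a rough sketch, the node-by-node flow with kubeadm looks like the following; the target version and node names are illustrative, so consult the kubeadm upgrade documentation for your exact versions:

# On each master (control plane) node, one at a time
$ sudo kubeadm upgrade plan
$ sudo kubeadm upgrade apply <new-version>

# On each worker node, one at a time
$ kubectl drain <node-name> --ignore-daemonsets
$ sudo kubeadm upgrade node
$ kubectl uncordon <node-name>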

Create or update an image

This section describes how to create or update an Oracle WebCenter Content Docker image used for deploying Oracle WebCenter Content domains. An Oracle WebCenter Content Docker image can be created using the WebLogic Image Tool.

If you have access to My Oracle Support (MOS) and need to build a new image with a patch (bundle or interim), it is recommended to use the WebLogic Image Tool to build an Oracle WebCenter Content image for production deployments.

Create or update an Oracle WebCenter Content Docker image using the WebLogic Image Tool

Using the WebLogic Image Tool, you can create a new Oracle WebCenter Content Docker image (can include patches as well) or update an existing image with one or more patches (bundle patch and interim patches).

Recommendations:

  • Use create for creating a new Oracle WebCenter Content Docker image, either without any patches, or containing the Oracle WebCenter Content binaries, bundle patch, and interim patches. This is the recommended approach if you have access to the Oracle WebCenter Content patches because it optimizes the size of the image.
  • Use update for patching an existing Oracle WebCenter Content Docker image with a single interim patch. Note that the patched image size may increase considerably due to additional image layers introduced by the patch application tool.

Set up the WebLogic Image Tool

Prerequisites

Verify that your environment meets the following prerequisites:

Set up the WebLogic Image Tool

To set up the WebLogic Image Tool:

  1. Create a working directory and change to it. In these steps, this directory is imagetool-setup.

    $ mkdir imagetool-setup
    $ cd imagetool-setup
  2. Download the latest version of the WebLogic Image Tool from the releases page.

  3. Unzip the release ZIP file to the imagetool-setup directory.

  4. Execute the following commands to set up the WebLogic Image Tool on a Linux environment:

    $ cd imagetool-setup/imagetool/bin
    $ source setup.sh
Validate setup

To validate the setup of the WebLogic Image Tool:

  1. Enter the following command to retrieve the version of the WebLogic Image Tool:

    $ imagetool --version
  2. Enter imagetool then press the Tab key to display the available imagetool commands:

    $ imagetool <TAB>
    cache   create  help    rebase  update
WebLogic Image Tool build directory

The WebLogic Image Tool creates a temporary Docker context directory, prefixed by wlsimgbuilder_temp, every time the tool runs. Under normal circumstances, this context directory will be deleted. However, if the process is aborted or the tool is unable to remove the directory, it is safe for you to delete it manually. By default, the WebLogic Image Tool creates the Docker context directory under the user’s home directory. If you prefer to use a different directory for the temporary context, set the environment variable WLSIMG_BLDDIR:

$ export WLSIMG_BLDDIR="/path/to/build/dir"
WebLogic Image Tool cache

The WebLogic Image Tool maintains a local file cache store. This store is used to look up where the Java, WebLogic Server installers, and WebLogic Server patches reside in the local file system. By default, the cache store is located in the user’s $HOME/cache directory. Under this directory, the lookup information is stored in the .metadata file. All automatically downloaded patches also reside in this directory. You can change the default cache store location by setting the environment variable WLSIMG_CACHEDIR:

$ export WLSIMG_CACHEDIR="/path/to/cachedir"
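
To see what is currently registered in the cache, use the cache listItems command:

$ imagetool cache listItems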
Set up additional build scripts

Creating an Oracle WebCenter Content Docker image using the WebLogic Image Tool requires additional container scripts for Oracle WebCenter Content domains.

  1. Clone the docker-images repository to set up those scripts. In these steps, this directory is DOCKER_REPO:

    $ cd imagetool-setup
    $ git clone https://github.com/oracle/docker-images.git
  2. Copy the additional WebLogic Image Tool build files from the WebLogic Kubernetes Operator source repository to the imagetool-setup location:

    $ mkdir -p imagetool-setup/docker-images/OracleWebCenterContent/imagetool/14.1.2.0.0
    $ cd imagetool-setup/docker-images/OracleWebCenterContent/imagetool/14.1.2.0.0
    $ cp -rf ${WORKDIR}/weblogic-kubernetes-operator/kubernetes/samples/scripts/imagetool-scripts/* .

Create an image

After setting up the WebLogic Image Tool and required build scripts, follow these steps to use the WebLogic Image Tool to create a new Oracle WebCenter Content Docker image.

Download the Oracle WebCenter Content installation binaries and patches

You must download the required Oracle WebCenter Content installation binaries and patches as listed below from the Oracle Software Delivery Cloud and save them in a directory of your choice. In these steps, this directory is referred to as <download location>.

Sample list of installation binaries and patches:

  • JDK: jdk-17.0.9+10_linux-x64_bin.tar.gz

Note: This is a sample list of patches. You must get the appropriate list of patches for your Oracle WebCenter Content image.

Update required build files

The following files, available in the code repository location <imagetool-setup-location>/docker-images/OracleWebCenterContent/imagetool/14.1.2.0.0, are used for creating the image:

  • additionalBuildCmds.txt
  • buildArgs

  1. In the buildArgs file, update all the occurrences of %DOCKER_REPO% with the docker-images repository location, which is the complete path of imagetool-setup/docker-images.

    For example, update:

    %DOCKER_REPO%/OracleWebCenterContent/imagetool/14.1.2.0.0/

    to:
    <imagetool-setup-location>/docker-images/OracleWebCenterContent/imagetool/14.1.2.0.0/

  2. Similarly, update the placeholders %JDK_VERSION% and %BUILDTAG% with appropriate values.
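
    For example, the placeholders can be substituted with sed; the paths and tag below are illustrative and should match your setup:

    $ cd <imagetool-setup-location>/docker-images/OracleWebCenterContent/imagetool/14.1.2.0.0
    $ sed -i 's|%DOCKER_REPO%|<imagetool-setup-location>/docker-images|g' buildArgs
    $ sed -i 's|%JDK_VERSION%|17.0.9-10|g' buildArgs
    $ sed -i 's|%BUILDTAG%|oracle/wccontent_create_1015:14.1.2.0.0|g' buildArgs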

Create the image
  1. Add a JDK package to the WebLogic Image Tool cache:

    $ imagetool cache addInstaller --type jdk --version 17.0.9-10 --path <download location>/jdk-17.0.9+10_linux-x64_bin.tar.gz
  2. Add the downloaded installation binaries to the WebLogic Image Tool cache:

    $ imagetool cache addInstaller --type fmw --version 14.1.2.0.0 --path <download location>/fmw_14.1.2.0.0_infrastructure_generic.jar
    
    $ imagetool cache addInstaller --type wcc --version 14.1.2.0.0 --path <download location>/fmw_14.1.2.0.0_wccontent_generic.jar
  3. Add the downloaded patches to the WebLogic Image Tool cache:

    Commands to add patches in to the cache:

    $ imagetool cache addEntry --key 33578xyz_14.1.2.0.0 --value <download location>/p33578xyz_141200_Generic.zip
    
    $ imagetool cache addEntry --key 28186abc_13.9.4.2.8 --value <download location>/p28186abc_139428_Generic-24497645.zip
    
  4. Update the patches list in buildArgs.

    To the create command in the buildArgs file, append the Oracle WebCenter Content patches list using the --patches flag and the OPatch patch using the --opatchBugNumber flag. Sample options for the list of patches above are:

    --patches 33578xyz_14.1.2.0.0
    --opatchBugNumber=28186abc_13.9.4.2.8

    Example buildArgs file after appending product’s list of patches and Opatch patch:

    create
    --jdkVersion=17.0.9-10
    --type WCC
    --version=14.1.2.0.0
    --tag=oracle/wccontent_create_1015:14.1.2.0.0
    --pull
    --chown oracle:root
    --additionalBuildCommands <imagetool-setup-location>/docker-images/OracleWebCenterContent/imagetool/14.1.2.0.0/additionalBuildCmds.txt
    --additionalBuildFiles <imagetool-setup-location>/docker-images/OracleWebCenterContent/dockerfiles/14.1.2.0.0/container-scripts
    --patches 33578xyz_14.1.2.0.0
    --opatchBugNumber=28186abc_13.9.4.2.8
    

    Refer to this page for the complete list of options available with the WebLogic Image Tool create command.

  5. Enter the following command to create the Oracle WebCenter Content image:

    $ imagetool @<absolute path to buildArgs file>

Sample Dockerfile generated with the imagetool command:

########## BEGIN DOCKERFILE ##########
#
# Copyright (c) 2023, Oracle and/or its affiliates.
#
# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.
#
#
FROM ghcr.io/oracle/oraclelinux:8-slim as os_update
LABEL com.oracle.weblogic.imagetool.buildid="f46ab190-077e-4ed7-b747-7bb170fe592c"
USER root

RUN yum -y --downloaddir=/tmp/imagetool install gzip tar unzip libaio jq hostname  \
 && yum -y --downloaddir=/tmp/imagetool clean all \
 && rm -rf /var/cache/yum/* \
 && rm -rf /tmp/imagetool

## Create user and group
RUN if [ -z "$(getent group root)" ]; then hash groupadd &> /dev/null && groupadd root || exit -1 ; fi \
 && if [ -z "$(getent passwd oracle)" ]; then hash useradd &> /dev/null && useradd -g root oracle || exit -1; fi \
 && mkdir -p /u01 \
 && chown oracle:root /u01 \
 && chmod 775 /u01

# Install Java
FROM os_update as jdk_build
LABEL com.oracle.weblogic.imagetool.buildid="f46ab190-077e-4ed7-b747-7bb170fe592c"

ENV JAVA_HOME=/u01/jdk

COPY --chown=oracle:root jdk-17.0.9-10-linux-x64.tar.gz /tmp/imagetool/

USER oracle


RUN tar xzf /tmp/imagetool/jdk-17.0.9-10-linux-x64.tar.gz -C /u01 \
 && $(test -d /u01/jdk* && mv /u01/jdk* /u01/jdk || mv /u01/graal* /u01/jdk) \
 && rm -rf /tmp/imagetool \
 && rm -f /u01/jdk/javafx-src.zip /u01/jdk/src.zip


# Install Middleware
FROM os_update as wls_build
LABEL com.oracle.weblogic.imagetool.buildid="f46ab190-077e-4ed7-b747-7bb170fe592c"

ENV JAVA_HOME=/u01/jdk \
    ORACLE_HOME=/u01/oracle \
    OPATCH_NO_FUSER=true

RUN mkdir -p /u01/oracle \
 && mkdir -p /u01/oracle/oraInventory \
 && chown oracle:root /u01/oracle/oraInventory \
 && chown oracle:root /u01/oracle

COPY --from=jdk_build --chown=oracle:root /u01/jdk /u01/jdk/

COPY --chown=oracle:root fmw_14.1.2.0.0_infrastructure_generic.jar fmw.rsp /tmp/imagetool/
COPY --chown=oracle:root fmw_14.1.2.0.0_wccontent.jar wcc.rsp /tmp/imagetool/
COPY --chown=oracle:root oraInst.loc /u01/oracle/



USER oracle


RUN echo "INSTALLING MIDDLEWARE" \
 && echo "INSTALLING fmw" \
 &&  \
    /u01/jdk/bin/java -Xmx1024m -jar /tmp/imagetool/fmw_14.1.2.0.0_infrastructure_generic.jar -silent ORACLE_HOME=/u01/oracle \
    -responseFile /tmp/imagetool/fmw.rsp -invPtrLoc /u01/oracle/oraInst.loc -ignoreSysPrereqs -force -novalidation \
 && echo "INSTALLING wcc" \
 &&  \
    /u01/jdk/bin/java -Xmx1024m -jar /tmp/imagetool/fmw_14.1.2.0.0_wccontent.jar -silent ORACLE_HOME=/u01/oracle \
    -responseFile /tmp/imagetool/wcc.rsp -invPtrLoc /u01/oracle/oraInst.loc -ignoreSysPrereqs -force -novalidation \
 && chmod -R g+r /u01/oracle





FROM os_update as final_build

ARG ADMIN_NAME
ARG ADMIN_HOST
ARG ADMIN_PORT
ARG MANAGED_SERVER_PORT

ENV ORACLE_HOME=/u01/oracle \
    JAVA_HOME=/u01/jdk \
    PATH=${PATH}:/u01/jdk/bin:/u01/oracle/oracle_common/common/bin:/u01/oracle/wlserver/common/bin:/u01/oracle

LABEL com.oracle.weblogic.imagetool.buildid="f46ab190-077e-4ed7-b747-7bb170fe592c"

    COPY --from=jdk_build --chown=oracle:root /u01/jdk /u01/jdk/

COPY --from=wls_build --chown=oracle:root /u01/oracle /u01/oracle/



USER oracle
WORKDIR /u01/oracle

#ENTRYPOINT /bin/bash



    ENV ORACLE_HOME=/u01/oracle \
        VOLUME_DIR=/u01/oracle/user_projects \
        SCRIPT_FILE=/u01/oracle/container-scripts/* \
        USER_MEM_ARGS="-Djava.security.egd=file:/dev/./urandom" \
        PATH=$PATH:$JAVA_HOME/bin:$ORACLE_HOME/oracle_common/common/bin:/u01/oracle/wlserver/common/bin:/u01/oracle/container-scripts

    USER root

    RUN mkdir -p $VOLUME_DIR && \
        mkdir -p /u01/oracle/container-scripts && \
        mkdir -p /u01/oracle/silent-install-files-tmp/config && \
        mkdir -p /u01/oracle/logs && \
        chown oracle:root -R /u01 $VOLUME_DIR && \
        chmod a+xr /u01
    COPY --chown=oracle:root files/container-scripts/ /u01/oracle/container-scripts/
    RUN chmod +xr $SCRIPT_FILE


    USER oracle

    EXPOSE $UCM_PORT $UCM_INTRADOC_PORT $IBR_INTRADOC_PORT $IBR_PORT $ADMIN_PORT
    WORKDIR ${ORACLE_HOME}

    CMD ["/u01/oracle/container-scripts/createDomainandStartAdmin.sh"]

########## END DOCKERFILE ##########
  6. Check the created image using the docker images command:

      $ docker images | grep wcc

Update an image

After setting up the WebLogic Image Tool and required build scripts, use the WebLogic Image Tool to update an existing Oracle WebCenter Content Docker image:

  1. Enter the following command for each patch to add the required patch(es) to the WebLogic Image Tool cache:

    $ cd <imagetool-setup>
    $ imagetool cache addEntry --key=33578xyz_14.1.2.0.0 --value <downloaded-patches-location>/p33578xyz_141200_Generic.zip
    [INFO ] Added entry 33578xyz_14.1.2.0.0=<downloaded-patches-location>/p33578xyz_141200_Generic.zip

  2. Provide the following arguments to the WebLogic Image Tool update command:

    • --fromImage - Identify the image that needs to be updated. In the example below, the image to be updated is wccontent:14.1.2.0.0.
    • --patches - Multiple patches can be specified as a comma-separated list.
    • --tag - Specify the new tag to be applied for the image being built.

    Refer here for the complete list of options available with the WebLogic Image Tool update command.

    Note: The WebLogic Image Tool cache should have the latest OPatch zip. The WebLogic Image Tool will update the OPatch if it is not already updated in the image.
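
    For example, the OPatch patch can be added to the cache in the same way as the product patches; the key and file name below follow the earlier samples:

    $ imagetool cache addEntry --key 28186abc_13.9.4.2.8 --value <download location>/p28186abc_139428_Generic-24497645.zip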

    Examples:

Sample update command:

  # If you are using a pre-built Oracle WebCenter Content image, obtained from My Oracle Support, then please use this command:
  $ imagetool update --fromImage oracle/wccontent:14.1.2.0.0 --tag=oracle/wccontent_update_1015:14.1.2.0.0 --patches=33578xyz_14.1.2.0.0 --opatchBugNumber=28186abc_13.9.4.2.8

  # In case, you chose to build an Oracle WebCenter Content image, please use the command given below:
  $ imagetool update --chown oracle:root --fromImage oracle/wccontent:14.1.2.0.0  --tag=oracle/wccontent_update_1015:14.1.2.0.0 --patches=33578xyz_14.1.2.0.0 
    --opatchBugNumber=28186abc_13.9.4.2.8
      
  3. Check the built image using the docker images command:

      $ docker images | grep wcc

Uninstall

This section describes the process to clean up the Oracle WebCenter Content domain setup.

Stop all Administration and Managed server pods

First, stop all the pods related to the domain by patching the domain serverStartPolicy to NEVER. Here is a sample command:

$ kubectl patch domain wcc-domain-name -n wcc-namespace --type='json' -p='[{"op": "replace", "path": "/spec/serverStartPolicy", "value": "NEVER" }]'

For example:

$ kubectl patch domain wccinfra -n wccns --type='json' -p='[{"op": "replace", "path": "/spec/serverStartPolicy", "value": "NEVER" }]'

Remove the domain

  1. Remove the domain’s ingress (for example, Traefik ingress) using Helm:

    $ helm uninstall wcc-domain-ingress -n sample-domain1-ns

    For example:

    $ helm uninstall wccinfra-traefik -n wccns
  2. Remove the domain resources by using the sample delete-weblogic-domain-resources.sh script present at ${WORKDIR}/weblogic-kubernetes-operator/kubernetes/samples/scripts/delete-domain:

    $ cd ${WORKDIR}/weblogic-kubernetes-operator/kubernetes/samples/scripts/delete-domain
    $ ./delete-weblogic-domain-resources.sh -d sample-domain1

    For example:

    $ cd ${WORKDIR}/weblogic-kubernetes-operator/kubernetes/samples/scripts/delete-domain
    $ ./delete-weblogic-domain-resources.sh -d wccinfra
  3. Use kubectl to confirm that the server pods and domain resource are deleted:

    $ kubectl get pods -n sample-domain1-ns
    $ kubectl get domains -n sample-domain1-ns

    For example:

    $ kubectl get pods -n wccns
    $ kubectl get domains -n wccns

Drop the RCU schemas

Follow the steps in Create or drop schemas under Prepare your environment to drop the RCU schemas created for the Oracle WebCenter Content domain.

Remove the domain namespace

  1. Configure the installed ingress load balancer (for example, Traefik) to stop managing the ingresses in the domain namespace:

    $ helm upgrade traefik-operator traefik/traefik \
        --namespace traefik \
        --reuse-values \
        --set "kubernetes.namespaces={traefik}" \
        --wait
  2. Configure the WebLogic Kubernetes Operator to stop managing the domain:

    $ helm upgrade  sample-weblogic-operator \
      kubernetes/charts/weblogic-operator \
      --namespace sample-weblogic-operator-ns \
      --reuse-values \
      --set "domainNamespaces={}" \
      --wait

    For example:

    $ cd ${WORKDIR}/weblogic-kubernetes-operator
    $ helm upgrade weblogic-kubernetes-operator \
      kubernetes/charts/weblogic-operator \
      --namespace opns \
      --reuse-values \
      --set "domainNamespaces={}" \
      --wait
  3. Delete the domain namespace:

    $ kubectl delete namespace sample-domain1-ns

    For example:

    $ kubectl delete namespace wccns

Remove the WebLogic Kubernetes Operator

  1. Remove the WebLogic Kubernetes Operator:

    $ helm uninstall sample-weblogic-operator -n sample-weblogic-operator-ns

    For example:

    $ helm uninstall weblogic-kubernetes-operator -n opns
  2. Remove WebLogic Kubernetes Operator’s namespace:

    $ kubectl delete namespace sample-weblogic-operator-ns

    For example:

    $ kubectl delete namespace opns

Remove the load balancer

  1. Remove the installed ingress based load balancer (for example, Traefik):

    $ helm uninstall traefik -n traefik
  2. Remove the Traefik namespace:

    $ kubectl delete namespace traefik

Delete the domain home

To remove the domain home that was generated using the create-domain.sh script, manually delete the contents of the storage attached to the domain home persistent volume (PV), using appropriate privileges.

For example, for the domain’s persistent volume of type host_path:

$ rm -rf /scratch/k8s_dir/WCC

Oracle Cloud Infrastructure

Setting up WebCenter Content domains with WebLogic Kubernetes Operator

This is a guide to running WebLogic Kubernetes Operator managed WebCenter Content domains on Oracle Cloud Infrastructure.

Preparing an OKE environment

Contents

Create Public SSH Key to access all the Bastion and Worker nodes

Create an SSH key using ssh-keygen on a Linux terminal to access (via ssh) the Compute instances (worker/bastion) in OCI.

ssh-keygen -t rsa -N "" -b 2048 -C demokey -f id_rsa

Create a compartment for OKE

Within your tenancy, there must be a compartment to contain the necessary network resources (VCN, subnets, internet gateway, route table, security lists).
  1. Go to the OCI console, and use the top-left Menu to select the Identity > Compartments option.
  2. Click the Create Compartment button.
  3. Enter the compartment name (for example, WCCStorage) and description (OKE compartment), then click the Create Compartment button.

Create Container Clusters (OKE)

  1. In the Console, open the navigation menu. Go to Developer Services and click Kubernetes Clusters (OKE).
  2. Choose a Compartment you have permission to work in. Here we will use the WCCStorage compartment.
  3. On the Cluster List page, select your Compartment and click Create Cluster.
  4. In the Create Cluster dialog, select Quick Create and click Launch Workflow.
  5. On the Create Cluster page, specify the values as per your environment (like the sample values shown below):
    • NAME: WCCOKEPHASE1
    • COMPARTMENT: WCCStorage
    • KUBERNETES VERSION: v1.26.2
    • CHOOSE VISIBILITY TYPE: Private
    • SHAPE: VM.Standard.E3.Flex (Choose the available shape for the worker node pool. The list shows only those shapes available in your tenancy that are supported by Container Engine for Kubernetes. See Supported Images and Shapes for Worker Nodes.)
    • NUMBER OF NODES: 3 (The number of worker nodes to create in the node pool, placed in the regional subnet created for the ‘quick cluster’.)
    • Click Show Advanced Options and enter PUBLIC SSH KEY: ssh-rsa AA……bmVnWgX/ demokey (the public key id_rsa.pub created in Step 1)
  6. Click Next to review the details you entered for the new cluster.
  7. Click Create Cluster to create the new network resources and the new cluster.
  8. Container Engine for Kubernetes starts creating resources (as shown in the Creating cluster and associated network resources dialog). Click Close to return to the Console.
  9. Initially, the new cluster appears in the Console with a status of Creating. When the cluster has been created, it has a status of Active.
  10. Click Node Pools under Resources and then View to see the Node Pool and worker node status.
  11. View the status of the worker nodes and make sure every Node State is Active and the Kubernetes Node Condition is Ready. A worker node is listed in kubectl output once its Kubernetes Node Condition is Ready.
  12. To access the Cluster, click Access Cluster on the Cluster WCCOKEPHASE1 page.
  13. We will create the bastion node first and then access the Cluster.

Create Bastion Node to access Cluster

Set up a bastion node for accessing internal resources. We will create the bastion node in the same VCN following the steps below, so that we can ssh into the worker nodes. Here we choose CIDR Block: 10.0.22.0/24. You can choose a different block, if you want.

  1. Click the VCN Name on the Cluster Page.

  2. Next, click Security Lists and then Create Security List.

  3. Create a bastion-private-sec-list security list with the required Ingress and Egress rules.

  4. Create a bastion-public-sec-list security list with the required Ingress and Egress rules.

  5. Create the bastion-route-table with an Internet Gateway, so that it can be added to the bastion instance for internet access.

  6. Next, create a Regional Public Subnet for the bastion instance, named bastion-subnet, with the following details:

    • CIDR BLOCK: 10.0.22.0/24
    • ROUTE TABLE: oke-bastion-routetables
    • SUBNET ACCESS: PUBLIC SUBNET
    • Security List: bastion-public-sec-list
    • DHCP OPTIONS: Select the Default DHCP Options
  7. Next, click the Private Subnet which has the Worker Nodes.

  8. Add the bastion-private-sec-list to the Worker Private Subnet, so that the bastion instance can access the Worker nodes.

  9. Next, create a Compute Instance oke-bastion with the following details:

    • Name: BastionHost
    • Image: Oracle Linux 8.X
    • Availability Domain: Choose any AD which has the limit for creating an instance
    • VIRTUAL CLOUD NETWORK COMPARTMENT: WCCStorage (i.e., the OKE Compartment)
    • SELECT A VIRTUAL CLOUD NETWORK: Select the VCN created by the Quick Cluster
    • SUBNET COMPARTMENT: WCCStorage (i.e., the OKE Compartment)
    • SUBNET: bastion-subnet (created above)
    • SELECT ASSIGN A PUBLIC IP ADDRESS
    • SSH KEYS: Copy the content of id_rsa.pub created in Step 1
  10. Once the bastion instance BastionHost is created, get the Public IP to ssh into the bastion instance.

  11. Log in to the bastion host as shown below:

    ssh -i <your_ssh_bastion.key> opc@123.456.xxx.xxx

    Set up OCI CLI

  12. Install OCI CLI

    bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"
  13. Respond to the Installation Script Prompts.

  14. To download the kubeconfig later after setup, you need to set up the oci config file. Run the following command and enter the details when prompted:

    $ oci setup config

    Sample output:

    $ oci setup config
    This command provides a walkthrough of creating a valid CLI config file.
    
    The following links explain where to find the information required by this
    script:
    
     User API Signing Key, OCID and Tenancy OCID:
    
         https://docs.cloud.oracle.com/Content/API/Concepts/apisigningkey.htm#Other
    
     Region:
    
         https://docs.cloud.oracle.com/Content/General/Concepts/regions.htm
    
     General config documentation:
    
         https://docs.cloud.oracle.com/Content/API/Concepts/sdkconfig.htm
    
    
    Enter a location for your config [/home/opc/.oci/config]:
    Enter a user OCID: ocid1.user.oc1..aaaaaaaao3qji52eu4ulgqvg3k4yf7xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    Enter a tenancy OCID: ocid1.tenancy.oc1..aaaaaaaaf33wodv3uhljnn5etiuafoxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    Enter a region (e.g. ap-hyderabad-1, ap-melbourne-1, ap-mumbai-1, ap-osaka-1, ap-seoul-1, ap-sydney-1, ap-tokyo-1, ca-montreal-1, ca-toronto-1, eu-amsterdam-1, eu-frankfurt-1, eu-zurich-1, me-jeddah-1, sa-saopaulo-1, uk-gov-london-1, uk-london-1, us-ashburn-1, us-gov-ashburn-1, us-gov-chicago-1, us-gov-phoenix-1, us-langley-1, us-luke-1, us-phoenix-1): us-phoenix-1
    Do you want to generate a new API Signing RSA key pair? (If you decline you will be asked to supply the path to an existing key.) [Y/n]: Y
    Enter a directory for your keys to be created [/home/opc/.oci]:
    Enter a name for your key [oci_api_key]:
    Public key written to: /home/opc/.oci/oci_api_key_public.pem
    Enter a passphrase for your private key (empty for no passphrase):
    Private key written to: /home/opc/.oci/oci_api_key.pem
    Fingerprint: 74:d2:f2:db:62:a9:c4:bd:9b:4f:6c:d8:31:1d:a1:d8
    Config written to /home/opc/.oci/config
    
    
     If you haven't already uploaded your API Signing public key through the
     console, follow the instructions on the page linked below in the section
     'How to upload the public key':
    
         https://docs.cloud.oracle.com/Content/API/Concepts/apisigningkey.htm#How2
    
  15. Now you need to upload the public key created in $HOME/.oci (oci_api_key_public.pem) to the OCI console. Log in to the OCI Console and navigate to User Settings, which is in the drop-down under your OCI user profile, located at the top-right corner of the page.

  16. On the User Details page, click the API Keys link, located near the bottom-left corner of the page, and then click the Add API Key button. Copy the content of oci_api_key_public.pem and click Add.

  17. Now you can use the oci cli to access the OCI resources.

  18. To access the Cluster, click Access Cluster on the Cluster WCCOKEPHASE1 page.

  19. To access the Cluster from the bastion node, perform the steps listed under Local Access.

    $ oci -v
    $ mkdir -p $HOME/.kube
    
    $ oci ce cluster create-kubeconfig --cluster-id ocid1.cluster.oc1.phx.aaaaaaaaae4xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxrqgjtd \
    --file $HOME/.kube/config --region us-phoenix-1 --token-version 2.0.0
    
    $ export KUBECONFIG=$HOME/.kube/config
  20. Install the kubectl client to access the Cluster. Use a kubectl version within one minor version of the cluster version (v1.26.2 in this example):

    $ curl -LO https://dl.k8s.io/release/v1.26.2/bin/linux/amd64/kubectl
    $ sudo mv kubectl  /bin/
    $ sudo chmod +x /bin/kubectl
  21. Access the Cluster from bastion node

    $ kubectl get nodes
    NAME          STATUS   ROLES   AGE   VERSION
    10.0.10.197   Ready    node    14d   v1.26.2
    10.0.10.206   Ready    node    14d   v1.26.2
    10.0.10.50    Ready    node    14d   v1.26.2
  22. Install required add-ons for Oracle WebCenter Content Cluster setup

    • Install helm v3.10.*

      $ wget https://get.helm.sh/helm-v3.10.3-linux-amd64.tar.gz
      $ tar -zxvf helm-v3.10.3-linux-amd64.tar.gz
      $ sudo mv linux-amd64/helm  /bin/helm
      $ helm version
      version.BuildInfo{Version:"v3.10.3", GitCommit:"835b7334cfe2e5e27870ab3ed4135f136eecc704", GitTreeState:"clean", GoVersion:"go1.18.9"}
    • Install git

      sudo yum install git -y

Preparing a file system

Create Filesystem and security list for FSS

Note: Make sure you create the filesystem and security list in the VCN created by OKE.

Creating an OCIR

Publish images to OCIR

Push all the required images to OCIR and subsequently use them from there. Follow the steps below for pushing the images to OCIR.

Create an “Auth token”

Create an “Auth token”, which will be used as the docker password to push and pull images from OCIR:
  1. Log in to the OCI Console and navigate to User Settings, which is in the drop-down under your OCI user profile, located at the top-right corner of the OCI console page.
  2. On the User Details page, click the Auth Tokens link located near the bottom-left corner of the page, then click the Generate Token button: enter a Name and click Generate Token.
  3. The token will be generated.
  4. Copy the generated token. NOTE: It will only be displayed this one time, and you will need to copy it to a secure place for further use.

Using the OCIR

Use the Docker CLI to log in to OCIR (for Phoenix: phx.ocir.io, for Ashburn: iad.ocir.io, and so on):
  1. docker login phx.ocir.io
  2. When prompted for the username, enter the docker username as OCIR RepoName/oci username (for example, axcmmdmzqtqb/oracleidentitycloudservice/myemailid@oracle.com).
  3. When prompted for your password, enter the generated Auth Token.
  4. Now you can tag the WCC Docker image and push it to OCIR. Sample steps are shown below.

$ docker login phx.ocir.io
$ username - axcmmdmzqtqb/oracleidentitycloudservice/myemailid@oracle.com
$ password - abCXYz942,vcde     (Token Generated for OCIR using user setting)

$ docker tag oracle/wccontent:14.1.2.0.0-<tag> phx.ocir.io/axcmmdmzqtqb/oracle/wccontent:14.1.2.0.0-<tag>

$ docker push phx.ocir.io/axcmmdmzqtqb/oracle/wccontent:14.1.2.0.0-<tag>

This has to be done on the Bastion Node for all the images.
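
If several images must be pushed, a small shell loop avoids repeating the tag-and-push pair; the image names and tag below are illustrative:

$ for img in wccontent weblogic-kubernetes-operator; do
    docker tag oracle/${img}:<tag> phx.ocir.io/axcmmdmzqtqb/oracle/${img}:<tag>
    docker push phx.ocir.io/axcmmdmzqtqb/oracle/${img}:<tag>
  done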

Verify the OCIR Images

Get the OCIR repository name by logging in to the Oracle Cloud Infrastructure Console. In the OCI Console, open the Navigation menu. Under Solutions and Platform, go to Developer Services, click Container Registry (OCIR), and select your Compartment.


Prepare environment for WCC domain

To create your Oracle WebCenter Content domain in Kubernetes OKE environment, complete the following steps:

Contents

  1. Set up code repository to deploy Oracle WebCenter Content domain

  2. Create namespace for the Oracle WebCenter Content domain

  3. Create the imagePullSecrets

  4. Install WebLogic Kubernetes Operator in OKE

  5. Prepare environment for Oracle WebCenter Content domain

    1. Upgrade WebLogic Kubernetes Operator with the Oracle WebCenter Content domain-namespace

    2. Create persistent storage for the Oracle WebCenter Content domain

    3. Create Kubernetes secret with domain credentials

    4. Create Kubernetes secret with the RCU credentials

    5. Install and start the Database

    6. Configure access to Database

    7. Run Repository Creation Utility to set up your database schemas

  6. Create Oracle WebCenter Content domain

Set up code repository to deploy Oracle WebCenter Content domain

Oracle WebCenter Content domain deployment on Kubernetes leverages the WebLogic Kubernetes Operator infrastructure. To deploy an Oracle WebCenter Content domain, you must set up the deployment scripts.

  1. Create a working directory to set up the source code:

    $ mkdir $HOME/wcc_4.2.9
    $ cd $HOME/wcc_4.2.9
  2. Download the WebLogic Kubernetes Operator source code and Oracle WebCenter Content Suite Kubernetes deployment scripts from the fmw-kubernetes repository. The required artifacts are available at OracleWebCenterContent/kubernetes.

    $ git clone https://github.com/oracle/fmw-kubernetes.git
    $ export WORKDIR=$HOME/wcc_4.2.9/fmw-kubernetes/OracleWebCenterContent/kubernetes

Create namespace for the Oracle WebCenter Content domain

Create a Kubernetes namespace (for example, wccns) for the domain unless you intend to use the default namespace. Use the new namespace in the remaining steps in this section. For details, see Prepare to run a domain.

 $ kubectl create namespace wccns   

Create the imagePullSecrets

Create the imagePullSecrets (in the wccns namespace) so that Kubernetes deployments can pull the image automatically from OCIR.

Note: Create the imagePullSecret as per your environment using a sample command like this:

$ kubectl create secret docker-registry image-secret -n wccns --docker-server=phx.ocir.io  --docker-username=axxxxxxxxxxx/oracleidentitycloudservice/<your_user_name> --docker-password='vUv+xxxxxxxxxxx<KN7z'  --docker-email=me@oracle.com  

The parameter values are:

  • OCI Region (Phoenix): phx.ocir.io
  • OCI Tenancy Name: axxxxxxxxxxx
  • ImagePullSecret Name: image-secret
  • Username and email address: me@oracle.com
  • Auth Token Password: vUv+xxxxxxxxxxx<KN7z

Install WebLogic Kubernetes Operator in OKE

The WebLogic Kubernetes Operator supports the deployment of Oracle WebCenter Content domains in a Kubernetes environment.

In the following example commands to install the WebLogic Kubernetes Operator, opns is the namespace and op-sa is the service account created for the WebLogic Kubernetes Operator:

# Create the namespace and service account for the WebLogic Kubernetes Operator
$ kubectl create namespace opns
$ kubectl create serviceaccount -n opns  op-sa

# Install the WebLogic Kubernetes Operator in OKE
$ cd ${WORKDIR} 
  
$ helm install weblogic-kubernetes-operator charts/weblogic-operator --namespace opns  --set image=phx.ocir.io/xxxxxxxxxxx/oracle/weblogic-kubernetes-operator:4.2.9 --set imagePullSecret=image-secret --set serviceAccount=op-sa --set "domainNamespaces={}" --set "javaLoggingLevel=FINE" --wait

# Verify the WebLogic Kubernetes Operator pod
$ kubectl get pods -n opns

NAME                                 READY   STATUS    RESTARTS   AGE
weblogic-operator-779965b66c-d8265   1/1     Running   0          11d

# Verify the Operator helm Charts
$ helm list -n opns

NAME                            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART                          APP VERSION
weblogic-kubernetes-operator    opns            3               2022-02-24 06:50:29.810106777 +0000 UTC deployed        weblogic-operator-4.2.9        4.2.9

Prepare environment for Oracle WebCenter Content domain

Upgrade WebLogic Kubernetes Operator with the Oracle WebCenter Content domain-namespace
 $ cd ${WORKDIR}
 $ helm upgrade --reuse-values --namespace opns --set "domainNamespaces={wccns}" --wait weblogic-kubernetes-operator charts/weblogic-operator    
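
You can confirm that the operator now manages the wccns namespace by checking the user-supplied release values, which include domainNamespaces:

 $ helm get values weblogic-kubernetes-operator -n opns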
Create persistent storage for the Oracle WebCenter Content domain

In the Kubernetes namespace you created, create the PV and PVC for the domain by running the create-pv-pvc.sh script. Follow the instructions for using the script to create a dedicated PV and PVC for the Oracle WebCenter Content domain.

Here we will use the NFS server and mount path created earlier (see Preparing a file system).
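
A minimal sketch of running the script, assuming the inputs file has been updated with the NFS server details and a domainUID of wccinfra (the generated file names may differ based on your inputs):

$ cd ${WORKDIR}/create-weblogic-domain-pv-pvc
$ ./create-pv-pvc.sh -i create-pv-pvc-inputs.yaml -o output
$ kubectl create -f output/pv-pvcs/wccinfra-domain-pv.yaml
$ kubectl create -f output/pv-pvcs/wccinfra-domain-pvc.yaml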

Create Kubernetes secret with domain credentials

Create a Kubernetes secret containing the username and password of the administrative account, in the same Kubernetes namespace as the domain:

$ cd ${WORKDIR}/create-weblogic-domain-credentials
 
$ ./create-weblogic-credentials.sh -u weblogic -p welcome1 -n wccns -d wccinfra -s wccinfra-domain-credentials

For more details, see this document.

You can check the secret with the kubectl get secret command.

For example:

$ kubectl get secret wccinfra-domain-credentials -o yaml -n wccns
apiVersion: v1
data:
  password: d2VsY29tZTE=
  username: d2VibG9naWM=
kind: Secret
metadata:
  creationTimestamp: "2021-07-30T06:04:33Z"
  labels:
    weblogic.domainName: wccinfra
    weblogic.domainUID: wccinfra
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:password: {}
        f:username: {}
      f:metadata:
        f:labels:
          .: {}
          f:weblogic.domainName: {}
          f:weblogic.domainUID: {}
      f:type: {}
    manager: kubectl
    operation: Update
    time: "2021-07-30T06:04:36Z"
  name: wccinfra-domain-credentials
  namespace: wccns
  resourceVersion: "90770768"
  selfLink: /api/v1/namespaces/wccns/secrets/wccinfra-domain-credentials
  uid: 9c5dab09-15f3-4e1f-a40d-457904ddf96b
type: Opaque
Create Kubernetes secret with the RCU credentials

You also need to create a Kubernetes secret containing the credentials for the database schemas. When you create your domain, it will obtain the RCU credentials from this secret.

Use the provided sample script to create the secret:

$ cd ${WORKDIR}/create-rcu-credentials

$ ./create-rcu-credentials.sh -u weblogic -p welcome1 -a sys -q welcome1 -d wccinfra -n wccns -s wccinfra-rcu-credentials

The parameter values are:

-u username for schema owner (regular user), required.
-p password for schema owner (regular user), required.
-a username for SYSDBA user, required.
-q password for SYSDBA user, required.
-d domainUID. Example: wccinfra
-n namespace. Example: wccns
-s secretName. Example: wccinfra-rcu-credentials

You can confirm the secret was created as expected with the kubectl get secret command.

For example:

Sample secret description:

$ kubectl get secret wccinfra-rcu-credentials -o yaml -n wccns
apiVersion: v1
data:
  password: d2VsY29tZTE=
  sys_password: d2VsY29tZTE=
  sys_username: c3lz
  username: d2VibG9naWM=
kind: Secret
metadata:
  creationTimestamp: "2020-09-16T08:23:04Z"
  labels:
    weblogic.domainName: wccinfra
    weblogic.domainUID: wccinfra
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:password: {}
        f:sys_password: {}
        f:sys_username: {}
        f:username: {}
      f:metadata:
        f:labels:
          .: {}
          f:weblogic.domainName: {}
          f:weblogic.domainUID: {}
      f:type: {}
    manager: kubectl
    operation: Update
    time: "2020-09-16T08:23:04Z"
  name: wccinfra-rcu-credentials
  namespace: wccns
  resourceVersion: "3277132"
  selfLink: /api/v1/namespaces/wccns/secrets/wccinfra-rcu-credentials
  uid: b75f4e13-84e6-40f5-84ba-0213d85bdf30
type: Opaque
Install and start the Database

This step is required only when a standalone database has not already been set up and you want to run the database in a container. The Oracle Database Docker images are supported only for non-production use. For more details, see My Oracle Support note: Oracle Support for Database Running on Docker (Doc ID 2216342.1). For production use cases, it is suggested to use a standalone database. This sample provides steps to create the database in a container.

The database in a container can be created with a PV attached for persisting the data, or without attaching a PV. In this setup, we create the database in a container without a PV attached.

$ cd ${WORKDIR}/create-oracle-db-service

$ ./start-db-service.sh -i phx.ocir.io/xxxxxxxxxxxx/oracle/database/enterprise:x.x.x.x -s image-secret -n wccns

Sample output:

$ ./start-db-service.sh -i phx.ocir.io/xxxxxxxxxxxx/oracle/database/enterprise:x.x.x.x -s image-secret -n wccns
Checking Status for NameSpace [wccns]
Skipping the NameSpace[wccns] Creation ...
NodePort[30011] ImagePullSecret[docker-store] Image[phx.ocir.io/xxxxxxxxxxxx/oracle/database/enterprise:x.x.x.x] NameSpace[wccns]
service/oracle-db created
deployment.apps/oracle-db created
[oracle-db-8598b475c5-cx5nk] already initialized ..
Checking Pod READY column for State [1/1]
NAME                         READY   STATUS    RESTARTS   AGE
oracle-db-8598b475c5-cx5nk   1/1     Running   0          20s
Service [oracle-db] found
NAME                         READY   STATUS    RESTARTS   AGE
oracle-db-8598b475c5-cx5nk   1/1     Running   0          25s
NAME        TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
oracle-db   LoadBalancer   10.96.74.187   <pending>     1521:30011/TCP   28s
[1/30] Retrying for Oracle Database Availability...
[2/30] Retrying for Oracle Database Availability...
[3/30] Retrying for Oracle Database Availability...
[4/30] Retrying for Oracle Database Availability...
[5/30] Retrying for Oracle Database Availability...
[6/30] Retrying for Oracle Database Availability...
[7/30] Retrying for Oracle Database Availability...
[8/30] Retrying for Oracle Database Availability...
[9/30] Retrying for Oracle Database Availability...
[10/30] Retrying for Oracle Database Availability...
[11/30] Retrying for Oracle Database Availability...
[12/30] Retrying for Oracle Database Availability...
[13/30] Retrying for Oracle Database Availability...
Done ! The database is ready for use .
Oracle DB Service is RUNNING with NodePort [30011]
Oracle DB Service URL [oracle-db.wccns.svc.cluster.local:1521/devpdb.k8s]

Once the database is created successfully, you can use the database connection string as the rcuDatabaseURL parameter in the create-domain-inputs.yaml file.
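
For example, a fragment of create-domain-inputs.yaml using the connection string above (the schema prefix is illustrative):

rcuSchemaPrefix: WCC1
rcuDatabaseURL: oracle-db.wccns.svc.cluster.local:1521/devpdb.k8s
rcuCredentialsSecret: wccinfra-rcu-credentials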

Configure access to Database

Run a container to create the rcu pod:

kubectl run rcu \
  --image phx.ocir.io/xxxxxxxxxxx/oracle/wccontent:x.x.x.x \
  --namespace wccns \
  --overrides='{ "apiVersion": "v1", "spec": { "imagePullSecrets": [{"name": "image-secret"}] } }'  \
  -- sleep infinity
   
# Check the status of rcu pod
kubectl get pods -n wccns
Run Repository Creation Utility to set up your database schemas
Create or Drop schemas

To create the database schemas for Oracle WebCenter Content, run the create-rcu-schema.sh script.

For example:

# Make sure rcu pod status is running before executing this 
kubectl exec -n wccns -ti rcu -- /bin/bash

# DB details 
export CONNECTION_STRING=your_db_host:1521/your_db_service
export RCUPREFIX=your_schema_prefix
echo -e welcome1"\n"welcome1 > /tmp/pwd.txt
   
# Create schemas
/u01/oracle/oracle_common/bin/rcu -silent -createRepository -databaseType ORACLE -connectString $CONNECTION_STRING -dbUser sys -dbRole sysdba -useSamePasswordForAllSchemaUsers true -selectDependentsForComponents true -schemaPrefix $RCUPREFIX -component CONTENT -component MDS   -component STB -component OPSS  -component IAU -component IAU_APPEND -component IAU_VIEWER -component WLS  -tablespace USERS -tempTablespace TEMP -f < /tmp/pwd.txt

# Drop schemas
/u01/oracle/oracle_common/bin/rcu -silent -dropRepository -databaseType ORACLE -connectString $CONNECTION_STRING -dbUser sys -dbRole sysdba -selectDependentsForComponents true -schemaPrefix $RCUPREFIX -component CONTENT -component MDS  -component STB -component OPSS  -component IAU -component IAU_APPEND -component IAU_VIEWER -component WLS -f < /tmp/pwd.txt 

# Exit from the container
exit

Note: In the create and drop schema commands above, pass additional components (-component IPM -component CAPTURE) if the IPM and CAPTURE applications are enabled, respectively.

Now that you have your Docker images and have created the RCU schemas, you are ready to create your domain, after setting up a load balancer.

Set up a load balancer

WebLogic Kubernetes Operator managed Oracle WebCenter Content domain on Oracle Cloud Infrastructure supports ingress-based load balancers such as Traefik and NGINX.

Traefik

This section provides information about how to install and configure the ingress-based Traefik load balancer (version 2.6.0 or later for production deployments) to load balance Oracle WebCenter Content domain clusters.

Follow these steps to set up Traefik as a load balancer for an Oracle WebCenter Content domain in a Kubernetes cluster:

Contents

Non-SSL and SSL termination

Install the Traefik (ingress-based) load balancer

  1. Use Helm to install the Traefik (ingress-based) load balancer. For detailed information, see here. Use the values.yaml file in the sample but set kubernetes.namespaces specifically.

     $ cd ${WORKDIR}
     $ kubectl create namespace traefik
     $ helm repo add traefik https://helm.traefik.io/traefik --force-update

    Sample output:

     "traefik" has been added to your repositories
  2. Install Traefik:

     $ cd ${WORKDIR}
     $ helm install traefik  traefik/traefik \
          --namespace traefik \
          --values charts/traefik/values.yaml \
          --set  "kubernetes.namespaces={traefik}" \
          --set "service.type=LoadBalancer" --wait

Sample output:

NAME: traefik-operator
LAST DEPLOYED: Mon Jun  1 19:31:20 2020
NAMESPACE: traefik
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get Traefik load balancer IP or hostname:

  NOTE: It may take a few minutes for this to become available.

  You can watch the status by running:

      $ kubectl get svc traefik-operator --namespace traefik -w

  Once 'EXTERNAL-IP' is no longer '<pending>':

      $ kubectl describe svc traefik-operator --namespace traefik | grep Ingress | awk '{print $3}'

2. Configure DNS records corresponding to Kubernetes ingress resources to point to the load balancer IP or hostname found in step 1  

A sample values.yaml for deployment of Traefik 2.6.0:

image:
  name: traefik
  tag: 2.6.0
  pullPolicy: IfNotPresent
ingressRoute:
  dashboard:
    enabled: true
    # Additional ingressRoute annotations (e.g. for kubernetes.io/ingress.class)
    annotations: {}
    # Additional ingressRoute labels (e.g. for filtering IngressRoute by custom labels)
    labels: {}
providers:
  kubernetesCRD:
    enabled: true
  kubernetesIngress:
    enabled: true
    # IP used for Kubernetes Ingress endpoints
ports:
  traefik:
    port: 9000
    expose: true
    # The exposed port for this service
    exposedPort: 9000
    # The port protocol (TCP/UDP)
    protocol: TCP
  web:
    port: 8000
    # hostPort: 8000
    expose: true
    exposedPort: 30305
    nodePort: 30305
    # The port protocol (TCP/UDP)
    protocol: TCP
    # Use nodeport if set. This is useful if you have configured Traefik in a
    # LoadBalancer
    # nodePort: 32080
    # Port Redirections
    # Added in 2.2, you can make permanent redirects via entrypoints.
    # https://docs.traefik.io/routing/entrypoints/#redirection
    # redirectTo: websecure
  websecure:
    port: 8443
    # hostPort: 8443
    expose: true
    exposedPort: 30443
    # The port protocol (TCP/UDP)
    protocol: TCP
    nodePort: 30443
additionalArguments:
  - "--log.level=INFO"
  3. Verify the Traefik (load balancer) services:

    Please note the EXTERNAL-IP of the traefik service. This is the public IP address of the load balancer that you will use to access the WebLogic Server Administration Console and WebCenter Content URLs.

    $ kubectl get service -n traefik
    NAME      TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)                                          AGE
    traefik   LoadBalancer   10.96.8.30   123.456.xx.xx   9000:30734/TCP,30305:30305/TCP,30443:30443/TCP   6d23h

    To print only the Traefik EXTERNAL-IP, execute this command:

    $ TRAEFIK_PUBLIC_IP=`kubectl describe svc traefik --namespace traefik | grep Ingress | awk '{print $3}'`
    $ echo $TRAEFIK_PUBLIC_IP
    123.456.xx.xx

    Verify the helm charts:

    $ helm list -n traefik
    NAME    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART           APP VERSION
    traefik traefik         2               2022-09-11 12:22:41.122310912 +0000 UTC deployed        traefik-10.24.3    2.8.5

    Verify the Traefik status and find the port number

     $ kubectl get all -n traefik

    Sample output:

    NAME                          READY   STATUS    RESTARTS   AGE
    pod/traefik-f9cf58697-xjhpl   1/1     Running   0          7d
    
    
    NAME              TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)                                          AGE
    service/traefik   LoadBalancer   10.96.8.30   123.456.xx.xx   9000:30734/TCP,30305:30305/TCP,30443:30443/TCP   7d
    
    
    NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/traefik   1/1     1            1           7d
    
    NAME                                DESIRED   CURRENT   READY   AGE
    replicaset.apps/traefik-f9cf58697   1         1         1       7d 

Configure Traefik to manage ingresses

Configure Traefik to manage ingresses created in this namespace, where traefik is the Traefik namespace and wccns is the namespace of the domain:

$ helm upgrade traefik traefik/traefik --namespace traefik --reuse-values \
--set "kubernetes.namespaces={traefik,wccns}"

Sample output:

    Release "traefik" has been upgraded. Happy Helming!
    NAME: traefik
    LAST DEPLOYED: Sun Jan 17 23:43:02 2021
    NAMESPACE: traefik
    STATUS: deployed
    REVISION: 2
    TEST SUITE: None

Create an ingress for the domain

Create an ingress for the domain in the domain namespace by using the sample Helm chart. Here, path-based routing is used for the ingress. Sample values for the default configuration are shown in the file ${WORKDIR}/charts/ingress-per-domain/values.yaml. By default, type is TRAEFIK, tls is NONSSL, and domainType is wccinfra. These values can be overridden by passing values through the command line, or they can be edited in the sample values.yaml file based on the type of configuration (non-SSL or SSL). If needed, you can update the ingress YAML file to define more path rules (in section spec.rules.host.http.paths) based on the domain application URLs that need to be accessed, as sketched below. The template YAML file for the Traefik (ingress-based) load balancer is located at ${WORKDIR}/charts/ingress-per-domain/templates/traefik-ingress.yaml.
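For example, an additional path rule under spec.rules.host.http.paths could look like the following (a sketch assuming the standard networking.k8s.io/v1 Ingress schema; the /mycustomapp path is hypothetical, while the backend service and port are the ones used elsewhere in this guide):

  - path: /mycustomapp
    pathType: Prefix
    backend:
      service:
        name: wccinfra-cluster-ucm-cluster
        port:
          number: 16200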

  1. Install ingress-per-domain using Helm for non-SSL configuration:

    $ export LB_HOSTNAME=<Traefik load balancer DNS name>
    
    #OR leave it empty to point to Traefik load-balancer IP, by default
    $ export LB_HOSTNAME=''

    Note: Make sure that you specify a DNS name that points to the Traefik load balancer hostname, or leave it empty to point to the Traefik load balancer IP.

     $ cd ${WORKDIR}
     $ helm install wcc-traefik-ingress  \
         charts/ingress-per-domain \
         --set type=TRAEFIK \
         --namespace wccns \
         --values charts/ingress-per-domain/values.yaml \
         --set "traefik.hostname=$LB_HOSTNAME" \
         --set tls=NONSSL

    Sample output:

      NAME: wcc-traefik-ingress
      LAST DEPLOYED: Sun Jan 17 23:49:09 2021
      NAMESPACE: wccns
      STATUS: deployed
      REVISION: 1
      TEST SUITE: None

Create a certificate and generate a Kubernetes secret

  1. For secured access (SSL) to the Oracle WebCenter Content application, create a certificate:

     $ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/tls1.key -out /tmp/tls1.crt \
     -subj "/CN=<Traefik load balancer DNS name>" \
     -extensions san -config \
     <(echo "[req]"; 
     echo distinguished_name=req; 
     echo "[san]"; 
     echo subjectAltName=IP:$TRAEFIK_PUBLIC_IP 
     )
    
    #OR use the following command if you chose to leave LB_HOSTNAME empty in the previous step
    
     $ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/tls1.key -out /tmp/tls1.crt \
     -subj "/CN=*" \
     -extensions san -config \
     <(echo "[req]"; 
     echo distinguished_name=req; 
     echo "[san]"; 
     echo subjectAltName=IP:$TRAEFIK_PUBLIC_IP 
     )

    Note: Make sure that you specify a DNS name that points to the Traefik load balancer hostname.

  2. Generate a Kubernetes secret:

    $ kubectl -n wccns create secret tls domain1-tls-cert --key /tmp/tls1.key --cert /tmp/tls1.crt 
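Optionally, confirm that the secret was created and is of type kubernetes.io/tls:

    $ kubectl get secret domain1-tls-cert -n wccns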

Create Traefik custom resource

  1. Create Traefik Middleware custom resource

    In case of SSL termination, Traefik must pass a custom header WL-Proxy-SSL:true to the WebLogic Server endpoints. Create the Middleware using the following command:

    $ cat <<EOF | kubectl apply -f -
    apiVersion: traefik.containo.us/v1alpha1
    kind: Middleware
    metadata:
      name: wls-proxy-ssl
      namespace: wccns
    spec:
      headers:
        customRequestHeaders:
           WL-Proxy-SSL: "true"
    EOF
  2. Create the Traefik TLSStore custom resource.

    In case of SSL termination, Traefik should be configured to use the user-defined SSL certificate. If the user-defined SSL certificate is not configured, Traefik will create a default SSL certificate. To configure a user-defined SSL certificate for Traefik, use the TLSStore custom resource. The Kubernetes secret created with the SSL certificate should be referenced in the TLSStore object. Run the following command to create the TLSStore:

    $ cat <<EOF | kubectl apply -f -
    apiVersion: traefik.containo.us/v1alpha1
    kind: TLSStore
    metadata:
      name: default
      namespace: wccns
    spec:
      defaultCertificate:
        secretName:  domain1-tls-cert   
    EOF

Install Ingress for SSL termination configuration

  1. Install ingress-per-domain using Helm for SSL configuration.

    The Kubernetes secret name should be updated in the template file.

    The template file also contains the following annotations:

     traefik.ingress.kubernetes.io/router.entrypoints: websecure
     traefik.ingress.kubernetes.io/router.tls: "true"
     traefik.ingress.kubernetes.io/router.middlewares: wccns-wls-proxy-ssl@kubernetescrd

    The entry point for SSL access and the Middleware name should be updated in the annotation. The Middleware name should be in the form <namespace>-<middleware name>@kubernetescrd.

     $ cd ${WORKDIR}
     $ helm install wcc-traefik-ingress  \
         charts/ingress-per-domain \
         --set type=TRAEFIK \
         --namespace wccns \
         --values charts/ingress-per-domain/values.yaml \
         --set "traefik.hostname=$LB_HOSTNAME" \
         --set "traefik.hostnameorip=$TRAEFIK_PUBLIC_IP" \
         --set tls=SSL

    Sample output:

      NAME: wcc-traefik-ingress
      LAST DEPLOYED: Mon Jul 20 11:44:13 2020
      NAMESPACE: wccns
      STATUS: deployed
      REVISION: 1
      TEST SUITE: None
  2. Get the details of the services by the above deployed ingress:

     $ kubectl describe  ingress wccinfra-traefik  -n wccns
  3. To confirm that the load balancer noticed the new ingress and is successfully routing to the domain server pods, you can send a request to the URL for the “WebLogic ReadyApp framework”, which should return an HTTP 200 status code, as follows:

     $ curl -v http://${LOADBALANCER_HOSTNAME}:${LOADBALANCER_PORT}/weblogic/ready
     * About to connect() to abc.com port 30305 (#0)
     *   Trying 100.111.156.246...
     * Connected to abc.com (100.111.156.246) port 30305 (#0)
     > GET /weblogic/ready HTTP/1.1
     > User-Agent: curl/7.29.0
     > Host: domain1.org:30305
     > Accept: */*
     >
     < HTTP/1.1 200 OK
     < Content-Length: 0
     < Date: Thu, 03 Dec 2020 13:16:19 GMT
     < Vary: Accept-Encoding
     <
     * Connection #0 to host abc.com left intact

End-to-End SSL configuration

Install the Traefik load balancer for end-to-end SSL

  1. Use Helm to install the Traefik (ingress-based) load balancer. For detailed information, see here. Use the values.yaml file in the sample but set kubernetes.namespaces specifically.

     $ cd ${WORKDIR}
     $ kubectl create namespace traefik
     $ helm repo add traefik https://helm.traefik.io/traefik --force-update

    Sample output:

     "traefik" has been added to your repositories
  2. Install Traefik:

     $ cd ${WORKDIR}
     $ helm install traefik  traefik/traefik \
          --namespace traefik \
          --values charts/traefik/values.yaml \
          --set  "kubernetes.namespaces={traefik}" \
          --set "service.type=LoadBalancer" \
          --wait

Sample output:

    NAME: traefik
    LAST DEPLOYED: Sun Jan 17 23:30:20 2021
    NAMESPACE: traefik
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
  3. Verify the Traefik operator status and find the port number of the SSL and non-SSL services:

     $ kubectl get all -n traefik

    Sample output:

    
       NAME                                    READY   STATUS    RESTARTS   AGE
       pod/traefik-operator-676fc64d9c-skppn   1/1     Running   0          78d
    
       NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
       service/traefik-operator             NodePort    10.109.223.59   <none>        443:30443/TCP,80:30305/TCP   78d
       service/traefik-operator-dashboard   ClusterIP   10.110.85.194   <none>        80/TCP                       78d
    
       NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
       deployment.apps/traefik-operator   1/1     1            1           78d
    
       NAME                                          DESIRED   CURRENT   READY   AGE
       replicaset.apps/traefik-operator-676fc64d9c   1         1         1       78d
       replicaset.apps/traefik-operator-cb78c9dc9    0         0         0       78d

Configure Traefik to manage the domain

Configure Traefik to manage the domain application service created in this namespace, where traefik is the Traefik namespace and wccns is the namespace of the domain:

$ helm upgrade traefik traefik/traefik --namespace traefik --reuse-values \
--set "kubernetes.namespaces={traefik,wccns}"

Sample output:

      Release "traefik" has been upgraded. Happy Helming!
      NAME: traefik
      LAST DEPLOYED: Sun Jan 17 23:43:02 2021
      NAMESPACE: traefik
      STATUS: deployed
      REVISION: 2
      TEST SUITE: None

Create IngressRouteTCP

  1. To enable SSL passthrough in Traefik, you can configure a TCP router. A sample YAML for IngressRouteTCP is available at ${WORKDIR}/charts/ingress-per-domain/tls/traefik-tls.yaml.

    Note: There is a limitation with the load balancer in the end-to-end SSL configuration: accessing multiple types of servers (different Managed Servers and/or the Administration Server) at the same time is currently not supported. Only one Managed Server can be accessed at a time.

    The following should be updated in traefik-tls.yaml:

    • The service name and the SSL port should be updated in the services section.
    • The load balancer hostname (DNS name) should be updated in the HostSNI rule.

    Sample traefik-tls.yaml:

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: wcc-ucm-routetcp
  namespace: wccns
spec:
  entryPoints:
    - websecure
  routes:
  - match: HostSNI(`<Traefik load balancer DNS name>`)
    services:
    - name: wccinfra-cluster-ucm-cluster
      port: 16201
      weight: 3
      terminationDelay: 400
  tls:
    passthrough: true   

Note: Make sure that you specify a DNS name that points to the Traefik load balancer hostname, or specify '*' to point to the Traefik load balancer IP.

  1. Create the IngressRouteTCP:

     $ cd ${WORKDIR}/charts/ingress-per-domain/tls
     $ kubectl apply -f traefik-tls.yaml
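Optionally, verify that the resource was created (a quick check, assuming the Traefik CRDs are installed; kubectl accepts the singular kind name):

$ kubectl get ingressroutetcp -n wccns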

Create Oracle WebCenter Content domain

With the load balancer configured, create your domain by following the instructions documented in [Create Oracle WebCenter Content domains]({{< relref "/wccontent-domains/oracle-cloud/create-wccontent-domains" >}}), before verifying domain application URL access.

Verify domain application URL access

Verify Non-SSL access

After setting up the Traefik (ingress-based) load balancer, verify that the domain application URLs are accessible through the load balancer port 30305 for HTTP access. The sample URLs for Oracle WebCenter Content domain of type wcc are:

http://${TRAEFIK_PUBLIC_IP}:30305/weblogic/ready
http://${TRAEFIK_PUBLIC_IP}:30305/cs
http://${TRAEFIK_PUBLIC_IP}:30305/ibr
http://${TRAEFIK_PUBLIC_IP}:30305/imaging
http://${TRAEFIK_PUBLIC_IP}:30305/dc-console
http://${TRAEFIK_PUBLIC_IP}:30305/wcc

Verify SSL termination and end-to-end SSL access

After setting up the Traefik (ingress-based) load balancer, verify that the domain applications are accessible through the SSL load balancer port 30443 for HTTPS access. The sample URLs for Oracle WebCenter Content domain are:

LOADBALANCER_SSLPORT is 30443

https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER_SSLPORT}/cs
https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER_SSLPORT}/ibr
https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER_SSLPORT}/imaging
https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER_SSLPORT}/dc-console
https://${LOADBALANCER_HOSTNAME}:${LOADBALANCER_SSLPORT}/wcc
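Because the certificate created earlier in this guide is self-signed, pass the -k flag to curl when testing these URLs, for example:

$ curl -k -v https://${LOADBALANCER_HOSTNAME}:30443/weblogic/ready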

Uninstall Traefik

$ helm delete wcc-traefik-ingress -n wccns

$ helm delete traefik -n traefik

$ kubectl delete namespace traefik

NGINX

This section provides information about how to install and configure the ingress-based NGINX load balancer to load balance Oracle WebCenter Content domain clusters. You can configure NGINX for non-SSL, SSL termination, and end-to-end SSL access of the application URL.

Follow these steps to set up NGINX as a load balancer for an Oracle WebCenter Content domain in a Kubernetes cluster:

See the official installation document for prerequisites.

Contents

To get repository information, enter the following Helm commands:

  $ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
  $ helm repo update

Non-SSL and SSL termination

Install the NGINX load balancer

  1. Deploy the ingress-nginx controller by using Helm in the domain namespace:

    For non-SSL access, use the following command:

     $ helm install nginx-ingress -n wccns \
            --set controller.service.type=LoadBalancer \
            --set controller.admissionWebhooks.enabled=false \
              ingress-nginx/ingress-nginx 

    For SSL termination at the load balancer, use the following command:

     $ helm install nginx-ingress -n wccns \
            --set controller.service.type=LoadBalancer \
            --set controller.admissionWebhooks.enabled=false \
            --set controller.extraArgs.default-ssl-certificate="wccns/domain1-tls-cert" \
              ingress-nginx/ingress-nginx 

    Sample output:

NAME: nginx-ingress
LAST DEPLOYED: Fri Jul 29 00:14:19 2022
NAMESPACE: wccns
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
Get the application URL by running these commands:
  export HTTP_NODE_PORT=$(kubectl --namespace wccns get services -o jsonpath="{.spec.ports[0].nodePort}" nginx-ingress-ingress-nginx-controller)
  export HTTPS_NODE_PORT=$(kubectl --namespace wccns get services -o jsonpath="{.spec.ports[1].nodePort}" nginx-ingress-ingress-nginx-controller)
  export NODE_IP=$(kubectl --namespace wccns get nodes -o jsonpath="{.items[0].status.addresses[1].address}")
  echo "Visit http://$NODE_IP:$HTTP_NODE_PORT to access your application via HTTP."
  echo "Visit https://$NODE_IP:$HTTPS_NODE_PORT to access your application via HTTPS."
An example Ingress that makes use of the controller:
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: example
    namespace: foo
  spec:
    ingressClassName: nginx
    rules:
      - host: www.example.com
        http:
          paths:
            - pathType: Prefix
              backend:
                service:
                  name: exampleService
                  port:
                    number: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
      - hosts:
        - www.example.com
        secretName: example-tls
If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:
  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls
  2. Check the status of the deployed ingress controller:

    Please note the EXTERNAL-IP of the nginx-ingress-ingress-nginx-controller service. This is the public IP address of the load balancer that you will use to access the WebLogic Server Administration Console and WebCenter Content URLs. Note: It may take a few minutes for the LoadBalancer IP (EXTERNAL-IP) to become available.

    $ kubectl --namespace wccns get services | grep ingress-nginx-controller

    Sample output:

    NAME                                   TYPE         CLUSTER-IP   EXTERNAL-IP     PORT(S)   
    nginx-ingress-ingress-nginx-controller LoadBalancer 10.96.180.215 144.24.xx.xx    80:31339/TCP,443:32278/TCP

    To print only the NGINX EXTERNAL-IP, execute this command:

    NGINX_PUBLIC_IP=`kubectl describe svc nginx-ingress-ingress-nginx-controller --namespace wccns | grep Ingress | awk '{print $3}'`
    
    $ echo $NGINX_PUBLIC_IP   
    144.24.xx.xx

    Verify the helm charts:

    $ helm list -A
    NAME          NAMESPACE REVISION  UPDATED      STATUS      CHART                APP VERSION
    nginx-ingress  wccns    1         2022-05-13  deployed   ingress-nginx-4.2.5   1.3.1

Configure NGINX to manage ingresses

  1. Create an ingress for the domain in the domain namespace by using the sample Helm chart. Here, path-based routing is used for the ingress. Sample values for the default configuration are shown in the file ${WORKDIR}/charts/ingress-per-domain/values.yaml. By default, type is TRAEFIK, tls is NONSSL, and domainType is wccinfra. These values can be overridden by passing values through the command line, or they can be edited in the sample values.yaml file. If needed, you can update the ingress YAML file to define more path rules (in section spec.rules.host.http.paths) based on the domain application URLs that need to be accessed. Update the template YAML file for the NGINX load balancer, located at ${WORKDIR}/charts/ingress-per-domain/templates/nginx-ingress.yaml.

    Install ingress-per-domain using Helm for non-SSL configuration:

    $ export LB_HOSTNAME=<NGINX load balancer DNS name>
    
    #OR leave it empty to point to NGINX load-balancer IP, by default
    $ export LB_HOSTNAME=''

    Note: Make sure that you specify a DNS name that points to the NGINX load balancer hostname, or leave it empty to point to the NGINX load balancer IP.

     $ cd ${WORKDIR}
     $ helm install wccinfra-nginx-ingress charts/ingress-per-domain \
         --namespace wccns \
         --values charts/ingress-per-domain/values.yaml \
         --set "nginx.hostname=$LB_HOSTNAME" \
         --set type=NGINX \
         --set tls=NONSSL

    Sample output:

     NAME: wccinfra-nginx-ingress
     LAST DEPLOYED: Tue May 10 10:37:12 2022
     NAMESPACE: wccns
     STATUS: deployed
     REVISION: 1
     TEST SUITE: None

Create a certificate and generate a Kubernetes secret

  1. For secured access (SSL) to the Oracle WebCenter Content application, create a certificate:

     $ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/tls1.key -out /tmp/tls1.crt \
     -subj "/CN=<NGINX load balancer DNS name>" \
     -extensions san -config \
     <(echo "[req]"; 
     echo distinguished_name=req; 
     echo "[san]"; 
     echo subjectAltName=IP:$NGINX_PUBLIC_IP 
     )
    
    #OR use the following command if you chose to leave LB_HOSTNAME empty in the previous step
    
     $ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/tls1.key -out /tmp/tls1.crt \
     -subj "/CN=*" \
     -extensions san -config \
     <(echo "[req]"; 
     echo distinguished_name=req; 
     echo "[san]"; 
     echo subjectAltName=IP:$NGINX_PUBLIC_IP 
     )

    Note: Make sure that you specify a DNS name that points to the NGINX load balancer hostname.

  2. Generate a Kubernetes secret:

    $ kubectl -n wccns create secret tls domain1-tls-cert --key /tmp/tls1.key --cert /tmp/tls1.crt 

Install Ingress for SSL termination configuration

  1. Install ingress-per-domain using Helm for SSL configuration:

     $ cd ${WORKDIR}
     $ helm install wccinfra-nginx-ingress charts/ingress-per-domain \
         --namespace wccns \
         --values charts/ingress-per-domain/values.yaml \
         --set "nginx.hostname=$LB_HOSTNAME" \
         --set "nginx.hostnameorip=$NGINX_PUBLIC_IP" \
         --set type=NGINX --set tls=SSL

    Sample output:

     NAME: wccinfra-nginx-ingress
     LAST DEPLOYED: Tue May 10 10:37:12 2022
     NAMESPACE: wccns
     STATUS: deployed
     REVISION: 1
     TEST SUITE: None
  2. For non-SSL or SSL access to the Oracle WebCenter Content application, get the details of the services exposed by the above deployed ingress:

      $ kubectl describe ingress wccinfra-nginx  -n wccns

Sample output of the services supported by the above deployed ingress:

Name:             wccinfra-nginx
Namespace:        wccns
Address:          144.24.xx.xx
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *
        /em                      wccinfra-adminserver:7001 (10.244.2.117:7001)
        /wls-exporter            wccinfra-adminserver:7001 (10.244.2.117:7001)
        /cs                      wccinfra-cluster-ucm-cluster:16200 (10.244.2.118:16200,10.244.2.120:16200)
        /adfAuthentication       wccinfra-cluster-ucm-cluster:16200 (10.244.2.118:16200,10.244.2.120:16200)
        /_ocsh                   wccinfra-cluster-ucm-cluster:16200 (10.244.2.118:16200,10.244.2.120:16200)
        /_dav                    wccinfra-cluster-ucm-cluster:16200 (10.244.2.118:16200,10.244.2.120:16200)
        /idcws                   wccinfra-cluster-ucm-cluster:16200 (10.244.2.118:16200,10.244.2.120:16200)
        /idcnativews             wccinfra-cluster-ucm-cluster:16200 (10.244.2.118:16200,10.244.2.120:16200)
        /wsm-pm                  wccinfra-cluster-ucm-cluster:16200 (10.244.2.118:16200,10.244.2.120:16200)
        /ibr                     wccinfra-cluster-ibr-cluster:16250 (10.244.2.119:16250)
        /ibr/adfAuthentication   wccinfra-cluster-ibr-cluster:16250 (10.244.2.119:16250)
        /weblogic/ready          wccinfra-cluster-ucm-cluster:16200 (10.244.2.118:16200,10.244.2.120:16200)
Annotations:
  nginx.ingress.kubernetes.io/affinity-mode:  persistent
  kubernetes.io/ingress.class:                nginx
  nginx.ingress.kubernetes.io/affinity:       cookie
Events:
  Type    Reason  Age                  From                      Message
  ----    ------  ----                 ----                      -------
  Normal  Sync    8m3s (x2 over 8m5s)  nginx-ingress-controller  Scheduled for sync

End-to-End SSL configuration

Install the NGINX load balancer for end-to-end SSL

  1. For secured access (SSL) to the Oracle WebCenter Content application, create a certificate and generate secrets: click here

  2. Deploy the ingress-nginx controller by using Helm on the domain namespace:

     $ helm install nginx-ingress -n wccns \
            --set controller.extraArgs.default-ssl-certificate=wccns/domain1-tls-cert \
            --set controller.service.type=LoadBalancer \
            --set controller.admissionWebhooks.enabled=false \
            --set controller.extraArgs.enable-ssl-passthrough=true \
              ingress-nginx/ingress-nginx

Sample output:

NAME: nginx-ingress
LAST DEPLOYED: Mon Sep 19 11:08:16 2022
NAMESPACE: wccns
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace wccns get services -o wide -w nginx-ingress-ingress-nginx-controller'
An example Ingress that makes use of the controller:
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: example
    namespace: foo
  spec:
    ingressClassName: nginx
    rules:
      - host: www.example.com
        http:
          paths:
            - pathType: Prefix
              backend:
                service:
                  name: exampleService
                  port:
                    number: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
      - hosts:
        - www.example.com
        secretName: example-tls
If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:
  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls
  3. Check the status of the deployed ingress controller:

     $ kubectl --namespace wccns get services | grep ingress-nginx-controller

    Sample output:

    NAME                                   TYPE         CLUSTER-IP   EXTERNAL-IP     PORT(S)   
    nginx-ingress-ingress-nginx-controller LoadBalancer 10.96.180.215 144.24.xx.xx    80:31339/TCP,443:32278/TCP

    To print only the NGINX EXTERNAL-IP, execute this command:

    NGINX_PUBLIC_IP=`kubectl describe svc nginx-ingress-ingress-nginx-controller --namespace wccns | grep Ingress | awk '{print $3}'`
    
    $ echo $NGINX_PUBLIC_IP   
    144.24.xx.xx

Deploy tls to access individual Managed Servers

  1. Deploy tls to securely access the services. Only one application can be configured with ssl-passthrough. A sample tls file for NGINX is shown below for the service wccinfra-cluster-ucm-cluster and port 16201. All the applications running on port 16201 can be securely accessed through this ingress. Create a separate ingress for each backend service, because NGINX does not support multiple paths/rules with the ssl-passthrough annotation. That is, different ingresses must be created for wccinfra-cluster-ucm-cluster, wccinfra-cluster-ibr-cluster, wccinfra-cluster-ipm-cluster, wccinfra-cluster-capture-cluster, wccinfra-cluster-wccadf-cluster, and wccinfra-adminserver.

    Note: There is a limitation with the load balancer in the end-to-end SSL configuration: accessing multiple types of servers (different Managed Servers and/or the Administration Server) at the same time is currently not supported. Only one Managed Server can be accessed at a time.

    $ cd ${WORKDIR}/charts/ingress-per-domain/tls

    Sample nginx-ucm-tls.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wcc-ucm-ingress
  namespace: wccns
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  tls:
  - hosts:
    - '$NGINX_PUBLIC_IP'
    secretName: domain1-tls-cert
  rules:
  - host: '<NGINX load balancer DNS name>'
    http:
      paths:
      - path:
        pathType: ImplementationSpecific
        backend:
          service:
            name: wccinfra-cluster-ucm-cluster
            port:
              number: 16201

Note: Make sure that you specify a DNS name that points to the NGINX load balancer hostname.

  2. Deploy the secured ingress:

    $ cd ${WORKDIR}/charts/ingress-per-domain/tls
    $ kubectl create -f nginx-ucm-tls.yaml
  3. Check the services supported by the ingress:

    $ kubectl describe ingress wcc-ucm-ingress -n wccns

Services supported by the ingress:

Name:             wcc-ucm-ingress
Namespace:        wccns
Address:          10.102.97.237
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
  domain1-tls-cert terminates domain1.org
Rules:
  Host                                         Path  Backends
  ----                                         ----  --------
  domain1.org
                                                  wccinfra-cluster-ucm-cluster:16201 (10.244.238.136:16201,10.244.253.132:16201)
Annotations:                                   kubernetes.io/ingress.class: nginx
                                               nginx.ingress.kubernetes.io/ssl-passthrough: true
Events:
  Type    Reason  Age                 From                      Message
  ----    ------  ----                ----                      -------
  Normal  Sync    62s (x2 over 106s)  nginx-ingress-controller  Scheduled for sync

Deploy tls to access Administration Server

  1. As ssl-passthrough in NGINX works on the clusterIP of the backing service instead of individual endpoints, you must expose the adminserver service created by the WebLogic Kubernetes Operator with a clusterIP.

    For example:

    1. Get the name of the Administration Server service:

       $ kubectl get svc -n wccns | grep wccinfra-adminserver

       Sample output:

       wccinfra-adminserver   ClusterIP   None   <none>   7001/TCP,7002/TCP   7

    2. Expose the Administration Server service wccinfra-adminserver and use the new service name wccinfra-adminserver-nginx-ssl:

       $ kubectl expose svc wccinfra-adminserver -n wccns --name=wccinfra-adminserver-nginx-ssl --port=7002

    3. Deploy the secured ingress:

Sample nginx-admin-tls.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wcc-admin-ingress
  namespace: wccns
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  tls:
  - hosts:
    - '$NGINX_PUBLIC_IP'
    secretName: domain1-tls-cert
  rules:
  - host: '<NGINX load balancer DNS name>'
    http:
      paths:
      - path:
        pathType: ImplementationSpecific
        backend:
          service:
            name: wccinfra-adminserver-nginx-ssl
            port:
              number: 7002  

Note: Make sure that you specify a DNS name that points to the NGINX load balancer hostname.

$ cd ${WORKDIR}/charts/ingress-per-domain/tls
$ kubectl create -f nginx-admin-tls.yaml
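Optionally, confirm that the exposed ClusterIP service and the secured ingress are in place:

$ kubectl get svc wccinfra-adminserver-nginx-ssl -n wccns
$ kubectl describe ingress wcc-admin-ingress -n wccns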

Uninstall ingress-nginx tls

$ cd ${WORKDIR}/charts/ingress-per-domain/tls
$ kubectl delete -f nginx-ucm-tls.yaml

Create Oracle WebCenter Content domain

With the load balancer configured, create your domain by following the instructions documented in [Create Oracle WebCenter Content domains]({{< relref "/wccontent-domains/oracle-cloud/create-wccontent-domains" >}}), before verifying domain application URL access.

Verify domain application URL access

Verify Non-SSL access

Verify that the Oracle WebCenter Content domain application URLs are accessible through the LOADBALANCER_HOSTNAME:

  http://${LOADBALANCER_HOSTNAME}/weblogic/ready
  http://${LOADBALANCER_HOSTNAME}/em
  http://${LOADBALANCER_HOSTNAME}/cs
  http://${LOADBALANCER_HOSTNAME}/ibr
  http://${LOADBALANCER_HOSTNAME}/imaging
  http://${LOADBALANCER_HOSTNAME}/dc-console
  http://${LOADBALANCER_HOSTNAME}/wcc

Verify SSL termination and end-to-end SSL access

Verify that the Oracle WebCenter Content domain application URLs are accessible through the LOADBALANCER_HOSTNAME:

  https://${LOADBALANCER_HOSTNAME}/weblogic/ready
  https://${LOADBALANCER_HOSTNAME}/em
  https://${LOADBALANCER_HOSTNAME}/cs
  https://${LOADBALANCER_HOSTNAME}/ibr
  https://${LOADBALANCER_HOSTNAME}/imaging
  https://${LOADBALANCER_HOSTNAME}/dc-console
  https://${LOADBALANCER_HOSTNAME}/wcc

Uninstall NGINX

Uninstall and delete the ingress-nginx deployment:

# Uninstall and delete the ingress-nginx deployment
$ helm delete wccinfra-nginx-ingress -n wccns

# Uninstall NGINX
$ helm delete nginx-ingress -n wccns

Create Oracle WebCenter Content domain

Contents

Run the create domain script

Run the create domain script, specifying your inputs file and an output directory to store the generated artifacts:

$ cd ${WORKDIR}/create-wcc-domain/domain-home-on-pv/

$ ./create-domain.sh \
  -i create-domain-inputs.yaml \
  -o <path to output-directory>

The script will perform the following steps:

Run the managed-server-wrapper script

Run the oke-start-managed-servers-wrapper.sh script, which internally applies the domain YAML. This script also applies initial configurations for Managed Server containers and readies the Managed Servers for future inter-container communications.

$ cd ${WORKDIR}/create-wcc-domain/domain-home-on-pv/

$ ./oke-start-managed-servers-wrapper.sh -o <path_to_output_directory> -l <load_balancer_external_ip> -p <load_balancer_port> -s <ssl_termination>

Note: A value for the -s parameter needs to be provided only if SSL termination at the load balancer is being used; the acceptable value is either true or false. If this parameter is not supplied, the script assumes that SSL termination at the load balancer is not being used and the value defaults to false. An example invocation is shown below.
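For example, a hypothetical invocation for a load balancer at 144.24.xx.xx on port 30443 with SSL termination enabled (the output directory is a placeholder):

$ ./oke-start-managed-servers-wrapper.sh -o /scratch/wcc-output -l 144.24.xx.xx -p 30443 -s true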

Run the startup configuration scripts for IPM and WCCADF applications as applicable

Run the configure-ipm-connection.sh script to perform startup configuration if IPM is enabled.

$ cd ${WORKDIR}/create-wcc-domain/domain-home-on-pv/
$ ./configure-ipm-connection.sh -l <load_balancer_external_ip> -p <load_balancer_port> -s <ssl_or_ssl_termination>

Run the configure-wccadf-domain.sh script to perform startup configuration if ADFUI is enabled.

$ cd ${WORKDIR}/create-wcc-domain/domain-home-on-pv/
$ ./configure-wccadf-domain.sh -n <node_ip> -m <ucm_node_port>

Patch the domain to apply the changes:

#STOP
$ kubectl patch domain DOMAINUID -n NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/serverStartPolicy", "value": "NEVER" }]'

sleep 2m

#START
$ kubectl patch domain DOMAINUID -n NAMESPACE --type='json' -p='[{"op": "replace", "path": "/spec/serverStartPolicy", "value": "IF_NEEDED" }]'
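For example, for the sample domain used throughout this guide (domainUID wccinfra in namespace wccns):

$ kubectl patch domain wccinfra -n wccns --type='json' -p='[{"op": "replace", "path": "/spec/serverStartPolicy", "value": "NEVER" }]'

$ sleep 2m

$ kubectl patch domain wccinfra -n wccns --type='json' -p='[{"op": "replace", "path": "/spec/serverStartPolicy", "value": "IF_NEEDED" }]'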

Verify the results

The create domain script will verify that the domain was created, and will report failure if there was any error. However, it may be desirable to manually verify the domain, even if just to gain familiarity with the various Kubernetes objects that were created by the script.

Generated YAML files with the default inputs

Sample content of the generated domain.yaml:

$ cat output/weblogic-domains/wccinfra/domain.yaml
# Copyright (c) 2017, 2021, Oracle and/or its affiliates.
# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.
#
# This is an example of how to define a Domain resource.
#
apiVersion: "weblogic.oracle/v8"
kind: Domain
metadata:
  name: wccinfra
  namespace: wccns
  labels:
    weblogic.domainUID: wccinfra
spec:
  # The WebLogic Domain Home
  domainHome: /u01/oracle/user_projects/domains/wccinfra
  maxClusterConcurrentStartup: 1

  # The domain home source type
  # Set to PersistentVolume for domain-in-pv, Image for domain-in-image, or FromModel for model-in-image
  domainHomeSourceType: PersistentVolume

  # The WebLogic Server image that the WebLogic Kubernetes Operator uses to start the domain
  image: "phx.ocir.io/xxxxxxxxxx/oracle/wccontent/oracle/wccontent:x.x.x.x"

  # imagePullPolicy defaults to "Always" if image version is :latest
  imagePullPolicy: "IfNotPresent"

  # Identify which Secret contains the credentials for pulling an image
  imagePullSecrets:
  - name: image-secret

  # Identify which Secret contains the WebLogic Admin credentials (note that there is an example of
  # how to create that Secret at the end of this file)
  webLogicCredentialsSecret:
    name: wccinfra-domain-credentials

  # Whether to include the server out file into the pod's stdout, default is true
  includeServerOutInPodLog: true

  # Whether to enable log home
  logHomeEnabled: true

  # Whether to write HTTP access log file to log home
  httpAccessLogInLogHome: true

  # The in-pod location for domain log, server logs, server out, introspector out, and Node Manager log files
  logHome: /u01/oracle/user_projects/domains/logs/wccinfra
  # An (optional) in-pod location for data storage of default and custom file stores.
  # If not specified or the value is either not set or empty (e.g. dataHome: "") then the
  # data storage directories are determined from the WebLogic domain home configuration.
  dataHome: ""


  # serverStartPolicy legal values are "NEVER", "IF_NEEDED", or "ADMIN_ONLY"
  # This determines which WebLogic Servers the WebLogic Kubernetes Operator will start up when it discovers this Domain
  # - "NEVER" will not start any server in the domain
  # - "ADMIN_ONLY" will start up only the administration server (no managed servers will be started)
  # - "IF_NEEDED" will start all non-clustered servers, including the administration server and clustered servers up to the replica count
  serverStartPolicy: "IF_NEEDED"

  serverPod:
    # an (optional) list of environment variable to be set on the servers
    env:
    - name: JAVA_OPTIONS
      value: "-Dweblogic.StdoutDebugEnabled=false"
    - name: USER_MEM_ARGS
      value: "-Djava.security.egd=file:/dev/./urandom -Xms256m -Xmx1024m "
    volumes:
    - name: weblogic-domain-storage-volume
      persistentVolumeClaim:
        claimName: wccinfra-domain-pvc
    volumeMounts:
    - mountPath: /u01/oracle/user_projects/domains
      name: weblogic-domain-storage-volume

  # adminServer is used to configure the desired behavior for starting the administration server.
  adminServer:
    # serverStartState legal values are "RUNNING" or "ADMIN"
    # "RUNNING" means the listed server will be started up to "RUNNING" mode
    # "ADMIN" means the listed server will be start up to "ADMIN" mode
    serverStartState: "RUNNING"
    # adminService:
    #   channels:
    # The Admin Server's NodePort
    #    - channelName: default
    #      nodePort: 30701
    # Uncomment to export the T3Channel as a service
    #    - channelName: T3Channel

  # clusters is used to configure the desired behavior for starting member servers of a cluster.
  # If you use this entry, then the rules will be applied to ALL servers that are members of the named clusters.
  clusters:
  - clusterName: ibr_cluster
    serverService:
      precreateService: true
    serverStartState: "RUNNING"
    serverPod:
      # Instructs Kubernetes scheduler to prefer nodes for new cluster members where there are not
      # already members of the same cluster.
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: "weblogic.clusterName"
                      operator: In
                      values:
                        - $(CLUSTER_NAME)
                topologyKey: "kubernetes.io/hostname"
    replicas: 1
  # The number of managed servers to start for unlisted clusters
  # replicas: 1

  # Istio
  # configuration:
  #   istio:
  #     enabled:
  #     readinessPort:

  - clusterName: ucm_cluster
    clusterService:
         annotations:
            traefik.ingress.kubernetes.io/affinity: "true"
            traefik.ingress.kubernetes.io/service.sticky.cookie: "true"
            traefik.ingress.kubernetes.io/session-cookie-name: JSESSIONID
    serverService:
      precreateService: true
    serverStartState: "RUNNING"
    serverPod:
      # Instructs Kubernetes scheduler to prefer nodes for new cluster members where there are not
      # already members of the same cluster.
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: "weblogic.clusterName"
                      operator: In
                      values:
                        - $(CLUSTER_NAME)
                topologyKey: "kubernetes.io/hostname"
    replicas: 3
  # The number of managed servers to start for unlisted clusters
  # replicas: 1
  - clusterName: ipm_cluster
    clusterService:
         annotations: 
            traefik.ingress.kubernetes.io/affinity: "true"
            traefik.ingress.kubernetes.io/service.sticky.cookie: "true"
            traefik.ingress.kubernetes.io/session-cookie-name: JSESSIONID
    serverService:
      precreateService: true
    serverStartState: "RUNNING"
    serverPod:
      # Instructs Kubernetes scheduler to prefer nodes for new cluster members where there are not
      # already members of the same cluster.
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: "weblogic.clusterName"
                      operator: In
                      values:
                        - $(CLUSTER_NAME)
                topologyKey: "kubernetes.io/hostname"
    replicas: 3
  # The number of managed servers to start for unlisted clusters
  # replicas: 1
  - clusterName: capture_cluster
    clusterService:
         annotations: 
            traefik.ingress.kubernetes.io/affinity: "true"
            traefik.ingress.kubernetes.io/service.sticky.cookie: "true"
            traefik.ingress.kubernetes.io/session-cookie-name: JSESSIONID
    serverService:
      precreateService: true
    serverStartState: "RUNNING"
    serverPod:
      # Instructs Kubernetes scheduler to prefer nodes for new cluster members where there are not
      # already members of the same cluster.
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: "weblogic.clusterName"
                      operator: In
                      values:
                        - $(CLUSTER_NAME)
                topologyKey: "kubernetes.io/hostname"
    replicas: 3
  # The number of managed servers to start for unlisted clusters
  # replicas: 1
  - clusterName: wccadf_cluster
    clusterService:
         annotations: 
            traefik.ingress.kubernetes.io/affinity: "true"
            traefik.ingress.kubernetes.io/service.sticky.cookie: "true"
            traefik.ingress.kubernetes.io/session-cookie-name: WCCSID
    serverService:
      precreateService: true
    serverStartState: "RUNNING"
    serverPod:
      # Instructs Kubernetes scheduler to prefer nodes for new cluster members where there are not
      # already members of the same cluster.
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: "weblogic.clusterName"
                      operator: In
                      values:
                        - $(CLUSTER_NAME)
                topologyKey: "kubernetes.io/hostname"
    replicas: 3
  # The number of managed servers to start for unlisted clusters
  # replicas: 1

Verify the domain

To confirm that the domain was created, enter the following command:

$ kubectl describe domain DOMAINUID -n NAMESPACE

Replace DOMAINUID with the domainUID and NAMESPACE with the actual namespace.

Sample domain description:

[opc@bastionhost domain-home-on-pv]$ kubectl describe domain wccinfra -n wccns
Name:         wccinfra
Namespace:    wccns
Labels:       weblogic.domainUID=wccinfra
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"weblogic.oracle/v8","kind":"Domain","metadata":{"annotations":{},"labels":{"weblogic.domainUID":"wccinfra"},"name":"wccinfr...
API Version:  weblogic.oracle/v8
Kind:         Domain
Metadata:
  Creation Timestamp:  2021-08-24T12:26:19Z
  Generation:          33
  Managed Fields:
    API Version:  weblogic.oracle/v8
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
        f:labels:
          .:
          f:weblogic.domainUID:
    Manager:      kubectl
    Operation:    Update
    Time:         2021-09-30T10:56:07Z
    API Version:  weblogic.oracle/v8
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:clusters:
        f:conditions:
        f:introspectJobFailureCount:
        f:servers:
        f:startTime:
    Manager:         Kubernetes Java Client
    Operation:       Update
    Time:            2021-10-04T20:06:17Z
  Resource Version:  115422662
  Self Link:         /apis/weblogic.oracle/v8/namespaces/wccns/domains/wccinfra
  UID:               e283c968-b80b-404b-aa1e-711080d7cc38
Spec:
  Admin Server:
    Server Start State:  RUNNING
  Clusters:
    Cluster Name:  ibr_cluster
    Replicas:      1
    Server Pod:
      Affinity:
        Pod Anti Affinity:
          Preferred During Scheduling Ignored During Execution:
            Pod Affinity Term:
              Label Selector:
                Match Expressions:
                  Key:       weblogic.clusterName
                  Operator:  In
                  Values:
                    $(CLUSTER_NAME)
              Topology Key:  kubernetes.io/hostname
            Weight:          100
    Server Service:
      Precreate Service:  true
    Server Start State:   RUNNING
    Cluster Name:         ucm_cluster
    Cluster Service:
      Annotations:
        traefik.ingress.kubernetes.io/affinity:               true
        traefik.ingress.kubernetes.io/service.sticky.cookie:  true
        traefik.ingress.kubernetes.io/session-cookie-name:    JSESSIONID
    Replicas:                                                 3
    Server Pod:
      Affinity:
        Pod Anti Affinity:
          Preferred During Scheduling Ignored During Execution:
            Pod Affinity Term:
              Label Selector:
                Match Expressions:
                  Key:       weblogic.clusterName
                  Operator:  In
                  Values:
                    $(CLUSTER_NAME)
              Topology Key:  kubernetes.io/hostname
            Weight:          100
    Server Service:
      Precreate Service:        true
    Server Start State:         RUNNING
    Cluster Name:         ipm_cluster
    Cluster Service:
      Annotations:
        traefik.ingress.kubernetes.io/affinity:               true
        traefik.ingress.kubernetes.io/service.sticky.cookie:  true
        traefik.ingress.kubernetes.io/session-cookie-name:    JSESSIONID
    Replicas:                                                 3
    Server Pod:
      Affinity:
        Pod Anti Affinity:
          Preferred During Scheduling Ignored During Execution:
            Pod Affinity Term:
              Label Selector:
                Match Expressions:
                  Key:       weblogic.clusterName
                  Operator:  In
                  Values:
                    $(CLUSTER_NAME)
              Topology Key:  kubernetes.io/hostname
            Weight:          100
    Server Service:
      Precreate Service:  true
    Server Start State:   RUNNING
    Cluster Name:         capture_cluster
    Cluster Service:
      Annotations:
        traefik.ingress.kubernetes.io/affinity:               true
        traefik.ingress.kubernetes.io/service.sticky.cookie:  true
        traefik.ingress.kubernetes.io/session-cookie-name:    JSESSIONID
    Replicas:                                                 3
    Server Pod:
      Affinity:
        Pod Anti Affinity:
          Preferred During Scheduling Ignored During Execution:
            Pod Affinity Term:
              Label Selector:
                Match Expressions:
                  Key:       weblogic.clusterName
                  Operator:  In
                  Values:
                    $(CLUSTER_NAME)
              Topology Key:  kubernetes.io/hostname
            Weight:          100
    Server Service:
      Precreate Service:  true
    Server Start State:   RUNNING
    Cluster Name:         wccadf_cluster
    Cluster Service:
      Annotations:
        traefik.ingress.kubernetes.io/affinity:               true
        traefik.ingress.kubernetes.io/service.sticky.cookie:  true
        traefik.ingress.kubernetes.io/session-cookie-name:    WCCSID
    Replicas:                                                 3
    Server Pod:
      Affinity:
        Pod Anti Affinity:
          Preferred During Scheduling Ignored During Execution:
            Pod Affinity Term:
              Label Selector:
                Match Expressions:
                  Key:       weblogic.clusterName
                  Operator:  In
                  Values:
                    $(CLUSTER_NAME)
              Topology Key:  kubernetes.io/hostname
            Weight:          100
    Server Service:
      Precreate Service:  true
    Server Start State:   RUNNING
  Data Home:
  Domain Home:                  /u01/oracle/user_projects/domains/wccinfra
  Domain Home Source Type:      PersistentVolume
  Http Access Log In Log Home:  true
  Image:                        phx.ocir.io/xxxxxxxxxx/oracle/wccontent:x.x.x.x
  Image Pull Policy:            IfNotPresent
  Image Pull Secrets:
    Name:                          image-secret
  Include Server Out In Pod Log:   true
  Log Home:                        /u01/oracle/user_projects/domains/logs/wccinfra
  Log Home Enabled:                true
  Max Cluster Concurrent Startup:  1
  Server Pod:
    Env:
      Name:   JAVA_OPTIONS
      Value:  -Dweblogic.StdoutDebugEnabled=false
      Name:   USER_MEM_ARGS
      Value:  -Djava.security.egd=file:/dev/./urandom -Xms256m -Xmx1024m
    Volume Mounts:
      Mount Path:  /u01/oracle/user_projects/domains
      Name:        weblogic-domain-storage-volume
    Volumes:
      Name:  weblogic-domain-storage-volume
      Persistent Volume Claim:
        Claim Name:     wccinfra-domain-pvc
  Server Start Policy:  IF_NEEDED
  Web Logic Credentials Secret:
    Name:  wccinfra-domain-credentials
Status:
  Clusters:
    Cluster Name:      ibr_cluster
    Maximum Replicas:  5
    Minimum Replicas:  0
    Ready Replicas:    1
    Replicas:          1
    Replicas Goal:     1
    Cluster Name:      ucm_cluster
    Maximum Replicas:  5
    Minimum Replicas:  0
    Ready Replicas:    3
    Replicas:          3
    Replicas Goal:     3
    Cluster Name:      ipm_cluster
    Maximum Replicas:  5
    Minimum Replicas:  0
    Ready Replicas:    3
    Replicas:          3
    Replicas Goal:     3
    Cluster Name:      capture_cluster
    Maximum Replicas:  5
    Minimum Replicas:  0
    Ready Replicas:    3
    Replicas:          3
    Replicas Goal:     3
    Cluster Name:      wccadf_cluster
    Maximum Replicas:  5
    Minimum Replicas:  0
    Ready Replicas:    3
    Replicas:          3
    Replicas Goal:     3

  Conditions:
    Last Transition Time:        2021-09-30T11:04:35.889547Z
    Reason:                      ServersReady
    Status:                      True
    Type:                        Available
  Introspect Job Failure Count:  0
  Servers:
    Desired State:  RUNNING
    Health:
      Activation Time:  2021-09-30T10:58:38.381000Z
      Overall Health:   ok
      Subsystems:
        Subsystem Name:  ServerRuntime
        Symptoms:
    Node Name:      10.0.10.135
    Server Name:    adminserver
    State:          RUNNING
    Cluster Name:   ibr_cluster
    Desired State:  RUNNING
    Health:
      Activation Time:  2021-09-30T11:01:09.987000Z
      Overall Health:   ok
      Subsystems:
        Subsystem Name:  ServerRuntime
        Symptoms:
    Node Name:      10.0.10.135
    Server Name:    ibr_server1
    State:          RUNNING
    Cluster Name:   ibr_cluster
    Desired State:  SHUTDOWN
    Server Name:    ibr_server2
    Cluster Name:   ibr_cluster
    Desired State:  SHUTDOWN
    Server Name:    ibr_server3
    Cluster Name:   ibr_cluster
    Desired State:  SHUTDOWN
    Server Name:    ibr_server4
    Cluster Name:   ibr_cluster
    Desired State:  SHUTDOWN
    Server Name:    ibr_server5
    Cluster Name:   ucm_cluster
    Desired State:  RUNNING
    Health:
      Activation Time:  2021-09-30T11:00:36.369000Z
      Overall Health:   ok
      Subsystems:
        Subsystem Name:  ServerRuntime
        Symptoms:
    Node Name:      10.0.10.142
    Server Name:    ucm-server1
    State:          RUNNING
    Cluster Name:   ucm_cluster
    Desired State:  RUNNING
    Health:
      Activation Time:  2021-09-30T11:02:35.448000Z
      Overall Health:   ok
      Subsystems:
        Subsystem Name:  ServerRuntime
        Symptoms:
    Node Name:      10.0.10.135
    Server Name:    ucm-server2
    State:          RUNNING
    Cluster Name:   ucm_cluster
    Desired State:  RUNNING
    Health:
      Activation Time:  2021-09-30T11:04:32.314000Z
      Overall Health:   ok
      Subsystems:
        Subsystem Name:  ServerRuntime
        Symptoms:
    Node Name:      10.0.10.142
    Server Name:    ucm-server3
    State:          RUNNING
    Cluster Name:   ucm_cluster
    Desired State:  SHUTDOWN
    Server Name:    ucm-server4
    Cluster Name:   ucm_cluster
    Desired State:  SHUTDOWN
    Server Name:    ucm-server5
    Cluster Name:   ipm_cluster
    Desired State:  RUNNING
    Health:
      Activation Time:  2021-09-30T11:04:32.314000Z
      Overall Health:   ok
      Subsystems:
        Subsystem Name:  ServerRuntime
        Symptoms:
    Node Name:      MyNodeName
    Server Name:    ipm_server1
    State:          RUNNING
    Cluster Name:   ipm_cluster
    Desired State:  SHUTDOWN
    Server Name:    ipm_server2
    Cluster Name:   ipm_cluster
    Desired State:  SHUTDOWN
    Server Name:    ipm_server3
    Cluster Name:   ipm_cluster
    Desired State:  SHUTDOWN
    Server Name:    ipm_server4
    Cluster Name:   ipm_cluster
    Desired State:  SHUTDOWN
    Server Name:    ipm_server5
    Cluster Name:   capture_cluster
    Desired State:  RUNNING
    Health:         
      Activation Time:  2021-09-30T11:04:32.314000Z
      Overall Health:   ok
      Subsystems:
        Subsystem Name:  ServerRuntime 
        Symptoms:
    Node Name:      MyNodeName
    Server Name:    capture_server1
    State:          RUNNING
    Cluster Name:   capture_cluster
    Desired State:  SHUTDOWN
    Server Name:    capture_server2
    Cluster Name:   capture_cluster
    Desired State:  SHUTDOWN
    Server Name:    capture_server3
    Cluster Name:   capture_cluster
    Desired State:  SHUTDOWN
    Server Name:    capture_server4
    Cluster Name:   capture_cluster
    Desired State:  SHUTDOWN
    Server Name:    capture_server5
    Cluster Name:   wccadf_cluster
    Desired State:  RUNNING
    Health:         
      Activation Time:  2021-09-30T11:04:32.314000Z
      Overall Health:   ok
      Subsystems:
        Subsystem Name:  ServerRuntime 
        Symptoms:
    Node Name:      MyNodeName
    Server Name:    wccadf_server1
    State:          RUNNING
    Cluster Name:   wccadf_cluster
    Desired State:  SHUTDOWN
    Server Name:    wccadf_server2
    Cluster Name:   wccadf_cluster
    Desired State:  SHUTDOWN
    Server Name:    wccadf_server3
    Cluster Name:   wccadf_cluster
    Desired State:  SHUTDOWN
    Server Name:    wccadf_server4
    Cluster Name:   wccadf_cluster
    Desired State:  SHUTDOWN
    Server Name:    wccadf_server5

  Start Time:       2021-08-24T12:26:20.033714Z
Events:             <none>

In the Status section of the output, the available servers and clusters are listed. Note that if this command is issued soon after the script finishes, there may be no servers available yet, or perhaps only the Administration Server but no Managed Servers. The WebLogic Kubernetes Operator will start up the Administration Server first and wait for it to become ready before starting the Managed Servers.
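If you want to poll just the readiness condition rather than reading the full description, the following is a minimal sketch using the wccinfra domain in the wccns namespace (the names used throughout this guide):

# Print each condition's type and status for the domain
$ kubectl get domain wccinfra -n wccns \
    -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'

# Expected once all targeted servers are ready (as in the output above):
# Available=True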

Verify the pods

Enter the following command to see the pods running the servers:

$ kubectl get pods -n NAMESPACE

Here is an example of the output of this command. You can verify that the Administration Server and the Managed Servers for the ucm, ibr, ipm, capture, and wccadf clusters are running.

$ kubectl get pod -n wccns
NAME                                                READY   STATUS      RESTARTS   AGE
rcu                                                 1/1     Running     0          54d
wccinfra-adminserver                                1/1     Running     0          18d
wccinfra-create-fmw-infra-sample-domain-job-xqnn4   0/1     Completed   0          54d
wccinfra-ibr-server1                                1/1     Running     0          18d
wccinfra-ucm-server1                                1/1     Running     0          18d
wccinfra-ucm-server2                                1/1     Running     0          18d
wccinfra-ucm-server3                                1/1     Running     0          18d
wccinfra-ipm-server1                                1/1     Running     0          18d
wccinfra-ipm-server2                                1/1     Running     0          18d
wccinfra-ipm-server3                                1/1     Running     0          18d
wccinfra-capture-server1                            1/1     Running     0          18d
wccinfra-capture-server2                            1/1     Running     0          18d
wccinfra-capture-server3                            1/1     Running     0          18d
wccinfra-wccadf-server1                             1/1     Running     0          18d
wccinfra-wccadf-server2                             1/1     Running     0          18d
wccinfra-wccadf-server3                             1/1     Running     0          18d

Verify the services

Enter the following command to see the services for the domain:

$ kubectl get services -n NAMESPACE

Here is an example of the output of this command.

Sample list of services:

$ kubectl get services -n wccns
NAME                               TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)          AGE
oracle-db                          LoadBalancer   10.96.4.194     141.148.xxx.xxx   1521:30011/TCP   15d
wccinfra-adminserver               ClusterIP      None            <none>            7001/TCP         43h
wccinfra-capture-server1           ClusterIP      None            <none>            16400/TCP        43h
wccinfra-capture-server2           ClusterIP      None            <none>            16400/TCP        43h
wccinfra-capture-server3           ClusterIP      None            <none>            16400/TCP        43h
wccinfra-capture-server4           ClusterIP      10.96.162.97    <none>            16400/TCP        43h
wccinfra-capture-server5           ClusterIP      10.96.86.213    <none>            16400/TCP        43h
wccinfra-cluster-capture-cluster   ClusterIP      10.96.107.96    <none>            16400/TCP        2d13h
wccinfra-cluster-ibr-cluster       ClusterIP      10.96.123.229   <none>            16250/TCP        2d13h
wccinfra-cluster-ipm-cluster       ClusterIP      10.96.130.117   <none>            16000/TCP        2d13h
wccinfra-cluster-ucm-cluster       ClusterIP      10.96.24.88     <none>            16200/TCP        119s
wccinfra-cluster-wccadf-cluster    ClusterIP      10.96.11.113    <none>            16225/TCP        2d13h
wccinfra-ibr-server1               ClusterIP      None            <none>            16250/TCP        43h
wccinfra-ibr-server2               ClusterIP      10.96.57.47     <none>            16250/TCP        43h
wccinfra-ibr-server3               ClusterIP      10.96.75.252    <none>            16250/TCP        43h
wccinfra-ibr-server4               ClusterIP      10.96.120.224   <none>            16250/TCP        43h
wccinfra-ibr-server5               ClusterIP      10.96.34.58     <none>            16250/TCP        43h
wccinfra-ipm-server1               ClusterIP      None            <none>            16000/TCP        43h
wccinfra-ipm-server2               ClusterIP      None            <none>            16000/TCP        43h
wccinfra-ipm-server3               ClusterIP      None            <none>            16000/TCP        43h
wccinfra-ipm-server4               ClusterIP      10.96.44.8      <none>            16000/TCP        43h
wccinfra-ipm-server5               ClusterIP      10.96.77.81     <none>            16000/TCP        43h
wccinfra-ucm-server1               ClusterIP      None            <none>            16200/TCP        43h
wccinfra-ucm-server2               ClusterIP      None            <none>            16200/TCP        43h
wccinfra-ucm-server3               ClusterIP      None            <none>            16200/TCP        43h
wccinfra-ucm-server4               ClusterIP      10.96.132.1     <none>            16200/TCP        43h
wccinfra-ucm-server5               ClusterIP      10.96.199.161   <none>            16200/TCP        43h
wccinfra-wccadf-server1            ClusterIP      None            <none>            16225/TCP        43h
wccinfra-wccadf-server2            ClusterIP      None            <none>            16225/TCP        43h
wccinfra-wccadf-server3            ClusterIP      None            <none>            16225/TCP        43h
wccinfra-wccadf-server4            ClusterIP      10.96.156.42    <none>            16225/TCP        43h
wccinfra-wccadf-server5            ClusterIP      10.96.194.175   <none>            16225/TCP        43h

Expose service for IBR intradoc port

  1. Get the IP address of the node hosting the ibr Managed Server pod. In this sample, the node running the wccinfra-ibr-server1 pod has the IP ‘10.0.10.xx’.

    $ kubectl get pods -n wccns -o wide
    
    #output
    NAME                                                READY   STATUS      RESTARTS   AGE     IP             NODE          NOMINATED NODE   READINESS GATES
    wccinfra-adminserver                                1/1     Running     0          4h50m   10.244.0.150   10.0.10.xxx   <none>           <none>
    wccinfra-create-fmw-infra-sample-domain-job-zbsxr   0/1     Completed   0          7d22h   10.244.1.25    10.0.10.xx    <none>           <none>
    wccinfra-ibr-server1                                1/1     Running     0          4h48m   10.244.1.38    10.0.10.xx   <none>           <none>
    wccinfra-ucm-server1                                1/1     Running     0          4h48m   10.244.1.39    10.0.10.xx    <none>           <none>
    wccinfra-ucm-server2                                1/1     Running     0          4h46m   10.244.0.151   10.0.10.xxx   <none>           <none>
    wccinfra-ucm-server3                                1/1     Running     0          4h44m   10.244.1.40    10.0.10.xx    <none>           <none>
  2. Expose the IBR intradoc port as a NodePort.

     > Note: Choose the NodePort value from the allowed range (default: 30000-32767). In this sample, we have chosen the nodePort value 30555.

    $ cd ${WORKDIR}/create-wcc-domain/domain-home-on-pv/
    
    $ kubectl expose service/wccinfra-cluster-ibr-cluster --name wccinfra-cluster-ibr-cluster-ext --port=5555 --type=NodePort -n wccns --dry-run=client -o yaml > wccinfra-cluster-ibr-cluster-ext.yaml
    
    $ sed -i -e '/targetPort:*/a\ \ \ \ nodePort: 30555' wccinfra-cluster-ibr-cluster-ext.yaml
    
    $ kubectl -n wccns apply -f wccinfra-cluster-ibr-cluster-ext.yaml
  3. Verify that the service ‘wccinfra-cluster-ibr-cluster-ext’ has been created:

    $ kubectl get svc -n wccns
    NAME                               TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)
    wccinfra-cluster-ibr-cluster-ext   NodePort   10.109.247.52   <none>        5555:30555/TCP
  4. Create the outgoing provider by providing the following details, and restart the servers.

     Provide the NodePort value (30555 in the above sample) as the Server Port.

     Server Host Name: <your-ibr-managed-server-node-ip>
     
     Server Port: 30555
     
     Provider Name: oke-wcc-provider-ucm-ibr
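Before configuring the provider, you can optionally confirm that the intradoc NodePort is reachable from outside the cluster. A minimal check, assuming the node IP and the sample NodePort 30555 from above:

# Test TCP connectivity to the exposed IBR intradoc port
$ nc -vz <your-ibr-managed-server-node-ip> 30555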

Expose service for UCM intradoc port

  1. Get the IP address of the node hosting the ucm Managed Server pod. In this sample, the node running the wccinfra-ucm-server1 pod has the IP ‘10.0.10.xx’.

    $ kubectl get pods -n wccns -o wide
    
    #output
    NAME                                                READY   STATUS      RESTARTS   AGE     IP             NODE          NOMINATED NODE   READINESS GATES
    wccinfra-adminserver                                1/1     Running     0          4h50m   10.244.0.150   10.0.10.xxx   <none>           <none>
    wccinfra-create-fmw-infra-sample-domain-job-zbsxr   0/1     Completed   0          7d22h   10.244.1.25    10.0.10.xx    <none>           <none>
    wccinfra-ibr-server1                                1/1     Running     0          4h48m   10.244.1.38    10.0.10.xx   <none>           <none>
    wccinfra-ucm-server1                                1/1     Running     0          4h48m   10.244.1.39    10.0.10.xx    <none>           <none>
    wccinfra-ucm-server2                                1/1     Running     0          4h46m   10.244.0.151   10.0.10.xxx   <none>           <none>
    wccinfra-ucm-server3                                1/1     Running     0          4h44m   10.244.1.40    10.0.10.xx    <none>           <none>
  2. Expose the UCM intradoc port as a NodePort.

     > Note: Choose the NodePort value from the allowed range (default: 30000-32767). In this sample, we have chosen the nodePort value 30444.

    $ cd ${WORKDIR}/create-wcc-domain/domain-home-on-pv/
    
    $ kubectl expose service/wccinfra-cluster-ucm-cluster --name wccinfra-cluster-ucm-cluster-ext --port=4444 --type=NodePort -n wccns --dry-run=client -o yaml > wccinfra-cluster-ucm-cluster-ext.yaml
    
    $ sed -i -e '/targetPort:*/a\ \ \ \ nodePort: 30444' wccinfra-cluster-ucm-cluster-ext.yaml
    
    $ kubectl -n wccns apply -f wccinfra-cluster-ucm-cluster-ext.yaml
  3. Verify that the service ‘wccinfra-cluster-ucm-cluster-ext’ has been created:

    $ kubectl get svc -n wccns
    NAME                               TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)
    wccinfra-cluster-ucm-cluster-ext   NodePort   10.109.247.52   <none>        4444:30444/TCP
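As with IBR, you can optionally verify that the exposed UCM intradoc port is reachable before pointing external clients at it (for example, a RIDC connection URL of the form idc://<node-ip>:30444). A minimal check, assuming the sample NodePort 30444 from above:

# Test TCP connectivity to the exposed UCM intradoc port
$ nc -vz <your-ucm-managed-server-node-ip> 30444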

Configuring Oracle WebCenter Content for Oracle Identity Cloud Service (IDCS)

Contents

Introduction

This section describes how to configure Oracle WebCenter Content for Oracle Identity Cloud Service (IDCS) on OKE. Configuration information is provided in the following sections:

Updating SSL.hostnameVerifier Property

To update the SSL.hostnameVerifier property, which is necessary for the IDCS provider to access IDCS, do the following:

  1. Stop all the servers in the domain, including the Administration Server and all Managed WebLogic Servers.

  2. Update the SSL.hostnameVerifier property:

    Edit the file <DOMAIN_HOME>/<domain_name>/bin/setDomainEnv.sh: go to the file system at the PV location and modify setDomainEnv.sh (sample: /WCCFS/wccinfra/bin/setDomainEnv.sh).

    OR

    Alternatively, create or modify the file <DOMAIN_HOME>/<domain_name>/bin/setUserOverrides.sh (sample: /WCCFS/wccinfra/bin/setUserOverrides.sh) and add the SSL.hostnameVerifier property for the IDCS Authenticator:

     EXTRA_JAVA_PROPERTIES="${EXTRA_JAVA_PROPERTIES} -Dweblogic.security.SSL.hostnameVerifier=weblogic.security.utils.SSLWLSWildcardHostnameVerifier"
    
     export EXTRA_JAVA_PROPERTIES
  3. Start the Administration server and all Managed WebLogic servers.

Configuring IDCS Security Provider

  1. Log in to the IDCS administration console.

  2. Create a trusted application. In the Add Confidential Application wizard:

    1. Enter the client name and the description (optional).
    2. Select the Configure this application as a client now option. To configure this application, expand the Client Configuration in the Configuration tab.
    3. In the Allowed Grant Types field, select the Client Credentials check box.
    4. In the Grant the client access to Identity Cloud Service Admin APIs section, click Add to add the APP Roles (application roles). You can add the Identity Domain Administrator role.
    5. Keep the default settings for the pages and click Finish.
    6. Record or copy the Client ID and Client Secret. These are needed when you create the IDCS provider.
    7. Activate the application.

Configuring Oracle Identity Cloud Integrator Provider

To configure Identity Cloud Integrator Provider:

  1. Log in to the WebLogic Server Administration console.
  2. Select Security Realm in the Domain Structure pane.
  3. On the Summary of Security Realms page, select the name of the realm (for example, myrealm). The Settings for myrealm page appears.
  4. On the Settings for Realm Name page, select Providers and then Authentication. To create a new Authentication Provider, in the Authentication Providers table, click New.
  5. In the Create a New Authentication Provider page, enter the name of the authentication provider, for example, IDCSIntegrator and select the OracleIdentityCloudIntegrator type of authentication provider from the drop-down list and click OK.
  6. In the Authentication Providers table, click the newly created Oracle Identity Cloud Integrator, IDCSIntegrator link.
  7. In the Settings for IDCSIntegrator page, for the Control Flag field, select the Sufficient option from the drop-down list and click Save.
  8. Go to the Provider Specific page to configure the additional attributes for the security provider. Enter the values for the following fields and click Save:
    • Host
    • Port (default: 443)
    • Select SSLEnabled
    • Tenant
    • Client Id
    • Client Secret
      > NOTE: If the IDCS URL is idcs-abcde.identity.example.com, then the IDCS host would be identity.example.com and the tenant name would be idcs-abcde. Keep the default settings for other sections of the page.
  9. Select Security Realm, then myrealm, and then Providers. In the Authentication Providers table, click Reorder.
  10. In the Reorder Authentication Providers page, move IDCSIntegrator to the top and click OK.
  11. In the Authentication Providers table, click the DefaultAuthenticator link. In the Settings for DefaultAuthenticator page, for the Control Flag field, select the Sufficient option from the drop-down list. Click Save.
  12. Activate all the changes and restart the Administration Server.
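The same provider can also be created from WLST instead of the console. The following is a minimal sketch of steps 4 through 7 and step 11 only; it assumes the provider class name weblogic.security.providers.authentication.OracleIdentityCloudIntegrator and the domain credentials used elsewhere in this guide, and the Provider Specific attributes from step 8 are still set in the console as described above:

# Run from wlst.sh inside the Administration Server pod
connect('weblogic','welcome1','t3://wccinfra-adminserver:7001')
edit()
startEdit()
realm = cmo.getSecurityConfiguration().getDefaultRealm()

# Steps 4-7: create the IDCS provider and mark it SUFFICIENT
idcs = realm.createAuthenticationProvider('IDCSIntegrator',
    'weblogic.security.providers.authentication.OracleIdentityCloudIntegrator')
idcs.setControlFlag('SUFFICIENT')

# Step 11: relax the DefaultAuthenticator as well
realm.lookupAuthenticationProvider('DefaultAuthenticator').setControlFlag('SUFFICIENT')

activate()
disconnect()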

Setting Up Trust between IDCS and WebLogic

To set up trust between IDCS and WebLogic:

  1. Import the certificate into the KSS trust store (see the WLST sketch after these steps).

    Run the following from the Administration Server node to get the IDCS certificate:

    $ echo -n | openssl s_client -showcerts -servername <idcs-host> -connect <idcs-url>:443 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /tmp/idcs_cert_chain.crt
    
    #sample
    $ echo -n | openssl s_client -showcerts -servername xyz.identity.oraclecloud.com -connect idcs-xyz.identity.oraclecloud.com:443 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /tmp/idcs_cert_chain.crt
    
    #copy the certificate inside the admin_pod
    $ kubectl cp /tmp/idcs_cert_chain.crt wccns/xyz-adminserver:/u01/idcs_cert_chain.crt
  2. Restart the Administration server and Managed servers.
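The import into the KSS trust store itself can be done with the OPSS WLST KeyStoreService. A minimal sketch, assuming the default system/trust KSS store, the wccinfra domain credentials used in this guide, and an illustrative alias idcs_cert:

#exec the Administration server pod and start WLST
kubectl exec -n wccns -it wccinfra-adminserver -- /bin/bash
cd /u01/oracle/oracle_common/common/bin/
./wlst.sh

#connect and import the certificate copied into the pod above
connect('weblogic','welcome1','t3://wccinfra-adminserver:7001')
svc = getOpssService(name='KeyStoreService')
svc.importKeyStoreCertificate(appStripe='system', name='trust', password='',
    alias='idcs_cert', keypassword='', type='TrustedCertificate',
    filepath='/u01/idcs_cert_chain.crt')
disconnect()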

Creating Admin User in IDCS Administration Console for WebCenter Content

It is important to create the Admin user in IDCS because once the Managed servers are configured for SAML, the domain admin user (typically weblogic user) will not be able to log into the Managed servers.

To create WebLogic Admin user in IDCS for WebCenter Content JaxWS connection:

  1. Go to the Groups tab and create Administrators and sysmanager roles in IDCS.
  2. Go to the Users tab and create a WebLogic admin user, for example, weblogic, and assign it to the Administrators and sysmanager groups.
  3. Restart all the Managed servers.

Managing Group Memberships, Roles, and Accounts

This requires modifying OPSS and libOVD to access IDCS. The following steps are required only if you are using IDCS for user authorization; do not run them if you are using IDCS only for user authentication. Ensure that all the servers (including the Administration Server) are stopped before proceeding. > NOTE: Shut down all the servers using the WebLogic Server Administration Console. Keep in mind that the kubectl patch domain command is the recommended way of starting and stopping pods; refrain from using the WebLogic Server Administration Console for this anywhere else.

  1. Run the following script:

    #exec the Administration server
    kubectl exec -n wccns -it wccinfra-adminserver -- /bin/bash
    
    #Run the wlst.sh
    cd /u01/oracle/oracle_common/common/bin/
    ./wlst.sh

    NOTE: It’s not required to connect to WebLogic Administration Server.

  2. Read the domain:

    readDomain('<DOMAIN_HOME>')
    
    #sample
    wls:/offline> readDomain('/u01/oracle/user_projects/domains/wccinfra')
  3. Add the template:

    addTemplate('<MIDDLEWARE_HOME>/oracle_common/common/templates/wls/oracle.opss_scim_template.jar')
    
    #sample
    wls:/offline/wccinfra>addTemplate('/u01/oracle/oracle_common/common/templates/wls/oracle.opss_scim_template.jar')

    NOTE: This step may throw a deprecation warning, which can be ignored: addTemplate is deprecated; use selectTemplate followed by loadTemplates in place of addTemplate.

  4. Update the domain:

    updateDomain()
    
    #sample
    wls:/offline/wccinfra> updateDomain()
  5. Close the domain:

    closeDomain()
    
    #sample
    wls:/offline/wccinfra> closeDomain()
  6. Exit from the Administration server container:

    exit
  7. Start the servers (Administration and Managed).
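Since kubectl patch domain is the recommended way to stop and start the server pods (see the note above), here is a minimal sketch of the stop/start around these steps, assuming the wccinfra domain and the serverStartPolicy values used by WebLogic Kubernetes Operator 4.x:

# Stop all servers in the domain
$ kubectl patch domain wccinfra -n wccns --type merge \
    -p '{"spec":{"serverStartPolicy":"Never"}}'

# Start the servers again (step 7)
$ kubectl patch domain wccinfra -n wccns --type merge \
    -p '{"spec":{"serverStartPolicy":"IfNeeded"}}'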

Configuring WebCenter Content for User Logout

If the Logout link is selected, you will be re-authenticated by SAML. To be able to use the Logout link:

  1. Log in to WebCenter Content Server as an administrator. Select Administration, then Admin Server, and then General Configuration.

  2. In the Additional Configuration Variables pane, add the following parameter:

    EXTRA_JAVA_PROPERTIES="${EXTRA_JAVA_PROPERTIES} -Dweblogic.security.SSL.hostnameVerifier=weblogic.security.utils.SSLWLSWildcardHostnameVerifier"
  3. Click Save.

  4. Restart the Administration and Managed servers.

Configure an additional mount or shared space to a domain for Imaging and Capture

A volume can be mounted to a server pod and made directly accessible from outside the Kubernetes cluster, so that an external application can write new files to it.

This can be used specifically in WebCenter Imaging and WebCenter Capture applications for File Imports.

Kubernetes supports several types of volumes, as described in Volumes | Kubernetes.

The rest of this section uses an nfs volume as an example.

Mount “nfs” as volume

Create an NFS file system as described in the section Preparing a file system, or use an already existing NFS server.

To use a volume, specify the volumes to provide for the Pod in .spec.volumes and declare where to mount those volumes into containers in .spec.containers[*].volumeMounts in the domain.yaml file.

Update domain.yaml and apply the changes, as shown in the sample below, to mount the NFS server (for example, 100.XXX.XXX.X with shared export path /sharedir) on all the server pods at /u01/sharedir.

The path /u01/sharedir can be configured as the file import path in the WebCenter Imaging and WebCenter Capture applications, and files placed in /sharedir will be processed by the applications.

Sample entry of domain.yaml with nfs-volume configuration

...
serverPod:
    # an (optional) list of environment variable to be set on the servers
    env:
    - name: JAVA_OPTIONS
      value: "-Dweblogic.StdoutDebugEnabled=false"
    - name: USER_MEM_ARGS
      value: "-Djava.security.egd=file:/dev/./urandom -Xms256m -Xmx1024m "
    volumes:
    - name: weblogic-domain-storage-volume
      persistentVolumeClaim:
        claimName: wccinfra-domain-pvc
    - name: nfs-volume
      nfs:
        server: 100.XXX.XXX.XXX
        path: /sharedir
    volumeMounts:
    - mountPath: /u01/oracle/user_projects/domains
      name: weblogic-domain-storage-volume
    - mountPath: /u01/sharedir
      name: nfs-volume
...
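After editing domain.yaml, apply it so that the operator restarts the server pods with the new volume configuration. A minimal sketch, assuming the file layout and names used earlier in this guide:

$ cd ${WORKDIR}/create-wcc-domain/domain-home-on-pv
$ kubectl apply -f output/weblogic-domains/wccinfra/domain.yaml

# Watch the pods restart with the new mount
$ kubectl get pods -n wccns -w

# Verify the mount from inside a server pod
$ kubectl exec -n wccns -it wccinfra-ucm-server1 -- df -h /u01/sharedir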

Launch Oracle WebCenter Content Native Applications in Containers deployed in Oracle Cloud Infrastructure

This section provides the steps required to use Oracle WebCenter Content native binaries with user interfaces, from containerized Managed Servers deployed in OCI.

Issue with Launching Headful User Interfaces for Oracle WebCenter Content Native Binaries

Oracle WebCenter Content (UCM) provides a set of native binaries with headful UIs, which are delivered as part of the product container image. WebCenter Content container images are, by default, created with the Oracle slim Linux image, which does not come with all the packages pre-installed to support launching headful applications with UIs. UCM provides many such native binaries which use Java AWT for UI support. With the current Oracle WebCenter Content container images, these native applications fail to run because they are unable to launch their UIs.

The following sections document the solution by providing a set of instructions that enable users to run UCM native applications with UIs.

These instructions are divided into two parts:

  1. Steps to update the existing container image
  2. Steps to launch native apps using VNC sessions

Steps to Update out-of-the-box Oracle WebCenter Content Container Image Using WebLogic Image Tool

This section describes how to update the image with the required OS packages using the WebLogic Image Tool. Please refer to this document for setting up the WebLogic Image Tool.

Additional Build Commands

The required OS packages can be installed in the image by using the yum command in the additional build commands option available in the WebLogic Image Tool. Here is a sample additionalBuildCmds.txt file to be used to install the required Linux packages (libXext.x86_64, libXrender.x86_64 and libXtst.x86_64).

[final-build-commands]
USER root
RUN yum -y --downloaddir=/tmp/imagetool install libXext libXrender libXtst  \
        && yum -y --downloaddir=/tmp/imagetool clean all \
    && rm -rf /var/cache/yum/* \
    && rm -rf /tmp/imagetool
USER oracle

Note: It is important to change the user to oracle; otherwise, the user during container execution will be root.

Build Arguments

The arguments required for updating the image can be passed as file to the WebLogic Image Tool.

'update' is the sub-command to the Image Tool for updating an existing docker image.
'--fromImage' provides the existing docker image that has to be updated.
'--tag' should be provided with the new tag for the updated image.
'--additionalBuildCommands' should be provided with the additional build commands file created above.
'--chown oracle:root' should be provided to update file permissions.

Below is a sample build-argument (buildArgs) file to be used for updating the image:

  update
  --fromImage <existing_WCContent_image_without_dependent_packages>
  --tag <name_of_updated_WCContent_image_to_be_built>
  --additionalBuildCommands ./additionalBuildCmds.txt
  --chown oracle:root 

Update Oracle WebCenter Content Container Image

Now we can run the WebLogic Image Tool to update the out-of-the-box image, using the build-argument file described above:

$ imagetool @buildArgs

WebLogic Image Tool provides multiple options for updating the image. For detailed information on the update options, please refer to this document.

Updating the image does not modify the ‘CMD’ entry from the source image unless it is modified in the additional build commands. You can verify it as follows:

$ docker inspect -f '{{.Config.Cmd}}' <name_of_updated_Wccontent_image>
[/u01/oracle/container-scripts/createDomainandStartAdmin.sh]

Steps to launch Oracle WebCenter Content native applications using VNC sessions

Once the updated image is successfully built and available on all the required nodes, do the following:

  1. Update the domain.yaml file with the updated image name and apply it:

    $ kubectl apply -f domain.yaml
  2. After applying the modified domain.yaml, the pods will restart, running the updated image with the required packages:

    $ kubectl get pods -n <namespace_being_used_for_wccontent_domain>
  3. Install a VNC server on any one worker node on which a UCM server pod is deployed.

  4. After starting the vncserver service (via systemctl) on the worker node, execute the following command from the Bastion Host to the private subnet instance (worker node):

    # The default VNC port is 5900, but that number is incremented according to the configured display number. Thus, display 1 corresponds to 5901, display 2 to 5902, and so on.
    $ ssh -i <Workernode_private.key> -L 590<display_number>:localhost:590<display_number> -p 22 -N -f <user>@<Workernode_privateIPAddress>
    
    # Sample command
    $ ssh -i <Workernode_private.key> -L 5901:localhost:5901 -p 22 -N -f opc@10.0.10.xx
  5. From your personal client, with the above session open, execute the following command:

    # Use any Linux emulator (like Windows PowerShell on Windows) to run the following command
    $ ssh -i <Bastionnode_private.key> -L 590<display_number>:localhost:590<display_number> -p 22 -N -f <user>@<BastionHost_publicIPAddress>
    
    # Sample command
    $ ssh -i <Bastionnode_private.key> -L 5901:localhost:5901 -p 22 -N -f opc@129.xxx.249.xxx
  6. Open VNC client software on your personal client and connect to the worker node VNC server using localhost:590<display_number>.

  7. Once the VNC session to the worker node is connected, open a terminal and run:

    $ xhost +
  8. Run the following commands from the Bastion Host terminal:

    # Get into the pod's (for example, wccinfra-ucm-server1) shell:
    $ kubectl exec -n wccns -it wccinfra-ucm-server1 -- /bin/bash
    
    # Traverse to the native binaries' location
    $ cd /u01/oracle/user_projects/domains/wccinfra/ucm/cs/bin
    
    # Set the DISPLAY variable within the container
    $ export DISPLAY=<Workernode_privateIPAddress, where VNC session was created>:<display_number>
    # Sample command
    $ export DISPLAY=10.0.10.xx:1
    
    # Launch any native UCM application, from within the container, like this:
    $ ./SystemProperties
  9. If the application has a UI, it will now be launched in the VNC session connected from your personal client.

Appendix

This section provides information on miscellaneous tasks related to Oracle WebCenter Content domains deployment on Kubernetes.

Domain resource sizing

Describes the resource sizing information for Oracle WebCenter Content domains set up on a Kubernetes cluster.

Oracle WebCenter Content cluster sizing recommendations

| Oracle WebCenter Content | Normal Usage | Moderate Usage | High Usage |
|---|---|---|---|
| Administration Server | No of CPU core(s): 1, Memory: 4GB | No of CPU core(s): 1, Memory: 4GB | No of CPU core(s): 1, Memory: 4GB |
| Managed Server | No of Servers: 2, No of CPU core(s): 2, Memory: 16GB | No of Servers: 2, No of CPU core(s): 4, Memory: 16GB | No of Servers: 3, No of CPU core(s): 6, Memory: 16-32GB |
| PV Storage | Minimum 250GB | Minimum 250GB | Minimum 500GB |

Security hardening

Review resources for the Docker and Kubernetes cluster hardening.

Securing a Kubernetes cluster involves hardening on multiple fronts: securing the API server, etcd, nodes, container images, container runtime, and the cluster network. Apply the principles of defense in depth and least privilege, and minimize the attack surface. Use security tools such as Kube-Bench to verify the cluster’s security posture. Since Kubernetes is evolving rapidly, refer to the Kubernetes Security Overview for the latest information on securing a Kubernetes cluster. Also ensure the deployed Docker containers follow the Docker Security guidance.

This section provides references on how to securely configure Docker and Kubernetes.

References

  1. Docker hardening
  2. Kubernetes hardening
  3. Security best practices for Oracle WebLogic Server Running in Docker and Kubernetes

Quick start deployment on-premise

Use this Quick Start to create an Oracle WebCenter Content domain deployment in a Kubernetes cluster (on-premise environments) with WebLogic Kubernetes Operator. Note that this walkthrough is for demonstration purposes only, not for use in production. These instructions assume that you are already familiar with Kubernetes. If you need more detailed instructions, refer to the Install Guide.

Hardware requirements

The supported Linux kernels for deploying and running Oracle WebCenter Content domains with the WebLogic Kubernetes Operator are Oracle Linux 8 and Red Hat Enterprise Linux 8. Refer to the prerequisites for more details.

For this exercise, the following is the minimum hardware required to create a single-node Kubernetes cluster and deploy an Oracle WebCenter Content domain with one UCM and one IBR cluster.

| Hardware | Size |
|---|---|
| RAM | 32GB |
| Disk Space | 250GB+ |
| CPU core(s) | 6 |

See here for resource sizing information for an Oracle WebCenter Content domain setup on a Kubernetes cluster.

Set up Oracle WebCenter Content in an on-premise environment

Perform the steps in this topic to create a single instance on-premise Kubernetes cluster and create an Oracle WebCenter Content domain which deploys Oracle WebCenter Content Server and Oracle WebCenter Inbound Refinery Server.

1. Prepare a virtual machine for the Kubernetes cluster

For illustration purposes, these instructions are for Oracle Linux 8. If you are using a different flavor of Linux, you will need to adjust the steps accordingly.

Note: These steps must be run with the root user, unless specified otherwise. Any time you see YOUR_USERID in a command, you should replace it with your actual userid.

1.1 Prerequisites

  1. Choose the directories where your Docker and Kubernetes files will be stored. The Docker directory should be on a disk with a lot of free space (more than 100GB) because it will be used for the Docker file system, which contains all of your images and containers. The Kubernetes directory is used for the /var/lib/kubelet file system and persistent volume storage.

    $ export docker_dir=/u01/docker
    $ export kubelet_dir=/u01/kubelet
    $ mkdir -p $docker_dir $kubelet_dir
    $ ln -s $kubelet_dir /var/lib/kubelet
  2. Verify that IPv4 forwarding is enabled on your host.

    Note: Replace eth0 with the ethernet interface name of your compute resource if it is different.

    $ /sbin/sysctl -a 2>&1|grep -s 'net.ipv4.conf.docker0.forwarding'
    $ /sbin/sysctl -a 2>&1|grep -s 'net.ipv4.conf.eth0.forwarding'
    $ /sbin/sysctl -a 2>&1|grep -s 'net.ipv4.conf.lo.forwarding'
    $ /sbin/sysctl -a 2>&1|grep -s 'net.ipv4.ip_nonlocal_bind'

    For example, verify that all are set to 1:

    net.ipv4.conf.docker0.forwarding = 1
    net.ipv4.conf.eth0.forwarding = 1
    net.ipv4.conf.lo.forwarding = 1
    net.ipv4.ip_nonlocal_bind = 1

    Solution: Set all values to 1 immediately with the following commands:

    $ /sbin/sysctl net.ipv4.conf.docker0.forwarding=1
    $ /sbin/sysctl net.ipv4.conf.eth0.forwarding=1
    $ /sbin/sysctl net.ipv4.conf.lo.forwarding=1
    $ /sbin/sysctl net.ipv4.ip_nonlocal_bind=1

    To preserve the settings after a reboot, set the above values to 1 in files under /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/ (see the sketch after this list).

  3. Verify the iptables rule for forwarding.

    Kubernetes uses iptables to handle many networking and port forwarding rules. A standard Docker installation may create a firewall rule that prevents forwarding.

    Verify if the iptables rule to accept forwarding traffic is set:

    $ /sbin/iptables -L -n | awk '/Chain FORWARD / {print $4}' | tr -d ")"

    If the output is “DROP”, then run the following command:

    $ /sbin/iptables -P FORWARD ACCEPT

    Verify if the iptables rule is set properly to “ACCEPT”:

    $ /sbin/iptables -L -n | awk '/Chain FORWARD / {print $4}' | tr -d ")"
  4. Disable and stop firewalld:

    $ systemctl disable firewalld
    $ systemctl stop firewalld
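For step 2 above, the forwarding settings can be persisted with a sysctl drop-in file. A minimal sketch, assuming the docker0 and eth0 interface names from step 2 and an illustrative file name:

$ cat <<EOF > /etc/sysctl.d/99-k8s-forwarding.conf
net.ipv4.conf.docker0.forwarding = 1
net.ipv4.conf.eth0.forwarding = 1
net.ipv4.conf.lo.forwarding = 1
net.ipv4.ip_nonlocal_bind = 1
EOF

$ sysctl --system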

1.2 Install CRI-O and Podman

Note: If you have already configured CRI-O and Podman, continue to Install and configure Kubernetes.

  1. Make sure that you have the right operating system version:

    $ uname -a
    $ more /etc/oracle-release

    For example:

    Linux xxxxxx 5.15.0-100.96.32.el8uek.x86_64 #2 SMP Tue Feb 27 18:08:15 PDT 2024 x86_64 x86_64 x86_64 GNU/Linux
    Oracle Linux Server release 8.6
  2. Installing CRI-O:

    ### Add OLCNE( Oracle Cloud Native Environment ) Repository to dnf config-manager. This allows dnf to install the additional packages required for CRI-O installation.
    $ dnf config-manager --add-repo https://yum.oracle.com/repo/OracleLinux/OL8/olcne18/x86_64
    
    ### Installing cri-o
    $ dnf install -y cri-o

    Note: To install a different version of CRI-O or install it on a different operating system, see CRI-O Installation Instructions.

  3. Start the CRI-O service:

    Set up Kernel Modules and Proxies

    ### Enable kernel modules overlay and br_netfilter which are required for Kubernetes Container Network Interface (CNI) plugins
    $ modprobe overlay
    $ modprobe br_netfilter
    
    ### To automatically load these modules at system start up create config as below
    $ cat <<EOF > /etc/modules-load.d/crio.conf
    overlay
    br_netfilter
    EOF
    $ sysctl --system
    
    ### Set the environmental variable CONTAINER_RUNTIME_ENDPOINT to crio.sock to use crio as the container runtime
    $ export CONTAINER_RUNTIME_ENDPOINT=unix:///var/run/crio/crio.sock
    
    ### Setup Proxy for CRIO service
    $ cat <<EOF > /etc/sysconfig/crio
    http_proxy=http://REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT
    https_proxy=http://REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT
    HTTPS_PROXY=http://REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT
    HTTP_PROXY=http://REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT
    no_proxy=localhost,127.0.0.0/8,ADD-YOUR-INTERNAL-NO-PROXY-LIST,/var/run/crio/crio.sock
    NO_PROXY=localhost,127.0.0.0/8,ADD-YOUR-INTERNAL-NO-PROXY-LIST,/var/run/crio/crio.sock
    EOF

    Set the runtime for CRI-O

    ### Setting the runtime for crio
    ## Update crio.conf
    $ vi /etc/crio/crio.conf
    ## Append following under [crio.runtime]
    conmon_cgroup = "kubepods.slice"
    cgroup_manager = "systemd"
    ## Uncomment following under [crio.network]
    network_dir="/etc/cni/net.d"
    plugin_dirs=[
        "/opt/cni/bin",
        "/usr/libexec/cni",
    ]

    Start the CRI-O Service

    ## Restart crio service
    $ systemctl restart crio.service
    $ systemctl enable --now crio
  4. Installing Podman:

    On Oracle Linux 8, if podman is not available, install Podman and related tools with the following command:

    $ sudo dnf module install container-tools:ol8

    On Oracle Linux 9, if podman is not available, install Podman and related tools with the following command:

    $ sudo dnf install container-tools

    Since the setup uses “docker” CLI commands, on Oracle Linux 8/9 install the podman-docker package (if not already available), which effectively aliases the docker command to podman:

    $ sudo dnf install podman-docker
  5. Configure Podman rootless:

    To use podman with your user ID (rootless environment), Podman requires the user running it to have a range of UIDs and GIDs listed in the files /etc/subuid and /etc/subgid. Rather than updating the files directly, the usermod program can be used to assign UIDs and GIDs to a user with the following commands:

    $ sudo /sbin/usermod --add-subuids 100000-165535 --add-subgids 100000-165535 <REPLACE_USER_ID>
    $ podman system migrate

    Note: The above podman system migrate command needs to be executed with your user ID, not root.

    Verify the user-ID addition:

    $ cat /etc/subuid
    $ cat /etc/subgid

    The expected output is similar to:

    opc:100000:65536
    <user-id>:100000:65536

1.3 Install and configure Kubernetes

  1. Add the external Kubernetes repository:

    $ cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/
    enabled=1
    gpgcheck=1
    gpgkey=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/repodata/repomd.xml.key
    exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
    EOF
  2. Set SELinux in permissive mode (effectively disabling it):

    $ export PATH=/sbin:$PATH
    $ setenforce 0
    $ sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
  3. Export proxy and install kubeadm, kubelet, and kubectl:

    ### Get the nslookup IP address of the master node to use with apiserver-advertise-address during setting up Kubernetes master
    ### as the host may have different internal ip (hostname -i) and nslookup $HOSTNAME
    $ ip_addr=`nslookup $(hostname -f) | grep -m2 Address | tail -n1| awk -F: '{print $2}'| tr -d " "`
    $ echo $ip_addr
    
    ### Set the proxies
    $ export NO_PROXY=localhost,127.0.0.0/8,ADD-YOUR-INTERNAL-NO-PROXY-LIST,/var/run/docker.sock,$ip_addr
    $ export no_proxy=localhost,127.0.0.0/8,ADD-YOUR-INTERNAL-NO-PROXY-LIST,/var/run/docker.sock,$ip_addr
    $ export http_proxy=http://REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT
    $ export https_proxy=http://REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT
    $ export HTTPS_PROXY=http://REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT
    $ export HTTP_PROXY=http://REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT
    
    ### install kubernetes 1.26.2-0
    $ VERSION=1.26.2-0
    $ yum install -y kubelet-$VERSION kubeadm-$VERSION kubectl-$VERSION --disableexcludes=kubernetes
    
    ### enable kubelet service so that it auto-restart on reboot
    $ systemctl enable --now kubelet
  4. Ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl to avoid traffic routing issues:

    $ cat <<EOF >  /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    $ sysctl --system
  5. Disable swap check:

    $ sed -i 's/KUBELET_EXTRA_ARGS=/KUBELET_EXTRA_ARGS="--fail-swap-on=false"/' /etc/sysconfig/kubelet
    $ cat /etc/sysconfig/kubelet
    ### Reload and restart kubelet
    $ systemctl daemon-reload
    $ systemctl restart kubelet
  6. Pull the images using crio:

    $ kubeadm config images pull --cri-socket unix:///var/run/crio/crio.sock

1.4 Set up Helm

  1. Install Helm v3.10.x

    1. Download Helm from https://github.com/helm/helm/releases. For example, to download Helm v3.10.3:

      $ wget https://get.helm.sh/helm-v3.10.3-linux-amd64.tar.gz
    2. Unpack tar.gz:

      $ tar -zxvf helm-v3.10.3-linux-amd64.tar.gz
    3. Find the Helm binary in the unpacked directory, and move it to its desired destination:

      $ mv linux-amd64/helm /usr/bin/helm
  2. Run helm version to verify its installation:

    $ helm version
      version.BuildInfo{Version:"v3.10.3", GitCommit:"835b7334cfe2e5e27870ab3ed4135f136eecc704", GitTreeState:"clean", GoVersion:"go1.18.9"}

2. Set up a single instance Kubernetes cluster

Notes:

  * These steps must be run with the root user, unless specified otherwise!
  * If you choose to use a different cidr block (that is, other than 10.244.0.0/16 for the --pod-network-cidr= in the kubeadm init command), then also update NO_PROXY and no_proxy with the appropriate value. Also make sure to update kube-flannel.yaml with the new value before deploying.
  * Replace the following with appropriate values:
    * ADD-YOUR-INTERNAL-NO-PROXY-LIST
    * REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT

2.1 Set up the master node

  1. Create a shell script that sets up the necessary environment variables. You can append this to the user’s .bashrc so that it will run at login. You must also configure your proxy settings here if you are behind an HTTP proxy:

    ## grab my IP address to pass into  kubeadm init, and to add to no_proxy vars
    ip_addr=`nslookup $(hostname -f) | grep -m2 Address | tail -n1| awk -F: '{print $2}'| tr -d " "`
    export pod_network_cidr="10.244.0.0/16"
    export service_cidr="10.96.0.0/12"
    export PATH=$PATH:/sbin:/usr/sbin
    
    ### Set the proxies
    export NO_PROXY=localhost,127.0.0.0/8,ADD-YOUR-INTERNAL-NO-PROXY-LIST,/var/run/docker.sock,$ip_addr,$pod_network_cidr,$service_cidr
    export no_proxy=localhost,127.0.0.0/8,ADD-YOUR-INTERNAL-NO-PROXY-LIST,/var/run/docker.sock,$ip_addr,$pod_network_cidr,$service_cidr
    export http_proxy=http://REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT
    export https_proxy=http://REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT
    export HTTPS_PROXY=http://REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT
    export HTTP_PROXY=http://REPLACE-WITH-YOUR-COMPANY-PROXY-HOST:PORT
  2. Source the script to set up your environment variables:

    $ . ~/.bashrc
  3. To implement command completion, add the following to the script:

    $ [ -f /usr/share/bash-completion/bash_completion ] && . /usr/share/bash-completion/bash_completion
    $ source <(kubectl completion bash)
  4. Run kubeadm init to create the master node:

    $ kubeadm init \
      --pod-network-cidr=$pod_network_cidr \
      --apiserver-advertise-address=$ip_addr \
      --ignore-preflight-errors=Swap  > /tmp/kubeadm-init.out 2>&1
  5. Log in to the terminal with YOUR_USERID:YOUR_GROUP. Then set up the ~/.bashrc similar to steps 1 to 3 with YOUR_USERID:YOUR_GROUP.

    Note that from now on we will be using YOUR_USERID:YOUR_GROUP to execute any kubectl commands and not root.

  6. Set up YOUR_USERID:YOUR_GROUP to access the Kubernetes cluster:

    $ mkdir -p $HOME/.kube
    $ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    $ sudo chown $(id -u):$(id -g) $HOME/.kube/config
  7. Verify that YOUR_USERID:YOUR_GROUP is set up to access the Kubernetes cluster using the kubectl command:

    $ kubectl get nodes

    Note: At this step, the node is not in ready state as we have not yet installed the pod network add-on. After the next step, the node will show status as Ready.

  8. Install a pod network add-on (flannel) so that your pods can communicate with each other.

    Note: If you are using a different cidr block than 10.244.0.0/16, then download and update kube-flannel.yml with the correct cidr address before deploying into the cluster:

    $ wget https://github.com/flannel-io/flannel/releases/download/v0.25.1/kube-flannel.yml
    ### Update the CIDR address if you are using a CIDR block other than the default 10.244.0.0/16
    $ kubectl apply -f kube-flannel.yml
  9. Verify that the master node is in Ready status:

    $ kubectl get nodes

    For example:

    NAME        STATUS   ROLES    AGE     VERSION
    mymasternode Ready    master   8m26s   v1.27.2

    or:

    $ kubectl get pods -n kube-system

    For example:

    NAME                                    READY       STATUS      RESTARTS    AGE
    pod/coredns-86c58d9df4-58p9f                1/1         Running         0       3m59s
    pod/coredns-86c58d9df4-mzrr5                1/1         Running         0       3m59s
    pod/etcd-mymasternode                       1/1         Running         0       3m4s
    pod/kube-apiserver-node                     1/1         Running         0       3m21s
    pod/kube-controller-manager-mymasternode    1/1         Running         0       3m25s
    pod/kube-flannel-ds-amd64-6npx4             1/1         Running         0       49s
    pod/kube-proxy-4vsgm                        1/1         Running         0       3m59s
    pod/kube-scheduler-mymasternode             1/1         Running         0       2m58s
  10. To be able to schedule pods on the master node, remove the master taint from the node:

    $ kubectl taint nodes --all node-role.kubernetes.io/master-

Congratulations! Your Kubernetes cluster environment is ready to deploy your Oracle WebCenter Content domain.

For additional references on Kubernetes cluster setup, check the documentation to set up a Kubernetes cluster.

3. Get scripts and images

3.1 Set up the code repository to deploy Oracle WebCenter Content domains

Follow these steps to set up the source code repository required to deploy Oracle WebCenter Content domains.

3.2 Get dependent images and add them to your local registry

Follow these steps to pull dependent Docker images required to deploy Oracle WebCenter Content domains.

3.3 Get Oracle WebCenter Content Docker image and add it to your local registry

Follow these steps to obtain Oracle WebCenter Content image.

4. Install WebLogic Kubernetes Operator

4.1 Prepare for the WebLogic Kubernetes Operator

  1. Create a namespace opns for the WebLogic Kubernetes Operator:

    $ kubectl create namespace opns
  2. Create a service account op-sa for WebLogic Kubernetes Operator in the operator’s namespace:

    $ kubectl create serviceaccount -n opns op-sa

4.2 Install the WebLogic Kubernetes Operator

Use Helm to install and start WebLogic Kubernetes Operator from the directory you just cloned:

$ cd ${WORKDIR}
$ helm install weblogic-kubernetes-operator charts/weblogic-operator \
--namespace opns \
--set image=oracle/weblogic-kubernetes-operator:4.2.9 \
--set serviceAccount=op-sa \
--set "domainNamespaces={}" \
--wait

4.3 Verify the WebLogic Kubernetes Operator

  1. Verify that the WebLogic Kubernetes Operator’s pod is running by listing the pods in the respective namespace. You should see one for the WebLogic Kubernetes Operator:

    $ kubectl get pods -n opns
  2. Verify that the WebLogic Kubernetes Operator is up and running by viewing the operator-pod’s logs:

    $ kubectl logs -n opns -c weblogic-operator deployments/weblogic-operator

The WebLogic Kubernetes Operator v4.2.9 has been installed. Continue with the load balancer and Oracle WebCenter Content domain setup.

5. Install the Traefik (ingress-based) load balancer

WebLogic Kubernetes Operator supports these load balancers: Traefik, NGINX and Apache. Samples are provided in the documentation.

This Quick Start demonstrates how to install the Traefik ingress controller to provide load balancing for an Oracle WebCenter Content domain.

  1. Create a namespace for Traefik:

    $ kubectl create namespace traefik
  2. Set up Helm for 3rd party services:

    $ helm repo add traefik https://containous.github.io/traefik-helm-chart
  3. Install the Traefik operator in the traefik namespace with the provided sample values:

    $ cd ${WORKDIR}
    $ helm install traefik traefik/traefik \
     --namespace traefik \
     --values charts/traefik/values.yaml \
     --set "kubernetes.namespaces={traefik}" \
     --set "service.type=NodePort" \
     --wait
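You can confirm that Traefik is running and note the NodePort assigned to its web entry point; the quick start URLs in section 6.4 assume it is 30305. A minimal verification sketch:

$ kubectl get pods -n traefik

# The web entry point's NodePort is the port mapped to 80 (for example, 80:30305/TCP)
$ kubectl get svc -n traefik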

6. Create and configure an Oracle WebCenter Content domain

6.1 Prepare for an Oracle WebCenter Content domain

  1. Create a namespace that can host an Oracle WebCenter Content domain:

    $ kubectl create namespace wccns
  2. Use Helm to configure the WebLogic Kubernetes Operator to manage Oracle WebCenter Content domains in this namespace:

    $ cd ${WORKDIR}
    $ helm upgrade weblogic-kubernetes-operator charts/weblogic-operator \
       --reuse-values \
       --namespace opns \
       --set "domainNamespaces={wccns}" \
       --wait
  3. Create Kubernetes secrets.

    1. Create a Kubernetes secret for the domain in the same Kubernetes namespace as the domain. In this example, the username is weblogic, the password is welcome1, and the namespace is wccns:

        $ cd ${WORKDIR}/create-weblogic-domain-credentials
        $ ./create-weblogic-credentials.sh \
           -u weblogic \
           -p welcome1 \
           -n wccns    \
           -d wccinfra \
           -s wccinfra-domain-credentials
    2. Create a Kubernetes secret for the RCU in the same Kubernetes namespace as the domain:

    • Schema user : WCC1
    • Schema password : Oradoc_db1
    • DB sys user password : Oradoc_db1
    • Domain name : wccinfra
    • Domain Namespace : wccns
    • Secret name : wccinfra-rcu-credentials
      $ cd ${WORKDIR}/create-rcu-credentials
      $ ./create-rcu-credentials.sh \
             -u WCC1 \
             -p Oradoc_db1 \
             -a sys \
             -q Oradoc_db1 \
             -d wccinfra \
             -n wccns \
             -s wccinfra-rcu-credentials
  4. Create the Kubernetes persistent volume (PV) and persistent volume claim (PVC).

    1. Create the Oracle WebCenter Content domain home directory. Determine if a user already exists on your host system with uid:gid of 1000:0:
    $ sudo getent passwd 1000

    If this command returns a username (which is the first field), you can skip the following useradd command. If not, create the oracle user with useradd:

    $ sudo useradd -u 1000 -g 0 oracle

    Create the directory that will be used for the Oracle WebCenter Content domain home:

    $ sudo mkdir /scratch/k8s_dir
    $ sudo chown -R 1000:0 /scratch/k8s_dir
    2. Update create-pv-pvc-inputs.yaml with the following values:
    • baseName: domain
    • domainUID: wccinfra
    • namespace: wccns
    • weblogicDomainStoragePath: /scratch/k8s_dir

    Review the file and update it if any other changes are required:

    $ cd ${WORKDIR}/create-weblogic-domain-pv-pvc
    $ vim create-pv-pvc-inputs.yaml  
    3. Run the create-pv-pvc.sh script to create the PV and PVC configuration files:
    $ ./create-pv-pvc.sh -i create-pv-pvc-inputs.yaml -o output
    4. Create the PV and PVC using the configuration files created in the previous step (see the verification sketch after this list):
    $ kubectl create -f  output/pv-pvcs/wccinfra-domain-pv.yaml
    $ kubectl create -f  output/pv-pvcs/wccinfra-domain-pvc.yaml
  5. Configure the database and create schemas for the Oracle WebCenter Content domain.

    Follow configure-database-access step and run-RCU step to set up the database connection and configure product schemas required to deploy Oracle WebCenter Content domain.
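For step 4 above, you can verify that the PV has been bound to the PVC before creating the domain. A minimal sketch, assuming the resource names generated from the sample inputs (wccinfra-domain-pv and wccinfra-domain-pvc):

$ kubectl get pv wccinfra-domain-pv

$ kubectl get pvc wccinfra-domain-pvc -n wccns
# Both should report STATUS as Bound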

Now the environment is ready to start the Oracle WebCenter Content domain creation.

6.2 Create an Oracle WebCenter Content domain

  1. The sample scripts for Oracle WebCenter Content domain deployment are available at ${WORKDIR}/create-wcc-domain/domain-home-on-pv. You must edit create-domain-inputs.yaml (or a copy of it) to provide the details for your domain.

  2. Run the create-domain.sh script to create a domain:

    $ cd ${WORKDIR}/create-wcc-domain/domain-home-on-pv/
    $ ./create-domain.sh -i create-domain-inputs.yaml -o output
  3. Create a Kubernetes domain object:

    Once the create-domain.sh script completes successfully, it generates output/weblogic-domains/wccinfra/domain.yaml, which you can use to create the Kubernetes domain resource that starts the domain and its servers:

    $ cd ${WORKDIR}/create-wcc-domain/domain-home-on-pv
    $ kubectl create -f output/weblogic-domains/wccinfra/domain.yaml
  4. Verify that the Kubernetes domain object named wccinfra is created:

    $ kubectl get domain -n wccns
    NAME       AGE
    wccinfra   3m18s
  5. Once you create the domain, the introspector pod is created. It inspects the domain home and then starts the wccinfra-adminserver pod. Once the wccinfra-adminserver pod starts successfully, the Managed Server pods are started in parallel. Watch the wccns namespace for the status of the domain creation:

    $ kubectl get pods -n wccns
  6. Verify that the Oracle WebCenter Content domain server pods and services are created and in Ready state:

    $ kubectl get all -n wccns

6.3 Configure Traefik to access Oracle WebCenter Content domain services

  1. Configure Traefik to manage ingresses created in the Oracle WebCenter Content domain namespace (wccns):

    $ helm upgrade traefik traefik/traefik \
      --reuse-values \
      --namespace traefik \
      --set "kubernetes.namespaces={traefik,wccns}" \
      --wait
  2. Create an ingress for the domain in the domain namespace by using the sample Helm chart:

    $ cd ${WORKDIR}
    $ helm install wcc-traefik-ingress charts/ingress-per-domain \
    --namespace wccns \
    --values charts/ingress-per-domain/values.yaml \
    --set "traefik.hostname=$(hostname -f)" \
    --set tls=NONSSL
  3. Verify the details of the created ingress per domain:

    $ kubectl describe ingress wccinfra-traefik -n wccns

6.4 Verify that you can access the Oracle WebCenter Content domain URL

  1. Get the LOADBALANCER_HOSTNAME for your environment:

    export LOADBALANCER_HOSTNAME=$(hostname -f)
  2. The following URLs are available for Oracle WebCenter Content domain:

    Credentials: username: weblogic, password: welcome1

    http://${LOADBALANCER_HOSTNAME}:30305/em
    http://${LOADBALANCER_HOSTNAME}:30305/cs
    http://${LOADBALANCER_HOSTNAME}:30305/ibr
    http://${LOADBALANCER_HOSTNAME}:30305/imaging
    http://${LOADBALANCER_HOSTNAME}:30305/dc-console
    http://${LOADBALANCER_HOSTNAME}:30305/wcc
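As a quick sanity check from the command line, the following is a minimal sketch, assuming the Traefik web NodePort 30305 used above:

$ curl -s -o /dev/null -w '%{http_code}\n' http://${LOADBALANCER_HOSTNAME}:30305/cs
# 200 (or a 302 redirect to the login page) indicates the domain is reachable through Traefik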