21 Scaling Procedures for an Enterprise Deployment

The scaling procedures for an enterprise deployment include scale out, scale in, scale up, and scale down. During a scale-out operation, you add managed servers to new nodes. You can remove these managed servers by performing a scale in operation. During a scale-up operation, you add managed servers to existing hosts. You can remove these servers by performing a scale-down operation.

This chapter describes the procedures to scale out/in and scale up/down clusters.

Scaling Out the Topology

This section lists the prerequisites, explains the procedure to scale out the topology, describes the steps to verify the scale-out process, and finally the steps to scale in (shrink) the topology.

Prerequisites for Scaling Out

Before you perform a scale out of the topology, you must ensure that you meet the following requirements:

  • The starting point is a cluster with managed servers already running.

  • The new node can access the existing home directories for WebLogic Server and SOA. Use the existing installations in shared storage. You do not need to install WebLogic Server or SOA binaries in a new location. However, you do need to run pack and unpack commands to bootstrap the domain configuration in the new node.

  • It is assumed that the cluster syntax is used for all internal RMI invocations, JMS adapter, and so on.

Scaling Out a Cluster

The steps provided in this procedure use the SOA EDG topology as a reference. Initially there are two application tier hosts (SOAHOST1 and SOAHOST2), each running one managed server of each cluster. A new host SOAHOST3 is added to scale out the clusters with a third managed server. WLS_XYZn is the generic name given to the new managed server that you add to the cluster. Depending on the cluster that is being extended and the number of existing nodes, the actual names are WLS_SOA3, WLS_OSB3, WLS_ESS3, and so on.
The scale-out procedure requires downtime for the existing servers in the WLS cluster being scaled if service migration has been configured for them with a migration policy different from the default one (manual). It also implies downtime if the existing migratable targets do not use an empty Candidate Server list. Using empty candidate lists is the best practice because it means that all the servers in the cluster are candidates for migration. You can check the list of candidates for each migratable target through the WebLogic Remote Console:
  1. Access the domain with the WebLogic Remote Console.

  2. In the top-left corner of the Remote Console screen, click Edit Tree.

  3. Expand Environment in the navigation tree.

  4. Expand Migratable Targets in the navigation tree.

  5. Click each migratable target and verify the Constrained Candidate Servers list under the Migration tab.

If you have created your environment following the Enterprise Deployment Guide, these lists are empty out-of-the-box. When you add a new server to the cluster, the server is automatically considered for migration without the need to restart the existing servers.

If you decided to constrain the migration to specific servers in the cluster, your Candidate Server lists will not be empty. When you add a new server to the cluster, you may need to modify these lists to add the new server. In this case, you have to restart the existing servers during the scale-out process. Changing the migration policy of the new server from manual also requires a restart of the existing members of the cluster. Oracle recommends that you batch these two changes and perform a single restart after you complete both of them (migration policy and list of candidates).

To scale out the cluster, complete the following steps:

  1. On the new node, mount the existing FMW Home, which should include the SOA installation and the domain directory. Ensure that the new node has access to this directory, similar to the rest of the nodes in the domain.
  2. Locate the inventory in the shared directory (for example, /u01/oracle/products/oraInventory), per Oracle's recommendation. Because the installations are shared, you do not need to attach any home, but you may want to execute the script /u01/oracle/products/oraInventory/createCentralInventory.sh.

    This command creates and updates the local file /etc/oraInst.loc in the new node to point it to the oraInventory location.

    If there are other inventory locations in the new host, you can use them, but the /etc/oraInst.loc file must be updated accordingly in each case.
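
    For reference, a typical /etc/oraInst.loc pointing to the shared inventory looks like the following; the inventory path and install group shown here are examples that you must adapt to your environment:

      inventory_loc=/u01/oracle/products/oraInventory
      inst_group=oinstall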

  3. Update the /etc/hosts files to add the alias SOAHOSTn for the new node, as described in Verifying IP Addresses and Host Names in DNS or Hosts File.

    For example:

    10.229.188.204 host1-vip.example.com host1-vip ADMINVHN
    10.229.188.205 host1.example.com host1 SOAHOST1
    10.229.188.206 host2.example.com host2 SOAHOST2
    10.229.188.207 host3.example.com host3 WEBHOST1
    10.229.188.208 host4.example.com host4 WEBHOST2
    10.229.188.209 host5.example.com host5 SOAHOST3 
  4. Configure a per host node manager in the new node, as described in Creating a Per Host Node Manager Configuration, but do not start it yet. It will be started later.
  5. Log into the WebLogic Remote Console to create a new machine:
    1. Go to Environment and select Machines.
    2. Click New to create a new machine for the new node.
    3. Set Name to SOAHOSTn (or MFTHOSTn or BAMHOSTn).
    4. Click the Node Manager tab.
    5. Set Type to SSL.
    6. Set Listen Address to SOAHOSTn.
    7. Click Save and Commit changes in the Shopping Cart.
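
    If you prefer to script this step instead of using the console, the following is a minimal WLST sketch that creates an equivalent machine and node manager definition. The admin URL, credentials, and the SOAHOST3 and port values are examples; adjust the protocol and port (for example, t3s and the administration port) to your environment:

      connect('weblogic_admin', 'admin_password', 't3://ADMINVHN:7001')
      edit()
      startEdit()
      # Create the machine for the new node
      cmo.createUnixMachine('SOAHOST3')
      # Point its node manager definition at the per host node manager (SSL)
      cd('/Machines/SOAHOST3/NodeManager/SOAHOST3')
      cmo.setNMType('SSL')
      cmo.setListenAddress('SOAHOST3')
      cmo.setListenPort(5556)
      save()
      activate()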
  6. Use the Oracle WebLogic Remote Console to clone the first managed server in the cluster into a new managed server.
    1. Go to Environment and select Servers.
    2. Click Create, and in Copy settings from another server, select the first managed server in the cluster to scale out. Click Create.
    3. Use Table 21-1 to set the corresponding name, listen address, and SSL listen port, depending on the cluster that you want to scale out.
    4. Click the new managed server, select Configuration, and then click General.
    5. Update the Machine from SOAHOST1 to SOAHOSTn.
    6. Update the Administration port for the server to be consistent with the other servers in the cluster. For example, for SOA servers use port 9004, for OSB servers use 9007, and so on. Refer to the existing servers for the appropriate Administration port.
    7. Click Save and Commit changes in the Shopping Cart.

    Table 21-1 Details of the Cluster to be Scaled Out

    Cluster to Scale Out | Server to Clone | New Server Name | Server Listen Address | SSL Server Listen Port | Local Administration Port Override

    WSM-PM_Cluster | WLS_WSM1 | WLS_WSM3 | SOAHOST3 | 7010 | 9003
    SOA_Cluster | WLS_SOA1 | WLS_SOA3 | SOAHOST3 | 7004 | 9004
    ESS_Cluster | WLS_ESS1 | WLS_ESS3 | SOAHOST3 | 7008 | 9006
    OSB_Cluster | WLS_OSB1 | WLS_OSB3 | SOAHOST3 | 8003 | 9007
    BAM_Cluster | WLS_BAM1 | WLS_BAM3 | SOAHOST3 | 7006 | 9005
    MFT_Cluster | WLS_MFT1 | WLS_MFT3 | MFTHOST3 | 7010 | 9014
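
    If you script this step, note that WLST creates a new server rather than copying one, so settings inherited from WLS_SOA1 in the console flow must be set explicitly. A minimal sketch for the SOA row of Table 21-1 follows (run it in an edit session, as in the previous sketch); the names, ports, and machine are examples for WLS_SOA3:

      edit()
      startEdit()
      server = cmo.createServer('WLS_SOA3')
      server.setListenAddress('SOAHOST3')
      server.setCluster(getMBean('/Clusters/SOA_Cluster'))
      server.setMachine(getMBean('/Machines/SOAHOST3'))
      # Local administration port override, consistent with the other SOA servers
      server.setAdministrationPort(9004)
      # SSL listen port from Table 21-1
      cd('/Servers/WLS_SOA3/SSL/WLS_SOA3')
      cmo.setEnabled(true)
      cmo.setListenPort(7004)
      save()
      activate()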

  7. Update the deployment Staging Directory Name of the new server, as described in Modifying the Upload and Stage Directories to an Absolute Path in an Enterprise Deployment.
  8. Create a new key certificate and update your domain certificate store, as described in Creating Certificates and Certificate Stores for the WebLogic Domain in the Creating the Initial Infrastructure Domain for an Enterprise Deployment chapter. To add only the new address (instead of all the addresses detected in config.xml), you can use the generate_perdomainCACERTS.sh script with the following syntax:
    ./generate_perdomainCACERTS.sh [WLS_DOMAIN_DIRECTORY] [WL_HOME] [KEYSTORE_HOME] [KEYPASS] [NEWADDR]

    Where NEWADDR is the listen address for the new server being added.

  9. Your new server's keystore location and SSL configuration are carried over from the copied server (WLS_SOA1), but you must update the password again (since it is encrypted again for the new server) and the Server Private Key Alias entry for this new server.
    1. Navigate to Environment > Servers.

    2. Click the new server.

    3. Navigate to Security > Keystores.

    4. Update the Custom Identity Key Store Pass Phrase and Custom Trust Key Store Pass Phrase with the password provided to the generate_perdomainCACERTS.sh script.

    5. Click the SSL tab under Security.

    6. Update the Server Private Key Pass Phrase with the password provided to the generate_perdomainCACERTS.sh script.

    7. Add the listen address that you used in the previous step (certificate generation for the new server) as Server Private Key Alias.
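
    The same keystore and SSL updates can be scripted. A minimal WLST sketch follows; the passphrase value is a placeholder for the KEYPASS that you provided to generate_perdomainCACERTS.sh, and the alias is the new server's listen address:

      edit()
      startEdit()
      cd('/Servers/WLS_SOA3')
      cmo.setCustomIdentityKeyStorePassPhrase('keystore_password')
      cmo.setCustomTrustKeyStorePassPhrase('keystore_password')
      cd('/Servers/WLS_SOA3/SSL/WLS_SOA3')
      cmo.setServerPrivateKeyPassPhrase('keystore_password')
      # Alias under which the certificate for the new address was generated
      cmo.setServerPrivateKeyAlias('SOAHOST3')
      save()
      activate()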

  10. Update the TLOG JDBC persistent store of the new managed server:
    1. Log into the WebLogic Remote Console.
    2. Go to Environment and expand the Servers link on the navigation tree on the left.
    3. Click the new server WLS_XYZn.
    4. Click the Services > JTA tab.
    5. Ensure Transaction Log Store in JDBC is selected and change the Transaction Log Prefix name to TLOG_WLS_XYZn.
    6. The rest of the fields are carried over from the copied server, including the data source used for the JDBC store (WLSRuntimeSchemaDataSource).
    7. Click Save and Commit changes in the Shopping Cart.

    Use the following table to identify the clusters that use JDBC TLOGs by default:

    Table 21-2 The Name of Clusters that Use JDBC TLOGs by Default

    Cluster to Scale Out | New Server Name | TLOG Persistent Store

    WSM-PM_Cluster | WLS_WSM3 | Default (file)
    SOA_Cluster | WLS_SOA3 | JDBC
    ESS_Cluster | WLS_ESS3 | Default (file)
    OSB_Cluster | WLS_OSB3 | JDBC
    BAM_Cluster | WLS_BAM3 | JDBC
    MFT_Cluster | WLS_MFT3 | JDBC
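
    For clusters that use JDBC TLOGs, the same change can be scripted. A minimal WLST sketch for the SOA case follows; the data source is usually carried over from the cloned server and is set here only for completeness:

      edit()
      startEdit()
      cd('/Servers/WLS_SOA3/TransactionLogJDBCStore/WLS_SOA3')
      cmo.setEnabled(true)
      cmo.setPrefixName('TLOG_WLS_SOA3')
      cmo.setDataSource(getMBean('/JDBCSystemResources/WLSRuntimeSchemaDataSource'))
      save()
      activate()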

  11. If the cluster that you are scaling out is configured for automatic service migration, update the JTA Migration Policy to the required value.
    1. Go to Environment and expand Servers. From the list of servers, select WLS_XYZn, and click the JTA Migratable Target.
    2. Use Table 21-3 to ensure that the recommended JTA Migration Policy is set, depending on the cluster that you want to scale out.

      Table 21-3 The Recommended JTA Migration Policy for the Cluster to be Scaled Out

      Cluster to Scale Out | New Server Name | JTA Migration Policy

      WSM-PM_Cluster | WLS_WSM3 | Manual
      SOA_Cluster | WLS_SOA3 | Failure Recovery
      ESS_Cluster | WLS_ESS3 | Manual
      OSB_Cluster | WLS_OSB3 | Failure Recovery
      BAM_Cluster | WLS_BAM3 | Failure Recovery
      MFT_Cluster | WLS_MFT3 | Failure Recovery

    3. In the servers already existing in the cluster, verify that the list of the JTA candidate servers for JTA migration is empty:
      1. Click Environment and expand Servers.
      2. Select the server.
      3. Select the JTA Migratable Target in the context menu.
      4. Check the Constrained Candidate Servers list and verify that the list is empty (an empty list indicates that all the servers in the cluster are JTA candidate servers). The list should be empty out-of-the-box so no changes are needed.
      5. If the server list is not empty, you should modify the list to make it blank. Or, if your list is not empty because you explicitly decided to constrain the migration to some specific servers only, modify it as per your preferences to accommodate the new server. Click Save and Commit Changes in the Shopping Cart. Note that a change in the candidate list requires a restart of the existing servers in the cluster.
  12. If the cluster you are scaling out is configured for automatic service migration, use the Oracle WebLogic Remote Console to update the automatically created WLS_XYZn (migratable) target with the recommended migration policy, because by default it is set to Manual Service Migration Only.

    Use the following table for the list of migratable targets to update:

    Table 21-4 The Recommended Migratable Targets to Update

    Cluster to Scale Out | Migratable Target to Update | Migration Policy

    WSM-PM_Cluster | NA | NA
    SOA_Cluster | WLS_SOA3 (migratable) | Failure Recovery
    ESS_Cluster | NA | NA
    OSB_Cluster | WLS_OSB3 (migratable) | Failure Recovery
    BAM_Cluster | WLS_BAM3 (migratable) | Exactly-Once
    MFT_Cluster | WLS_MFT3 (migratable) | Failure Recovery

    1. Go to Environment then Migratable Targets.
    2. Click WLS_XYZ3 (migratable).
    3. Change the Service Migration Policy to the value listed in the table.
    4. Leave the Constrained Candidate Server list blank; clear it if any servers are chosen. If no servers are selected, you can migrate this migratable target to any server in the cluster.
    5. Click Save and Commit Changes in the Shopping Cart. Notice that a change from the default migration policy (manual) requires a restart of the existing servers in the cluster.
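
    A minimal WLST sketch for this policy change follows, using the SOA migratable target as an example; valid values are manual, failure-recovery, and exactly-once (see Table 21-4):

      edit()
      startEdit()
      cd('/MigratableTargets/WLS_SOA3 (migratable)')
      cmo.setMigrationPolicy('failure-recovery')
      save()
      activate()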
  13. For components that use multiple migratable targets, in addition to the previous step, use the Oracle WebLogic Remote Console to create a new (migratable) target, copying the settings from the existing ones in the cluster. Use the steps above for the required customizable settings.
  14. Verify that the Constrained Candidate Server list in the existing migratable servers in the cluster is empty. It should be empty out-of-the-box because the Configuration Wizard leaves it empty. An empty candidate list means that all the servers in the cluster are candidates, which is the best practice.
    1. Go to each migratable server.
    2. Click the Migration tab and check the Constrained Candidate Servers list.
    3. Ensure that "Chosen" server list is empty. It should be empty out-of-the-box.
    4. If the server list is not empty, you should modify the list to make it blank. Or, if your list is not empty because you explicitly decided to constrain the migration to some specific servers only, modify it as per your preferences to accommodate the new server. Click Save and Commit Changes in the Shopping Cart. Notice that a change in the candidate list requires a restart of the existing servers in the cluster.
  15. Create the required persistent stores for the JMS servers used in the new server.
    1. Log into the WebLogic Remote Console. In the Edit Tree, expand Services and select JDBC Stores.
    2. Click New.

    Use the following table to create the required persistent stores:

    Note:

    The numbers in the names and prefixes of the existing resources were assigned automatically by the Configuration Wizard during domain creation. For example:

    • UMSJMSJDBCStore_auto_1 — soa_1

    • UMSJMSJDBCStore_auto_2 — soa_2

    • BPMJMSJDBCStore_auto_1 — soa_3

    • BPMJMSJDBCStore_auto_2 — soa_4

    • SOAJMSJDBCStore_auto_1 — soa_5

    • SOAJMSJDBCStore_auto_2 — soa_6

    Review the existing prefixes and select a new and unique prefix and name for each new persistent store.

    To avoid naming conflicts and simplify the configuration, new resources are qualified with the scaled tag and are shown here as an example.

    Table 21-5 The New Resources Qualified with the Scaled Tag

    Cluster to Scale Out | Persistent Store | Prefix Name | Data Source | Target

    WSM-PM_Cluster | NA | NA | NA | NA
    SOA_Cluster | UMSJMSJDBCStore_soa_scaled_3 | soaums_scaled_3 | WLSRuntimeSchemaDataSource | WLS_SOA3 (migratable)
    SOA_Cluster | SOAJMSJDBCStore_soa_scaled_3 | soajms_scaled_3 | WLSRuntimeSchemaDataSource | WLS_SOA3 (migratable)
    SOA_Cluster | BPMJMSJDBCStore_soa_scaled_3 | soabpm_scaled_3 | WLSRuntimeSchemaDataSource | WLS_SOA3 (migratable)
    ESS_Cluster | NA | NA | NA | NA
    OSB_Cluster | UMSJMSJDBCStore_osb_scaled_3 | osbums_scaled_3 | WLSRuntimeSchemaDataSource | WLS_OSB3 (migratable)
    OSB_Cluster | OSBJMSJDBCStore_osb_scaled_3 | osbjms_scaled_3 | WLSRuntimeSchemaDataSource | WLS_OSB3 (migratable)
    BAM_Cluster | UMSJMSJDBCStore_bam_scaled_3 | bamums_scaled_3 | WLSRuntimeSchemaDataSource | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BamPersistenceJmsJDBCStore_bam_scaled_3 | bamP_scaled_3 | WLSRuntimeSchemaDataSource | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BamReportCacheJmsJDBCStore_bam_scaled_3 | bamR_scaled_3 | WLSRuntimeSchemaDataSource | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BamAlertEngineJmsJDBCStore_bam_scaled_3 | bamA_scaled_3 | WLSRuntimeSchemaDataSource | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BamJmsJDBCStore_bam_scaled_3 | bamjms_scaled_3 | WLSRuntimeSchemaDataSource | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BamCQServiceJmsJDBCStore_bam_scaled_3 | bamC_scaled_3 | WLSRuntimeSchemaDataSource | WLS_BAM3*
    MFT_Cluster | MFTJMSJDBCStore_mft_scaled_3 | mftjms_scaled_3 | WLSRuntimeSchemaDataSource | WLS_MFT3 (migratable)

    Note:

    (*) BamCQServiceJmsServers host local queues for the BAM CQService (Continuous Query Engine) and are meant to be local. They are intentionally targeted to the WebLogic servers directly and not to the migratable targets.
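
    A minimal WLST sketch that creates one of the stores in Table 21-5 follows (the UMS store for SOA); repeat it with the other names, prefixes, and targets from the table:

      edit()
      startEdit()
      store = cmo.createJDBCStore('UMSJMSJDBCStore_soa_scaled_3')
      store.setPrefixName('soaums_scaled_3')
      store.setDataSource(getMBean('/JDBCSystemResources/WLSRuntimeSchemaDataSource'))
      store.addTarget(getMBean('/MigratableTargets/WLS_SOA3 (migratable)'))
      save()
      activate()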
  16. Create the required JMS Servers for the new managed server.
    1. Go to WebLogic Remote Console. In the Edit Tree, select Services, and then click JMS Servers.
    2. Click New.

    Use the following table to create the required JMS Servers. Assign to each JMS Server the previously created persistent stores:

    Note:

    The numbers in the names of the existing resources were assigned automatically by the Configuration Wizard during domain creation.

    Review the existing JMS server names and select a new and unique name for each new JMS server.

    To avoid naming conflicts and simplify the configuration, new resources are qualified with the product_scaled_N tag and are shown here as an example.

    Cluster to Scale Out | JMS Server Name | Persistent Store | Target

    WSM-PM_Cluster | NA | NA | NA
    SOA_Cluster | UMSJMSServer_soa_scaled_3 | UMSJMSJDBCStore_soa_scaled_3 | WLS_SOA3 (migratable)
    SOA_Cluster | SOAJMSServer_soa_scaled_3 | SOAJMSJDBCStore_soa_scaled_3 | WLS_SOA3 (migratable)
    SOA_Cluster | BPMJMSServer_soa_scaled_3 | BPMJMSJDBCStore_soa_scaled_3 | WLS_SOA3 (migratable)
    ESS_Cluster | NA | NA | NA
    OSB_Cluster | UMSJMSServer_osb_scaled_3 | UMSJMSJDBCStore_osb_scaled_3 | WLS_OSB3 (migratable)
    OSB_Cluster | wlsbJMSServer_osb_scaled_3 | OSBJMSJDBCStore_osb_scaled_3 | WLS_OSB3 (migratable)
    BAM_Cluster | UMSJMSServer_bam_scaled_3 | UMSJMSJDBCStore_bam_scaled_3 | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BamPersistenceJmsServer_bam_scaled_3 | BamPersistenceJmsJDBCStore_bam_scaled_3 | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BamReportCacheJmsServer_bam_scaled_3 | BamReportCacheJmsJDBCStore_bam_scaled_3 | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BamAlertEngineJmsServer_bam_scaled_3 | BamAlertEngineJmsJDBCStore_bam_scaled_3 | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BAMJMSServer_bam_scaled_3 | BamJmsJDBCStore_bam_scaled_3 | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BamCQServiceJmsServer_bam_scaled_3 | BamCQServiceJmsJDBCStore_bam_scaled_3 | WLS_BAM3*
    MFT_Cluster | MFTJMSServer_mft_scaled_3 | MFTJMSJDBCStore_mft_scaled_3 | WLS_MFT3 (migratable)

    Note:

    (*) BamCQServiceJmsServers host local queues for the BAM CQService (Continuous Query Engine) and are meant to be local. They are intentionally targeted to the WebLogic servers directly and not to the migratable targets.
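
    A minimal WLST sketch that creates one of these JMS servers follows (the UMS JMS server for SOA), assigning the persistent store created in the previous step; repeat it with the other rows of the table:

      edit()
      startEdit()
      jmsServer = cmo.createJMSServer('UMSJMSServer_soa_scaled_3')
      jmsServer.setPersistentStore(getMBean('/JDBCStores/UMSJMSJDBCStore_soa_scaled_3'))
      jmsServer.addTarget(getMBean('/MigratableTargets/WLS_SOA3 (migratable)'))
      save()
      activate()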
  17. Update the SubDeployment Targets for JMS Modules (if applicable) to include the recently created JMS servers.
    1. Expand Services and select JMS Modules.
    2. Click the JMS module. For example: BPMJMSModule.

      Expand the Sub Deployments and select the corresponding one to update the targets. Use the following table to identify the JMS modules to update, depending on the cluster that you are scaling out:

      Table 21-6 The JMS Modules to Update

      Cluster to Scale Out | JMS Module to Update | JMS Server to Add to the Subdeployment

      WSM-PM_Cluster | NA | NA
      SOA_Cluster | UMSJMSSystemResource * | UMSJMSServer_soa_scaled_3
      SOA_Cluster | SOAJMSModule | SOAJMSServer_soa_scaled_3
      SOA_Cluster | BPMJMSModule | BPMJMSServer_soa_scaled_3
      ESS_Cluster | NA | NA
      OSB_Cluster | UMSJMSSystemResource * | UMSJMSServer_osb_scaled_3
      OSB_Cluster | jmsResources (scope Global) | wlsbJMSServer_osb_scaled_3
      BAM_Cluster | BamPersistenceJmsSystemModule | BamPersistenceJmsServer_bam_scaled_3
      BAM_Cluster | BamReportCacheJmsSystemModule | BamReportCacheJmsServer_bam_scaled_3
      BAM_Cluster | BamAlertEngineJmsSystemModule | BamAlertEngineJmsServer_bam_scaled_3
      BAM_Cluster | BAMJMSSystemResource | BAMJMSServer_bam_scaled_3
      BAM_Cluster | BamCQServiceJmsSystemModule | N/A (do not update existing subdeployments; a new subdeployment for the new server is created in the next steps)
      BAM_Cluster | UMSJMSSystemResource * | UMSJMSServer_bam_scaled_3
      MFT_Cluster | MFTJMSModule | MFTJMSServer_mft_scaled_3

      (*) Some modules (UMSJMSSystemResource) may be targeted to more than one cluster. Ensure that you update the appropriate subdeployment in each case.
    3. Add the corresponding JMS Server to the existing subdeployment.

      Note:

      The subdeployment module name is a random name in the form of SOAJMSServerXXXXXX, UMSJMSServerXXXXXX, or BPMJMSServerXXXXXX, resulting from the Configuration Wizard JMS configuration for the first two servers (WLS_SOA1 and WLS_SOA2).
    4. Click Save and Commit Changes in the Shopping Cart.
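
    Because the subdeployment name is generated, look it up under the module first if you script this step. A minimal WLST sketch for the SOA JMS module follows; SOAJMSServer123456 is a hypothetical generated name used only for illustration:

      edit()
      startEdit()
      # The subdeployment name is generated (for example, SOAJMSServer123456);
      # check the actual name under the module before running this.
      cd('/JMSSystemResources/SOAJMSModule/SubDeployments/SOAJMSServer123456')
      cmo.addTarget(getMBean('/JMSServers/SOAJMSServer_soa_scaled_3'))
      save()
      activate()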
  18. If you are scaling out a BAM cluster, you need to create additional resources (a subdeployment and local queues) for the new server in the BamCQServiceJmsSystemModule module. Follow these steps to create them:
    1. Go to the WebLogic Remote Console. Click the Edit Tree and select Environment > Services.
    2. Click JMS Modules and select BamCQServiceJmsSystemModule.
    3. Click Targets.
    4. Add WLS_BAM3 to the targets and click Save.
    5. Create a new Subdeployment in the BamCQServiceJmsSystemModule JMS Module with the name BamCQServiceAlertEngineSubdeployment_scaled_3. Then select BamCQServiceJmsServer_bam_scaled_3 as the target of this subdeployment.

      Table 21-7 Information to Create the Additional Subdeployment for Local Queues

      Subdeployment Name | Subdeployment Target

      BamCQServiceAlertEngineSubdeployment_scaled_3 | BamCQServiceJmsServer_bam_scaled_3

    6. Select Queues under the Module and click New.
    7. Name it BamCQServiceAlertEngineQueue_auto_3.
    8. Click Create.
    9. Click the newly created queue BamCQServiceAlertEngineQueue_auto_3.
    10. Select the General tab.
    11. Set Local JNDI Name to queue/oracle.beam.cqservice.mdbs.alertengine.
    12. Set Sub Deployment Name to BamCQServiceAlertEngineSubdeployment_scaled_3.
    13. Click Save and Commit changes in the Shopping Cart.
    14. Repeat these steps to create the other queue BamCQServiceReportCacheQueue_auto_3 with the information in Table 21-8.
    15. After you finish, you have these new local queues.

      Table 21-8 Information to Create the Local Queues

      Name | Type | Local JNDI Name | Subdeployment

      BamCQServiceAlertEngineQueue_auto_3 | Queue | queue/oracle.beam.cqservice.mdbs.alertengine | BamCQServiceAlertEngineSubdeployment_scaled_3
      BamCQServiceReportCacheQueue_auto_3 | Queue | queue/oracle.beam.cqservice.mdbs.reportcache | BamCQServiceAlertEngineSubdeployment_scaled_3

  19. The configuration is finished. Now sign in to SOAHOST1 and run the pack command to create a template pack, as follows:
    cd $ORACLE_COMMON_HOME/common/bin
    ./pack.sh -managed=true 
              -domain=ASERVER_HOME
              -template=/full_path/scaleout_domain.jar
              -template_name=scaleout_domain_template
              -log=/tmp/pack.log

    In this example:

    • Replace ASERVER_HOME with the actual path to the domain directory that you created on the shared storage device.

    • Replace full_path with the complete path to the location where you want to create the domain template jar file. You need to reference this location when you copy or unpack the domain template jar file. Oracle recommends that you choose a shared volume other than ORACLE_HOME, or write to /tmp/ and copy the files manually between servers.

      You must specify a full path for the template jar file as part of the -template argument to the pack command:
      SHARED_CONFIG_DIR/domains/template_filename.jar
    • scaleout_domain.jar is a sample name for the jar file that you are creating, which contains the domain configuration files.

    • scaleout_domain_template is the label that is assigned to the template data stored in the template file.

  20. Run the unpack command on SOAHOSTN to unpack the template in the managed server domain directory, as follows:
    cd $ORACLE_COMMON_HOME/common/bin
    ./unpack.sh -domain=MSERVER_HOME
                -overwrite_domain=true
                -template=/full_path/scaleout_domain.jar
                -log_priority=DEBUG
                -log=/tmp/unpack.log
                -app_dir=APPLICATION_HOME

    In this example:

    • Replace MSERVER_HOME with the complete path to the domain home to be created on the local storage disk. This is the location where the copy of the domain is unpacked.

    • Replace /full_path/scaleout_domain.jar with the complete path and file name of the domain template jar file that you created when you ran the pack command to pack up the domain on the shared storage device.

    • Replace APPLICATION_HOME with the complete path to the Application directory for the domain on shared storage. See File System and Directory Variables Used in This Guide.

  21. When scaling out the OSB_Cluster, restart the Admin Server to see the new server in the Service Bus Dashboard.
  22. When scaling out the MFT_Cluster, note that the default SFTP/FTP ports are used in the new server. If you are not using the defaults, configure the ports in the SFTP server as described in Configuring the SFTP Ports.
  23. Start Node Manager on the new host.
    cd $NM_HOME
    nohup ./startNodeManager.sh > ./nodemanager.out 2>&1 &
  24. Start the new managed server.
  25. Update the web tier configuration to include the new server:
    1. If you are using OHS, there is no need to add the new server to OHS. By default, the Dynamic Server List is used, which means that the list of servers in the cluster is automatically updated when a new node becomes part of the cluster. So, adding it to the list is not mandatory. The WebLogicCluster directive needs only a sufficient number of redundant server:port combinations to guarantee the initial contact in case of a partial outage.

      If there are expected scenarios where the Oracle HTTP Server is restarted and only the new server is up, update the WebLogicCluster directive to include the new server.

      For example:

      <Location /soa-infra>
       WLSRequest ON
       WebLogicCluster SOAHOST1:7004,SOAHOST2:7004,SOAHOST3:7004
      </Location>

Verifying the Scale Out

After scaling out and starting the server, proceed with the following verifications:
  1. Verify the correct routing to web applications.

    For example:

    1. Access the application on the load balancer:
      https://soa.example.com/soa-infra
    2. Check that there is activity in the new server also:
      In the Remote Console, go to Monitoring Tree and navigate to Deployments > Application Runtime Data > soa-infra.
    3. You can also verify that the web sessions are created in the new server:
      • In Remote Console, go to Monitoring Tree and navigate to Deployments > Application Runtime Data > soa-infra.

      • Go to Component Runtimes and click WLS_SOA3_/soa-infra.

      • Verify if there are sessions.

      You can use the sample URLs and the corresponding web applications that are identified in the following table, to check if the sessions are created in the new server for the cluster that you are scaling out:

      Cluster to Verify | Sample URL to Test | Web Application Module

      WSM-PM_Cluster | https://soainternal.example.com:444/wsm-pm | wsm-pm > wsm-pm
      SOA_Cluster | https://soa.example.com/soa-infra | soa-infra > soa-infra
      ESS_Cluster | https://soa.example.com/ESSHealthCheck | ESSHealthCheck
      OSB_Cluster | https://osb.example.com/sbinspection.wsil | Service Bus WSIL
      MFT_Cluster | https://mft.example.com/mftconsole | mftconsole
      BAM_Cluster | https://soa.example.com/bam/composer | BamComposer > /bam/composer

  2. Verify that JMS messages are being produced to and consumed from the destinations in the three servers.
    1. In the Remote Console, go to the Monitoring Tree.
    2. Navigate to Dashboards > JMS Destinations.
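
    If you prefer to check this from WLST, a minimal sketch that reads the current message count on one of the new server's JMS servers follows; the connection details and names are examples:

      connect('weblogic_admin', 'admin_password', 't3://ADMINVHN:7001')
      domainRuntime()
      cd('ServerRuntimes/WLS_SOA3/JMSRuntime/WLS_SOA3.jms/JMSServers/SOAJMSServer_soa_scaled_3')
      # A non-zero count indicates the new server is doing JMS work
      print cmo.getMessagesCurrentCount()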
  3. Verify the service migration, as described in Validating Automatic Service Migration.

Scaling in the Topology

This section describes how to scale in the topology for a cluster.

Perform the following steps to scale in the topology for a cluster:
  1. To scale in the cluster without any JMS data loss, perform the steps described in Managing the JMS Messages in a SOA Server:

    After you complete the steps, continue with the scale-in procedure.

  2. Check the pending JTA transactions. Before you shut down the server, review whether there are any active JTA transactions in the server that you want to delete. In the WebLogic Remote Console Monitoring Tree, click Servers > <server name> > Services > Transactions > JTA Runtime, and select the Transactions tab.

    Note:

    If you have used the Shutdown Recovery policy for JTA, the transactions are recovered in another server after you shut down the server.

  3. Shut down the server by using the When work completes option.

    Note:

    This operation can take a long time if there are active HTTP sessions or long transactions in the server. For more information about graceful shutdown, see Using Server Life Cycle Commands in Administering Server Startup and Shutdown for Oracle WebLogic Server.
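
    A graceful shutdown can also be issued from WLST. The following is a minimal sketch; the connection details are examples:

      connect('weblogic_admin', 'admin_password', 't3://ADMINVHN:7001')
      # Graceful shutdown: wait for in-flight work, do not drop active sessions
      shutdown('WLS_SOA3', 'Server', ignoreSessions='false', block='true')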

  4. Use the Oracle WebLogic Remote Console to delete the new server:
    1. Click Edit Tree.
    2. Go to Environment > Servers.
    3. Select the server that you want to delete.
    4. Click Delete.
    5. Click Save and Commit changes in the Shopping Cart.

    Note:

    If the migratable target was not deleted in the previous step, you get the following error message:

    The following failures occurred: --MigratableTargetMBean WLS_SOA3_soa-failure-recovery (migratable) does not have a preferred server set.
    Errors must be corrected before proceeding.
  5. Use the Oracle WebLogic Remote Console to update the subdeployment of each JMS Module that is used by the cluster that you are shrinking.

    Use the following table to identify the module for each cluster and perform this action for each module:

    Cluster to Scale In | JMS Module | JMS Server to Delete from the Subdeployment

    WSM-PM_Cluster | Not applicable | Not applicable
    SOA_Cluster | UMSJMSSystemResource | UMSJMSServer_soa_scaled_3
    SOA_Cluster | SOAJMSModule | SOAJMSServer_soa_scaled_3
    SOA_Cluster | BPMJMSModule | BPMJMSServer_soa_scaled_3
    ESS_Cluster | Not applicable | Not applicable
    OSB_Cluster | UMSJMSSystemResource | UMSJMSServer_osb_scaled_3
    OSB_Cluster | jmsResources (scope Global) | wlsbJMSServer_osb_scaled_3
    BAM_Cluster | BamPersistenceJmsSystemModule | BamPersistenceJmsServer_bam_scaled_3
    BAM_Cluster | BamReportCacheJmsSystemModule | BamReportCacheJmsServer_bam_scaled_3
    BAM_Cluster | BamAlertEngineJmsSystemModule | BamAlertEngineJmsServer_bam_scaled_3
    BAM_Cluster | BAMJMSSystemResource | BAMJMSServer_bam_scaled_3
    BAM_Cluster | BamCQServiceJmsSystemModule | Not applicable (existing subdeployments are not modified on scale-out)
    MFT_Cluster | MFTJMSModule | MFTJMSServer_mft_scaled_3

    1. Click Edit Tree.
    2. Go to Services > JMS System Resources.
    3. Click the JMS module.
    4. Click Sub Deployments.
    5. Select the Sub Deployment module.
    6. Unselect the JMS server that was created for the deleted server.
    7. Click Save and Commit changes in the Shopping Cart.
  6. If you want to scale in a BAM cluster, use the Oracle WebLogic Remote Console to delete the local queues that were created for the new server:
    1. Click Edit Tree.
    2. Go to Services > JMS Modules.
    3. Click the JMS module.
    4. Click BamCQServiceJmsSystemModule.
    5. Delete the local queues that are created for the new server:
      • BamCQServiceAlertEngineQueue_auto_3

      • BamCQServiceReportCacheQueue_auto_3

    6. Delete the following subdeployment created for the server:
      BamCQServiceAlertEngineSubdeployment_scaled_3
    7. Click Save and Commit changes in the Shopping Cart.
  7. Use the Oracle WebLogic Remote Console to delete the JMS servers:
    1. Click Edit Tree.
    2. Go to Services > JMS Servers.
    3. Select the JMS Servers that you created for the new server.
    4. Click Delete.
    5. Click Save and Commit changes in the Shopping Cart.
  8. Use the Oracle WebLogic Remote Console to delete the JMS persistent stores:
    1. Click Edit Tree.
    2. Go to Services > JDBC Stores.
    3. Select the JDBC Store that you created for the new server.
    4. Click Delete.
    5. Click Save and Commit changes in the Shopping Cart.
  9. If the machine that was hosting the deleted server is not used by any other servers, delete it by performing the following steps:
    1. Click Edit Tree.
    2. Go to Environment > Machines.
    3. Select the machine that you created for the new server.
    4. Click Delete.
    5. Click Save and Commit changes in the Shopping Cart.
  10. Update the Web tier configuration to remove references to the deleted server.

Scaling Up the Topology

This section describes how to scale up the topology.

You already have a node that runs a managed server that is configured with Fusion Middleware components. The node contains a WebLogic Server home and an Oracle Fusion Middleware SOA home in shared storage. Use these existing installations and domain directories to create the new managed servers. You do not need to install WLS or SOA binaries, or to run pack and unpack, because the new server runs on an existing node.

Prerequisites for Scaling Up

Before you perform a scale up of the topology, you must ensure that you meet the following requirements:

  • The starting point is a cluster with managed servers already running.

  • It is assumed that the cluster syntax is used for all internal RMI invocations, JMS adapter, and so on.

Scaling Up

Use the SOA EDG topology as a reference, with two application tier hosts (SOAHOST1 and SOAHOST2), each running one managed server of each cluster. The example explains how to add a third managed server to the cluster that runs in SOAHOST1. WLS_XYZn is the generic name given to the new managed server that you add to the cluster. Depending on the cluster that is being extended and the number of existing nodes, the actual names are WLS_SOA3, WLS_OSB3, WLS_ESS3, and so on.

The scale-up procedure requires downtime for the existing servers in the WLS cluster being scaled if service migration has been configured for them with a migration policy different from the default one (manual). It also implies downtime if the existing migratable targets do not use an empty Candidate Server list (that is, a precise subset of servers in the cluster is used as candidates). Using empty candidate lists is the best practice because it means that all the servers in the cluster are candidates for migration. You can check the list of candidates for each migratable target through the WebLogic Remote Console:

  1. Access the domain with the WebLogic Remote Console.

  2. Click Edit Tree in the top-left corner of the Remote Console.

  3. Expand Environment in the navigation tree on the left.

  4. Expand Migratable Targets in the navigation tree on the left.

  5. Click each migratable target and verify the Constrained Candidate Servers list under the Migration tab.

If you have created your environment following the Enterprise Deployment Guide, these lists are empty out-of-the-box. When you add a new server to the cluster, the server is automatically considered for migration without the need to restart the existing servers.

If you decided to constrain the migration to specific servers in the cluster, your Candidate Server lists will not be empty. When you add a new server to the cluster, you may need to modify these lists to add the new server. In this case, you have to restart the existing servers during the scale-up process. Changing the migration policy of the new server from manual also requires a restart of the existing members of the cluster. Oracle recommends that you batch these two changes and perform a single restart after you complete both of them (migration policy and list of candidates).

To scale up the cluster, complete the following steps:

  1. Use the Oracle WebLogic Remote Console to clone the first managed server in the cluster into a new managed server.
    1. Go to Environment and select Servers.
    2. Click Create. In Copy settings from another server, select the first managed server in the cluster to scale up, and click Create.
    3. Use Table 21-9 to set the corresponding name, listen address, and SSL listen port, depending on the cluster that you want to scale up.

      Note:

      The port value is incremented by 1 to avoid binding conflicts with the managed server that is already created and running in the same host.

    4. Click the new managed server, select Configuration, and then click General.
    5. Verify that the Machine assigned is SOAHOST1.
    6. Update the Administration port for the server to be consistent with the other servers in the cluster. Note that the port value is incremented by 1 to avoid binding conflicts with the managed server that is already created and running on the same host.

    Table 21-9 List of Clusters that You Want to Scale Up

    Cluster to Scale Up | Server to Clone | New Server Name | Server Listen Address | SSL Server Listen Port | Local Administration Port Override

    WSM-PM_Cluster | WLS_WSM1 | WLS_WSM3 | SOAHOST1 | 7011 | 9013
    SOA_Cluster | WLS_SOA1 | WLS_SOA3 | SOAHOST1 | 7005 | 9024
    ESS_Cluster | WLS_ESS1 | WLS_ESS3 | SOAHOST1 | 7009 | 9016
    OSB_Cluster | WLS_OSB1 | WLS_OSB3 | SOAHOST1 | 8004 | 9017
    BAM_Cluster | WLS_BAM1 | WLS_BAM3 | SOAHOST1 | 7007 | 9015
    MFT_Cluster | WLS_MFT1 | WLS_MFT3 | MFTHOST1 | 7011 | 9024
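
    If you script the port overrides, a minimal WLST sketch for the SOA row of Table 21-9 follows; the values come from the table and are chosen to avoid binding conflicts with WLS_SOA1 on the same host:

      edit()
      startEdit()
      cd('/Servers/WLS_SOA3')
      cmo.setAdministrationPort(9024)
      cd('/Servers/WLS_SOA3/SSL/WLS_SOA3')
      cmo.setListenPort(7005)
      save()
      activate()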

  2. Update the deployment Staging Directory Name of the new server, as described in Modifying the Upload and Stage Directories to an Absolute Path in an Enterprise Deployment.
  3. Your new server's keystore location and SSL configuration are carried over from the copied server (WLS_SOA1), but you must update the password again (since it is encrypted again for the new server) and the Server Private Key Alias entry for this new server.
    1. Navigate to Environment > Servers.
    2. Click on the new server.
    3. Navigate to Security > Keystores.
    4. Update the Custom Identity Key Store Pass Phrase and Custom Trust Key Store Pass Phrase with the password provided to the generate_perdomainCACERTS.sh script.
    5. Click on the SSL tab under Security.
    6. Update the Server Private Key Pass Phrase with the password provided to the generate_perdomainCACERTS.sh script.
    7. Click Save and Commit changes in the Shopping Cart.
  4. Update the TLOG JDBC persistent store of the new managed server:
    1. Log into the WebLogic Remote Console.
    2. Go to Environment and expand the Servers link on the navigation tree on the left.
    3. Click the new server WLS_XYZn.
    4. Click the Services > JTA tab.
    5. Ensure Transaction Log Store in JDBC is selected and change the Transaction Log Prefix name to TLOG_WLS_XYZn.
      The rest of the fields are carried over from the server copied (including the Datasource used for the JDBC store).
    6. Click Save and Commit changes in the Shopping Cart.

    Use the following table to identify the clusters that use JDBC TLOGs by default:

    Table 21-10 The Name of Clusters that Use JDBC TLOGs by Default

    Cluster to Scale Up | New Server Name | TLOG Persistent Store

    WSM-PM_Cluster | WLS_WSM3 | Default (file)
    SOA_Cluster | WLS_SOA3 | JDBC
    ESS_Cluster | WLS_ESS3 | Default (file)
    OSB_Cluster | WLS_OSB3 | JDBC
    BAM_Cluster | WLS_BAM3 | JDBC
    MFT_Cluster | WLS_MFT3 | JDBC

  5. If the cluster you are scaling up is configured for automatic service migration, update the JTA Migration Policy to the required value.

    Use the following table to identify the clusters for which you have to update the JTA Migration Policy:

    Table 21-11 The Recommended JTA Migration Policy for the Cluster to be Scaled Up

    Cluster to Scale Up | New Server Name | JTA Migration Policy

    WSM-PM_Cluster | WLS_WSM3 | Manual
    SOA_Cluster | WLS_SOA3 | failure-recovery
    ESS_Cluster | WLS_ESS3 | Manual
    OSB_Cluster | WLS_OSB3 | failure-recovery
    BAM_Cluster | WLS_BAM3 | failure-recovery
    MFT_Cluster | WLS_MFT3 | failure-recovery

    Complete the following steps:

    1. Go to the Edit Tree and select Environment > Servers. From the list of servers, select WLS_XYZn, and click the JTA Migratable Target.
    2. Use Table 21-11 to set the recommended JTA Migration Policy, depending on the cluster that you want to scale up.
    3. Click Save and Commit changes in the Shopping Cart.
    4. In the servers already existing in the cluster, verify that the list of the JTA candidate servers for JTA migration is empty:
      1. Click Environment and expand Servers.
      2. Select the server.
      3. Select the JTA Migratable Target in the context menu.
      4. Check the Constrained Candidate Servers list and verify that the list is empty (an empty list indicates that all the servers in the cluster are JTA candidate servers). The list should be empty out-of-the-box so no changes are needed.
      5. If the server list is not empty, you should modify the list to make it blank. Or, if your list is not empty because you explicitly decided to constrain the migration to some specific servers only, modify it as per your preferences to accommodate the new server. Save and commit the changes. Restart the existing servers for this change to become effective.
  6. If the cluster you are scaling up is configured for automatic service migration, use the Oracle WebLogic Remote Console to update the automatically created WLS_XYZn (migratable) target with the recommended migration policy, because by default it is set to Manual Service Migration Only.

    Use the following table for the list of migratable targets to update:

    Table 21-12 The Recommended Migratable Targets to Update

    Cluster to Scale Up | Migratable Target to Update | Migration Policy

    WSM-PM_Cluster | Not applicable | Not applicable
    SOA_Cluster | WLS_SOA3 (migratable) | failure-recovery
    ESS_Cluster | Not applicable | Not applicable
    OSB_Cluster | WLS_OSB3 (migratable) | failure-recovery
    BAM_Cluster | WLS_BAM3 (migratable) | exactly-once
    MFT_Cluster | WLS_MFT3 (migratable) | failure-recovery

    1. Go to Environment > Migratable Targets.
    2. Click WLS_XYZ3 (migratable).
    3. Change the Service Migration Policy to the value listed in the table.
    4. Leave the Constrained Candidate Server list blank; clear it if any servers are chosen. If no servers are selected, you can migrate this migratable target to any server in the cluster.
    5. Click Save and Commit changes in the Shopping Cart. Notice that a change from the default migration policy (manual) requires a restart of the existing servers in the cluster.
  7. For components that use multiple migratable targets, in addition to the previous step, use the Oracle WebLogic Remote Console to create a new migratable target, copying the settings from the existing ones in the cluster. Use the steps above for the required customizable settings.
  8. Verify that the Constrained Candidate Server list in the existing migratable servers in the cluster is empty. It should be empty out-of-the-box because the Configuration Wizard leaves it empty. An empty candidate list means that all the servers in the cluster are candidates, which is the best practice.
    1. Go to each migratable server.
    2. Click the Migration tab and check the Constrained Candidate Servers list.
    3. Ensure that Chosen server list is empty. It should be empty out-of-the-box.
    4. If the server list is not empty, modify the list to make it blank. Or, if your list is not empty because you explicitly decided to constrain the migration to some specific servers only, modify it as per your preferences to accommodate the new server. Click Save and Commit Changes in the Shopping Cart. Restart the existing servers for this change to become effective.
  9. Create the required persistent stores for the JMS servers.
    1. Sign into the WebLogic Remote Console and go to Services and select JDBC Stores.
    2. Click New and select Create JDBCStore.

    Use the following table to create the required persistent stores:

    Note:

    The numbers in the names and prefixes of the existing resources were assigned automatically by the Configuration Wizard during domain creation.

    For example:
    UMSJMSJDBCStore_auto_1 — soa_1
    UMSJMSJDBCStore_auto_2 — soa_2
    BPMJMSJDBCStore_auto_1 — soa_3
    BPMJMSJDBCStore_auto_2 — soa_4
    SOAJMSJDBCStore_auto_1 — soa_5
    SOAJMSJDBCStore_auto_2 — soa_6

    Review the existing prefixes and select a new and unique prefix and name for each new persistent store.

    To avoid naming conflicts and simplify the configuration, new resources are qualified with the scaled tag and are shown here as an example.

    Table 21-13 The New Resources Qualified with the Scaled Tag

    Cluster to Scale Up | Persistent Store | Prefix Name | Data Source | Target

    WSM-PM_Cluster | Not applicable | Not applicable | Not applicable | Not applicable
    SOA_Cluster | UMSJMSJDBCStore_soa_scaled_3 | soaums_scaled_3 | WLSRuntimeSchemaDataSource | WLS_SOA3 (migratable)
    SOA_Cluster | SOAJMSJDBCStore_soa_scaled_3 | soajms_scaled_3 | WLSRuntimeSchemaDataSource | WLS_SOA3 (migratable)
    SOA_Cluster | BPMJMSJDBCStore_soa_scaled_3 | soabpm_scaled_3 | WLSRuntimeSchemaDataSource | WLS_SOA3 (migratable)
    ESS_Cluster | Not applicable | Not applicable | Not applicable | Not applicable
    OSB_Cluster | UMSJMSJDBCStore_osb_scaled_3 | osbums_scaled_3 | WLSRuntimeSchemaDataSource | WLS_OSB3 (migratable)
    OSB_Cluster | OSBJMSJDBCStore_osb_scaled_3 | osbjms_scaled_3 | WLSRuntimeSchemaDataSource | WLS_OSB3 (migratable)
    BAM_Cluster | UMSJMSJDBCStore_bam_scaled_3 | bamums_scaled_3 | WLSRuntimeSchemaDataSource | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BamPersistenceJmsJDBCStore_bam_scaled_3 | bamP_scaled_3 | WLSRuntimeSchemaDataSource | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BamReportCacheJmsJDBCStore_bam_scaled_3 | bamR_scaled_3 | WLSRuntimeSchemaDataSource | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BamAlertEngineJmsJDBCStore_bam_scaled_3 | bamA_scaled_3 | WLSRuntimeSchemaDataSource | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BamJmsJDBCStore_bam_scaled_3 | bamjms_scaled_3 | WLSRuntimeSchemaDataSource | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BamCQServiceJmsJDBCStore_bam_scaled_3 | bamC_scaled_3 | WLSRuntimeSchemaDataSource | WLS_BAM3*
    MFT_Cluster | MFTJMSJDBCStore_mft_scaled_3 | mftjms_scaled_3 | WLSRuntimeSchemaDataSource | WLS_MFT3 (migratable)

    Note:

    (*) BamCQServiceJmsServers host local queues for the BAM CQService (Continuous Query Engine) and are meant to be local. They are intentionally targeted to the WebLogic servers directly and not to the migratable targets.
  10. Create the required JMS Servers for the new managed server.
    1. Go to WebLogic Remote Console. In the Edit Tree, select Services, and click JMS Servers.
    2. Click New.

    Use the following table to create the required JMS Servers. Assign to each JMS Server the previously created persistent stores:

    Note:

    The number in the names of the existing resources are assigned automatically by the Configuration Wizard during domain creation. Review the existing JMS server names and select a new and unique name for each new JMS server. To avoid naming conflicts and simplify the configuration, new resources are qualified with the product_scaled_N tag and are shown here as an example.
    Cluster to Scale Up | JMS Server Name | Persistent Store | Target

    WSM-PM_Cluster | Not applicable | Not applicable | Not applicable
    SOA_Cluster | UMSJMSServer_soa_scaled_3 | UMSJMSJDBCStore_soa_scaled_3 | WLS_SOA3 (migratable)
    SOA_Cluster | SOAJMSServer_soa_scaled_3 | SOAJMSJDBCStore_soa_scaled_3 | WLS_SOA3 (migratable)
    SOA_Cluster | BPMJMSServer_soa_scaled_3 | BPMJMSJDBCStore_soa_scaled_3 | WLS_SOA3 (migratable)
    ESS_Cluster | Not applicable | Not applicable | Not applicable
    OSB_Cluster | UMSJMSServer_osb_scaled_3 | UMSJMSJDBCStore_osb_scaled_3 | WLS_OSB3 (migratable)
    OSB_Cluster | wlsbJMSServer_osb_scaled_3 | OSBJMSJDBCStore_osb_scaled_3 | WLS_OSB3 (migratable)
    BAM_Cluster | UMSJMSServer_bam_scaled_3 | UMSJMSJDBCStore_bam_scaled_3 | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BamPersistenceJmsServer_bam_scaled_3 | BamPersistenceJmsJDBCStore_bam_scaled_3 | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BamReportCacheJmsServer_bam_scaled_3 | BamReportCacheJmsJDBCStore_bam_scaled_3 | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BamAlertEngineJmsServer_bam_scaled_3 | BamAlertEngineJmsJDBCStore_bam_scaled_3 | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BAMJMSServer_bam_scaled_3 | BamJmsJDBCStore_bam_scaled_3 | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BamCQServiceJmsServer_bam_scaled_3 | BamCQServiceJmsJDBCStore_bam_scaled_3 | WLS_BAM3*
    MFT_Cluster | MFTJMSServer_mft_scaled_3 | MFTJMSJDBCStore_mft_scaled_3 | WLS_MFT3 (migratable)

    Note:

    (*) BamCQServiceJmsServers host local queues for the BAM CQService (Continuous Query Engine) and are meant to be local. They are intentionally targeted to the WebLogic servers directly and not to the migratable targets.
  11. Update the SubDeployment Targets for JMS Modules (if applicable) to include the recently created JMS servers.
    1. Expand Services, select JMS Modules, and then click the JMS module. For example, BPMJMSModule.
    2. Expand the Sub Deployments and select the corresponding one to update the targets. Use the following table to identify the JMS modules to update, depending on the cluster that you are scaling up:

      Cluster to Scale Up | JMS Module to Update | JMS Server to Add to the Subdeployment

      WSM-PM_Cluster | Not applicable | Not applicable
      SOA_Cluster | UMSJMSSystemResource * | UMSJMSServer_soa_scaled_3
      SOA_Cluster | SOAJMSModule | SOAJMSServer_soa_scaled_3
      SOA_Cluster | BPMJMSModule | BPMJMSServer_soa_scaled_3
      ESS_Cluster | Not applicable | Not applicable
      OSB_Cluster | UMSJMSSystemResource * | UMSJMSServer_osb_scaled_3
      OSB_Cluster | jmsResources (scope Global) | wlsbJMSServer_osb_scaled_3
      BAM_Cluster | BamPersistenceJmsSystemModule | BamPersistenceJmsServer_bam_scaled_3
      BAM_Cluster | BamReportCacheJmsSystemModule | BamReportCacheJmsServer_bam_scaled_3
      BAM_Cluster | BamAlertEngineJmsSystemModule | BamAlertEngineJmsServer_bam_scaled_3
      BAM_Cluster | BAMJMSSystemResource | BAMJMSServer_bam_scaled_3
      BAM_Cluster | BamCQServiceJmsSystemModule | Not applicable (do not update existing subdeployments; a new subdeployment for the new server is created in the next steps)
      BAM_Cluster | UMSJMSSystemResource * | UMSJMSServer_bam_scaled_3
      MFT_Cluster | MFTJMSModule | MFTJMSServer_mft_scaled_3

      (*) Some modules (UMSJMSSystemResource) may be targeted to more than one cluster. Ensure that you update the appropriate subdeployment in each case.

    3. Add the corresponding JMS Server to the existing subdeployment.

      Note:

      The Subdeployment module name is a random name in the form of SOAJMSServerXXXXXX, UMSJMSServerXXXXXX, or BPMJMSServerXXXXXX, resulting from the Configuration Wizard JMS configuration for the first two servers (WLS_SOA1 and WLS_SOA2).
    4. Click Save and Commit changes in the Shopping Cart.
  12. If you are scaling up a BAM cluster, you need to create additional resources (a subdeployment and local queues) for the new server in the BamCQServiceJmsSystemModule module. Follow these steps to create them:
    1. Go to the WebLogic Remote Console. Click the Edit Tree and select Environment > Services.
    2. Click JMS System Resources and select BamCQServiceJmsSystemModule.
    3. Click Targets.
    4. Add WLS_BAM3 to the targets and click Save.
    5. Create a new Subdeployment in the BamCQServiceJmsSystemModule JMS Module with the name BamCQServiceAlertEngineSubdeployment_scaled_3. Then select BamCQServiceJmsServer_bam_scaled_3 as the target of this subdeployment.

      Table 21-14 Information to Create the Additional Subdeployment for Local Queues

      Subdeployment Name | Subdeployment Target

      BamCQServiceAlertEngineSubdeployment_scaled_3 | BamCQServiceJmsServer_bam_scaled_3

    6. Select Queues under the Module and click New.
    7. Name it BamCQServiceAlertEngineQueue_auto_3.
    8. Click Create.
    9. Click the newly created queue BamCQServiceAlertEngineQueue_auto_3.
    10. Select the General tab.
    11. Set Local JNDI Name to queue/oracle.beam.cqservice.mdbs.alertengine.
    12. Set Sub Deployment Name to BamCQServiceAlertEngineSubdeployment_scaled_3.
    13. Click Save and Commit changes in the Shopping Cart.
    14. Repeat these steps to create the other queue BamCQServiceReportCacheQueue_auto_3 with the information in Table 21-15.
    15. After you finish, you have the following new local queues.

      Table 21-15 Information to Create the Local Queues

      Name | Type | Local JNDI Name | Subdeployment

      BamCQServiceAlertEngineQueue_auto_3 | Queue | queue/oracle.beam.cqservice.mdbs.alertengine | BamCQServiceAlertEngineSubdeployment_scaled_3
      BamCQServiceReportCacheQueue_auto_3 | Queue | queue/oracle.beam.cqservice.mdbs.reportcache | BamCQServiceAlertEngineSubdeployment_scaled_3

  13. Start the new managed server.
  14. When scaling up the OSB_Cluster, restart the Admin Server to see the new server in the Service Bus Dashboard.

  15. When scaling up the MFT_Cluster, note that the default SFTP/FTP ports are used in the new server. If you are not using the defaults, follow the steps described in Configuring the SFTP Ports to configure the ports in the SFTP server. When scaling up, use different SFTP/FTP ports for the new server that do not conflict with the existing server on the same machine.

  16. Update the web tier configuration to include this new server:
    • If you are using OHS, there is no need to add the new server to OHS. By default, the Dynamic Server List is used, which means that the list of servers in the cluster is automatically updated when a new node becomes part of the cluster, so adding it to the list is not mandatory. The WebLogicCluster directive needs only a sufficient number of redundant server:port combinations to guarantee initial contact in case of a partial outage.

      If there are expected scenarios where the Oracle HTTP Server is restarted and only the new server would be up, update the WebLogicCluster directive to include the new server.

      <Location /soa-infra>
        WLSRequest ON
        WebLogicCluster SOAHOST1:7004,SOAHOST2:7004,SOAHOST1:7005
      </Location>
      

Verifying the Scale Up of Clusters

After scaling up and starting the server, proceed with the following verifications:
  1. Verify the correct routing to web applications.

    For example:

    1. Access the application on the load balancer:
      https://soa.example.com/soa-infra
    2. Check that there is activity in the new server also:

      In the Remote Console, go to Monitoring Tree and navigate to Deployments > Application Runtime Data > soa-infra.

    3. You can also verify that the web sessions are created in the new server:
      • In Remote Console, go to Monitoring Tree and navigate to Deployments > Application Runtime Data > soa-infra.

      • Go to Component Runtimes and click WLS_SOA3_/soa-infra.

      • Verify if there are sessions.

      You can use the sample URLs and the corresponding web applications that are identified in the following table, to check if the sessions are created in the new server for the cluster that you are scaling up:

      Cluster to Verify | Sample URL to Test | Web Application Module

      WSM-PM_Cluster | https://soainternal.example.com:444/wsm-pm | wsm-pm > wsm-pm
      SOA_Cluster | https://soa.example.com/soa-infra | soa-infra > soa-infra
      ESS_Cluster | https://soa.example.com/ESSHealthCheck | ESSHealthCheck
      OSB_Cluster | https://osb.example.com/sbinspection.wsil | Service Bus WSIL
      MFT_Cluster | https://mft.example.com/mftconsole | mftconsole
      BAM_Cluster | https://soa.example.com/bam/composer | BamComposer > /bam/composer

  2. Verify that JMS messages are being produced to and consumed from the destinations in the three servers.
    1. In Remote Console, go to Monitoring Tree.
    2. Navigate to Dashboards > JMS Destinations.
  3. Verify the service migration, as described in Validating Automatic Service Migration.

Scaling Down the Topology

This section describes how to scale down the topology.

To scale down the topology:
  1. To scale down the cluster without any JMS data loss, perform the steps described in Managing the JMS Messages in a SOA Server.

    After you complete the steps, continue with the scale-down procedure.

  2. Check the pending JTA transactions. Before you shut down the server, review whether there are any active JTA transactions in the server that you want to delete. In the WebLogic Remote Console Monitoring Tree, click Servers > <server name> > Services > Transactions > JTA Runtime.

    Note:

    If you have used the Shutdown Recovery policy for JTA, the transactions are recovered in another server after you shut down the server.

  3. Shut down the server by using the When work completes option.

    Note:

    This operation can take a long time if there are active HTTP sessions or long transactions in the server. For more information about graceful shutdown, see Using Server Life Cycle Commands in Administering Server Startup and Shutdown for Oracle WebLogic Server.

  4. Use the Oracle WebLogic Server Remote Console to delete the new server:
    1. Click Edit Tree.
    2. Go to Environment > Servers.
    3. Select the server that you want to delete.
    4. Click Delete.
    5. Click Save and Commit changes in the Shopping Cart.

    Note:

    If the migratable target was not deleted in the previous step, you get the following error message:

    
    The following failures occurred: --MigratableTargetMBean WLS_SOA3_soa-failure-recovery (migratable) does not have a preferred server set.
    Errors must be corrected before proceeding.
  5. Use the Oracle WebLogic Server Remote Console to update the subdeployment of each JMS Module that is used by the cluster that you are shrinking.

    Use the following table to identify the module for each cluster and perform this action for each module:

    Table 21-16 Identify the Module for Each Cluster

    Cluster to Scale In | JMS Module | JMS Server to Delete from the Subdeployment

    WSM-PM_Cluster | Not applicable | Not applicable
    SOA_Cluster | UMSJMSSystemResource | UMSJMSServer_soa_scaled_3
    SOA_Cluster | SOAJMSModule | SOAJMSServer_soa_scaled_3
    SOA_Cluster | BPMJMSModule | BPMJMSServer_soa_scaled_3
    ESS_Cluster | Not applicable | Not applicable
    OSB_Cluster | UMSJMSSystemResource | UMSJMSServer_osb_scaled_3
    OSB_Cluster | jmsResources (scope Global) | wlsbJMSServer_osb_scaled_3
    BAM_Cluster | BamPersistenceJmsSystemModule | BamPersistenceJmsServer_bam_scaled_3
    BAM_Cluster | BamReportCacheJmsSystemModule | BamReportCacheJmsServer_bam_scaled_3
    BAM_Cluster | BamAlertEngineJmsSystemModule | BamAlertEngineJmsServer_bam_scaled_3
    BAM_Cluster | BAMJMSSystemResource | BAMJMSServer_bam_scaled_3
    BAM_Cluster | BamCQServiceJmsSystemModule | Not applicable (existing subdeployments are not modified on scale-up)
    MFT_Cluster | MFTJMSModule | MFTJMSServer_mft_scaled_3

    1. Click Edit Tree.
    2. Go to Services > JMS System Resources.
    3. Click the JMS module.
    4. Click Sub Deployments.
    5. Select the Sub Deployment Module.
    6. Unselect the JMS server that was created for the deleted server.
    7. Click Save and Commit changes in the Shopping Cart.
  6. If you want to scale down a BAM cluster, use the Oracle WebLogic Remote Console to delete the local queues that were created for the new server:
    1. Click Edit Tree.
    2. Go to Services > JMS Modules.
    3. Click the JMS module.
    4. Click BamCQServiceJmsSystemModule.
    5. Delete the local queues that are created for the new server:
      BamCQServiceAlertEngineQueue_auto_3
      BamCQServiceReportCacheQueue_auto_3
    6. Delete the subdeployment created for the server:
      BamCQServiceAlertEngineSubdeployment_scaled_3
    7. Click Save and Commit changes in the Shopping Cart.
  7. Use the Oracle WebLogic Remote Console to delete the JMS servers:
    1. Click Edit Tree.
    2. Go to Services > JMS Servers.
    3. Select the JMS Servers that you created for the new server.
    4. Click Delete.
    5. Click Save and Commit changes in the Shopping Cart.
  8. Use the Oracle WebLogic Server Remote Console to delete the JMS persistent stores:
    1. Click Edit Tree.
    2. Go to Services > JDBC Stores.
    3. Select the JDBC Store that you created for the new server.
    4. Click Delete.
    5. Click Save and Commit changes in the Shopping Cart.
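
    Steps 7 and 8 can also be scripted. A minimal WLST sketch that removes one JMS server and its store follows (SOA examples); repeat it for the other resources that you created for the deleted server:

      edit()
      startEdit()
      # Remove the JMS server first, then the JDBC store it used
      cmo.destroyJMSServer(getMBean('/JMSServers/SOAJMSServer_soa_scaled_3'))
      cmo.destroyJDBCStore(getMBean('/JDBCStores/SOAJMSJDBCStore_soa_scaled_3'))
      save()
      activate()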
  9. Update the web tier configuration to remove references to the deleted server.
  10. If the machine that was hosting the deleted server is not used by any other servers, you can also delete it.
    1. Click Edit Tree.
    2. Go to Environment > Machines.
    3. Select the Machine that you created for the new server.
    4. Click Delete.
    5. Click Save and Commit changes in the Shopping Cart.