Understanding WebLogic Integration High Availability
A clustered WebLogic Integration application provides scalability and high availability. A highly available deployment has recovery provisions in the event of hardware or network failures, and provides for the transfer of control to a backup component when a failure occurs.
The following sections describe clustering and high availability for a WebLogic Integration deployment:
About WebLogic Integration High Availability
For a cluster to provide high availability, it must be able to recover from service failures. WebLogic Server supports failover for replicated HTTP session states, clustered objects, and services pinned to servers in a clustered environment. For information about how WebLogic Server handles such failover scenarios, see Communications in a Cluster in Using WebLogic Server Clusters, which is available at the following URL:
http://download.oracle.com/docs/cd/E13222_01/wls/docs70/cluster/index.html
Recommended Hardware and Software
The basic components of a highly available WebLogic Integration environment include the following:
When you run WebLogic Integration with persistence mode on, the in-memory, dynamic state of objects is saved to persistent storage in the WebLogic Integration repository, from which it can be retrieved if necessary. Persistence mode ensures that run-time state can be recovered in the event of an abnormal shutdown or crash.
A discussion of how to plan the network topology of your clustered system is beyond the scope of this section. For information about how to fully utilize load balancing and failover features for your Web application by organizing one or more WebLogic Server clusters in relation to load balancers, firewalls, and Web servers, see Cluster Architectures in Using WebLogic Server Clusters, which is available at the following URL:
http://download.oracle.com/docs/cd/E13222_01/wls/docs70/cluster/planning.html
What to Expect from WebLogic Integration Recovery
A highly available deployment has recovery provisions in the event of system failures. You can configure WebLogic Integration for automatic restart or manual migration:
Note: High availability is not supported for WebLogic Integration applications that are based on the XOCP business protocol; such applications are not recoverable.
When you configure WebLogic Integration appropriately, you can expect the following behavior from your deployment:
WebLogic Integration supports the ebXML Message Service Specification v1.0 and the RosettaNet Implementation Framework v1.1 and v2.0.
If your WebLogic Integration application includes RosettaNet workflows developed in previous versions of WebLogic Integration, you must make changes to those workflows before running your application on WebLogic Integration 7.0. For information about migrating your workflows, see Migrating WebLogic Integration 2.1 to WebLogic Integration 7.0 in the BEA WebLogic Integration Migration Guide.
Configuring WebLogic Integration for Automatic Restart
Whether WebLogic Integration is deployed in a clustered environment or a nonclustered environment, you can configure your system to automatically restart servers that have shut down because of a system crash, hardware reboot, server failure, and so on.
Note: The procedures in this section address a clustered environment, but you can follow the same procedure to configure a nonclustered environment, that is, one in which you deploy an administration server and a managed server.
Node Manager
The procedures in this section describe how to configure your system to start a managed server when the Node Manager is running on the machine on which the managed server is located. The Node Manager is a Java program provided with WebLogic Server that enables you to perform the following tasks for managed servers:
For information about the Node Manager, see Managing Server Availability with Node Manager in Creating and Configuring WebLogic Server Domains, which is available at the following URL:
http://download.oracle.com/docs/cd/E13222_01/wls/docs70/admin_domain/index.html
Complete the following procedures to configure your WebLogic Integration cluster for automatic restart:
Step 1. Configure Managed Servers for Remote Start
You must first configure each managed server in your cluster so that it can be started from a remote server.
To configure managed servers for remote start, complete the following steps:
http://download.oracle.com/docs/cd/E13222_01/wls/docs70/domain_server_config_server-start.html
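The remote start settings that you enter for a managed server are recorded in the domain's config.xml file. The following sketch suggests roughly what such an entry might look like; the ServerStart element shown here, and all of its attribute values, are illustrative placeholders rather than values taken from this guide, so verify the exact names against your own configuration:

<!-- Hypothetical remote start entry for a managed server; all paths and values are placeholders. -->
<Server Name="MyServer-1" ...>
  <ServerStart
      JavaHome="/opt/bea/jdk131"
      BeaHome="/opt/bea"
      RootDirectory="/opt/bea/user_projects/mydomain"
      ClassPath="/opt/bea/weblogic700/server/lib/weblogic.jar"
      Arguments="-Xms64m -Xmx256m"
      Name="MyServer-1"/>
</Server>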
Step 2. Configure SSL for Your Administration Server
Because the administration server communicates with the Node Manager using SSL, you must configure SSL for your administration server. Complete the following steps:
-Dweblogic.security.SSL.trustedCAKeyStore=WL_HOME\lib\cacerts
http://download.oracle.com/docs/cd/E13222_01/wls/docs70/ConsoleHelp/domain_server_connections_ssl.html
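For example, if you start the administration server directly with the java command, the trusted CA keystore argument shown above is simply appended to that command line. The following is a minimal sketch, not a complete start command; the heap settings and server name are placeholders:

java -Xms64m -Xmx256m
    -Dweblogic.Name=MyAdminServer
    -Dweblogic.security.SSL.trustedCAKeyStore=WL_HOME\lib\cacerts
    weblogic.Server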
Step 3. Configure the Node Manager
To configure the Node Manager for a managed server, you must use the WebLogic Server Administration Console to create a machine, specify attributes for the Node Manager on that machine, and deploy the managed server that you configured for remote start on that machine. Specifically, you must complete the following steps:
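When you create the machine and define its Node Manager attributes in the Administration Console, the settings are written to config.xml. The following sketch suggests the general shape of the result; the machine name, listen address, and listen port are placeholders (5555 is only the conventional Node Manager default):

<!-- Hypothetical machine and Node Manager entry; names, address, and port are placeholders. -->
<Machine Name="MyMachine-1">
  <NodeManager ListenAddress="managed-host-1" ListenPort="5555" Name="MyMachine-1"/>
</Machine>
<Server Name="MyServer-1" Machine="MyMachine-1" ... />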
Step 4. Configure Self-Health Monitoring
This step describes how to configure the frequency of your managed server's automated health checks, and the frequency with which the Node Manager checks the server's health. You can also specify whether the Node Manager automatically stops and restarts the server if the server reaches a failed health state.
Complete the following procedure for each managed server:
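These health-monitoring settings also appear as attributes of the Server element in config.xml. The following sketch is illustrative only; the attribute names are the standard WebLogic Server health-monitoring attributes, and the values shown are placeholders rather than recommendations:

<!-- Hypothetical health-monitoring settings for a managed server; values are placeholders. -->
<Server Name="MyServer-1"
    HealthCheckIntervalSeconds="180"
    HealthCheckTimeoutSeconds="60"
    AutoRestart="true"
    AutoKillIfFailed="true"
    RestartIntervalSeconds="3600"
    RestartMax="2"
    ... />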
Step 5. Start the Node Manager
You can start the Node Manager manually, by running the java command from an operating system prompt, or automatically, by running a script.
Syntax for the Start Node Manager Command
The java command syntax for starting the Node Manager is as follows:
java [java_property=value ...] -D[nodemanager_property=value]
-D[server_property=value] weblogic.nodemanager.NodeManager
Caution: You must start the Node Manager from the same directory in which you start the managed server manually.
In the preceding java command line:
Note: To avoid running out of memory, always specify a minimum heap size of 32 megabytes (-Xms32m) for the Node Manager.
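For example, the following command line starts the Node Manager with the recommended minimum heap size. The listen address and port shown are placeholders (5555 is only the conventional Node Manager default); the full set of Node Manager and server properties is described in the reference cited below:

java -Xms32m -Xmx200m
    -Dweblogic.nodemanager.listenAddress=managed-host-1
    -Dweblogic.nodemanager.listenPort=5555
    weblogic.nodemanager.NodeManager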
The information in the preceding tables, along with details about configuring and running the Node Manager, is available in "Starting Node Manager" in Managing Server Availability with Node Manager in Creating and Configuring WebLogic Server Domains, which is available at the following URL:
http://download.oracle.com/docs/cd/E13222_01/wls/docs70/admin_domain/index.html
Starting the Node Manager When a Machine Is Booted
In a production environment, the Node Manager should start automatically when a machine is booted. You can ensure that it does so by creating a startup script for UNIX systems, or by setting up the Node Manager as a Windows service for Windows systems. For information about how to perform these tasks, see "Starting Node Manager" in Managing Server Availability with Node Manager in Creating and Configuring WebLogic Server Domains, which is available at the following URL:
http://download.oracle.com/docs/cd/E13222_01/wls/docs70/admin_domain/index.html
Configuring WebLogic Integration for Migration from Failed to Healthy Node
When a managed server fails and is deemed not to be usable, you can migrate the services from the failed managed server to a healthy node in the cluster. Complete the following procedures to configure your system for a manual migration:
For instructions about how to perform the migration when a node in your cluster fails, see Manual Migration of WebLogic Integration from Failed to Healthy Node.
Step 1. Configure Your Cluster
Make sure that your WebLogic Integration resources are distributed appropriately and your clustered domain is configured as described in Configuring a Clustered Deployment.
Step 2. Configure Migratable Targets for JMS Servers and JTA Recovery Service
To achieve high availability for your WebLogic Integration deployment, you must configure JTA and JMS servers for failover; the process involves configuring migratable targets for JMS servers and the JTA Recovery Service. You can do this by using the WebLogic Server Administration Console or by editing your config.xml file appropriately.
Complete the following procedure:
Note: JTA and JMS service migration is a two-step process. It is recommended that when you migrate WebLogic Integration resources, you first migrate JTA services, and then migrate JMS services. For more information, see Manual Migration of WebLogic Integration from Failed to Healthy Node.
For more information about configuring migratable targets, see:
Note: Online Help is accessible from the Administration Console, and also at the following URL:
http://download.oracle.com/docs/cd/E13222_01/wls/docs70/ConsoleHelp/index.html
The following listing, an excerpt from a sample config.xml file, demonstrates the configuration of migratable targets for both JMS servers and the JTA Recovery Service in a clustered WebLogic Integration environment. In this example configuration, the cluster contains two managed servers: MyServer-1 and MyServer-2.
Listing 4-1 Configuration for Migratable Targets
<JMSServer Name="WLCJMSServer-MyServer-1"
Store="JMSWLCStore-MyServer-1" Targets="MyServer-1 (migratable)"
TemporaryTemplate="TemporaryTemplate">
<JMSQueue JNDIName="com.bea.b2b.OutboundQueue-MyServer-1"
Name="B2bOutboundQueue-MyServer-1"/>
<JMSQueue ...
:
</JMSServer>
<JMSServer Name="WLCJMSServer-MyServer-2"
Store="JMSWLCStore-MyServer-2" Targets="MyServer-2 (migratable)"
TemporaryTemplate="TemporaryTemplate">
<JMSQueue JNDIName="com.bea.b2b.OutboundQueue-MyServer-2"
Name="B2bOutboundQueue-MyServer-2"/>
<JMSQueue ...
:
</JMSServer>
...
<MigratableTarget Cluster="MyCluster"
ConstrainedCandidateServers="MyServer-1,MyServer-2"
Name="MyServer-1 (migratable)"
Notes="This is a system generated default migratable target for a server.
Do not delete manually."
UserPreferredServer="MyServer-1"/>
<MigratableTarget Cluster="MyCluster"
ConstrainedCandidateServers="MyServer-1,MyServer-2"
Name="MyServer-2 (migratable)"
Notes="This is a system generated default migratable target for a server.
Do not delete manually."
UserPreferredServer="MyServer-2"/>
...
<Server Cluster="MyCluster" JTARecoveryService="MyServer-1"
ListenAddress="localhost" ListenPort="7901" Name="MyServer-1"
ServerVersion="7.0.0.0">
<COM Name="MyServer-1"/><ExecuteQueue Name="default" ThreadCount="15"/>
<IIOP Name="MyServer-1"/>
<JTAMigratableTarget Cluster="MyCluster"
ConstrainedCandidateServers="MyServer-1,MyServer-2" Name="MyServer-1"
UserPreferredServer="MyServer-1"/>
</Server>
<Server Cluster="MyCluster" JTARecoveryService="MyServer-2"
ListenAddress="localhost" ListenPort="7902" Name="MyServer-2"
ServerVersion="7.0.0.0">
<COM Name="MyServer-2"/><ExecuteQueue Name="default" ThreadCount="15"/>
<IIOP Name="MyServer-2"/>
<JTAMigratableTarget Cluster="MyCluster"
ConstrainedCandidateServers="MyServer-1,MyServer-2" Name="MyServer-2"
UserPreferredServer="MyServer-2"/>
</Server>
Note the following XML elements in the preceding listing:
The MigratableTarget element also contains a comma-separated list of servers in its ConstrainedCandidateServers attribute. The servers in this list are those that you have designated as capable of acting as JMS server backups. Note that you must include the UserPreferredServer in the list of ConstrainedCandidateServers; the WebLogic Server Administration Console enforces this rule.
Failover and Recovery
This section describes how WebLogic Integration failover and recovery works in specific scenarios. It contains the following topics:
Backup and Failover for an Administration Server
To provide for quick failover in case of an administration server crash or other failure, you may want to create another instance of the administration server, on a different machine, that is ready to use if the original server fails.
Because the administration server uses the configuration file (config.xml), security files, and application files to administer the domain, we recommend that you keep, at a minimum, an archived copy of these files. If the administration server fails, you can then safely restart it on another machine without interrupting the operation of the managed servers.
When the administration server for a cluster crashes, the managed servers continue serving requests. However, you cannot change configuration for the cluster or perform new deployment activities until the administration server is recovered. For example, if the administration server for a cluster is not running, you cannot add new nodes to the cluster, deploy new application views, or undeploy the connection factories associated with those application views.
The WebLogic Integration B2B Console is deployed only on the administration server; it is never deployed on the managed servers in a cluster. Therefore, B2B integration management and monitoring functions are unavailable when the administration server is down. For example, you cannot add, delete, or modify trading partner information until the administration server is recovered.
If the managed servers are running but the administration server is stopped, you can recover management of the domain without the need to stop and restart the managed servers.
For instructions to restart your administration server when managed servers are running, see Starting and Stopping WebLogic Servers in the BEA WebLogic Server Administration Guide, which is available at the following URL:
http://download.oracle.com/docs/cd/E13222_01/wls/docs70/adminguide/startstop.html
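As a sketch, restarting the administration server on a backup machine amounts to pointing a standard server start command at the directory that contains your archived config.xml and security files. The properties shown below are standard WebLogic Server start properties, but the server name, directory, and credentials are placeholders:

java -Xms64m -Xmx256m
    -Dweblogic.Name=MyAdminServer
    -Dweblogic.RootDirectory=/backup/mydomain
    -Dweblogic.management.username=system
    -Dweblogic.management.password=password
    weblogic.Server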
Manual Migration of WebLogic Integration from Failed to Healthy Node
This section describes a controlled failover; that is, the source and destination servers are not serving any requests when you migrate services from a failed node to a healthy node in your cluster.
Before attempting to migrate WebLogic Integration from a failed node to a healthy node:
Note: JTA and JMS service migration is a two-step process. When you migrate WebLogic Integration resources, you should first migrate JTA services, and then migrate JMS services.
You can migrate WebLogic Integration using one of the following methods:
Using the weblogic.Admin Command-Line Utility
Use the following command line (weblogic.Admin with the MIGRATE command) to migrate a JMS service or a JTA service to a targeted server within the cluster:
java weblogic.Admin [-url http://hostname:port]
[-username username]
[-password password]
MIGRATE [-jta] -migratabletarget (migratabletarget_name|servername)
-destination servername [-sourcedown] [-destinationdown]
In the previous command line:
Warning: As mentioned previously in this section, it is important to ensure that your source server is down when you invoke weblogic.Admin with the MIGRATE command.
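For example, the following commands use the server names from Listing 4-1 to migrate services from MyServer-1 (the failed server) to MyServer-2. The administration server URL and credentials are placeholders. The first command migrates the JTA service and the second migrates the JMS services, in keeping with the recommended order:

java weblogic.Admin -url http://adminhost:7001 -username system -password password
    MIGRATE -jta -migratabletarget MyServer-1 -destination MyServer-2 -sourcedown

java weblogic.Admin -url http://adminhost:7001 -username system -password password
    MIGRATE -migratabletarget "MyServer-1 (migratable)" -destination MyServer-2 -sourcedown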
For more information about the weblogic.Admin command-line tool, see WebLogic Server Command-Line Interface Reference in BEA WebLogic Server Administration Guide, which is available at the following URL:
http://download.oracle.com/docs/cd/E13222_01/wls/docs70/adminguide/cli.html
Using the WebLogic Server Administration Console
As an alternative to the weblogic.Admin command-line tool, you can use the WebLogic Server Administration Console to migrate a JTA service or a JMS service to a targeted server within the cluster:
Services running on the server you selected in step 2 are migrated to the destination server you selected.
The services that are migrated depend on the selection you made in step 3. That is, if you selected the JTA Migrate tab, only the JTA service is migrated to the selected server. If you selected the Migrate tab, only JMS services are migrated to the selected server.
For more information about how to use the Administration Console to migrate JMS or JTA services to a targeted server within the cluster, see Migrating Services to a New Server in "Servers" in WebLogic Server Administration Console Online Help.
Recovering a Database
WebLogic Integration does not attempt to recover a crashed database. In the event of a database crash or database shutdown, it may be necessary to restart WebLogic Integration.
For example, if WebLogic Integration and the database are running on the same machine and the machine is unplugged, you should take steps to recover the database before attempting to recover WebLogic Integration.
Recovering JMS Stores
There is no migration of JMS stores after a server crash. WebLogic Integration uses JDBC for its JMS stores; that is, it uses JDBC to access JMS JDBC stores, which can reside on another server. WebLogic Integration uses the same database for all nodes in a cluster. If you plan to use separate database instances for each node in your cluster, you should take advantage of any high availability or failover solutions offered by your database vendor. For example, you could use a warm database standby in the event of a database crash.
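As an illustration of this arrangement, a JMS JDBC store is declared in config.xml with a JMSJDBCStore element that references a JDBC connection pool, and each JMS server points to its own store, as in Listing 4-1. In the following sketch, the connection pool name and prefix are placeholders:

<!-- Hypothetical JMS JDBC store wiring; pool name and prefix are placeholders. -->
<JDBCConnectionPool Name="wliPool" ... />
<JMSJDBCStore ConnectionPool="wliPool" Name="JMSWLCStore-MyServer-1" PrefixName="MS1"/>
<JMSServer Name="WLCJMSServer-MyServer-1" Store="JMSWLCStore-MyServer-1" ... />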