19 Common Configuration and Management Tasks for an Enterprise Deployment
This section describes the configuration and management tasks that you may need to perform on an enterprise deployment environment.
- Configuration and Management Tasks for All Enterprise Deployments
These are some of the typical configuration and management tasks that you are likely to need to perform on an Oracle Fusion Middleware enterprise deployment.
- Configuration and Management Tasks for an Oracle SOA Suite Enterprise Deployment
These are some of the key configuration and management tasks that you are likely to need to perform on an Oracle SOA Suite enterprise deployment.
Configuration and Management Tasks for All Enterprise Deployments
These are some of the typical configuration and management tasks that you are likely to need to perform on an Oracle Fusion Middleware enterprise deployment.
- Verifying Appropriate Sizing and Configuration for the WLSRuntimeSchemaDataSource
In Oracle FMW 14.1.2, WLSRuntimeSchemaDataSource is the common data source that is reserved for use by the FMW components for JMS JDBC stores, JTA JDBC stores, and leasing services. WLSRuntimeSchemaDataSource is used to avoid contention in critical WLS infrastructure services and to guard against deadlocks.
- Verifying Manual Failover of the Administration Server
If a host computer fails, you can fail over the Administration Server to another host. The following sections detail the steps to verify the failover and failback of the Administration Server between SOAHOST1 and SOAHOST2.
- Modifying the Upload and Stage Directories to an Absolute Path in an Enterprise Deployment
After you configure the domain and unpack it to the Managed Server domain directories on all the hosts, verify and update the upload and stage directories for the Managed Servers in the new clusters. Also, update the upload directory for the AdminServer to use the same absolute path instead of a relative path; otherwise, deployment issues can occur.
- Setting the Front End Host and Port for a WebLogic Cluster
You must set the front-end HTTP host and port for the Oracle WebLogic Server cluster that hosts the Oracle SOA Suite servers. You can specify these values in the Configuration Wizard while you are specifying the properties of the domain. However, when you add a SOA cluster as part of an Oracle SOA Suite enterprise deployment, Oracle recommends that you perform this task after you verify the SOA Managed Servers.
- About Using Third Party SSL Certificates in the WebLogic and Oracle HTTP Servers
This Oracle SOA Suite Enterprise Deployment Topology uses SSL all the way from the external clients to the back-end WebLogic Servers. The previous chapters in this guide provided scripts (generate_perdomainCACERTS.sh and generate_perdomainCACERTS-ohs.sh) to generate the required SSL certificates for the different FMW components.
- Enabling SSL Communication Between the Middle Tier and SSL Endpoints
It is important to understand how to enable SSL communication between the middle tier and the front-end hardware load balancer or any other external SSL endpoints that need to be accessed by the SOA Suite WebLogic servers, for example, for external web service invocations and callbacks.
- Configuring Roles for Administration of an Enterprise Deployment
To manage each product effectively within a single enterprise deployment domain, you must understand which products require specific administration roles or groups, and how to add a product-specific administration role to the Enterprise Deployment Administration group.
- Using Persistent Stores for TLOGs and JMS in an Enterprise Deployment
The Oracle WebLogic persistent store framework provides a built-in, high-performance storage solution for WebLogic Server subsystems and services that require persistence.
- About JDBC Persistent Stores for Web Services
By default, web services use the WebLogic Server default persistent store for persistence. This store provides a high-performance storage solution for web services.
- Best Configuration Practices When Using RAC and GridLink Data Sources
Oracle recommends that you use GridLink data sources when you use an Oracle RAC database. If you follow the steps described in this Enterprise Deployment Guide, the data sources are configured as GridLink.
- Using TNS Alias in Connect Strings
Instead of specifying long database connection strings in the JDBC connection pool of a data source, you can create an alias that maps to the URL information. The connection string information is stored in a tnsnames.ora file with an associated alias name. This alias is then used in the connect string of the connection pool.
- Performing Backups and Recoveries for an Enterprise Deployment
Oracle recommends that you follow these guidelines to make sure that you back up the necessary directories and configuration data for an Oracle SOA Suite enterprise deployment.
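As an illustration of the TNS alias approach mentioned above, here is a hedged sketch; the alias name, host, port, service name, and TNS_ADMIN path are assumptions for demonstration, not values from this guide:

```shell
# Write a sample tnsnames.ora that maps a short alias to the full descriptor
cat > /tmp/tnsnames.ora <<'EOF'
SOAEDG =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = db-scan.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = soaedg.example.com))
  )
EOF

# The data source connect string can then reference the alias instead of the
# long descriptor (TNS_ADMIN points to the directory holding tnsnames.ora)
echo 'jdbc:oracle:thin:@SOAEDG?TNS_ADMIN=/tmp'
```

With this in place, all data sources in the domain can share the same alias definition, and relocating the database requires updating only tnsnames.ora rather than every connection pool.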
Verifying Appropriate Sizing and Configuration for the WLSRuntimeSchemaDataSource
In Oracle FMW 14.1.2, WLSRuntimeSchemaDataSource is the common data source that is reserved for use by the FMW components for JMS JDBC stores, JTA JDBC stores, and leasing services. WLSRuntimeSchemaDataSource is used to avoid contention in critical WLS infrastructure services and to guard against deadlocks.
To reduce the WLSRuntimeSchemaDataSource connection usage, you can change the connection caching policy of the JMS JDBC and TLOG JDBC stores from Default to Minimal by using the respective connection caching policy settings. When you need to reduce connections in the back-end database system, Oracle recommends that you set the caching policy to Minimal. Avoid the caching policy None because it can degrade performance. For detailed tuning advice about the connections that JDBC stores use, see Configuring a JDBC Store Connection Caching Policy in Administering the WebLogic Persistent Store.
The default WLSRuntimeSchemaDataSource connection pool size is 75 (the size is doubled in the case of a GridLink data source). You can tune this size to a higher value depending on the size of the different FMW clusters and the candidates that are configured for migration. For example, consider a typical SOA EDG deployment with the default number of worker threads per store. If more than 25 JDBC stores or TLOG-in-DB instances (or both) can fail over to the same WebLogic server, and the connection caching policy is not changed from Default to Minimal, connection contention issues can arise. In these cases, it becomes necessary to increase the default WLSRuntimeSchemaDataSource pool size (maximum capacity): each JMS store uses a minimum of two connections, and leasing and JTA also compete for the pool.
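A back-of-the-envelope check of the sizing guidance above can be scripted. This is only a sketch: the headroom constant for JTA and leasing is an assumption for illustration, not an Oracle-documented value.

```shell
# Estimate the WLSRuntimeSchemaDataSource capacity needed on one server:
# each JMS JDBC or TLOG-in-DB store uses a minimum of 2 connections, and
# JTA and leasing also draw from the same pool (headroom is assumed).
estimate_pool_capacity() {
  stores=$1                 # worst-case stores that can migrate to one server
  jta_and_leasing=10        # assumed headroom for JTA and leasing services
  echo $(( stores * 2 + jta_and_leasing ))
}

estimate_pool_capacity 25   # prints 60: within the default capacity of 75
estimate_pool_capacity 40   # prints 90: raise the pool's maximum capacity
```

Compare the estimate against the configured maximum capacity of the data source and raise the maximum capacity when the estimate exceeds it.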
Verifying Manual Failover of the Administration Server
If a host computer fails, you can fail over the Administration Server to another host. The following sections detail the steps to verify the failover and failback of the Administration Server between SOAHOST1 and SOAHOST2.
Assumptions:
-
The Administration Server is configured to listen on ADMINVHN, or on any custom virtual host that maps to a floating IP/VIP. It should not listen on ANY (a blank listen address), on localhost, or on any host name that uniquely identifies a single node.
For more information about the ADMINVHN virtual IP address, see Reserving the Required IP Addresses for an Enterprise Deployment.
-
These procedures assume that the Administration Server domain home (ASERVER_HOME) has been mounted on both host computers. This ensures that the Administration Server domain configuration files and the persistent stores are saved on the shared storage device.
-
The Administration Server is failed over from SOAHOST1 to SOAHOST2, and the two nodes have these IPs:
-
SOAHOST1: 100.200.140.165
-
SOAHOST2: 100.200.140.205
-
ADMINVHN: 100.200.140.206. This is the virtual IP where the Administration Server runs. It is assigned to a virtual sub-interface (for example, eth0:1), so that it is available on either SOAHOST1 or SOAHOST2.
-
Oracle WebLogic Server and Oracle Fusion Middleware components have been installed in SOAHOST2 as described in the specific configuration chapters in this guide.
Specifically, both host computers use the exact same path to reference the binary files in the Oracle home.
- Failing Over the Administration Server When Using a Per Host Node Manager
The following procedure shows how to fail over the Administration Server to a different node (SOAHOST2). Note that even after failover, the Administration Server will still use the same Oracle WebLogic Server machine (which is a logical machine, not a physical machine).
- Validating Access to the Administration Server on SOAHOST2 Through Load Balancer
If you have configured the web tier to access AdminServer, it is important to verify that you can access the Administration Server after you perform a manual failover of the Administration Server, by using the standard administration URLs.
- Failing the Administration Server Back to SOAHOST1 When Using a Per Host Node Manager
After you have tested a manual Administration Server failover, and after you have validated that you can access the administration URLs after the failover, you can then migrate the Administration Server back to its original host.
Failing Over the Administration Server When Using a Per Host Node Manager
The following procedure shows how to fail over the Administration Server to a different node (SOAHOST2). Note that even after failover, the Administration Server will still use the same Oracle WebLogic Server machine (which is a logical machine, not a physical machine).
This procedure assumes that you have configured a per host Node Manager for the enterprise topology, as described in Creating a Per Host Node Manager Configuration. For more information, see About the Node Manager Configuration in a Typical Enterprise Deployment.
To fail over the Administration Server to a different host:
-
Stop the Administration Server on SOAHOST1.
-
Stop the Node Manager on SOAHOST1.
You can use the stopNodeManager.sh script that was created in NM_HOME.
-
Migrate the ADMINVHN virtual IP address to the second host:
-
Run the following command as root on SOAHOST1 (where X is the current interface used by ADMINVHN) to check the virtual IP address and its CIDR:
ip addr show dev ethX
For example:
ip addr show dev eth0
-
Run the following command as root on SOAHOST1 (where X is the current interface used by ADMINVHN):
ip addr del ADMINVHN/CIDR dev ethX
For example:
ip addr del 100.200.140.206/24 dev eth0
-
Run the following command as root on SOAHOST2:
ip addr add ADMINVHN/CIDR dev ethX label ethX:Y
For example:
ip addr add 100.200.140.206/24 dev eth0 label eth0:1
Note:
Ensure that the CIDR and interface to be used match the available network configuration in SOAHOST2.
-
Update the routing tables by using arping. For example:
arping -b -A -c 3 -I eth0 100.200.140.206
-
From SOAHOST1, change directory to the Node Manager home directory:
cd $NM_HOME
-
Edit the nodemanager.domains file and remove the reference to ASERVER_HOME.
The resulting entry in the SOAHOST1 nodemanager.domains file should appear as follows:
soaedg_domain=MSERVER_HOME;
-
From SOAHOST2, change directory to the Node Manager home directory:
cd $NM_HOME
-
Edit the nodemanager.domains file and add the reference to ASERVER_HOME.
The resulting entry in the SOAHOST2 nodemanager.domains file should appear as follows:
soaedg_domain=MSERVER_HOME;ASERVER_HOME
-
Start the Node Manager on SOAHOST1 and restart the Node Manager on SOAHOST2.
-
Start the Administration Server on SOAHOST2.
-
Check that you can access the Administration Server on SOAHOST2 and verify the status of components in Fusion Middleware Control using the following URL:
https://ADMINVHN:9002/em
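The nodemanager.domains edits in the procedure above can also be scripted. The following sketch exercises the edit against a scratch copy; on the real hosts you would target the file in $NM_HOME, and the entry shown uses the placeholder values from this guide rather than real paths:

```shell
# Scratch copy standing in for $NM_HOME/nodemanager.domains
echo 'soaedg_domain=MSERVER_HOME;ASERVER_HOME' > /tmp/nodemanager.domains

# SOAHOST1: remove the ASERVER_HOME reference after failing over
sed -i 's|;ASERVER_HOME$|;|' /tmp/nodemanager.domains
cat /tmp/nodemanager.domains   # soaedg_domain=MSERVER_HOME;

# SOAHOST2: add the ASERVER_HOME reference
sed -i 's|;$|;ASERVER_HOME|' /tmp/nodemanager.domains
cat /tmp/nodemanager.domains   # soaedg_domain=MSERVER_HOME;ASERVER_HOME
```

Remember that the Node Manager on each host must be restarted for it to pick up the edited file, as described in the procedure.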
Validating Access to the Administration Server on SOAHOST2 Through Load Balancer
If you have configured the web tier to access AdminServer, it is important to verify that you can access the Administration Server after you perform a manual failover of the Administration Server, by using the standard administration URLs.
From the load balancer, access the following URLs to ensure that you can access the Administration Server when it is running on SOAHOST2:
-
https://admin.example.com:445/em
where 445 is the port that you use to access Fusion Middleware Control through the load balancer.
This URL should display Oracle Enterprise Manager Fusion Middleware Control.
- Verify that you can log into the WebLogic Remote Console through the provider you defined for this domain.
Failing the Administration Server Back to SOAHOST1 When Using a Per Host Node Manager
After you have tested a manual Administration Server failover, and after you have validated that you can access the administration URLs after the failover, you can then migrate the Administration Server back to its original host.
This procedure assumes that you have configured a per host Node Manager for the enterprise topology, as described in Creating a Per Host Node Manager Configuration. For more information, see About the Node Manager Configuration in a Typical Enterprise Deployment.
Modifying the Upload and Stage Directories to an Absolute Path in an Enterprise Deployment
After you configure the domain and unpack it to the Managed Server domain directories on all the hosts, verify and update the upload and stage directories for the Managed Servers in the new clusters. Also, update the upload directory for the AdminServer to use the same absolute path instead of a relative path; otherwise, deployment issues can occur.
This step is necessary to avoid potential issues when you perform remote deployments and for deployments that require the stage mode.
To update the directory paths for the Deployment Stage and Upload locations, complete the following steps:
-
Log into the WebLogic Remote Console to access the provider of this domain.
-
Open the Edit Tree.
-
Expand Environment.
-
Expand Servers.
-
Click the name of the Managed Server that you want to edit. Perform the following steps for each of the Managed Servers:
- Click the Advanced tab.
- Click the Deployment tab.
- Verify that the Staging Directory Name is set to the following:
MSERVER_HOME/servers/server_name/stage
Replace MSERVER_HOME with the full path for the MSERVER_HOME directory. Replace server_name with the name of the Managed Server that you are editing.
- Update the Upload Directory Name to the following value:
ASERVER_HOME/servers/AdminServer/upload
Replace ASERVER_HOME with the directory path for the ASERVER_HOME directory.
- Click Save.
- Return to the Summary of Servers screen.
Repeat the same steps for each of the new managed servers.
-
Navigate to and update the Upload Directory Name value for the AdminServer:
- Navigate to Servers and select the AdminServer.
- Click the Advanced tab.
- Click the Deployment tab.
- Verify that the Staging Directory Name is set to the following absolute path:
ASERVER_HOME/servers/AdminServer/stage
- Update the Upload Directory Name to the following absolute path:
ASERVER_HOME/servers/AdminServer/upload
Replace ASERVER_HOME with the directory path for the ASERVER_HOME directory.
- Click Save.
-
When you have modified all the appropriate objects, commit the changes in the shopping cart.
-
Restart all the servers for the changes to take effect. If you are following the EDG steps in order and are not going to perform any deployments immediately, you can wait until the next restart.
Note:
If you continue directly with further domain configurations, a restart to enable the stage and upload directory changes is not strictly necessary at this time.
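After you commit the changes, you can sanity-check the result from the command line. This is a sketch: the grep below runs against a sample fragment created for demonstration, while in a real environment you would run it against ASERVER_HOME/config/config.xml (the sample paths are assumptions):

```shell
# Sample of how the updated entries appear in the domain's config.xml
cat > /tmp/config-fragment.xml <<'EOF'
<server>
  <name>WLS_SOA1</name>
  <staging-directory-name>/u02/oracle/config/domains/soaedg_domain/servers/WLS_SOA1/stage</staging-directory-name>
  <upload-directory-name>/u01/oracle/config/domains/soaedg_domain/servers/AdminServer/upload</upload-directory-name>
</server>
EOF

# Both directory names should now be absolute paths (they start with /)
grep -cE '<(staging|upload)-directory-name>/' /tmp/config-fragment.xml   # prints 2
```

If either count comes back short, one of the servers still has a relative stage or upload directory configured.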
Setting the Front End Host and Port for a WebLogic Cluster
You must set the front-end HTTP host and port for the Oracle WebLogic Server cluster that hosts the Oracle SOA Suite servers. You can specify these values in the Configuration Wizard while you are specifying the properties of the domain. However, when you add a SOA Cluster as part of an Oracle SOA Suite enterprise deployment, Oracle recommends that you perform this task after you verify the SOA Managed Servers.
To set the frontend host and port from the WebLogic Remote Console:
About Using Third Party SSL Certificates in the WebLogic and Oracle HTTP Servers
This Oracle SOA Suite Enterprise Deployment Topology uses SSL all the way from the external clients to the back-end WebLogic Servers. The previous chapters in this guide provided scripts (generate_perdomainCACERTS.sh and generate_perdomainCACERTS-ohs.sh) to generate the required SSL certificates for the different FMW components.
These scripts generate the different SSL certificates by using the per-domain certification authority in the WebLogic domain. These scripts also add the front end's SSL certificates to the trust keystore. However, in a production environment, you may want to use your own SSL certificates, issued by your own certificate authority or by a third-party certificate authority. This section provides guidelines to configure the EDG system with this type of SSL certificate.
Using Third Party SSL Certificates in WebLogic Servers
Here are some guidelines about using custom or third party SSL certificates with the WebLogic Servers:
-
The SSL certificate used by each WebLogic server (the identity, or private key) must be issued to that server's listen address. For example, if the server WLS_PROD1 listens on apphost1.example.com, the CN of its SSL certificate must be that hostname or a wildcard name that is valid for that hostname.
-
Oracle recommends using an identity keystore that is shared by all the servers in the same domain, into which you import all the private keys used by the different WebLogic servers, each mapped to a different alias.
-
Oracle recommends using a trust keystore shared by all the servers in the domain. You must import the Certificate Authority’s certificate (and intermediate and root CA if needed) into this trust keystore.
-
You must specify the identity keystore, alias of the identity key and the trust keystore for each WebLogic server in the WebLogic domain’s configuration. Use WebLogic’s Remote Console to configure these SSL settings for each server.
-
Start the WebLogic servers by using the appropriate Java options to point to the trust keystore, so that they can communicate with external SSL endpoints whose certificates were issued by the certificate authorities included in that trust store.
The following commands are useful to manage SSL certificates in WebLogic.
-
Command to import an SSL certificate (a private key) into the identity keystore:
Syntax
WL_HOME/server/bin/setWLSEnv.sh
java utils.ImportPrivateKey -certfile cert_file -keyfile private_key_file [-keyfilepass private_key_password] -keystore keystore -storepass storepass [-storetype storetype] -alias alias [-keypass keypass]
Example for a Certificate Issued to
apphost1.example.com
WL_HOME/server/bin/setWLSEnv.sh
java utils.ImportPrivateKey \
  -certfile apphost1.example.com_cert.der \
  -keyfile apphost1.example.com_key.der \
  -keyfilepass keypassword \
  -storetype pkcs12 \
  -keystore CustomIdentityKeystore.pkcs12 \
  -storepass keystorepassword \
  -alias apphost1.example.com \
  -keypass keypassword
-
Command to import an SSL certificate (a trusted certificate) into the trusted keystore:
Syntax
keytool -import -v -noprompt -trustcacerts \
  -alias <alias_for_trusted_cert> \
  -file <certificate>.der \
  -storetype <keystoretype> \
  -keystore <customTrustKeyStore> \
  -storepass <keystorepassword>
Example for Importing a CA Certificate
keytool -import -v -noprompt -trustcacerts \
  -alias example_ca_cert \
  -file example_ca_cert.der \
  -storetype pkcs12 \
  -keystore CustomTrustKeyStore.pkcs12 \
  -storepass keystorepassword
Example of the Java Options for Servers to Load Custom Trust Keystore
EXTRA_JAVA_PROPERTIES="${EXTRA_JAVA_PROPERTIES} -Djavax.net.ssl.trustStore=/u01/oracle/config/keystores/CustomTrustKeyStore.pkcs12 -Djavax.net.ssl.trustStorePassword=<keystorepassword>"
export EXTRA_JAVA_PROPERTIES
Using Third Party SSL Certificates in Oracle HTTP Servers
Here are some guidelines to use your own SSL certificates in OHS:
-
Each OHS virtual host that uses SSL must use a wallet that contains only one private key. This private key is used as the OHS server's SSL certificate. It must be issued to the hostname on which the virtual host listens (the hostname value in the VirtualHost directive). The certificate can also include other hostnames as Subject Alternative Names (SANs), for example, the value of the ServerName directive. The virtual host must include the SSLWallet directive pointing to this wallet.
-
Different OHS virtual hosts can use the same SSLWallet (hence, the same private key), as long as they use the same hostname in the VirtualHost directive. The port can be different.
-
OHS acts as a client when it connects to the WebLogic servers. Hence, it must trust the certificate authority that issued the WebLogic servers' certificates. Use the WLSSLWallet directive in the mod_wl_ohs.conf file to point to the appropriate wallet that contains the CA certificate for the WebLogic certificates.
-
The front-end load balancer acts as a client when it connects to the OHS servers. It must trust the certificate authority that issued the certificates used by OHS. Check your load balancer documentation for how to import the CA that issued the OHS certificates as a trusted authority.
The following commands are useful to manage keys and wallets in OHS.
-
Command to create a wallet for OHS (orapki):
Syntax
$WEB_ORACLE_HOME/bin/orapki wallet create \
  -wallet wallet \
  -auto_login_only
Example
$WEB_ORACLE_HOME/bin/orapki wallet create \
  -wallet /u02/oracle/config/keystores/orapki/ \
  -auto_login_only
-
Command to add a private key to a wallet (orapki) from an identity keystore:
Syntax
$WEB_ORACLE_HOME/bin/orapki wallet jks_to_pkcs12 \
  -wallet wallet \
  -pwd pwd \
  -keystore keystore \
  -jkspwd keystorepassword [-aliases [alias:alias..]]
Example
$WEB_ORACLE_HOME/bin/orapki wallet jks_to_pkcs12 \
  -wallet /u02/oracle/config/keystores/orapki/ \
  -keystore /u02/oracle/config/keystores/customIdentityKeyStore.pkcs12 \
  -jkspwd keystorepassword \
  -aliases ohshost1.example.com
-
Command to add all the trusted keys to a wallet (orapki) from a trusted keystore:
Example
$WEB_ORACLE_HOME/bin/orapki wallet jks_to_pkcs12 \
  -wallet /u02/oracle/config/keystores/orapki/ \
  -keystore /u02/oracle/config/keystores/customTrustKeyStore.pkcs12 \
  -jkspwd password
-
Command to list all the keys of a wallet (orapki):
Example
$WEB_ORACLE_HOME/bin/orapki wallet display \
  -wallet /u02/oracle/config/keystores/orapki/
Enabling SSL Communication Between the Middle Tier and SSL Endpoints
It is important to understand how to enable SSL communication between the middle tier and the front-end hardware load balancer or any other external SSL endpoints that need to be accessed by the SOA Suite WebLogic servers, for example, for external web service invocations, callbacks, and so on.
Note:
The following steps are applicable if the hardware load balancer is configured with SSL and the frontend address of the system has been secured accordingly.
When is SSL Communication Between the Middle Tier and the Frontend Load Balancer Necessary?
In an enterprise deployment, there are scenarios where the software running on the middle tier must access the frontend SSL address of the hardware load balancer. In these scenarios, an appropriate SSL handshake must take place between the load balancer and the invoking servers. This handshake is not possible unless the Administration Server and Managed Servers on the middle tier are started by using the appropriate SSL configuration.
For example, the following scenarios apply in an Oracle SOA Suite enterprise deployment:
-
Oracle Business Process Management and SOA Composer require access to the frontend load balancer URL when they attempt to retrieve role and security information through specific web instances. Some of these invocations require not only that the LBR certificate is added to the WebLogic Server's trust store but also that the appropriate identity key certificates are created for the SOA server's listen addresses.
-
Oracle Service Bus performs invocations to endpoints exposed in the Load Balancer SSL virtual servers.
-
Oracle SOA Suite composite applications and services often generate callbacks that need to perform invocations by using the SSL address exposed in the load balancer.
-
Oracle SOA Suite composite applications and services often access external webservices using SSL.
-
Finally, when you test a SOA Web services endpoint in Oracle Enterprise Manager Fusion Middleware Control, the Fusion Middleware Control software that is running on the Administration Server must access the load balancer frontend to validate the endpoint.
Generating Certificates, Identity Store, and Truststores
Since this Enterprise Deployment Guide uses end-to-end SSL (except in the access to the database), certificates have already been generated in the different chapters by using a per-domain CA. These certificates have already been added to the pertaining identity stores, and a truststore has also been configured to include the per-domain CA. It is expected that, through the use of the different generateCerts scripts provided, appropriate certificates already exist in these stores for the different listen addresses used by the WebLogic servers in the domain. In addition, when the generate_perdomainCACERTS-ohs.sh script is executed, it traverses all the front-end addresses in the domain's config.xml and adds the pertaining certificates to the trust store used by the domain. By adding these trust stores to the Java properties used by the WebLogic Servers in the domain (-Djavax.net.ssl.trustStore and -Djavax.net.ssl.trustStorePassword), the appropriate SSL handshake is guaranteed when these WebLogic servers act as clients in SSL invocations.
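One quick way to exercise the client-side inspection described above is with openssl. The sketch below generates a throwaway self-signed certificate so the inspection commands can run standalone; against a real deployment you would instead point `openssl s_client -connect` at the load balancer's front-end address (all names here are assumptions):

```shell
# Create a throwaway self-signed certificate standing in for an SSL endpoint
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=soa.example.com" \
  -keyout /tmp/endpoint_key.pem -out /tmp/endpoint_cert.pem

# Inspect the subject and issuer, as you would for a certificate returned by:
#   echo | openssl s_client -connect soa.example.com:443 -showcerts
openssl x509 -in /tmp/endpoint_cert.pem -noout -subject -issuer
```

If the issuer's CA certificate is present in the trust store referenced by -Djavax.net.ssl.trustStore, the WebLogic servers' client-side handshake with that endpoint succeeds.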
Importing Other External Certificates into the Truststore
Adding the Updated Trust Store to the Oracle WebLogic Server Start Scripts
Since the trust store’s path was already added to the WebLogic start scripts in the chapter where the domain was created, no additional configuration is required. Simply ensure that the new trust store (with the CAs and/or certs for the SSL endpoints added) replaces the existing one.
Configuring Roles for Administration of an Enterprise Deployment
In order to manage each product effectively within a single enterprise deployment domain, you must understand which products require specific administration roles or groups, and how to add a product-specific administration role to the Enterprise Deployment Administration group.
Each enterprise deployment consists of multiple products. Some of the products have specific administration users, roles, or groups that are used to control administration access to each product.
However, for an enterprise deployment, which consists of multiple products, you can use a single LDAP-based authentication provider and a single administration user and group to control access to all aspects of the deployment. See Creating a New LDAP Authenticator and Provisioning a New Enterprise Deployment Administrator User and Group.
To be sure that you can manage each product effectively within the single enterprise deployment domain, you must understand which products require specific administration roles or groups, you must know how to add any specific product administration roles to the single, common enterprise deployment administration group, and if necessary, you must know how to add the enterprise deployment administration user to any required product-specific administration groups.
For more information, see the following topics.
- Summary of Products with Specific Administration Roles
- Summary of Oracle SOA Suite Products with Specific Administration Groups
- Adding a Product-Specific Administration Role to the Enterprise Deployment Administration Group
- Adding the Enterprise Deployment Administration User to a Product-Specific Administration Group
Summary of Products with Specific Administration Roles
The following table lists the Fusion Middleware products that have specific administration roles, which must be added to the enterprise deployment administration group (SOA Administrators) that you defined in the LDAP authenticator for the enterprise deployment.
Use the information in the following table and the instructions in Adding a Product-Specific Administration Role to the Enterprise Deployment Administration Group to add the required administration roles to the enterprise deployment Administration group.
| Product | Application Stripe | Administration Role to be Assigned |
|---|---|---|
| Oracle Web Services Manager | wsm-pm | policy.updater |
| SOA Infrastructure | soa-infra | SOAAdmin |
| Oracle Service Bus | Service_Bus_Console | MiddlewareAdministrator |
| Enterprise Scheduler Service | ESSAPP | ESSAdmin |
| Oracle B2B | b2bui | B2BAdmin |
| Oracle MFT | mftapp | MFTAdmin |
| Oracle MFT | mftes | MFTESAdmin |
Summary of Oracle SOA Suite Products with Specific Administration Groups
Table 19-2 lists the Oracle SOA Suite products that need to use specific administration groups.
For each of these components, the common enterprise deployment Administration user must be added to the product-specific Administration group; otherwise, you won't be able to manage the product resources by using the enterprise manager administration user that you created in Provisioning an Enterprise Deployment Administration User and Group.
Use the information in Table 19-2 and the instructions in Adding the Enterprise Deployment Administration User to a Product-Specific Administration Group to add the required administration roles to the enterprise deployment Administration group.
Table 19-2 Oracle SOA Suite Products with a Product-Specific Administration Group
| Product | Product-Specific Administration Group |
|---|---|
| Oracle Business Activity Monitoring | BAMAdministrator |
| Oracle Business Process Management | Administrators |
| Oracle Service Bus Integration | IntegrationAdministrators |
| MFT | OracleSystemGroup |
Note:
MFT requires a specific user, namely OracleSystemUser, to be added to the central LDAP. This user must belong to the OracleSystemGroup group. You must add both the user name and the user group to the central LDAP to ensure that MFT job creation and deletion work properly.
Adding a Product-Specific Administration Role to the Enterprise Deployment Administration Group
For products that require a product-specific administration role, use the following procedure to add the role to the enterprise deployment administration group:
Adding the Enterprise Deployment Administration User to a Product-Specific Administration Group
For products with a product-specific administration group, use the following procedure to add the enterprise deployment administration user (weblogic_soa) to the group. This allows you to manage the product by using the enterprise manager administrator user:
Using Persistent Stores for TLOGs and JMS in an Enterprise Deployment
The Oracle WebLogic persistent store framework provides a built-in, high-performance storage solution for WebLogic Server subsystems and services that require persistence.
For example, the JMS subsystem stores persistent JMS messages and durable subscribers, and the JTA Transaction Log (TLOG) stores information about the committed transactions that are coordinated by the server but may not have been completed. The persistent store supports persistence to a file-based store or to a JDBC-enabled database. Persistent stores’ high availability is provided by server or service migration. Server or service migration requires that all members of a WebLogic cluster have access to the same transaction and JMS persistent stores (regardless of whether the persistent store is file-based or database-based).
For an enterprise deployment, Oracle recommends using JDBC persistent stores for transaction logs (TLOGs) and JMS.
This section analyzes the benefits of using JDBC versus file persistent stores and explains the procedure for configuring the persistent stores in a supported database. Note that the Configuration Wizard steps provided in the different chapters of this book already create JDBC persistent stores for the components used. Use the manual steps below for custom stores or for transitioning from file stores to JDBC stores.
Products and Components that use JMS Persistence Stores and TLOGs
To determine which installed FMW products and components use persistent stores, open the WebLogic Server Console and, in the Domain Structure navigation, select DomainName > Services > Persistent Stores. The list indicates the name of each store, the store type (FileStore or JDBC), and the target of the store. The stores listed that pertain to MDS are outside the scope of this chapter and should not be considered.
Component/Product | JMS Stores | TLOG Stores
---|---|---
B2B | Yes | Yes
BAM | Yes | Yes
BPM | Yes | Yes
ESS | No | No
MFT | Yes | Yes
OSB | Yes | Yes
SOA | Yes | Yes
WSM | No | No
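As an alternative to browsing the console, the same store inventory can be read directly from the domain's config.xml. The following is a minimal sketch, assuming the standard WebLogic domain namespace and top-level file-store/jdbc-store elements; verify the layout against your release before relying on it:

```python
# Illustrative sketch: list persistent stores from a WebLogic domain's
# config.xml instead of browsing the console. The namespace below is the
# standard WebLogic domain namespace; element layout may vary by release.
import xml.etree.ElementTree as ET

NS = {"d": "http://xmlns.oracle.com/weblogic/domain"}

def list_persistent_stores(config_xml_text):
    """Return (name, kind, target) tuples for file and JDBC stores."""
    root = ET.fromstring(config_xml_text)
    stores = []
    for tag, kind in (("file-store", "FileStore"), ("jdbc-store", "JDBC")):
        for el in root.findall("d:" + tag, NS):
            name = el.findtext("d:name", default="", namespaces=NS)
            target = el.findtext("d:target", default="", namespaces=NS)
            stores.append((name, kind, target))
    return stores
```

Feeding this function the text of a domain's config.xml yields one tuple per configured store, which can then be filtered to exclude the MDS stores mentioned above.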
JDBC Persistent Stores vs. File Persistent Stores
Oracle Fusion Middleware supports both database-based and file-based persistent stores for Oracle WebLogic Server transaction logs (TLOGs) and JMS. Before you decide on a persistent store strategy for your environment, consider the advantages and disadvantages of each approach.
Note:
Regardless of which storage method you choose, Oracle recommends that for transaction integrity and consistency, you use the same type of store for both JMS and TLOGs.
About JDBC Persistent Stores for JMS and TLOGs
When you store your TLOGs and JMS data in an Oracle database, you can take advantage of the replication and high availability features of the database. For example, you can use Oracle Data Guard to simplify cross-site synchronization. This is especially important if you are deploying Oracle Fusion Middleware in a disaster recovery configuration.
Storing TLOGs and JMS data in a database also means that you do not have to identify a specific shared storage location for this data. Note, however, that shared storage is still required for other aspects of an enterprise deployment. For example, it is necessary for Administration Server configuration (to support Administration Server failover), for deployment plans, and for adapter artifacts, such as the File and FTP Adapter control and processed files.
If you are storing TLOGs and JMS stores on a shared storage device, then you can protect this data by using the appropriate replication and backup strategy to guarantee zero data loss, and you potentially realize better system performance. However, the file system protection is always inferior to the protection provided by an Oracle Database.
For more information about the potential performance impact of using a database-based TLOGs and JMS store, see Performance Considerations for TLOGs and JMS Persistent Stores.
Parent topic: JDBC Persistent Stores vs. File Persistent Stores
Performance Considerations for TLOGs and JMS Persistent Stores
One of the primary considerations when you select a storage method for Transaction Logs and JMS persistent stores is the potential impact on performance. This topic provides some guidelines and details to help you determine the performance impact of using JDBC persistent stores for TLOGs and JMS.
Performance Impact of Transaction Logs Versus JMS Stores
For transaction logs, the impact of using a JDBC store is relatively small, because the logs are very transient in nature. Typically, the effect is minimal when compared to other database operations in the system.
On the other hand, JMS database stores can have a higher impact on performance if the application is JMS intensive. For example, the impact of switching from a file-based to database-based persistent store is very low when you use the SOA Fusion Order Demo (a sample application used to test Oracle SOA Suite environments), because the JMS database operations are masked by many other SOA database invocations that are much heavier.
Factors that Affect Performance
There are multiple factors that can affect the performance of a system when it is using JMS DB stores for custom destinations. The main ones are:

- The custom destinations involved and their type
- The payloads being persisted
- Concurrency on the SOA system (producers and consumers for the destinations)
Depending on the effect of each of the above, different settings can be configured in the following areas to improve performance:

- The data type used for the JMS table (RAW versus LOBs)
- The segment definition for the JMS table (partitions at index and table level)
Impact of JMS Topics
If your system uses Topics intensively, then as concurrency increases, the performance degradation with an Oracle RAC database increases more than for Queues. In tests conducted by Oracle with JMS, the average performance degradation for different payload sizes and concurrency levels was less than 30% for Queues. For Topics, the impact was more than 40%. Consider the importance of these destinations from the recovery perspective when deciding whether to use database stores.
Impact of Data Type and Payload Size
When you choose to use the RAW or SecureFiles LOB data type for the payloads, consider the size of the payload being persisted. For example, when payload sizes range between 100b and 20k, then the amount of database time required by SecureFiles LOB is slightly higher than for the RAW data type.
More specifically, when the payload size reaches around 4k, SecureFiles LOBs tend to require more database time. This is because 4k is where writes move out-of-row. At around 20k payload size, SecureFiles data starts being more efficient. When payload sizes increase to more than 20k, the database time becomes worse for payloads stored as the RAW data type.
One additional advantage for SecureFiles is that the database time incurred stabilizes with payload increases starting at 500k. In other words, at that point it is not relevant (for SecureFiles) whether the data is storing 500k, 1MB or 2MB payloads, because the write is asynchronous, and the contention is the same in all cases.
The effect of concurrency (producers and consumers) on the queue's throughput is similar for both RAW and SecureFiles until payload sizes reach 50k. For small payloads, the effect of varying concurrency is practically the same, with slightly better scalability for RAW. Scalability is better for SecureFiles when the payloads are above 50k.
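The rough guidance above can be summarized as a simple decision helper. This is an illustrative sketch only; the 20k cut-over is the figure from the discussion above, not a universal rule, and should be validated against your own workload:

```python
# Illustrative helper encoding the guidance above: below roughly 20k,
# RAW tends to require less database time; above that, SecureFiles LOBs
# tend to be more efficient. Benchmark before relying on this cut-over.
def suggested_jms_payload_type(payload_bytes):
    return "RAW" if payload_bytes <= 20 * 1024 else "SecureFiles LOB"
```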
Impact of Concurrency, Worker Threads, and Database Partitioning
Concurrency and the worker threads defined for the persistent store can cause contention in the RAC database at the index and global cache level. Using a reverse index when enabling multiple worker threads in a single server, or using multiple Oracle WebLogic Server clusters, can reduce this contention. However, if the Oracle Database partitioning option is available, use global hash partitioning for indexes instead. This reduces the contention on the index and the global cache buffer waits, which in turn improves the response time of the application. Partitioning works well in all cases, some of which do not see significant improvements with a reverse index.
Parent topic: JDBC Persistent Stores vs. File Persistent Stores
Using JDBC Persistent Stores for TLOGs and JMS in an Enterprise Deployment
This section explains the guidelines to use JDBC persistent stores for transaction logs (TLOGs) and JMS. It also explains the procedures to configure the persistent stores in a supported database.
Note:
Remember that the Configuration Wizard steps used to set up the different components in this EDG already configure JDBC persistent stores for them. Use the following steps for custom persistent stores or when reconfiguring from file stores to JDBC stores (migration of messages from file to JDBC is outside the scope of this EDG).

- Recommendations for TLOGs and JMS Datasource Consolidation
  To accomplish data source consolidation and connection usage reduction, use a single connection pool for both JMS and TLOGs persistent stores.
- Roadmap for Configuring a JDBC Persistent Store for TLOGs
  The following topics describe how to configure a database-based persistent store for transaction logs.
- Roadmap for Configuring a JDBC Persistent Store for JMS
  The following topics describe how to configure a database-based persistent store for JMS.
- Creating a User and Tablespace for TLOGs
  Before you can create a database-based persistent store for transaction logs, you must create a user and tablespace in a supported database.
- Creating a User and Tablespace for JMS
  Before you can create a database-based persistent store for JMS, you must create a user and tablespace in a supported database.
- Creating GridLink Data Sources for TLOGs and JMS Stores
  Before you can configure database-based persistent stores for JMS and TLOGs, you must create two data sources: one for the TLOGs persistent store and one for the JMS persistent store.
- Assigning the TLOGs JDBC Store to the Managed Servers
  If you are going to accomplish data source consolidation, you will reuse the <PREFIX>_WLS tablespace and WLSRuntimeSchemaDataSource for the TLOG persistent store. Otherwise, ensure that you create the tablespace and user in the database, and that you have created the datasource, before you assign the TLOG store to each of the required Managed Servers.
- Creating a JDBC JMS Store
  After you create the JMS persistent store user and tablespace in the database, and after you create the data source for the JMS persistent store, you can use the WebLogic Remote Console to create the store.
- Assigning the JMS JDBC store to the JMS Servers
  After you create the JMS tablespace and user in the database, create the JMS datasource, and create the JDBC store, you can assign the JMS persistence store to each of the required JMS Servers.
- Creating the Required Tables for the JMS JDBC Store
  The final step in using a JDBC persistent store for JMS is to create the required JDBC store tables. Perform this task before you restart the Managed Servers in the domain.
Recommendations for TLOGs and JMS Datasource Consolidation
To accomplish data source consolidation and connection usage reduction, use a single connection pool for both JMS and TLOGs persistent stores.
Oracle recommends that you reuse the WLSRuntimeSchemaDataSource as is for TLOGs and JMS persistent stores under non-high workloads, and that you consider increasing the WLSRuntimeSchemaDataSource pool size. Reusing the datasource forces the use of the same schema and tablespaces, so the PREFIX_WLS_RUNTIME schema in the PREFIX_WLS tablespace is used for both TLOGs and JMS messages.

Under high workloads, however, consolidation carries the following risks:

- High contention in the datasource can cause persistent stores to fail if no connections are available in the pool to persist JMS messages.
- High contention in the datasource can cause issues in transactions if no connections are available in the pool to update transaction logs.

For these cases, use a separate datasource for TLOGs and separate datasources for the different stores. You can still reuse the PREFIX_WLS_RUNTIME schema, but configure separate custom datasources to the same schema to solve the contention issue.
Roadmap for Configuring a JDBC Persistent Store for TLOGs
The following topics describe how to configure a database-based persistent store for transaction logs.
Note:
Steps 1 and 2 are optional. To accomplish data source consolidation and connection usage reduction, you can reuse the PREFIX_WLS tablespace and WLSRuntimeSchemaDataSource, as described in Recommendations for TLOGs and JMS Datasource Consolidation.
Roadmap for Configuring a JDBC Persistent Store for JMS
The following topics describe how to configure a database-based persistent store for JMS.
Note:
Steps 1 and 2 are optional. To accomplish data source consolidation and connection usage reduction, you can reuse the PREFIX_WLS tablespace and WLSRuntimeSchemaDataSource, as described in Recommendations for TLOGs and JMS Datasource Consolidation.
Creating a User and Tablespace for TLOGs
Before you can create a database-based persistent store for transaction logs, you must create a user and tablespace in a supported database.
Creating a User and Tablespace for JMS
Before you can create a database-based persistent store for JMS, you must create a user and tablespace in a supported database.
Creating GridLink Data Sources for TLOGs and JMS Stores
Before you can configure database-based persistent stores for JMS and TLOGs, you must create two data sources: one for the TLOGs persistent store and one for the JMS persistent store.
For an enterprise deployment, you should use GridLink data sources for your TLOGs and JMS stores. To create a GridLink data source:
Assigning the TLOGs JDBC Store to the Managed Servers
If you are going to accomplish data source consolidation, you will reuse the <PREFIX>_WLS tablespace and WLSRuntimeSchemaDataSource for the TLOG persistent store. Otherwise, ensure that you create the tablespace and user in the database, and that you have created the datasource, before you assign the TLOG store to each of the required Managed Servers.
- Log into the Oracle WebLogic Remote Console.
- In the Edit Tree, navigate to Environment > Servers.
- Click the name of the Managed Server.
- Select the Services > JTA tab.
- Enable Transaction Log Store in JDBC.
- In the Data Source menu, select WLSRuntimeSchemaDataSource to accomplish data source consolidation. The <PREFIX>_WLS tablespace will be used for TLOGs.
- In the Transaction Log Prefix Name field, specify a prefix name to form a unique JDBC TLOG store name for each configured JDBC TLOG store.
- Click Save.
- Repeat step 2 to step 7 for each additional managed server.
- To activate these changes, commit the changes in the shopping cart.
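After the changes are committed, the assignment is reflected in the server's entry in the domain's config.xml. The element names below match standard WebLogic domain configuration; the server name, datasource name, and prefix are examples in the style of this guide, not required values:

```xml
<server>
  <name>soa_server1</name>
  <!-- TLOGs written to the datasource instead of the default file store -->
  <transaction-log-jdbc-store>
    <data-source>WLSRuntimeSchemaDataSource</data-source>
    <prefix-name>TLOG_soa_server1_</prefix-name>
  </transaction-log-jdbc-store>
</server>
```

Inspecting config.xml for this element is a quick way to verify that each Managed Server received its own unique prefix.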
Creating a JDBC JMS Store
After you create the JMS persistent store user and table space in the database, and after you create the data source for the JMS persistent store, you can then use the WebLogic Remote Console to create the store.
Assigning the JMS JDBC store to the JMS Servers
After you create the JMS tablespace and user in the database, create the JMS datasource, and create the JDBC store, then you can assign the JMS persistence store to each of the required JMS Servers.
- Log into the WebLogic Remote Console.
- Navigate to the Edit Tree.
- In the structure tree, expand Services > Messaging > JMS Servers.
- Click the name of the JMS Server that will use the persistent store.
- In the Persistent Store property, select the JMS persistent store you created.
- Click Save.
- Repeat Step 3 to Step 6 for each of the additional JMS Servers in the cluster.
- To activate these changes, commit changes in the shopping cart.
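The committed JMS store assignment appears in config.xml as a jdbc-store entry similar to the following sketch. The element names are standard WebLogic domain configuration; the store, datasource, prefix, and target names are illustrative:

```xml
<jdbc-store>
  <name>SOAJMSJDBCStore</name>
  <!-- datasource created earlier for the JMS persistent store -->
  <data-source>SOAJMSJDBCStore-DS</data-source>
  <prefix-name>soajms_</prefix-name>
  <target>soa_server1</target>
</jdbc-store>
```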
About JDBC Persistent Stores for Web Services
By default, web services use the WebLogic Server default persistent store for persistence. This store provides a high-performance storage solution for web services and is used by the following web service features:

- Reliable Messaging
- Make Connection
- Secure Conversation
- Message buffering
You also have the option to use a JDBC persistence store in your WebLogic Server web service, instead of the default store. For information about web service persistence, see Managing Web Service Persistence.
Best Configuration Practices When Using RAC and GridLink Datasources
Oracle recommends that you use GridLink data sources when you use an Oracle RAC database. If you follow the steps described in the Enterprise Deployment guide, the datasources will be configured as GridLink.
GridLink datasources provide dynamic load balancing and failover across the nodes in an Oracle Database cluster, and also receive notifications from the RAC cluster when nodes are added or removed. For more information about GridLink datasources, see Using Active GridLink Data Sources in Administering JDBC Data Sources for Oracle WebLogic Server.
Here is a summary of the best practices when using GridLink to connect to the RAC database:
- Use a database service (defined with srvctl) different from the default database service

  To receive and process notifications from the RAC database, GridLink needs to connect to a database service (defined with srvctl) instead of the default database service. These services monitor the status of resources in the database cluster and generate notifications when the status changes. Such a database service is used in this Enterprise Deployment guide, created and configured as described in Creating Database Services.

- Use the long format database connect string in the datasources

  When GridLink datasources are used, the long format database connect string must be used. The Configuration Wizard does not set the long format string; it sets the short format instead. You can modify it manually later to set the long format. To update the datasources:
- Connect to the WebLogic Server Console and navigate to Domain Structure > Services > Datasources.
- Select a datasource, click the Configuration tab, and then click the Connection Pool tab.
- Within the JDBC URL, change the URL from jdbc:oracle:thin:@[SCAN_VIP]:[SCAN_PORT]/[SERVICE_NAME] to jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=[SCAN_VIP])(PORT=[SCAN_PORT])))(CONNECT_DATA=(SERVICE_NAME=[SERVICE_NAME])))

  For example: jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=ON)(ADDRESS=(PROTOCOL=TCP)(HOST=db-scan-address)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=soaedg.example.com)))
- Use auto-ons
The ONS connection list is automatically provided from the database to the driver. You can leave the ONS Nodes list empty in the datasources configuration.
- Test Connections On Reserve
Verify that the Test Connections On Reserve is checked in the datasources.
Even though the GridLink datasources receive FAN events when a RAC instance becomes unavailable, it is a best practice to enable Test Connections On Reserve in the datasource and ensure that the connection returned to the application is good.
- Seconds to Trust an Idle Pool Connection
For maximum efficiency of the test, you can also set Seconds to Trust an Idle Pool Connection to 0, so that connections are always verified. Setting this value to zero means that all connections returned to the application are tested. If this parameter is set to 10, the result of the previous test remains valid for 10 seconds; if a connection is reused before 10 seconds elapse, the result is still considered valid.
- Test Frequency
Verify that the Test Frequency parameter value in the datasources is not 0. This is the number of seconds a WebLogic Server instance waits between attempts when testing unused connections. The default value of 120 is normally enough.
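The short-to-long URL rewrite described in the best practices above is mechanical and can be sketched as follows. This is an illustrative helper for the simple single-SCAN case only (LOAD_BALANCE=ON is included as in the example URL); URLs with additional options still need manual editing:

```python
import re

# Long-format template matching the guide's example; LOAD_BALANCE=ON is
# included as in the example URL shown above.
LONG_FMT = ("jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=ON)"
            "(ADDRESS=(PROTOCOL=TCP)(HOST={host})(PORT={port})))"
            "(CONNECT_DATA=(SERVICE_NAME={service})))")

def to_long_format(short_url):
    """Rewrite a short-format thin URL (host:port/service) to long format."""
    m = re.fullmatch(r"jdbc:oracle:thin:@([^:/]+):(\d+)/(.+)", short_url)
    if not m:
        raise ValueError("not a short-format thin URL: " + short_url)
    host, port, service = m.groups()
    return LONG_FMT.format(host=host, port=port, service=service)
```

For example, to_long_format("jdbc:oracle:thin:@db-scan-address:1521/soaedg.example.com") produces the long-format string shown in the example above.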
Using TNS Alias in Connect Strings
You can create an alias to map the URL information instead of specifying long database connection strings in the JDBC connection pool of a datasource. The connection string information is stored in a tnsnames.ora file with an associated alias name. This alias is used in the connect string of the connection pool.
The following is an example of a connect string that uses a TNS alias:
jdbc:oracle:thin:@soaedg_alias
The tnsnames.ora file contains the following details:
soaedg_alias =
(DESCRIPTION=
(ADDRESS_LIST=
(LOAD_BALANCE=ON)
(ADDRESS=(PROTOCOL=TCP)(HOST=soaedgdb-scan)(PORT=1521)))
(CONNECT_DATA=(SERVICE_NAME=soaedg.example.com))
)
You must specify the oracle.net.tns_admin property in the datasource configuration to point to a specific tnsnames.ora file. For example:

<property><name>oracle.net.tns_admin</name><value>/u01/oracle/config/domains/fmw1412edg/config/tnsadmin</value></property>
This is the Maximum Availability and Enterprise Deployment recommended approach for JDBC URLs. It simplifies JDBC configurations, facilitates DB configuration aliasing in disaster protection scenarios, and makes database connection changes more dynamic. For more information, see Use a TNS Alias Instead of a DB Connection String in Administering JDBC Data Sources for Oracle WebLogic Server.
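To sanity-check an alias before pointing datasources at it, the connect descriptor can be resolved from tnsnames.ora with a small script. This is an illustrative sketch using naive parenthesis matching, sufficient for simple files like the example above; it is not a full tnsnames.ora parser:

```python
# Illustrative sketch: resolve a TNS alias to its connect descriptor.
# Naive balanced-parenthesis scan; handles simple files like the guide's
# example, not the full tnsnames.ora grammar (comments, ifiles, etc.).
def resolve_tns_alias(tnsnames_text, alias):
    start = tnsnames_text.find(alias + " =")
    if start < 0:
        raise KeyError(alias)
    i = tnsnames_text.find("(", start)
    depth, j = 0, i
    while j < len(tnsnames_text):
        if tnsnames_text[j] == "(":
            depth += 1
        elif tnsnames_text[j] == ")":
            depth -= 1
            if depth == 0:
                break
        j += 1
    # collapse whitespace so the descriptor becomes a one-line connect string
    return "".join(tnsnames_text[i:j + 1].split())
```

Prefixing the returned descriptor with jdbc:oracle:thin:@ reproduces the long-format URL that the alias stands for.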
In Oracle Fusion Middleware 14.1.2, you can use a new type of deployment module to manage
the tnsnames.ora
files, wallet files, and keystore and truststore files
associated with a database connection. These are called DBClientData modules. For more
information, see What Are DBClientData Modules in
Administering JDBC Data Sources for Oracle WebLogic Server. In this EDG, the DBClientData type of module is used to maintain the database client information. However, wallets and SSL configuration are not used to access the database, so the DBClientData module contains only the appropriate tnsnames.ora file.
The following steps are required to use a TNS alias in the different Datasources used by FMW and WLS schemas:
- Create a tnsnames.ora file with the pertinent alias, mapping the URLs used in the connection pools. Copy the connect string from one of the existing datasource configuration files. For example:

  Note: This is an example using the short JDBC URL.

  [oracle@soahost1~]$ grep url /u01/oracle/config/domains/soaedgdomain/config/jdbc/opss-datasource-jdbc.xml
  <url>jdbc:oracle:thin:@drdbrac12a-scan.dbsubnet.vcnlon80.oraclevcn.com:1521/soaedg.example.com</url>
  [oracle@soahost1~]$

  Use the information in the connect string to add a long URL entry to a tnsnames.ora file. Use an alias name that identifies your connection. Notice that, in order to deploy the tnsnames.ora file as a DBClientData module, the location of the deployment module needs to be two levels down under the domain config directory if it resides on the WLS Administration Server node. The file can also be created on the node that runs the WebLogic Remote Console, or it can be uploaded (as an application ear or war file).

  [oracle@soahost1~]$ cat /u01/oracle/config/tnsadmin/tnsnames.ora
  soaedg_alias =
  (DESCRIPTION=
   (ADDRESS_LIST=
    (LOAD_BALANCE=ON)
    (ADDRESS=(PROTOCOL=TCP)(HOST=drdbrac12a-scan.dbsubnet.vcnlon80.oraclevcn.com)(PORT=1521)))
   (CONNECT_DATA=(SERVICE_NAME=soaedg.example.com))
  )
- Deploy the directory containing the tnsnames.ora file as a DBClientData module.
  - Access the domain provider in the WebLogic Remote Console.
  - Click Edit Tree.
  - Click Environment > Deployments > Database Client Data Directories.
  - Click New.
  - Enter a name for the dbclient directory deployment. For example, dbclientdata_modulename. If the directory containing the tnsnames.ora file resides on your local computer, uncheck the Upload checkbox.
  - Click Create.
  - Click Save. The Cart icon at the top right of the screen displays as full, with a yellow bag inside.
  - Click the Cart icon and select Commit Changes. This creates a tnsnames/dbclient module under the domain directory, at /u01/oracle/config/domains/soaedgdomain/config/dbclientdata/dbclientdata_modulename. You can also perform the deployment of a database client module by using the deploy command in WLST.
- Update the different Datasources and fmwconfig files to use the alias instead of the explicit URLs.
Note: To update a datasource to use the TNS alias, the datasource configuration needs to include both a pointer to the tnsnames.ora file and the alias itself in the JDBC URL. To include a pointer to the tnsnames.ora file, add the property oracle.net.tns_admin to the datasource properties:
  - Access the domain provider in the WebLogic Remote Console.
  - Click Edit Tree.
  - Click Services > Datasources > Datasource_name.
  - In the navigation tree on the left, select Properties for the datasource in question.
  - Click New.
  - Enter oracle.net.tns_admin as the property name.
  - Click Create.
  - In the next screen with the property details, enter as the value the directory for the dbclientdata_modulename, that is, /u01/oracle/config/domains/soaedgdomain/config/dbclientdata/dbclientdata_modulename in the example above.
  - Click Save. The Cart icon at the top right of the screen displays as full, with a yellow bag inside.
  - In the navigation tree on the left, click the datasource name.
  - Select the Connection Pool tab.
  - In the URL field, replace the URL with the alias syntax, as shown below:
    jdbc:oracle:thin:@soaedg_alias
  - Click Save.
  - Click the Cart icon and select Commit Changes.

  If you check the datasource configuration file, it should contain the following under the <jdbc-driver-params> <properties> entries:

  <property> <name>oracle.net.tns_admin</name> <value>/u01/oracle/config/domains/soaedgdomain/config/dbclientdata/dbclientdata_modulename</value> </property>

  The datasource configuration file should also contain the JDBC URL under <jdbc-driver-params>, as shown below:

  <url>jdbc:oracle:thin:@soaedg_alias</url>
- To update the FMW JPS configuration to use the TNS alias, update the domain_path/config/fmwconfig/jps-config.xml and domain_path/config/fmwconfig/jps-config-jse.xml files. Both a pointer to the tnsnames.ora file and the alias itself must be included in the JDBC URL; that is, replace the information in the propertySet for the DB with the updated URL and the tnsadmin pointer:

  <property name="oracle.net.tns_admin" value="/u01/oracle/config/domains/soaedgdomain/config/dbclientdata/dbclientdata_modulename"/>
  <property name="jdbc.url" value="jdbc:oracle:thin:@soaedg_alias"/>
Restart the Administration Server for all the changes to be applied.
Alternatively, instead of steps 1 through 4, you can use the script at https://github.com/oracle-samples/maa/tree/main/1412EDG/fmw1412_change_to_tns_alias.sh to deploy the corresponding DBClientData module and replace all URLs in the JDBC and JPS configuration with the pertinent alias.
However, using the script is only recommended when all domain extensions have been completed and all the required datasources are present in the domain configuration because the script is configured to exit if an existing tnsadmin already exists in the configuration files. This behavior is intentional to avoid conflicts with other DBClient modules in the domain.
The recommended approach is to configure your domain to suit your functional needs (SOA, OSB, SOA+OSB, SOA+BPMN and so on as described in the Flow Charts and Road Maps for Implementing the Primary Oracle SOA Suite Enterprise Topologies section in the About the Oracle SOA Suite Enterprise Deployment Topology chapter). After your domain is complete and working, use the script to make the TNS alias change. Ensure you read the script’s instructions in its header for its correct execution.
Performing Backups and Recoveries for an Enterprise Deployment
Follow these guidelines to make sure that you back up the necessary directories and configuration data for an Oracle SOA Suite enterprise deployment.
Note:
Some of the static and runtime artifacts listed in this section are hosted from Network Attached Storage (NAS). If possible, back up and recover these volumes from the NAS filer directly rather than from the application servers.
For general information about backing up and recovering Oracle Fusion Middleware products, see the following sections in Administering Oracle Fusion Middleware:
Table 19-4 lists the static artifacts to back up in a typical Oracle SOA Suite enterprise deployment.
Table 19-4 Static Artifacts to Back Up in the Oracle SOA Suite Enterprise Deployment
Type | Host | Tier
---|---|---
Database Oracle home | DBHOST1 and DBHOST2 | Data Tier
Oracle Fusion Middleware Oracle home | WEBHOST1 and WEBHOST2 | Web Tier
Oracle Fusion Middleware Oracle home | SOAHOST1 and SOAHOST2 (or NAS Filer) | Application Tier
Installation-related files | WEBHOST1, WEBHOST2, and shared storage | N/A
Table 19-5 lists the runtime artifacts to back up in a typical Oracle SOA Suite enterprise deployment.
Table 19-5 Run-Time Artifacts to Back Up in the Oracle SOA Suite Enterprise Deployment
Type | Host | Tier
---|---|---
Administration Server domain home (ASERVER_HOME) | SOAHOST1 (or NAS Filer) | Application Tier
Application home (APPLICATION_HOME) | SOAHOST1 (or NAS Filer) | Application Tier
Oracle RAC databases | DBHOST1 and DBHOST2 | Data Tier
Scripts and Customizations | Per host | Application Tier
Deployment Plan home (DEPLOY_PLAN_HOME) | SOAHOST1 (or NAS Filer) | Application Tier
OHS/OTD Configuration directory | WEBHOST1 and WEBHOST2 | Web Tier
Online Domain Run-Time Artifacts Backup/Recovery Example
This section describes an example procedure to implement a backup of the WebLogic domain artifacts. This approach can be used during the EDG configuration process, for example, before extending the domain to add a new component.
This example has the following features:
- The following application tier runtime artifacts are backed up and recovered in this example:

  Artifact | Host | Tier
  ---|---|---
  Administration Server domain home (ASERVER_HOME) | SOAHOST1 (or NAS Filer) | Application Tier
  Application home (APPLICATION_HOME) | SOAHOST1 (or NAS Filer) | Application Tier
  Deployment Plan home (DEPLOY_PLAN_HOME) | SOAHOST1 (or NAS Filer) | Application Tier
  Runtime artifacts (adapter control files) (ORACLE_RUNTIME) | SOAHOST1 (or NAS Filer) | Application Tier
  Scripts and Customizations | Per host | Application Tier
- This backup procedure is suitable for cases when a major configuration change is made to the domain (that is, a domain extension). If something goes wrong, or if you make incorrect selections, you can restore the domain configuration to the earlier state.

  Database backup/restore is not mandatory for this sample procedure, but steps to back up and restore the database are included as optional.

  Artifact | Host | Tier
  ---|---|---
  Oracle RAC database (optional) | DBHOST1 and DBHOST2 | Data Tier
- Operating system tools are used in this example. Some of the run-time artifacts listed in this section are hosted from Network Attached Storage (NAS). If possible, do the backup and recovery of these volumes from the NAS filer directly rather than from the application servers.
- Managed Servers are running during the backup. MSERVER_HOME is not backed up, and the pack/unpack procedure is used later to recover MSERVER_HOME. Therefore, Managed Server lock files are not included in the backup.
- The AdminServer can be running during the backup if .lok files are excluded from the backup. To avoid an inconsistent backup, do not make any configuration changes until the backup is complete. To ensure that no changes are made in the WebLogic Server domain, you can lock the WebLogic Server configuration.

  Note: Exclude these files from the backup:
  - AdminServer/data/ldap/ldapfiles/EmbeddedLDAP.lok
  - AdminServer/tmp/AdminServer.lok
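As an illustrative sketch only (not the guide's backup procedure), a file-level archive of a domain home that skips the .lok lock files noted above could look like the following; the paths are examples:

```python
# Illustrative sketch: archive a domain home while skipping WebLogic
# .lok lock files, per the caveats above. Paths are examples only.
import tarfile

def backup_domain(domain_home, archive_path):
    def skip_lok(tarinfo):
        # returning None drops the member from the archive
        return None if tarinfo.name.endswith(".lok") else tarinfo
    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add(domain_home, arcname="domain_backup", filter=skip_lok)
```

For example, backup_domain("/u01/oracle/config/domains/soaedgdomain/aserver", "/backups/aserver.tar.gz") would produce a gzip archive without the lock files; remember the configuration-freeze caveat above while it runs.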
Back Up the Domain Run-Time Artifacts
To back up the domain runtime artifacts, perform the following steps:
Parent topic: Online Domain Run-Time Artifacts Backup/Recovery Example
Restore the Domain Run-Time Artifacts
Parent topic: Online Domain Run-Time Artifacts Backup/Recovery Example
Configuration and Management Tasks for an Oracle SOA Suite Enterprise Deployment
These are some of the key configuration and management tasks that you likely need to perform on an Oracle SOA Suite enterprise deployment.
Deploying Oracle SOA Suite Composite Applications to an Enterprise Deployment
Oracle SOA Suite applications are deployed as composites, consisting of different kinds of Oracle SOA Suite components. SOA composite applications include the following:
- Service components such as Oracle Mediator for routing, BPEL processes for orchestration, BAM processes for orchestration (if Oracle BAM Suite is also installed), human tasks for workflow approvals, spring for integrating Java interfaces into SOA composite applications, and decision services for working with business rules.
- Binding components (services and references) for connecting SOA composite applications to external services, applications, and technologies.
These components are assembled into a single SOA composite application.
When you deploy an Oracle SOA Suite composite application to an Oracle SOA Suite enterprise deployment, be sure to deploy each composite to a specific server or cluster address and not to the load balancer address (soa.example.com).
Deploying composites to the load balancer address often requires a direct connection from the deployer nodes to the external load balancer address. As a result, you have to open additional ports in the firewalls.
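For example, with the ant-sca-deploy utility you can target a managed server address directly. The host, port, composite archive, and credentials below are illustrative; supply -Dpassword as well, or respond to the prompt, depending on how your environment handles credentials.

```shell
# Hedged sketch: deploy a composite directly to a managed server address,
# not to the load balancer. All names and paths are illustrative.
ant -f $ORACLE_HOME/soa/bin/ant-sca-deploy.xml \
    -DserverURL=http://soahost1.example.com:8001 \
    -DsarLocation=/tmp/sca_OrderProcessing_rev1.0.jar \
    -Duser=weblogic \
    -Doverwrite=true
```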
For more information about Oracle SOA Suite composite applications, see the following sections in Administering Oracle SOA Suite and Oracle Business Process Management Suite:
Using Shared Storage for Deployment Plans and SOA Infrastructure Applications Updates
When you redeploy a SOA infrastructure application or resource adapter within the SOA cluster, the deployment plan along with the application bits should be accessible to all servers in the cluster.
SOA applications and resource adapters are installed by using the nostage deployment mode. Because the Administration Server does not copy the archive files from their source location when the nostage deployment mode is selected, each server must be able to access the same deployment plan.
To ensure that the deployment plan location is available to all servers in the domain, use the Deployment Plan home location described in File System and Directory Variables Used in This Guide and represented by the DEPLOY_PLAN_HOME variable in the Enterprise Deployment Workbook.
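A redeployment that keeps the plan under the shared location can be sketched with weblogic.Deployer. The admin URL, application name, and plan path below are illustrative; the command prompts for the password if -password is not supplied.

```shell
# Hedged sketch: redeploy an application in nostage mode with a plan kept
# under the shared DEPLOY_PLAN_HOME so every server reads the same plan.
# Hostname, application name, and paths are illustrative.
java weblogic.Deployer -adminurl t3://adminhost.example.com:7001 \
    -username weblogic \
    -redeploy -name DbAdapter -nostage \
    -plan $DEPLOY_PLAN_HOME/soaedg_domain/DbAdapterPlan.xml
```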
Managing Database Growth in an Oracle SOA Suite Enterprise Deployment
When the amount of data in the Oracle SOA Suite database grows very large, maintaining the database can become difficult, especially in an Oracle SOA Suite enterprise deployment where potentially many composite applications are deployed.
See the following sections in Administering Oracle SOA Suite and Oracle Business Process Management Suite:
Managing the JMS Messages in a SOA Server
There are several procedures to manage JMS messages in a SOA server. You may need to perform these procedures in some scenarios, for example, to preserve the messages during a scale-in operation.
This section explains some of these procedures in detail.
Draining the JMS Messages from a SOA Server
The process of draining the JMS messages helps you clear out the messages from a particular WebLogic Server. A basic approach to draining the stores consists of stopping message production in the appropriate JMS servers and allowing the applications to consume the remaining messages.
This procedure, however, is application dependent and could take an unpredictable amount of time. As an alternative, general instructions are provided here for saving the current messages from their current JMS destinations and, if required, importing them into a different server.
The draining procedure is useful in scale-in/down scenarios, where the size of the cluster is reduced by removing one or more servers. You can ensure that no messages are lost by draining the messages from the server that you delete, and then importing them into another server in the cluster.
You can also use this procedure in some disaster recovery maintenance scenarios, where the servers are started in a secondary location by using a Snapshot Standby database. In this case, you may need to drain the messages from the domain before starting it in the secondary location, so that they are not consumed in the standby domain when it starts (otherwise, duplicate executions could take place). You cannot import messages in this scenario.
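The first drain step, stopping message production, can be sketched with WLST against the JMS server runtime. The server name, JMS server name, and URL below are illustrative, and the target managed server must be running for its runtime MBean to be reachable.

```shell
# Hedged sketch: pause new message production on a JMS server so existing
# messages can drain. All names and the URL are illustrative.
$ORACLE_HOME/oracle_common/common/bin/wlst.sh <<'EOF'
connect('weblogic', '<password>', 't3://adminhost.example.com:7001')
domainRuntime()
cd('ServerRuntimes/WLS_SOA1/JMSRuntime/WLS_SOA1.jms/JMSServers/SOAJMSServer_auto_1')
cmo.pauseProduction()    # producers are paused; consumers keep draining
print('Production paused: ' + str(cmo.isProductionPaused()))
EOF
```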
Parent topic: Managing the JMS Messages in a SOA Server
Importing the JMS Messages into a SOA Server
Messages that have been previously exported can be imported into the same or a different member of the JMS destination. This procedure is used in scale-in/down scenarios to import the messages from the server that you want to remove into another member of the cluster.
- Import messages into a queue:
1. In the WebLogic Remote Console, open the Monitoring Tree.
2. Navigate to Dashboards and click the JMS Destinations dashboard.
3. Select the queue into which you want to import messages.
4. In the Messages tab, select Import to import the messages of this destination.
5. Repeat these steps for each queue destination.
- Import messages into a topic:
1. In the WebLogic Remote Console, open the Monitoring Tree.
2. Navigate to Dashboards and click the JMS Destinations dashboard.
3. Choose the topic member into which you want to import the messages.
4. In the topic, expand the durable subscribers and select the one into which you want to import the messages.
5. Click Show Messages, and then Import. Select the file that contains the messages of this subscriber.
6. Repeat these steps for each subscriber in the topic into which you have to import messages.
Parent topic: Managing the JMS Messages in a SOA Server