13 Common Configuration and Management Tasks for an Enterprise Deployment
This section describes the configuration and management tasks that you may need to perform on the enterprise deployment environment.
- Setting the Memory Parameters
  The initial startup parameter that defines the memory usage is insufficient. When the memory settings are insufficient, you may experience a delay when you log in to Fusion Middleware Control (/em), or the login might fail. You must increase the value of this parameter to provide sufficient memory.
- Verifying Manual Failover of the Administration Server
  If a host computer fails, you can fail over the Administration Server to another host. The following sections describe how to verify the failover and failback of the Administration Server between BIHOST1 and BIHOST2.
- Enabling SSL Communication Between the Middle Tier and the Hardware Load Balancer
  It is important to understand how to enable SSL communication between the middle tier and the hardware load balancer.
- Performing Backups and Recoveries for an Enterprise Deployment
  Follow these guidelines to make sure that you back up the necessary directories and configuration data for an Oracle Analytics Server enterprise deployment.
- Using Persistent Stores for TLOGs and JMS in an Enterprise Deployment
  The persistent store provides a built-in, high-performance storage solution for WebLogic Server subsystems and services that require persistence.
- About JDBC Persistent Stores for Web Services
  By default, web services use the WebLogic Server default persistent store for persistence. This store provides a high-performance storage solution for web services.
Setting the Memory Parameters
The initial startup parameter that defines the memory usage is insufficient. When the memory settings are insufficient, you may experience a delay when you log in to Fusion Middleware Control (/em), or the login might fail. You must increase the value of this parameter to provide sufficient memory.
To change the memory allocation setting, do the following:
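The individual steps are not reproduced in this excerpt. As a rough sketch only: the change typically amounts to raising the JVM heap arguments that the domain start scripts pass to the servers, for example through USER_MEM_ARGS. The path and heap sizes below are placeholder assumptions, not prescribed values.

```shell
# Sketch only: raise the JVM heap via USER_MEM_ARGS in setUserOverrides.sh.
# DOMAIN_HOME and the heap values below are examples, not prescribed settings.
DOMAIN_HOME=/tmp/sample_domain
mkdir -p "$DOMAIN_HOME/bin"
cat > "$DOMAIN_HOME/bin/setUserOverrides.sh" <<'EOF'
# Example heap settings; tune the sizes for your own workload.
export USER_MEM_ARGS="-Xms2g -Xmx4g"
EOF
cat "$DOMAIN_HOME/bin/setUserOverrides.sh"
```

After making a change like this, restart the affected servers so that the new memory arguments take effect.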
Verifying Manual Failover of the Administration Server
If a host computer fails, you can fail over the Administration Server to another host. The following sections describe how to verify the failover and failback of the Administration Server between BIHOST1 and BIHOST2.
Assumptions:

- The Administration Server is configured to listen on ADMINVHN, and not on localhost or on any other host's address.
  For more information about the ADMINVHN virtual IP address, see Reserving the Required IP Addresses for an Enterprise Deployment.
- These procedures assume that the Administration Server domain home (ASERVER_HOME) has been mounted on both host computers. This ensures that the Administration Server domain configuration files and the persistent stores are saved on the shared storage device.
- The Administration Server is failed over from BIHOST1 to BIHOST2, and the two nodes have these IP addresses:
  - BIHOST1: 100.200.140.165
  - BIHOST2: 100.200.140.205
  - ADMINVHN: 100.200.140.206. This is the virtual IP address where the Administration Server is running, assigned to a virtual sub-interface (for example, eth0:1), to be available on BIHOST1 or BIHOST2.
- Oracle WebLogic Server and Oracle Fusion Middleware components have been installed on BIHOST2 as described in the specific configuration chapters in this guide.
  Specifically, both host computers use the exact same path to reference the binary files in the Oracle home.
- Failing Over the Administration Server to a Different Host
  The following procedure shows how to fail over the Administration Server to a different node (BIHOST2). Note that even after failover, the Administration Server still uses the same Oracle WebLogic Server machine (which is a logical machine, not a physical machine).
- Validating Access to the Administration Server on BIHOST2 Through Oracle HTTP Server
  If you have configured the web tier to access AdminServer, it is important to verify that you can access the Administration Server after a manual failover, by using the standard administration URLs.
- Configuring Roles for Administration of an Enterprise Deployment
  To manage each product effectively within a single enterprise deployment domain, you must understand which products require specific administration roles or groups, and how to add a product-specific administration role to the Enterprise Deployment Administration group.
- Failing the Administration Server Back to BIHOST1
  After you have tested a manual Administration Server failover, and after you have validated that you can access the administration URLs after the failover, you can then migrate the Administration Server back to its original host.
Failing Over the Administration Server to a Different Host
The following procedure shows how to fail over the Administration Server to a different node (BIHOST2). Note that even after failover, the Administration Server will still use the same Oracle WebLogic Server machine (which is a logical machine, not a physical machine).
This procedure assumes that you have configured a per-domain Node Manager for the enterprise topology. See About the Node Manager Configuration in a Typical Enterprise Deployment.
To fail over the Administration Server to a different host:
1. Stop the Administration Server.
2. Stop the Node Manager in the Administration Server domain directory (ASERVER_HOME).
3. Migrate the ADMINVHN virtual IP address to the second host:
   a. Run the following command as root on BIHOST1 to check the virtual IP address and its CIDR:
      ip addr show dev ethX
      Where X is the current interface used by ADMINVHN. For example:
      ip addr show dev eth0
   b. Run the following command as root on BIHOST1:
      ip addr del ADMINVHN/CIDR dev ethX:Y
      Where X:Y is the current interface used by ADMINVHN. For example:
      ip addr del 100.200.140.206/24 dev eth0:1
   c. Run the following command as root on BIHOST2:
      ip addr add ADMINVHN/CIDR dev ethX label ethX:Y
      Where X:Y is the interface to be used by ADMINVHN. For example:
      ip addr add 100.200.140.206/24 dev eth0 label eth0:1
      Note: Ensure that the CIDR (representing the netmask) and the interface to be used match the available network configuration on BIHOST2. The name of the network interface device may be something other than ethX, especially on systems with redundant or bonded interfaces.
4. Update the routing tables by using arping. For example:
   arping -b -A -c 3 -I eth0 100.200.140.206
5. Start the Node Manager in the Administration Server domain home on BIHOST2.
6. Start the Administration Server on BIHOST2.
7. Test that you can access the Administration Server on BIHOST2 as follows:
   a. Ensure that you can access the Oracle WebLogic Server Administration Console by using the following URL:
      http://ADMINVHN:7001/console
   b. Check that you can access and verify the status of components in Fusion Middleware Control by using the following URL:
      http://ADMINVHN:7001/em
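The VIP migration commands above can be condensed into a small helper that prints what must be run on each host. This is a dry-run sketch only (nothing is executed, since the real commands require root on the target hosts); the addresses, interface names, and host names are the example values used in this section.

```shell
# Dry-run sketch of the ADMINVHN migration: prints, rather than executes,
# the VIP move and routing-table update. All values are the examples
# from this section and must be adapted to your environment.
ADMINVHN=100.200.140.206
CIDR=24
IFACE=eth0        # physical interface (ethX)
LABEL=eth0:1      # virtual sub-interface (ethX:Y)

failover_plan() {
  echo "BIHOST1# ip addr del ${ADMINVHN}/${CIDR} dev ${LABEL}"
  echo "BIHOST2# ip addr add ${ADMINVHN}/${CIDR} dev ${IFACE} label ${LABEL}"
  echo "BIHOST2# arping -b -A -c 3 -I ${IFACE} ${ADMINVHN}"
}

failover_plan
```

Running the printed commands on the wrong host, or with a CIDR that does not match the target host's network configuration, leaves the VIP unreachable, which is why a dry run is worth reviewing first.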
Validating Access to the Administration Server on BIHOST2 Through Oracle HTTP Server
If you have configured the web tier to access AdminServer, it is important to verify that you can access the Administration Server after you perform a manual failover of the Administration Server, by using the standard administration URLs.
From the load balancer, access the following URLs to ensure that you can access the Administration Server when it is running on BIHOST2:
-
http://admin.example.com/console
This URL should display the WebLogic Server Administration console.
-
http://admin.example.com/em
This URL should display Oracle Enterprise Manager Fusion Middleware Control.
Configuring Roles for Administration of an Enterprise Deployment
In order to manage each product effectively within a single enterprise deployment domain, you must understand which products require specific administration roles or groups, and how to add a product-specific administration role to the Enterprise Deployment Administration group.
Each enterprise deployment consists of multiple products. Some of the products have specific administration users, roles, or groups that are used to control administration access to each product.
However, for an enterprise deployment, which consists of multiple products, you can use a single LDAP-based authorization provider and a single administration user and group to control access to all aspects of the deployment. See Creating a New LDAP Authenticator and Provisioning a New Enterprise Deployment Administrator User and Group.
To be sure that you can manage each product effectively within the single enterprise deployment domain, you must understand which products require specific administration roles or groups, you must know how to add any specific product administration roles to the single, common enterprise deployment administration group, and if necessary, you must know how to add the enterprise deployment administration user to any required product-specific administration groups.
For more information, see the following topics.
Adding the Enterprise Deployment Administration User to a Product-Specific Administration Group
For products with a product-specific administration group, use the following procedure to add the enterprise deployment administration user (weblogic_bi) to the group. This allows you to manage the product by using the enterprise deployment administrator user:
Failing the Administration Server Back to BIHOST1
After you have tested a manual Administration Server failover, and after you have validated that you can access the administration URLs after the failover, you can then migrate the Administration Server back to its original host.
Enabling SSL Communication Between the Middle Tier and the Hardware Load Balancer
It is important to understand how to enable SSL communication between the middle tier and the hardware load balancer.
Note:
The following steps are applicable if the hardware load balancer is configured with SSL and the front-end address of the system has been secured accordingly.
- When is SSL Communication Between the Middle Tier and Load Balancer Necessary?
- Generating Self-Signed Certificates Using the utils.CertGen Utility
- Creating an Identity Keystore Using the utils.ImportPrivateKey Utility
- Creating a Trust Keystore Using the Keytool Utility
- Importing the Load Balancer Certificate into the Truststore
- Adding the Updated Trust Store to the Oracle WebLogic Server Start Scripts
- Configuring Node Manager to Use the Custom Keystores
- Configuring WebLogic Servers to Use the Custom Keystores
When is SSL Communication Between the Middle Tier and Load Balancer Necessary?
In an enterprise deployment, there are scenarios where the software running on the middle tier must access the front-end SSL address of the hardware load balancer. In these scenarios, an appropriate SSL handshake must take place between the load balancer and the invoking servers. This handshake is not possible unless the Administration Server and Managed Servers on the middle tier are started by using the appropriate SSL configuration.
Generating Self-Signed Certificates Using the utils.CertGen Utility
This section describes the procedure to create self-signed certificates on BIHOST1. Create certificates for every app-tier host by using the network name or alias of each host.
The directory where keystores and trust keystores are maintained must be on shared storage that is accessible from all nodes so that when the servers fail over (manually or with server migration), the appropriate certificates can be accessed from the failover node. Oracle recommends that you use central or shared stores for the certificates used for different purposes (for example, SSL set up for HTTP invocations). See the information on filesystem specifications for the KEYSTORE_HOME location provided in About the Recommended Directory Structure for an Enterprise Deployment.
For information on using trust CA certificates instead, see the information about configuring identity and trust in Administering Security for Oracle WebLogic Server.
About Passwords
The passwords used in this guide are used only as examples. Use secure passwords in a production environment. For example, use passwords that include both uppercase and lowercase characters as well as numbers.
To create self-signed certificates:
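The individual steps are not reproduced in this excerpt. As an illustration of the general pattern, the loop below prints example utils.CertGen invocations for each network name rather than running them, because utils.CertGen requires a WebLogic environment (setWLSEnv.sh sourced so that the WebLogic classes are on the classpath). The KEYSTORE_HOME path and the key password are placeholder assumptions.

```shell
# Prints example utils.CertGen invocations for each app-tier network name.
# KEYSTORE_HOME and the key password are placeholders; run the printed
# commands in a shell where the WebLogic environment has been sourced.
KEYSTORE_HOME=/u01/oracle/config/keystores
CMDS=$(for NAME in BIHOST1 BIHOST2 ADMINVHN; do
  echo "java utils.CertGen -certfile ${KEYSTORE_HOME}/${NAME}.cert -keyfile ${KEYSTORE_HOME}/${NAME}.key -keyfilepass password -cn ${NAME}"
done)
printf '%s\n' "$CMDS"
```

Using the network name or alias as the -cn value matters: the certificate's common name must match the listen address that clients use to reach each server.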
Creating an Identity Keystore Using the utils.ImportPrivateKey Utility
This section describes how to create an Identity Keystore on BIHOST1.example.com.
In previous sections you created certificates and keys that reside on shared storage. In this section, the certificates and private keys created earlier for all hosts and for ADMINVHN are imported into a new Identity Store. Make sure that you use a different alias for each certificate and key pair that you import.
Note:
The Identity Store is created (if none exists) when you import a certificate and the corresponding key into the Identity Store by using the utils.ImportPrivateKey utility.
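Concretely, an import step usually resembles the command printed below. It is shown rather than executed because utils.ImportPrivateKey also needs the WebLogic classpath; the keystore path, alias, and passwords are placeholder assumptions, and a distinct -alias is needed for each certificate and key pair.

```shell
# Prints an example utils.ImportPrivateKey invocation; the paths, alias,
# and passwords are placeholders. Repeat with a different -alias for each
# host certificate/key pair and for ADMINVHN.
KEYSTORE_HOME=/u01/oracle/config/keystores
IMPORT_CMD="java utils.ImportPrivateKey \
 -certfile ${KEYSTORE_HOME}/BIHOST1.cert -keyfile ${KEYSTORE_HOME}/BIHOST1.key \
 -keyfilepass password -keystore ${KEYSTORE_HOME}/appIdentityKeyStore.jks \
 -storepass password -alias BIHOST1 -keypass password"
echo "$IMPORT_CMD"
```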
Creating a Trust Keystore Using the Keytool Utility
To create the Trust Keystore on BIHOST1.example.com:
Importing the Load Balancer Certificate into the Truststore
For the SSL handshake to work properly, the load balancer's certificate must be added to the WLS server truststore. To add a load balancer's certificate:
Note:
The need to add the load balancer certificate to the WLS server truststore applies only to self-signed certificates. If the load balancer certificate is issued by a third-party CA, you have to import the public certificates of the root and the intermediate CAs into the truststore.
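A typical import command looks like the one printed below. It is shown rather than executed, and the certificate file name, alias, truststore path, and password are placeholder assumptions.

```shell
# Prints an example keytool command for importing the load balancer's
# certificate into the truststore; file names, alias, and password are
# placeholders for your own values.
KEYSTORE_HOME=/u01/oracle/config/keystores
IMPORT_LB_CERT="keytool -importcert -trustcacerts -alias loadbalancer \
 -file ${KEYSTORE_HOME}/loadbalancer.crt \
 -keystore ${KEYSTORE_HOME}/appTrustKeyStore.jks -storepass password"
echo "$IMPORT_LB_CERT"
```

For a CA-issued load balancer certificate, the same command shape applies, but you would import the root and intermediate CA certificates instead, each under its own alias.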
Adding the Updated Trust Store to the Oracle WebLogic Server Start Scripts
The setDomainEnv.sh script is provided by Oracle WebLogic Server and is used to start the Administration Server and the Managed Servers in the domain. To ensure that each server accesses the updated trust store, edit the setDomainEnv.sh script in each of the domain home directories in the enterprise deployment.
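One common way to do this, sketched below with an assumed KEYSTORE_HOME path and an example password, is to prepend the JVM trust-store system properties to EXTRA_JAVA_PROPERTIES inside setDomainEnv.sh:

```shell
# Sketch of the trust-store JVM properties to add to setDomainEnv.sh.
# KEYSTORE_HOME and the password are example values; use your own.
KEYSTORE_HOME=/u01/oracle/config/keystores
EXTRA_JAVA_PROPERTIES="-Djavax.net.ssl.trustStore=${KEYSTORE_HOME}/appTrustKeyStore.jks \
-Djavax.net.ssl.trustStorePassword=password ${EXTRA_JAVA_PROPERTIES}"
export EXTRA_JAVA_PROPERTIES
echo "$EXTRA_JAVA_PROPERTIES"
```

Because setDomainEnv.sh runs for every server in the domain, a single edit per domain home covers the Administration Server and all Managed Servers started from that home.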
Configuring Node Manager to Use the Custom Keystores
To configure the Node Manager to use the custom keystores, add the following lines to the end of the nodemanager.properties files located in both the ASERVER_HOME/nodemanager and MSERVER_HOME/nodemanager directories on all nodes:

KeyStores=CustomIdentityAndCustomTrust
CustomIdentityKeyStoreFileName=Identity KeyStore
CustomIdentityKeyStorePassPhrase=Identity KeyStore Passwd
CustomIdentityAlias=Identity Key Store Alias
CustomIdentityPrivateKeyPassPhrase=Private Key used when creating Certificate

Make sure to use the correct value for CustomIdentityAlias for Node Manager's listen address. For example, in the BIHOST1 MSERVER_HOME, use the alias BIHOST1, and in the ASERVER_HOME on BIHOST1, use the alias ADMINVHN, according to the steps in Creating an Identity Keystore Using the utils.ImportPrivateKey Utility.

Example for BIHOST1:

KeyStores=CustomIdentityAndCustomTrust
CustomIdentityKeyStoreFileName=KEYSTORE_HOME/appIdentityKeyStore.jks
CustomIdentityKeyStorePassPhrase=password
CustomIdentityAlias=BIHOST1
CustomIdentityPrivateKeyPassPhrase=password
The passphrase entries in the nodemanager.properties file are encrypted when you start Node Manager. For security reasons, minimize the time the entries in the nodemanager.properties file are left unencrypted. After you edit the file, restart Node Manager as soon as possible so that the entries are encrypted.
Note:
The CustomIdentityAlias value must be corrected every time the domain is extended after this configuration is performed. An unpack operation replaces the CustomIdentityAlias with the Administration Server's value when the domain configuration is written.
Configuring WebLogic Servers to Use the Custom Keystores
To configure the identity and trust keystores:
Performing Backups and Recoveries for an Enterprise Deployment
Follow these guidelines to make sure that you back up the necessary directories and configuration data for an Oracle Analytics Server enterprise deployment.
Note:
Some of the static and runtime artifacts listed in this section are hosted from Network Attached Storage (NAS). If possible, back up and recover these volumes from the NAS filer directly rather than from the application servers.
For general information about backing up and recovering Oracle Fusion Middleware products, see the following sections in Administering Oracle Fusion Middleware:
Table 13-1 lists the static artifacts to back up in a typical Oracle Analytics Server enterprise deployment.
Table 13-1 Static Artifacts to Back Up in the Oracle Analytics Server Enterprise Deployment
Type | Host | Tier
---|---|---
Database Oracle home | DBHOST1 and DBHOST2 | Data Tier
Oracle Fusion Middleware Oracle home | WEBHOST1 and WEBHOST2 | Web Tier
Oracle Fusion Middleware Oracle home | BIHOST1 and BIHOST2 (or NAS Filer) | Application Tier
Installation-related files | WEBHOST1, WEBHOST2, and shared storage | N/A
Table 13-2 lists the runtime artifacts to back up in a typical Oracle Analytics Server enterprise deployment.
Table 13-2 Run-Time Artifacts to Back Up in the Oracle Analytics Server Enterprise Deployment
Type | Host | Tier
---|---|---
Administration Server domain home (ASERVER_HOME) | BIHOST1 (or NAS Filer) | Application Tier
Application home (APPLICATION_HOME) | BIHOST1 (or NAS Filer) | Application Tier
Oracle RAC databases | DBHOST1 and DBHOST2 | Data Tier
Scripts and Customizations | Per host | Application Tier
Deployment Plan home (DEPLOY_PLAN_HOME) | BIHOST1 (or NAS Filer) | Application Tier
Singleton Data Directory (SDD) | BIHOST1 (or NAS Filer) | Application Tier
Using Persistent Stores for TLOGs and JMS in an Enterprise Deployment
The persistent store provides a built-in, high-performance storage solution for WebLogic Server subsystems and services that require persistence.
For example, the JMS subsystem stores persistent JMS messages and durable subscribers, and the JTA Transaction Log (TLOG) stores information about the committed transactions that are coordinated by the server but may not have been completed. The persistent store supports persistence to a file-based store or to a JDBC-enabled database. Persistent stores’ high availability is provided by server or service migration. Server or service migration requires that all members of a WebLogic cluster have access to the same transaction and JMS persistent stores (regardless of whether the persistent store is file-based or database-based).
For an enterprise deployment, Oracle recommends using JDBC persistent stores for transaction logs (TLOGs) and JMS.
This section analyzes the benefits of using JDBC versus File persistent stores and explains the procedure for configuring the persistent stores in a supported database. If you want to use File persistent stores instead of JDBC stores, the procedure for configuring them is also explained in this section.
Products and Components that use JMS Persistence Stores and TLOGs
You can determine which installed FMW products and components use persistent stores through the WebLogic Server Console, in the Domain Structure navigation under DomainName > Services > Persistent Stores. The list indicates the name of the store, the store type (FileStore or JDBC), and the target of the store. The stores listed that pertain to MDS are outside the scope of this chapter and should not be considered.
Component/Product | JMS Stores | TLOG Stores
---|---|---
B2B | Yes | Yes
BAM | Yes | Yes
BPM | Yes | Yes
ESS | No | No
HC | Yes | Yes
Insight | Yes | Yes
MFT | Yes | Yes
OSB | Yes | Yes
SOA | Yes | Yes
WSM | No | No
Component/Product | JMS Stores | TLOG Stores
---|---|---
OAM | No | No
OIM | Yes | Yes
JDBC Persistent Stores vs. File Persistent Stores
Oracle Fusion Middleware supports both database-based and file-based persistent stores for Oracle WebLogic Server transaction logs (TLOGs) and JMS. Before you decide on a persistent store strategy for your environment, consider the advantages and disadvantages of each approach.
Note:
Regardless of which storage method you choose, Oracle recommends that for transaction integrity and consistency, you use the same type of store for both JMS and TLOGs.
About JDBC Persistent Stores for JMS and TLOGs
When you store your TLOGs and JMS data in an Oracle database, you can take advantage of the replication and high availability features of the database. For example, you can use Oracle Data Guard to simplify cross-site synchronization. This is especially important if you are deploying Oracle Fusion Middleware in a disaster recovery configuration.
Storing TLOGs and JMS data in a database also means that you do not have to identify a specific shared storage location for this data. Note, however, that shared storage is still required for other aspects of an enterprise deployment. For example, it is necessary for Administration Server configuration (to support Administration Server failover), for deployment plans, and for adapter artifacts, such as the File and FTP Adapter control and processed files.
If you are storing TLOGs and JMS stores on a shared storage device, then you can protect this data by using the appropriate replication and backup strategy to guarantee zero data loss, and you potentially realize better system performance. However, the file system protection is always inferior to the protection provided by an Oracle Database.
For more information about the potential performance impact of using a database-based TLOGs and JMS store, see Performance Considerations for TLOGs and JMS Persistent Stores.
Performance Considerations for TLOGs and JMS Persistent Stores
One of the primary considerations when you select a storage method for Transaction Logs and JMS persistent stores is the potential impact on performance. This topic provides some guidelines and details to help you determine the performance impact of using JDBC persistent stores for TLOGs and JMS.
Performance Impact of Transaction Logs Versus JMS Stores
For transaction logs, the impact of using a JDBC store is relatively small, because the logs are very transient in nature. Typically, the effect is minimal when compared to other database operations in the system.
On the other hand, JMS database stores can have a higher impact on performance if the application is JMS intensive.
Factors that Affect Performance
There are multiple factors that can affect the performance of a system when it is using JMS DB stores for custom destinations. The main ones are:

- The custom destinations involved and their type
- The payloads being persisted
- Concurrency on the SOA system (producers and consumers for the destinations)

Depending on the effect of each of the above, different settings can be configured in the following areas to improve performance:

- The data types used for the JMS table (RAW versus LOBs)
- The segment definition for the JMS table (partitions at the index and table level)
Impact of JMS Topics
If your system uses Topics intensively, then as concurrency increases, the performance degradation with an Oracle RAC database will be greater than for Queues. In tests conducted by Oracle with JMS, the average performance degradation for different payload sizes and concurrency levels was less than 30% for Queues; for Topics, the impact was more than 40%. Consider the importance of these destinations from the recovery perspective when deciding whether to use database stores.
Impact of Data Type and Payload Size
When you choose to use the RAW or SecureFiles LOB data type for the payloads, consider the size of the payload being persisted. For example, when payload sizes range between 100b and 20k, then the amount of database time required by SecureFiles LOB is slightly higher than for the RAW data type.
More specifically, when the payload size reaches around 4k, SecureFiles tends to require more database time. This is because 4k is where writes move out-of-row. At around 20k payload size, SecureFiles data starts being more efficient. When payload sizes increase to more than 20k, the database time becomes worse for payloads set to the RAW data type.
One additional advantage of SecureFiles is that the database time incurred stabilizes as payloads increase beyond 500k. In other words, at that point it is not relevant (for SecureFiles) whether the data is storing 500k, 1MB, or 2MB payloads, because the write is asynchronous, and the contention is the same in all cases.
The effect of concurrency (producers and consumers) on the queue’s throughput is similar for both RAW and SecureFiles until the payload sizes reach 50K. For small payloads, the effect on varying concurrency is practically the same, with slightly better scalability for RAW. Scalability is better for SecureFiles when the payloads are above 50k.
Impact of Concurrency, Worker Threads, and Database Partitioning
Concurrency and the worker threads defined for the persistent store can cause contention in the RAC database at the index and global cache level. Using a reverse index when enabling multiple worker threads in one single server, or using multiple Oracle WebLogic Server clusters, can reduce this contention. However, if the Oracle Database partitioning option is available, use global hash partitioning for indexes instead. This reduces the contention on the index and the global cache buffer waits, which in turn improves the response time of the application. Partitioning works well in all cases, some of which do not see significant improvements with a reverse index.
Using JDBC Persistent Stores for TLOGs and JMS in an Enterprise Deployment
This section explains the guidelines to use JDBC persistent stores for transaction logs (TLOGs) and JMS. It also explains the procedures to configure the persistent stores in a supported database.
- Recommendations for TLOGs and JMS Datasource Consolidation
  To accomplish data source consolidation and connection usage reduction, use a single connection pool for both JMS and TLOGs persistent stores.
- Roadmap for Configuring a JDBC Persistent Store for TLOGs
  The following topics describe how to configure a database-based persistent store for transaction logs.
- Roadmap for Configuring a JDBC Persistent Store for JMS
  The following topics describe how to configure a database-based persistent store for JMS.
- Creating a User and Tablespace for TLOGs
  Before you can create a database-based persistent store for transaction logs, you must create a user and tablespace in a supported database.
- Creating a User and Tablespace for JMS
  Before you can create a database-based persistent store for JMS, you must create a user and tablespace in a supported database.
- Creating GridLink Data Sources for TLOGs and JMS Stores
  Before you can configure database-based persistent stores for JMS and TLOGs, you must create two data sources: one for the TLOGs persistent store and one for the JMS persistent store.
- Assigning the TLOGs JDBC Store to the Managed Servers
  If you are going to accomplish data source consolidation, you will reuse the <PREFIX>_WLS tablespace and WLSSchemaDatasource for the TLOG persistent store. Otherwise, ensure that you have created the tablespace, the user, and the datasource before you assign the TLOG store to each of the required Managed Servers.
- Creating a JDBC JMS Store
  After you create the JMS persistent store user and tablespace in the database, and after you create the data source for the JMS persistent store, you can then use the Administration Console to create the store.
- Assigning the JMS JDBC Store to the JMS Servers
  After you create the JMS tablespace and user in the database, create the JMS datasource, and create the JDBC store, you can assign the JMS persistent store to each of the required JMS Servers.
- Creating the Required Tables for the JMS JDBC Store
  The final step in using a JDBC persistent store for JMS is to create the required JDBC store tables. Perform this task before you restart the Managed Servers in the domain.
Recommendations for TLOGs and JMS Datasource Consolidation
To accomplish data source consolidation and connection usage reduction, use a single connection pool for both JMS and TLOGs persistent stores.
Oracle recommends that you reuse the WLSSchemaDatasource as is for TLOGs and JMS persistent stores under non-high workloads, and that you consider increasing the WLSSchemaDatasource pool size. Reusing the datasource forces the use of the same schema and tablespaces, so the PREFIX_WLS_RUNTIME schema in the PREFIX_WLS tablespace is used for both TLOGs and JMS messages. Under high workloads, however, sharing a single pool can cause issues:

- High contention in the datasource can cause persistent stores to fail if no connections are available in the pool to persist JMS messages.
- High contention in the datasource can cause issues in transactions if no connections are available in the pool to update transaction logs.

For these cases, use a separate datasource for the TLOGs and a separate datasource for the stores. You can still reuse the PREFIX_WLS_RUNTIME schema, but configure separate custom datasources to the same schema to solve the contention issue.
Roadmap for Configuring a JDBC Persistent Store for TLOGs
The following topics describe how to configure a database-based persistent store for transaction logs.
Note:
Steps 1 and 2 are optional. To accomplish data source consolidation and connection usage reduction, you can reuse the PREFIX_WLS tablespace and WLSSchemaDatasource as described in Recommendations for TLOGs and JMS Datasource Consolidation.
Roadmap for Configuring a JDBC Persistent Store for JMS
The following topics describe how to configure a database-based persistent store for JMS.
Note:
Steps 1 and 2 are optional. To accomplish data source consolidation and connection usage reduction, you can reuse the PREFIX_WLS tablespace and WLSSchemaDatasource as described in Recommendations for TLOGs and JMS Datasource Consolidation.
Creating a User and Tablespace for TLOGs
Before you can create a database-based persistent store for transaction logs, you must create a user and tablespace in a supported database.
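The statements themselves are not included in this excerpt. As an illustration only, with assumed names, sizes, and password (your DBA standards take precedence), the setup typically resembles the SQL written out below:

```shell
# Writes example SQL for a TLOG user and tablespace to a file; the names,
# datafile path, sizes, and password are placeholders, not recommendations.
SQL_FILE=/tmp/create_tlogs.sql
cat > "$SQL_FILE" <<'EOF'
CREATE TABLESPACE tlogs
  DATAFILE '/u01/oracle/oradata/tlogs01.dbf' SIZE 500M AUTOEXTEND ON;
CREATE USER tlogs IDENTIFIED BY password
  DEFAULT TABLESPACE tlogs QUOTA UNLIMITED ON tlogs;
GRANT CREATE SESSION, CREATE TABLE TO tlogs;
EOF
cat "$SQL_FILE"
```

The equivalent JMS setup described in the next topic follows the same pattern with a separate user and tablespace.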
Creating a User and Tablespace for JMS
Before you can create a database-based persistent store for JMS, you must create a user and tablespace in a supported database.
Creating GridLink Data Sources for TLOGs and JMS Stores
Before you can configure database-based persistent stores for JMS and TLOGs, you must create two data sources: one for the TLOGs persistent store and one for the JMS persistent store.
For an enterprise deployment, you should use GridLink data sources for your TLOGs and JMS stores. To create a GridLink data source:
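The console steps are not reproduced in this excerpt. The defining characteristic of a GridLink data source is a JDBC URL that reaches the RAC database service through its SCAN address (with FAN/ONS notifications enabled in the data source configuration). The URL built below is an illustrative sketch; the SCAN host, port, and service name are assumptions.

```shell
# Builds an example GridLink-style JDBC URL; SCAN_HOST, SCAN_PORT, and
# SERVICE_NAME are placeholders for your RAC database's SCAN address
# and database service.
SCAN_HOST=db-scan.example.com
SCAN_PORT=1521
SERVICE_NAME=oas.example.com
GRIDLINK_URL="jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=${SCAN_HOST})(PORT=${SCAN_PORT}))(CONNECT_DATA=(SERVICE_NAME=${SERVICE_NAME})))"
echo "$GRIDLINK_URL"
```

Pointing at the SCAN address and a service name (rather than a single instance) is what lets the pool follow instance failover and rebalancing in the RAC cluster.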
Assigning the TLOGs JDBC Store to the Managed Servers
If you are going to accomplish data source consolidation, you will reuse the <PREFIX>_WLS tablespace and WLSSchemaDatasource for the TLOG persistent store. Otherwise, ensure that you have created the tablespace and user in the database, and that you have created the datasource, before you assign the TLOG store to each of the required Managed Servers.
- Log in to the Oracle WebLogic Server Administration Console.
- In the Change Center, click Lock and Edit.
- To configure the TLOG of a Managed Server, in the Domain Structure tree:
- For static clusters: expand Environment, then Servers, and then click the name of the Managed Server.
- For dynamic clusters: expand Environment, then Clusters, and then Server Templates. Click the name of the server template.
- Select the Configuration > Services tab.
- Under Transaction Log Store, select JDBC from the Type menu.
- From the Data Source menu, select WLSSchemaDatasource to accomplish data source consolidation. The <PREFIX>_WLS tablespace is used for the TLOGs.
- In the Prefix Name field, specify a prefix name to form a unique JDBC TLOG store name for each configured JDBC TLOG store.
- Click Save.
- To activate these changes, in the Change Center of the Administration Console, click Activate Changes.
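If you prefer scripting, the same assignment can be sketched in WLST (WLST shell only, not plain Python). The Managed Server names below are hypothetical, and the `TransactionLogJDBCStore` attribute names should be checked against your WebLogic Server release; for a dynamic cluster, the equivalent change is made on the server template.

```python
# WLST sketch -- hypothetical server names; verify attributes for your release.
edit()
startEdit()
for server in ['WLS_BI1', 'WLS_BI2']:
    # Each server exposes a TransactionLogJDBCStore child MBean.
    cd('/Servers/%s/TransactionLogJDBCStore/%s' % (server, server))
    set('Enabled', true)                   # use a JDBC TLOG store instead of the default file store
    set('DataSource', getMBean('/JDBCSystemResources/WLSSchemaDatasource'))
    set('PrefixName', 'TLOG_%s_' % server) # unique prefix per server
activate()
```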
Creating a JDBC JMS Store
After you create the JMS persistent store user and table space in the database, and after you create the data source for the JMS persistent store, you can then use the Administration Console to create the store.
Assigning the JMS JDBC store to the JMS Servers
After you create the JMS tablespace and user in the database, create the JMS datasource, and create the JDBC store, then you can assign the JMS persistence store to each of the required JMS Servers.
- Log in to the Oracle WebLogic Server Administration Console.
- In the Change Center, click Lock and Edit.
- In the Domain Structure tree, expand Services, then Messaging, and then JMS Servers.
- Click the name of the JMS Server that you want to use with the persistent store.
- From the Persistent Store menu, select the JMS persistent store you created earlier.
- Click Save.
- Repeat steps 3 to 6 for each of the additional JMS Servers in the cluster.
- To activate these changes, in the Change Center of the Administration Console, click Activate Changes.
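The steps above can also be scripted in WLST (WLST shell only, not plain Python). The JDBC store name and JMS Server names below are hypothetical placeholders:

```python
# WLST sketch -- hypothetical names; adapt before use.
edit()
startEdit()
store = getMBean('/JDBCStores/BI_JMS_JDBCStore')    # the JDBC store created earlier
for jmsServer in ['BIJMSServer1', 'BIJMSServer2']:  # one entry per JMS Server in the cluster
    cd('/JMSServers/' + jmsServer)
    set('PersistentStore', store)
activate()
```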
Using File Persistent Stores for TLOGs and JMS in an Enterprise Deployment
Configuring TLOGs File Persistent Store in a Shared Folder
Oracle WebLogic Server uses the transaction logs to recover from system crashes or network failures.
Configuring TLOGs File Persistent Store in a Shared Folder with a Static Cluster
To set the location for the default persistence stores for each managed server in a static cluster, complete the following steps:
-
Log in to the Oracle WebLogic Server Administration Console:
ADMINVHN:7001/console
Note:
If you have already configured the web tier, use http://admin.example.com/console instead.
-
In the Change Center section, click Lock & Edit.
-
For each of the Managed Servers in the cluster:
-
In the Domain Structure window, expand the Environment node, and then click the Servers node.
The Summary of Servers page appears.
-
Click the name of the server (represented as a hyperlink) in the Name column of the table.
The settings page for the selected server appears and defaults to the Configuration tab.
-
On the Configuration tab, click the Services tab.
-
In the Default Store section of the page, enter the path to the folder where the default persistent store stores its data files.
For the enterprise deployment, use the ORACLE_RUNTIME directory location. This subdirectory serves as the central, shared location for transaction logs for the cluster. See File System and Directory Variables Used in This Guide.
For example:
ORACLE_RUNTIME/domain_name/cluster_name/tlogs
In this example, replace ORACLE_RUNTIME with the value of the variable for your environment. Replace domain_name with the name you assigned to the domain. Replace cluster_name with the name of the cluster you just created.
-
Click Save.
-
-
Complete step 3 for all servers in the cluster.
-
Click Activate Changes.
Note:
You validate the location and the creation of the transaction logs later in the configuration procedure.
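For a scripted alternative, the console steps above can be sketched in WLST (WLST shell only, not plain Python). The directory path, domain, cluster, and server names below are hypothetical placeholders that follow the ORACLE_RUNTIME/domain_name/cluster_name/tlogs pattern:

```python
# WLST sketch -- hypothetical paths and names; adapt before use.
edit()
startEdit()
tlogDir = '/u01/oracle/runtime/bi_domain/BI_Cluster/tlogs'  # ORACLE_RUNTIME/domain_name/cluster_name/tlogs
for server in ['WLS_BI1', 'WLS_BI2']:
    # Each server's default file store has a Directory attribute.
    cd('/Servers/%s/DefaultFileStore/%s' % (server, server))
    set('Directory', tlogDir)
activate()
```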
Configuring TLOGs File Persistent Store in a Shared Folder with a Dynamic Cluster
To set the location for the default persistence stores for a dynamic cluster, update the server template:
-
Log in to the Oracle WebLogic Server Administration Console:
ADMINVHN:7001/console
Note:
If you have already configured the web tier, use http://admin.example.com/console instead.
-
In the Change Center section, click Lock & Edit.
-
Navigate to the server template for the cluster:
-
In the Domain Structure window, expand the Environment and Clusters nodes, and then click the Server Templates node.
The Summary of Server Templates page appears.
-
Click the name of the server template (represented as a hyperlink) in the Name column of the table.
The settings page for the selected server template appears and defaults to the Configuration tab.
-
On the Configuration tab, click the Services tab.
-
In the Default Store section of the page, enter the path to the folder where the default persistent store stores its data files.
For the enterprise deployment, use the ORACLE_RUNTIME directory location. This subdirectory serves as the central, shared location for transaction logs for the cluster. See File System and Directory Variables Used in This Guide.
For example:
ORACLE_RUNTIME/domain_name/cluster_name/tlogs
In this example, replace ORACLE_RUNTIME with the value of the variable for your environment. Replace domain_name with the name that you assigned to the domain. Replace cluster_name with the name of the cluster you just created.
-
Click Save.
-
-
Click Activate Changes.
Note:
You validate the location and the creation of the transaction logs later in the configuration procedure.
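For dynamic clusters, the same change is made once on the server template rather than per server. A WLST sketch (WLST shell only, not plain Python; template name and path are hypothetical):

```python
# WLST sketch -- hypothetical template name and path; adapt before use.
edit()
startEdit()
tmpl = 'bi-server-template'
cd('/ServerTemplates/%s/DefaultFileStore/%s' % (tmpl, tmpl))
set('Directory', '/u01/oracle/runtime/bi_domain/BI_Cluster/tlogs')
activate()
```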
Validating the Location and Creation of the Transaction Logs
After the WLS_SERVER_TYPE1 and WLS_SERVER_TYPE2 Managed Servers are up and running, based on the steps that you performed in Configuring TLOGs File Persistent Store in a Shared Folder with a Static Cluster, verify that the following transaction log directory and transaction logs are created as expected:
ORACLE_RUNTIME/domain_name/cluster_name/tlogs
-
_WLS_WLS_SERVER_TYPE1000000.DAT
-
_WLS_WLS_SERVER_TYPE2000000.DAT
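This check can also be scripted. The small Python helper below (the directory layout and file-name pattern are assumptions based on the example paths above) lists the `_WLS_<SERVER>NNNNNN.DAT` files in a tlogs directory so you can confirm that one file exists per Managed Server:

```python
import os
import re

def find_tlog_files(tlog_dir):
    """Return the sorted _WLS_<SERVER>NNNNNN.DAT transaction-log files in tlog_dir."""
    pattern = re.compile(r'^_WLS_.+\d{6}\.DAT$')
    return sorted(name for name in os.listdir(tlog_dir) if pattern.match(name))

# Example: point this at ORACLE_RUNTIME/domain_name/cluster_name/tlogs and
# verify that each Managed Server has produced its .DAT file.
```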
About JDBC Persistent Stores for Web Services
By default, web services use the WebLogic Server default persistent store for persistence. This store provides a high-performance storage solution for the following web services features:
-
Reliable Messaging
-
Make Connection
-
SecureConversation
-
Message buffering
You also have the option to use a JDBC persistence store in your WebLogic Server web service, instead of the default store. For information about web service persistence, see Managing Web Service Persistence.
Performing Backups and Recoveries for an Enterprise Deployment
It is recommended that you follow these guidelines to ensure that you back up the necessary directories and configuration data for an Oracle Analytics Server enterprise deployment.
Note:
Some of the static and runtime artifacts listed in this section are hosted from Network Attached Storage (NAS). If possible, back up and recover these volumes directly from the NAS filer rather than from the application servers.
For general information about backing up and recovering Oracle Fusion Middleware products, see Administering Oracle Fusion Middleware.
Table 13-3 lists the static artifacts to back up in a typical Oracle Analytics Server enterprise deployment.
Table 13-3 Static Artifacts to Back Up in the Oracle Analytics Server Enterprise Deployment
Type | Host | Tier
---|---|---
Database Oracle home | DBHOST1 and DBHOST2 | Data Tier
Oracle Fusion Middleware Oracle home | WEBHOST1 and WEBHOST2 | Web Tier
Oracle Fusion Middleware Oracle home | BIHOST1 and BIHOST2 (or NAS Filer) | Application Tier
Installation-related files | WEBHOST1, WEBHOST2, and shared storage | N/A
Table 13-4 lists the runtime artifacts to back up in a typical Oracle Analytics Server enterprise deployment.
Table 13-4 Run-Time Artifacts to Back Up in the Oracle Analytics Server Enterprise Deployment
Type | Host | Tier
---|---|---
Administration Server domain home (ASERVER_HOME) | BIHOST1 (or NAS Filer) | Application Tier
Application home (APPLICATION_HOME) | BIHOST1 (or NAS Filer) | Application Tier
Oracle RAC databases | DBHOST1 and DBHOST2 | Data Tier
Scripts and Customizations | Per host | Application Tier
Deployment Plan home (DEPLOY_PLAN_HOME) | BIHOST1 (or NAS Filer) | Application Tier
Singleton Data Directory (SDD) | BIHOST1 (or NAS Filer) | Application Tier