WebLogic Server Performance and Tuning
WebLogic Server only performs as well as the applications running on it. To quote the authors of Mastering BEA WebLogic Server: Best Practices for Building and Deploying J2EE Applications: "Good application performance starts with good application design. Overly-complex or poorly-designed applications will perform poorly regardless of the system-level tuning and best practices employed to improve performance." In other words, a poorly designed application creates unnecessary bottlenecks. For example, resource contention may be a symptom of poor design rather than something inherent to the application domain.
This section discusses some methods to determine the bottlenecks that can impede your application's performance:
This section is a quick reference for using the Optimizeit™ and JProbe™ profilers with WebLogic Server.
A profiler is a performance analysis tool that reveals hot spots in an application, such as code paths that cause high CPU utilization or high contention for shared resources. For a list of common profilers, see Performance Analysis Tools.
The JProbe Suite is a family of products that provide the capability to detect performance bottlenecks, find and fix memory leaks, perform code coverage analysis, and gather other metrics.
The JProbe website provides a technical white paper, "Using Sitraka JProbe and BEA WebLogic Server", which describes how developers can analyze code with any of the JProbe Suite tools running inside BEA WebLogic Server.
The Optimizeit Profiler from Borland is a performance debugging tool for Solaris and Windows platforms.
Borland provides detailed J2EE Integration Tutorials for the supported versions of Optimizeit Profiler that work with WebLogic Server.
Most performance gains or losses in a database application are determined by how the application is designed. The number and location of clients, size and structure of DBMS tables and indexes, and the number and types of queries all affect application performance.
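For example, connection pool capacity is one of the most common JDBC tuning points. The config.xml fragment below is a sketch only, assuming an Oracle thin driver; the pool name, URL, user, pool sizes, and JNDI name are placeholder values, and credentials are omitted. Setting InitialCapacity equal to MaxCapacity avoids the cost of creating connections on demand when load increases.
<JDBCConnectionPool Name="examplePool"
    DriverName="oracle.jdbc.driver.OracleDriver"
    URL="jdbc:oracle:thin:@dbhost:1521:mydb"
    Properties="user=exampleUser"
    InitialCapacity="25"
    MaxCapacity="25"
    Targets="examplesServer"/>
<JDBCTxDataSource Name="exampleDataSource"
    JNDIName="jdbc/exampleDS"
    PoolName="examplePool"
    Targets="examplesServer"/>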
For more information on optimizing your applications for JDBC and tuning WebLogic JDBC connection pools, see:
A number of design choices affect the performance of JMS applications, and many of them involve trade-offs with reliability, scalability, manageability, monitoring, user transactions, message-driven bean support, and integration with an application server. In addition, several WebLogic JMS extensions and features have a direct impact on performance.
For more information on optimizing your applications for JMS and tuning WebLogic JMS, see:
Tuning WebLogic Server EJBs describes how to tune WebLogic Server Enterprise Java Beans to match your application needs.
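As one illustration of the settings covered there, the weblogic-ejb-jar.xml fragment below sketches free pool sizing for a stateless session bean. The bean name and pool sizes are placeholders, and the exact element ordering should be checked against the weblogic-ejb-jar.xml DTD for your release.
<weblogic-ejb-jar>
  <weblogic-enterprise-bean>
    <ejb-name>ExampleServiceBean</ejb-name>
    <stateless-session-descriptor>
      <pool>
        <!-- cap the pool and pre-create instances so requests are not delayed by bean creation -->
        <max-beans-in-free-pool>100</max-beans-in-free-pool>
        <initial-beans-in-free-pool>20</initial-beans-in-free-pool>
      </pool>
    </stateless-session-descriptor>
  </weblogic-enterprise-bean>
</weblogic-ejb-jar>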
There are some performance issues you should be aware of when you program your WebLogic Web services:
JAXP factory properties can be set either on the command line or in a jaxp.properties file (JAXP API). BEA recommends setting the properties on the command line to avoid unnecessary file operations at runtime and to improve performance and resource usage.
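For example, the JAXP factory implementations can be pinned in the server start command rather than being resolved through a jaxp.properties lookup at runtime. The command below is only a sketch: weblogic.Server is the server startup class, the javax.xml.parsers.* names are standard JAXP system properties, and the com.example factory classes are placeholders for whichever parser implementation you intend to use.
java -Djavax.xml.parsers.SAXParserFactory=com.example.parsers.MySAXParserFactory \
     -Djavax.xml.parsers.DocumentBuilderFactory=com.example.parsers.MyDocumentBuilderFactory \
     weblogic.Server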
As a general rule, you should optimize your application so that it does as little work as possible when handling session persistence and sessions. You should also design a session management strategy that suits your environment and application.
WebLogic Server offers five session persistence mechanisms that cater to the differing requirements of your application. The session persistence mechanisms are configurable at the Web application layer. Which session management strategy you choose depends on real-world factors like HTTP session size, session life cycle, reliability, and session failover requirements. For example, a Web application with no failover requirements could use simple memory-based sessions, whereas a Web application that requires session failover could use replicated sessions or JDBC-based sessions, depending on session life cycle and object size.
In terms of pure performance, in-memory session persistence is a better overall choice when compared to JDBC-based persistence for session state. According to the authors of Session Persistence Performance in BEA WebLogic Server 7.0: "While all session persistence mechanisms have to deal with the overhead of data serialization and deserialization, the additional overhead of the database interaction impacts the performance of the JDBC-based session persistence and causes it to under-perform compared with the in-memory replication." However, in-memory-based session persistence requires the use of WebLogic clustering, so it isn't an option in a single-server environment.
On the other hand, an environment using JDBC-based persistence does not require the use of WebLogic clusters and can maintain the session state for longer periods of time in the database. One way to improve JDBC-based session persistence is to optimize your code so that it has as high a granularity for session state persistence as possible. Other factors that can improve the overall performance of JDBC-based session persistence are the choice of database, proper database server configuration, the JDBC driver, and the JDBC connection pool configuration.
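The persistence mechanism is selected per Web application in the weblogic.xml session descriptor. The fragment below is a sketch assuming JDBC-based persistence backed by a connection pool named sessionPool (a placeholder); for in-memory replication you would set PersistentStoreType to replicated instead.
<session-descriptor>
  <session-param>
    <param-name>PersistentStoreType</param-name>
    <param-value>jdbc</param-value>
  </session-param>
  <session-param>
    <!-- connection pool used to store session data -->
    <param-name>PersistentStorePool</param-name>
    <param-value>sessionPool</param-value>
  </session-param>
</session-descriptor>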
For more information on managing session persistence, see:
Configuring how WebLogic Server manages sessions is a key part of tuning your application for best performance. Consider the following:
For more information, see "Setting Up Session Management" in Assembling and Configuring Web Applications.
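As a sketch of the kind of settings involved, the weblogic.xml fragment below shortens the session timeout and the interval at which WebLogic Server scans for invalid sessions; the values shown are illustrative, not recommendations, and in practice these parameters would be added to the same session-descriptor element shown earlier.
<session-descriptor>
  <session-param>
    <!-- seconds a session may remain idle before it times out -->
    <param-name>TimeoutSecs</param-name>
    <param-value>1800</param-value>
  </session-param>
  <session-param>
    <!-- how often the server scans for timed-out sessions to invalidate -->
    <param-name>InvalidationIntervalSecs</param-name>
    <param-value>120</param-value>
  </session-param>
</session-descriptor>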
You can fine-tune an application's access to execute threads (and thereby optimize or throttle its performance) by using multiple execute queues in WebLogic Server. However, keep in mind that unused threads represent significant wasted resources in a WebLogic Server system. You may find that available threads in configured execute queues go unused, while tasks in other queues sit idle waiting for threads to become available. In such a situation, the division of threads into multiple queues may yield poorer overall performance than having a single, default execute queue.
Default WebLogic Server installations are configured with a default execute queue, weblogic.kernel.Default, which is used by all applications running on the server instance. You may want to configure additional queues to:
Be sure to monitor each execute queue to ensure proper thread usage in the system as a whole. See Tuning the Default Execute Queue Threads for general information about optimizing the number of threads.
An execute queue represents a named collection of execute threads that are available to one or more designated servlets, JSPs, EJBs, or RMI objects. An execute queue is represented in the domain config.xml file as part of the Server element. For example, an execute queue named CriticalAppQueue with four execute threads appears in the config.xml file as follows:
...
<Server
    Name="examplesServer"
    ListenPort="7001"
    NativeIOEnabled="true">
  <ExecuteQueue Name="default"
      ThreadCount="15"/>
  <ExecuteQueue Name="CriticalAppQueue"
      ThreadCount="4"/>
  ...
</Server>
To configure a new execute queue using the Administration Console:
Queue Length: Always leave the Queue Length at the default value of 65536 entries. The Queue Length specifies the maximum number of simultaneous requests that the server can hold in the queue. The default of 65536 requests represents a very large number of requests; outstanding requests in the queue should rarely, if ever, reach this maximum value. If the maximum Queue Length is reached, WebLogic Server automatically doubles the size of the queue to account for the additional work. Note, however, that exceeding 65536 requests in the queue indicates a problem with the threads in the queue, rather than with the length of the queue itself; check for stuck threads or an insufficient thread count in the execute queue.

Queue Length Threshold Percent: The percentage (from 1-99) of the Queue Length size that can be reached before the server indicates an overflow condition for the queue. All actual queue length sizes below the threshold percentage are considered normal; sizes above the threshold percentage indicate an overflow. When an overflow condition is reached, WebLogic Server logs an error message and increases the number of threads in the queue by the value of the Threads Increase attribute to help reduce the workload. By default, the Queue Length Threshold Percent value is 90 percent. In most situations, you should leave the value at or near 90 percent, to account for any potential condition where additional threads may be needed to handle an unexpected spike in work requests. Keep in mind that Queue Length Threshold Percent must not be used as an automatic tuning parameter; the threshold should never trigger an increase in thread count under normal operating conditions.

Thread Count: The number of threads assigned to this queue. If you do not need to use more than 15 threads (the default) for your work, do not change the value of this attribute. (For more information, see Should You Modify the Default Thread Count?)

Threads Increase: The number of threads WebLogic Server should add to this execute queue when it detects an overflow condition. If you specify zero threads (the default), the server changes its health state to "warning" in response to an overflow condition in the queue, but it does not allocate additional threads to reduce the workload. Note: If WebLogic Server increases the number of threads in response to an overflow condition, the additional threads remain in the execute queue until the server is rebooted. Monitor the error log to determine the cause of overflow conditions, and reconfigure the thread count as necessary to prevent similar conditions in the future. Do not use the combination of Threads Increase and Queue Length Threshold Percent as an automatic tuning tool; doing so generally results in the execute queue allocating more threads than necessary and suffering poor performance due to context switching.

Threads Maximum: The maximum number of threads that this execute queue can have; this value prevents WebLogic Server from creating an overly high thread count in the queue in response to continual overflow conditions. By default, the Threads Maximum is set to 400.

Thread Priority: The priority of the threads associated with this queue. By default, the Thread Priority is set to 5.

You can assign a servlet or JSP to a configured execute queue by identifying the execute queue name in the initialization parameters. Initialization parameters appear within the init-param element of the servlet's or JSP's deployment descriptor file, web.xml. To assign an execute queue, enter the queue name as the value of the wl-dispatch-policy parameter, as in the following example:
<servlet>
  <servlet-name>MainServlet</servlet-name>
  <jsp-file>/myapplication/critical.jsp</jsp-file>
  <init-param>
    <param-name>wl-dispatch-policy</param-name>
    <param-value>CriticalAppQueue</param-value>
  </init-param>
</servlet>
See "Initializing a Servlet" in Programming WebLogic HTTP Servlets for more information about specifying initialization parameters in web.xml
.
To assign an EJB object to a configured execute queue, use the new dispatch-policy element in weblogic-ejb-jar.xml. For more information, see the weblogic-ejb-jar.xml Deployment Descriptor.
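For illustration, the fragment below is a sketch that assigns a bean to the CriticalAppQueue queue configured in the earlier config.xml example; the bean name is a placeholder.
<weblogic-ejb-jar>
  <weblogic-enterprise-bean>
    <ejb-name>CriticalBean</ejb-name>
    <!-- requests for this bean are serviced by threads from CriticalAppQueue -->
    <dispatch-policy>CriticalAppQueue</dispatch-policy>
  </weblogic-enterprise-bean>
</weblogic-ejb-jar>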
While you can also set the dispatch policy through the appc compiler's -dispatchPolicy flag, BEA strongly recommends that you use the deployment descriptor element instead. This way, if the EJB is recompiled (during deployment, for example), the setting is not lost.
To assign an RMI object to a configured execute queue, use the -dispatchPolicy option to the rmic compiler. For example:
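The command below is a sketch: weblogic.rmic is WebLogic Server's RMI compiler, and examples.rmi.MyRMIImpl is a placeholder for your own RMI implementation class.
java weblogic.rmic -dispatchPolicy CriticalAppQueue examples.rmi.MyRMIImpl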