17 Monitoring WebCenter Portal Performance
Note:
Oracle WebCenter Portal has deprecated support for Jive features (announcements and discussions/discussion forums). Therefore, Jive features are not available in 14.1.2 instances.
Permissions:
To perform the tasks in this chapter, you must be granted the WebLogic Server Admin, Operator, or Monitor role through the Oracle WebLogic Server Administration Console.
See also Understanding Administrative Operations, Roles, and Tools.
17.1 Understanding Oracle WebCenter Portal Performance Metrics
Through Fusion Middleware Control, administrators can monitor the performance and availability of all the components, tools, and services that make up WebCenter Portal, as well as the application as a whole. To access Oracle WebCenter Portal metrics through Fusion Middleware Control, see Viewing Performance Metrics Using Fusion Middleware Control.
To make the best use of the information displayed, it is important that you understand how performance metrics are calculated and what they mean. All of Oracle WebCenter Portal's performance metrics are listed and described here for your reference. Some applications (such as Oracle WebCenter Portal) might use the full range of social networking, personal productivity, and collaboration metrics listed, while others may use only one or more of these features.
This section includes the following topics:
17.1.1 Understanding Oracle WebCenter Portal Metric Collection
Performance metrics are automatically enabled for Oracle WebCenter Portal and display in Fusion Middleware Control. You do not need to set options or perform any extra configuration to collect performance metrics for WebCenter Portal. If you encounter a problem, such as an application running slowly or hanging, you can find out more about the problem by investigating performance metrics, in real time, through Fusion Middleware Control.
This section describes the different ways Oracle WebCenter Portal collects and presents metric data:
17.1.1.1 Metric Collection: Since Startup
At any given time, real-time metrics are available for the duration for which the WebLogic Server hosting WebCenter Portal is up and running. Real-time metrics that are collected or aggregated since the startup of the container are displayed on Oracle WebCenter Portal metric pages under the heading Since Startup. These metrics provide data aggregated over the lifetime of the WebLogic Server. The aggregated data enables you to understand overall system performance and compare the performance of recent requests shown in Recent History.
For example, consider WebCenter Portal deployed on a managed server that was started 4 hours ago. During that time, WebCenter Portal serviced 10,000 portlet requests with a total response time of 500,000 ms. For this scenario, Since Startup metrics for portlets show:
- Since Startup: Invocations (count) - 10,000
- Since Startup: Average Time (ms) - 50
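To make the arithmetic explicit, here is a minimal sketch (in Python, using the example's numbers as hypothetical inputs) of how a Since Startup average is derived from cumulative totals:

```python
# Hypothetical running totals aggregated since the WebLogic Server started.
invocations_since_startup = 10_000      # portlet requests serviced
total_response_time_ms = 500_000        # cumulative response time in ms

# Since Startup: Average Time (ms) = total time / total invocations
average_time_ms = total_response_time_ms / invocations_since_startup
print(average_time_ms)  # 50.0
```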
Note:
Metric collection starts afresh after the container is restarted. Data collected before the restart becomes unavailable.
17.1.1.2 Metric Collection: Recent History
In addition to Since Startup metrics, Oracle WebCenter Portal reports metrics for requests serviced in the last 10 to 15 minutes as Recent History metrics. To do this, Oracle WebCenter Portal takes regular snapshots of real-time metrics at an internal frequency. These metric snapshots are used to calculate the "delta" time spent performing service requests in the last 10 to 15 minutes, and this data displays as Recent History metrics. Because Recent History metrics only aggregate data for the last 10-15 minutes, this information is useful if you want to investigate ongoing performance/availability issues.
If you compare Recent History metrics with Since Startup metrics, you can gauge how recent system behavior compares with overall system availability/performance.
For example, consider a system that has been up and running for 2 days. During that time, Oracle WebCenter Portal recorded that the total time spent servicing 100,000 portlet requests was 5,000,000 ms. The system then starts to experience performance issues: in the last 10-15 minutes, 100 portlet requests took a total time of 3,000,000 ms. In this scenario, the average response time reported "Since Startup" is quite low and would not indicate a performance issue (5,000,000 ms / 100,000 = 50 ms). However, the same Recent History metric is considerably higher (3,000,000 ms / 100 = 30 seconds), which immediately tells the administrator that performance has degraded recently. A quick comparison of "Recent History" with the corresponding "Since Startup" metric clearly shows whether or not the recent metric data is normal, and in this case shows there is currently a problem with the system.
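The Recent History calculation can be illustrated with a minimal sketch (not Oracle WebCenter Portal code; the snapshot values below are the hypothetical numbers from this example):

```python
# Hypothetical cumulative counters captured by two metric snapshots,
# taken roughly 15 minutes apart.
earlier = {"invocations": 99_900, "total_time_ms": 2_000_000}
latest = {"invocations": 100_000, "total_time_ms": 5_000_000}

# Since Startup average uses the latest cumulative totals.
since_startup_avg = latest["total_time_ms"] / latest["invocations"]   # 50 ms

# Recent History average uses only the delta between snapshots.
delta_invocations = latest["invocations"] - earlier["invocations"]    # 100 requests
delta_time_ms = latest["total_time_ms"] - earlier["total_time_ms"]    # 3,000,000 ms
recent_history_avg = delta_time_ms / delta_invocations                # 30,000 ms (30 seconds)

print(since_startup_avg, recent_history_avg)
```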
Recent History metrics can also help you prioritize which areas to investigate and which areas you can ignore when performance issues arise. For example, if an ongoing performance issue is reported and Recent History metrics for a particular component shows a value of 0, it indicates that the component has not been used in the last 10-15 minutes. Similarly, if the "Average Response Time" value is small and the "Invocation" count is low, the component may not be contributing to the performance problem. In such cases, administrators can investigate other areas.
Typically, Recent History shows data for the most recent 10-15 minutes. However, there are situations when the data does not reflect the last 10-15 minutes:
- If the WebLogic Server has just started up, and has been running for less than 10-15 minutes, then Recent History shows data for the duration for which the server has been up and running.
- If one or more tools or services are not accessed for an extended period of time, then older metric snapshots slowly age out. In such cases, metric data is no longer available for the last 10-15 minutes, so Recent History metrics cannot calculate the delta time spent performing service requests in the last 10-15 minutes. When this happens, the Recent History data can show the same values as the Since Startup metrics. When the tool or service is used again, metric snapshots for it resume. After enough recent data is available, the Recent History metrics again start to display metrics for the last 10-15 minutes.
Most live environments are not idle for extended periods, so recent metric collection is rarely suspended due to inactivity. However, if you have a test environment that is used intermittently or not used for a while, you might notice recent metric collection stop temporarily, as described here.
17.1.1.3 Metric Collection: Last 'N' Samples
Since Startup and Recent History metrics calculate performance over a specific duration, and show aggregated metrics for that duration. In addition to these, Oracle WebCenter Portal collects and reports per-request performance information for a range of key WebCenter Portal metrics. Such metrics allow you to look at the success and response time of each request individually, without considering previous requests. Out-of-the-box, the last 100 samples are used to calculate key metric performance/availability but you can increase or decrease the sample set to suit your installation.
For example, if 10 out of the last 100 page requests failed, page availability is calculated as 90%. If you reduce the sample set to 50 and 10 pages fail, page availability is reported to be 80%.
The examples show how the sample set size can affect the performance reports. The value you select is up to you, but if you increase the number of samples, consider the additional memory requirements, since the last 'N' metric samples are maintained in memory. Oracle recommends a few hundred samples at most.
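For illustration, here is a minimal sketch of how availability might be computed over a bounded sample window (hypothetical data, not the product's implementation):

```python
from collections import deque

def availability(samples, n=100):
    """Percentage of successful requests among the last n samples."""
    window = deque(samples, maxlen=n)   # only the last n samples are kept in memory
    return 100.0 * sum(1 for ok in window if ok) / len(window)

# Hypothetical recent requests: 10 failures among the most recent samples.
recent = [True] * 90 + [False] * 10

print(availability(recent, n=100))  # 90.0 -> reported as 90% availability
print(availability(recent, n=50))   # 80.0 -> with a 50-sample window, the same failures weigh more
```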
To change the number of samples used to report key performance metrics in your installation, see Configuring the Number of Samples Used to Calculate Key Performance Metrics.
To find out more about Oracle WebCenter Portal's key performance metrics and thresholds, refer to Understanding the Key Performance Metrics.
17.1.2 Understanding the Key Performance Metrics
Diagnosing the availability and performance of WebCenter Portal typically requires that you look at various important metrics across multiple components, such as the JVM and the WebLogic Server, as well as the application itself.
To help you quickly identify and diagnose issues that can impact WebCenter Portal performance, Oracle WebCenter Portal collects the last 'N' samples for a range of "key performance metrics" and exposes them in Fusion Middleware Control. To access key performance metric information for your application, see Viewing Performance Metrics Using Fusion Middleware Control.
Thresholds determine when a performance alert or warning is triggered. Setting threshold values that represent suitable boundaries for your Oracle WebCenter Portal system ensures that you obtain relevant performance alerts in Enterprise Manager Fusion Middleware Control. When key performance metrics are "out of bounds" with respect to their configured thresholds, they are easy to find in Fusion Middleware Control as they appear color-coded. For more information about thresholds, see Customizing Key Performance Metric Thresholds and Collection.
You do not need to specifically set thresholds for metrics, such as "availability", that report success or failure.
Oracle WebCenter Portal allows you to manage warning thresholds for the key performance metrics described in Table 17-1:
Table 17-1 Key Performance Metric Collection
Component | Key Performance Metric | Metric Sampling |
---|---|---|
WebCenter Portal | Active Sessions | 1 sample every X minutes |
WebCenter Portal - Pages | Page Response Time | Per Request |
WebCenter Portal - Portlets | Portlet Response Time | Per Request |
JVM | CPU Usage | 1 sample every X minutes |
JVM | Heap Usage | 1 sample every X minutes |
JVM | Garbage Collection Rate | 1 sample every X minutes |
JVM | Average Garbage Collection Time | 1 sample every X minutes |
WebLogic Server | Active Execute Threads | 1 sample every X minutes |
WebLogic Server | Execute Threads Idle Count | 1 sample every X minutes |
WebLogic Server | Hogging Execute Threads | 1 sample every X minutes |
WebLogic Server | Open JDBC Sessions | 1 sample every X minutes |
Oracle WebCenter Portal captures end-user requests for pages and portlets, and a metric sample is collected for each request. For example, if user A accesses page X, both the availability of page X (success/fail metric) and the response time of the request are captured by Oracle WebCenter Portal. Metric samples that exceed the configured metric alert threshold, or that fail, show "red" in Fusion Middleware Control to immediately alert administrators when issues arise.
Other metrics, such as JVM and WebLogic Server metrics, are collected at a pre-defined frequency. Out-of-the-box, the sample frequency is 1 sample every 5 minutes but you can customize this value if required. For details, see Configuring the Frequency of WebLogic Server Health Checks.
The total number of samples that Oracle WebCenter Portal collects is configurable too, as described in Configuring the Number of Samples Used to Calculate Key Performance Metrics. The default sample set is 100 samples. Since there is a memory cost to maintain metric samples, do not specify an excessive number of samples; Oracle recommends a few hundred at most.
Oracle WebCenter Portal's key performance metrics are specifically selected to help administrators quickly identify and diagnose common issues that can impact WebCenter Portal performance. You can view all key performance metric data from your application's home page in Fusion Middleware Control.
17.1.3 Using Key Performance Metric Data to Analyze and Diagnose System Health
If you monitor WebCenter Portal regularly, you will learn to recognize trends as they develop and prevent performance problems in the future. The best place to start is your application's home page in Enterprise Manager Fusion Middleware Control. The home page displays status, performance, availability, and other key metrics for the various components, tools, and services that make up your application, as well as the WebLogic Server on which the application is deployed.
If you are new to Oracle WebCenter Portal, use the information in this section to better understand how to use the information displayed through Fusion Middleware Control to identify and diagnose issues.
Figure 17-1 presents high-level steps for monitoring the out-of-the-box application WebCenter Portal.
Figure 17-1 Analyzing System Health for WebCenter Portal - Main Steps

Description of "Figure 17-1 Analyzing System Health for WebCenter Portal - Main Steps"
Note:
- Step 4 applies only if your application uses the portlets feature.
- Bar charts appear grey if a feature is not used.
- Line charts require at least 3 data points before they start to show data.
Table 17-2 Analyzing System Health - Step by Step
Step | Description |
---|---|
Navigate to the home page for WebCenter Portal |
Use Enterprise Manager Fusion Middleware Control to monitor the performance of your portal application. The best place to start is your application's home page. See Navigating to the Home Page for WebCenter Portal. |
1. Check CPU and heap memory usage |
Overall performance deteriorates when CPU or memory usage is too high, so it is important that you always look at the CPU and memory metrics before looking at any other Oracle WebCenter Portal-specific metric. Check the Recent CPU and Memory Usage charts to see the current usage trend.
Next Step: If the charts indicate that CPU and memory usages are normal, verify the health of the WebLogic Server. |
2. Verify the health of WebLogic Server |
Look in the WebLogic Server Metrics region:
The actions you take next depend on the metric data. For example, if there are hogging threads, you can take thread dumps. If JDBC connections are exceeding limits, you can analyze further for connection leaks. If the garbage collection rate is exceeding limits, you can take heap dumps, and so on. For details, see Understanding WebLogic Server Metrics and Troubleshooting Oracle WebCenter Portal Performance Issues. Out-of-bound metrics show "red" in charts and "orange" in the Health Metrics table. Examine all occurrences of such situations by scanning the diagnostic logs. In-memory information is limited to "N" metric samples, but the logs store much more historical information about how often a problem is happening, as well as additional contextual information, such as which user was affected. Tip: You can use Fusion Middleware Control to locate all messages of this type by searching the message type, message code, and other string pattern details. See Viewing and Configuring Log Information. By default, a warning threshold is only set for CPU Usage, but you can configure thresholds for other key WebLogic Server metrics, such as Heap Memory Usage. See Configuring Thresholds for Key Metrics. Look at the diagnostic logs for errors, failures, and any configuration or network issues. If an issue relates to another back-end server, such as WebCenter Content or SOA, verify the JVM/WebLogic Server health (CPU, heap, threads, and so on) for those managed servers too. Similarly, investigate WebLogic Server health for other managed servers in your WebCenter Portal installation. Next Step: If the charts indicate that WebLogic Server is performing within thresholds, verify the health of your WebCenter Portal application. |
3. Monitor page performance |
Look at the WebCenter Portal Metrics section at the top of the home page. Review the page availability/performance charts to see whether page requests are currently responding as expected. Drill down to more detail to investigate issues relating to recent page requests. Use the Sort Ascending/Descending arrows for the Time and Page Name columns to see whether a pattern is emerging for a specific page or set of pages, or whether performance spikes appear to be more random. Out-of-bound metrics show "red" in charts and "orange" in the Page Metrics table. For details, see Understanding Page Request Metrics. Examine all occurrences of such situations by scanning the diagnostic logs. In-memory information is limited to "N" metric samples, but the logs store much more historical information about how often a problem is happening, as well as additional contextual information, such as which user was affected. Tip: You can use Fusion Middleware Control to locate all messages of this type by searching the message type, message code, and other string pattern details. See Viewing and Configuring Log Information. Identify individual pages that are not performing as expected. For details, see How to Identify Slow Pages. Navigate to the "Overall Page Metrics" page to see how a page has performed historically (since startup, and over the last 10-15 minutes). Has it always been slow? For pages that are failing, see How to Troubleshoot Slow Page Requests. |
4. Monitor portlet performance |
Look at the WebCenter Portal Metrics section at the top of the home page. Review the portlet availability/performance charts to see whether portlets are currently performing as expected. Drill down to more detail to investigate issues relating to recent portlet requests. Out-of-bound metrics show "red" in charts and "orange" in the Portlet Metrics table. For details, see Understanding Portlet Producer Metrics. Out-of-bound conditions are also logged in the managed server diagnostic logs, so you can examine all historical events, that is, more than the most recent sample set that is held in memory. Identify individual portlets or portlet producers that are not performing as expected. Navigate to the "Overall Service Metrics" page, and then select Portlet Producers or Portlets to see how these portlets/portlet producers have performed historically (since startup, and over the last 10-15 minutes). Has performance deteriorated recently or has it always been slow?
Next Step: If the charts indicate that portlet requests are performing within thresholds, verify the performance of your LDAP server. |
5. Monitor LDAP server performance |
Look at the LDAP metrics in the Security section on the home page. When the server first starts up, the cache hit ratio is zero; it typically increases above 90% as the system warms up. For more information, see Understanding Security Metrics. Typically, the average LDAP lookup time is only a few milliseconds. If lookups are taking a long time, there may be a problem with the LDAP server or a network-related issue. If you want to measure the response time from the LDAP server for a simple bind operation, you can time a simple bind request against the LDAP server (see the sketch after this table). Next Step: If your LDAP server is performing within thresholds, investigate other areas. |
6. Monitor individual tools and services |
Look at the WebCenter Portal Services section at the bottom of the home page. For details, see Understanding Tool and Service Metrics. Quickly see if a particular tool or service is "Down" or "Unknown". Refer to Troubleshooting Common Issues with Tools and Services for guidance on possible causes and actions. Sort the table by Average Time or Invocations to prioritize which tool or service to focus on. Click a name to navigate to the "Overall Service Metrics" page. Compare Since Startup and Recent History metrics to see whether performance deteriorated recently or has always been slow. |
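For example, one way to get a rough, independent measure of LDAP bind response time (step 5 above) is to time a simple bind from a client machine. The following sketch uses the third-party ldap3 Python library; the host, port, and credentials are placeholders for your own LDAP server:

```python
import time
from ldap3 import Server, Connection   # third-party library: pip install ldap3

# Placeholder connection details -- replace with your LDAP host and credentials.
server = Server("ldap.example.com", port=389)
conn = Connection(server, user="cn=orcladmin", password="password")

start = time.perf_counter()
if conn.bind():                        # simple bind operation
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"Simple bind completed in {elapsed_ms:.1f} ms")
    conn.unbind()
else:
    print("Bind failed:", conn.result)
```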
17.1.4 Understanding Some Common Performance Issues and Actions
If an Oracle WebCenter Portal metric is out-of-bounds, do the following:
- Check system resources, such as memory, CPU, network, external processes, or other factors. See Troubleshooting WebCenter Portal.
- Check other metrics to see if the problem is system-wide or only in a particular tool or service.
- If the issue is related to a particular tool or component, then check if the back-end server is down or overloaded.
- If the WebLogic Server has been running for a long time, compare the Since Startup metrics with the Recent History metrics to determine if performance has recently deteriorated, and if so, by how much.
- When the status of a tool or service is Down or some operations do not work, then validate, test, and ping the back-end server through direct URLs. For details, refer to the "Testing Connection" section in the relevant chapter. For a list of chapters, see Administering Tools and Services.
When you reconfigure connections to tools and services you must always restart the managed server on which the WebCenter Portal application is deployed to pick up the changes. If key connection attributes change, such as a server's host/port details, connectivity to the server may be lost and the service may become unavailable until you reconfigure the connection and restart the managed server.
Note:
You can customize the threshold at which some key performance metrics trigger out-of-bound conditions. See Customizing Key Performance Metric Thresholds and Collection.
17.1.5 Understanding Page Request Metrics
You can monitor the availability and performance of page requests for WebCenter Portal through Fusion Middleware Control. You can monitor recent page data and historical (overall) page data.
This section includes the following information:
Note:
The page request metrics discussed in this section are different from the page operation metrics discussed in Page Operation Metrics. Page operation metrics monitor page-related operations such as creating pages, whereas the page request metrics described here monitor individual page view/display requests (they do not include page edit operations).
17.1.5.1 Understanding Full Page and Partial Page Metrics
Performance data is collected for full page and partial page requests. Full page metrics do not include partial page metrics.
Partial page requests display only portions of the page. Therefore, you can monitor the performance of pages within a page. Partial page refresh behavior is called partial page rendering (PPR). PPR allows only certain components on a page to be rerendered without the need to refresh the entire page. A common scenario is when an output component displays what a user has chosen or entered in an input component. Similarly, a command link or button can cause another component on the page to be rerendered without refreshing the entire page.
Partial page rendering of individual components on a page only increases partial page metrics and does not cause any change in full page metrics. For example, a calendar refresh on a page increases partial page invocations by 1, but full page invocations remain unchanged.
For more information about PPR, see Rerendering Partial Page Content in Developing Web User Interfaces with Oracle ADF Faces.
17.1.5.2 Recent Page Metrics
Recent page availability and performance metrics are summarized on the home page for WebCenter Portal (Figure 17-2 and Table 17-3). The page availability/performance charts show at a glance if page requests are slower than expected or failing.
Note:
To access the home page, see Navigating to the Home Page for WebCenter Portal.
The Page Availability and Page Performance charts report availability and performance over the last 'N' page requests (by default, 'N' is 100). The time range starts with the earliest page/portlet request time and ends with the current time. See Configuring the Number of Samples Used to Calculate Key Performance Metrics.
The % value on the right shows the percentage of page requests that responded within a specific time limit. The percentage is calculated using information from the last 'N' page requests. For example, if 'N' is 100, and if 3 of the last 100 page requests exceeded the page response threshold, page performance is shown as 97%.
The bar charts plot status (green/red) over time, and the color persists until the status changes, so the % performance value and the visual green/red ratio do not always match up. For example, consider a scenario where the first 5 page requests are "out of bounds", the system is then idle (no page requests) for 9 hours, and 95 "good" page requests arrive within the following hour. In this instance, the chart displays 90% red (9 hours) and 10% green (1 hour), but the % performance value shows 95% ('N' is 100 and 95 samples out of 100 are "good"). The mismatch occurs because the bar charts plot uniformly over time, whereas page requests are not usually uniformly distributed over time.
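To make the arithmetic behind this mismatch concrete, here is a small sketch using the hypothetical numbers from the scenario above:

```python
# Scenario: 5 "out of bounds" requests, then 9 idle hours (last status persists),
# then 95 "good" requests within the final hour. 'N' = 100 samples.
samples = [False] * 5 + [True] * 95           # last 100 request samples (True = within threshold)
performance_pct = 100.0 * sum(samples) / len(samples)
print(performance_pct)                         # 95.0 -> the % value shown next to the chart

# The bar chart is plotted over elapsed time, and the color persists until it changes:
hours_red, hours_green = 9, 1                  # 9 idle hours keep the last ("red") status
chart_red_pct = 100.0 * hours_red / (hours_red + hours_green)
print(chart_red_pct)                           # 90.0 -> the chart appears 90% red
```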
Figure 17-2 Recent Page Summary on the WebCenter Portal Home Page

Description of "Figure 17-2 Recent Page Summary on the WebCenter Portal Home Page"
If the chart indicates issues or incidents, click the Page Availability or Page Performance link to navigate to more detailed information to diagnose the issue further (see Figure 17-3 and Table 17-3).
Use the information on the Recent Page Metrics page (Figure 17-3) to troubleshoot recent page performance issues. The page availability/performance charts at the top of the page show "red" if page requests are slower than expected or failing.
Note:
Out-of-the-box, the page response threshold is 10,000 ms, so pages taking longer than 10,000 ms to respond show "red" in the chart. If this threshold is not suitable for your installation, you can change the threshold value. See Customizing Key Performance Metric Thresholds and Collection.
The charts report availability/performance over the last 'N' page requests. The time range starts with the earliest page request time and ends with the time of the last page request.
Use the information in the table to identify slow pages, that is, the name of the page and the portal to which the page belongs.
To diagnose page response issues, refer to the advice in "Step 3. Monitor page performance" in Table 17-2.
Table 17-3 Recent Page Request Metrics
Metric | Description |
---|---|
Availability |
Indicates page availability over the last 'N' page requests:
|
Performance |
Indicates page performance over the last 'N' page requests:
|
Date Time |
Date and time page requested. |
Page Name |
Name of the page requested. |
Portal Name |
Name of the portal in which the page is stored. |
Partial Page Refresh |
Indicates whether the page request refreshed the whole page or only a portion of the page (a partial page refresh). |
Status |
Indicates whether the page request was successful (Success) or failed (Failure). Failure displays in orange text. |
Time (ms) |
Time taken to refresh the page (full or partial), in milliseconds. If the time exceeds the predefined page response threshold, the value displays in "orange". |
17.1.5.3 Overall Page Metrics
Historical performance metrics associated with page activity are also available as shown in Figure 17-4 and described in Table 17-4. This page displays metrics for both full and partial page requests and you can filter the data displayed to suit your requirements.
Note:
To access these metrics through Fusion Middleware Control, see Viewing Performance Metrics Using Fusion Middleware Control.
The table at the top of this page summarizes the status and performance of individual pages. Use the table to quickly see which pages are available, and to review their individual and relative performances.
Statistics become available when a page is created and are updated every time someone accesses and uses the page.
Note:
Metrics for pages in the Home portal are not included.
Table 17-4 Page Request Metrics - Full Page and Partial Page
Field | Description |
---|---|
Display Options |
Filter the data displayed in the table:
The top five pages display in the chart. |
Page Name |
Names of pages that match your filter criteria (if any). If you do not specify filter criteria, all the pages are listed. |
Portal Name |
Names of portals that match your filter criteria (if any). If you do not specify filter criteria, pages from all portals are listed. |
Invocations |
Total number of page invocations per minute (full or partial): - Since Startup - Last 15 Minutes |
Average Time (ms) |
Average time (in ms) to display the page (full or partial): - Since Startup - Last 15 Minutes |
Maximum Time (ms) |
Maximum time taken to display a page (full or partial): |
Errors (Only for full page) |
Number of errors that occurred for a page per minute. |
Successful Invocations (Only for full page) |
Percentage of page invocations that succeeded: - Since Startup - Last 15 Minutes If Successful Invocations (%) is below 100%, check the diagnostic logs to establish why page requests are failing. See Viewing and Configuring Log Information. |
Pages per Minute |
Number of times the page is accessed per minute, also referred to as page throughput: - Since Startup - Last 15 Minutes |
Overall Page Request Metrics - Graphs
Use the graphs below the table to see, at a glance:
- Invocations - Graph showing the most popular or least used pages, that is, pages recording the most or least invocations.
- Page Throughput - Graph showing the average number of pages accessed per minute. Use this graph to identify pages with high (or low) hit rates.
- Errors - Graph showing the number of errors. Use this graph to compare error rates.
- Average Processing Time - Graph showing the average page response time (in milliseconds). Use this graph to identify pages with the best (or worst) performance.
To compare a different set of pages:
- Specify the appropriate filtering criteria in the Page Name Filter.
- Select one or more pages in the table, and then click Display in Chart.
17.1.6 Understanding Portlet Producer Metrics
You can monitor the availability and performance of all the portlets and portlet producers used by WebCenter Portal through Fusion Middleware Control. You can monitor recent and historical (overall) portlet data. The following topics describe the metrics that are available:
17.1.6.1 Recent Portlet Metrics
Recent portlet availability and performance metrics are summarized on the home page for WebCenter Portal (Figure 17-5 and Table 17-5). The portlet availability/performance charts show at a glance if portlet requests are slower than expected or failing.
Note:
To access the home page, see Navigating to the Home Page for WebCenter Portal.
The Portlet Availability and Portlet Performance charts report availability and performance over the last 'N' portlet requests (by default, 'N' is 100). The time range starts with the earliest page/portlet request time and ends with the current time. See Configuring the Number of Samples Used to Calculate Key Performance Metrics.
The % value on the right shows the percentage of portlet requests that responded within a specific time limit. The percentage is calculated using information from the last 'N' portlet requests. For example, if 'N' is 100, and if 25 of the last 100 portlet requests exceeded the portlet response threshold, portlet performance is shown as 75%. For more information, see Table 17-5.
The bar chart status (green/red) does not change over time until the status changes, so the % performance value and the visual green/red ratio do not always match up. An explanation for this is provided in Recent Page Metrics and the same applies to the portlet charts.
Figure 17-5 Recent Portlet Metric Summary on the WebCenter Portal Home Page

Description of "Figure 17-5 Recent Portlet Metric Summary on the WebCenter Portal Home Page"
If the chart indicates issues or incidents, click the Portlet Availability or Portlet Performance link to navigate to more detailed information to diagnose the issue further (Figure 17-6 and Table 17-5).
Use the information on this page to troubleshoot recent portlet performance issues. The portlet availability/performance charts at the top of the page show "red" if portlet requests are slower than expected or failing.
Note:
Out-of-the-box, the portlet response threshold is 10,000 ms, so portlets taking longer than 10,000 ms to respond show "red" in the chart. If this threshold is not suitable for your installation, you can change the threshold value. For more information, see Customizing Key Performance Metric Thresholds and Collection.
The charts report availability/performance over the last 'N' portlet requests. The time range starts with the earliest portlet request time and ends with the time of the last portlet request.
Use the information in the table to identify slow portlets. You can determine the name of the portlet and the producer to which the portlet belongs.
To diagnose portlet issues, refer to the advice in "Step 4. Monitor portlet performance" in Table 17-2.
Table 17-5 Recent Portlet Metrics
Metric | Description |
---|---|
Portlet Availability |
Indicates portlet availability over the last 'N' portlet requests:
|
Portlet Performance |
Indicates portlet performance over the last 'N' portlet requests:
|
Date Time |
Date and time of the portlet request. |
Portlet Name |
Name of the portlet requested. |
17.1.6.2 Overall Portlet Producer Metrics
Historical performance metrics are also available for portlet producers used by WebCenter Portal, as shown in Figure 17-7. The information displayed on this page is described in the following tables:
Note:
To access these metrics through Fusion Middleware Control, see Viewing Performance Metrics Using Fusion Middleware Control.
Table 17-6 Portlet Producers - Summary
Metric | Description |
---|---|
Status |
The current status of portlet producers used in the application:
|
Successful Invocations (%) |
The percentage of portlet producer invocations that succeeded: - Since Startup - Last 15 Minutes Any request that fails will impact availability. This includes application-related failures such as timeouts and internal errors, and also client/server failures such as requests returned with response codes HTTP4xx or HTTP5xx, responses with a bad content type, and SOAP faults, where applicable. If Successful Invocations (%) is below 100%, check the diagnostic logs to establish why service requests are failing. See Viewing and Configuring Log Information. |
Invocations |
The number of portlet producer invocations per minute: - Since Startup - Last 15 Minutes This metric measures each application-related portlet request and therefore, due to cache hits, errors, or timeouts on the application, this total may be higher than the number of actual HTTP requests made to the producer server. |
Average Time (ms) |
The average time taken to make a portlet request, regardless of the result: - Since Startup - Last 15 Minutes |
Table 17-7 Portlet Producer - Detail
Metric | Description |
---|---|
Most Popular Producers |
The number of invocations per producer (displayed on a chart). The highest value on the chart indicates which portlet producer is used the most. The lowest value indicates which portlet producer is used the least. |
Response Time |
The average time each portlet producer takes to process producer requests since WebCenter Portal started up (displayed on a chart). The highest value on the chart indicates the worst performing portlet producer. The lowest value indicates which portlet producer is performing the best. |
Producer Name |
The name of the portlet producer being monitored. Click the name of a portlet producer to pop up more detailed information about each portlet that the application uses. See Table 17-9. |
Status |
The current status of each portlet producer:
|
Producer Type |
The portlet producer type: Web or WSRP
|
Successful Invocations (%) |
The percentage of producer invocations that succeeded: - Since Startup - Last 15 Minutes |
Invocations |
The number of invocations, per producer: - Since Startup - Last 15 Minutes By sorting the table on this column, you can find the most frequently accessed portlet producer in WebCenter Portal. |
Average Time (ms) |
The average time taken to make a portlet request, regardless of the result: - Since Startup - Last 15 Minutes Use this metric to detect non-functional portlet producers. If you use this metric with the Invocations metric, then you can prioritize which producer to focus on. |
Maximum Time (ms) |
The maximum time taken to process producer requests: - Successes - HTTP2xx response codes - Redirects - HTTP3xx response codes - Client Errors - HTTP4xx response codes - Server Errors - HTTP5xx response codes |
17.1.6.3 Overall Portlet Metrics
Historical performance metrics are available for individual portlets used by WebCenter Portal, as shown in Figure 17-8. The information displayed on this page is described in the following tables:
Note:
To access these metrics through Fusion Middleware Control, see Viewing Performance Metrics Using Fusion Middleware Control.
Table 17-8 Portlets - Summary
Metric | Description |
---|---|
Status |
The current status of portlets used in WebCenter Portal:
|
Successful Invocations (%) |
The percentage of portlet invocations that succeeded: - Since Startup - Last 15 Minutes Any request that fails will impact availability. This includes application-related failures such as timeouts and internal errors, and also client/server errors. If Successful Invocations (%) is below 100%, check the diagnostic logs to establish why service requests are failing. See Viewing and Configuring Log Information. |
Invocations |
The number of portlet invocations per minute: - Since Startup - Last 15 Minutes This metric measures each application-related portlet request and therefore, due to cache hits, errors, or timeouts on the application, this total may be higher than the number of actual HTTP requests made to the portlet producer. |
Average Time (ms) |
The average time taken to process operations associated with portlets, regardless of the result: - Since Startup - Last 15 Minutes |
Table 17-9 Portlet - Detail
Metric | Description |
---|---|
Most Popular Portlets |
The number of invocations per portlet (displayed on a chart). The highest value on the chart indicates which portlet is used the most. The lowest value indicates which portlet is used the least. |
Response Time |
The average time each portlet takes to process requests since WebCenter Portal started up (displayed on a chart). The highest value on the chart indicates the worst performing portlet. The lowest value indicates which portlet is performing the best. |
Portlet Name |
The name of the portlet being monitored. |
Status |
The current status of each portlet:
|
Producer Name |
The name of the portlet producer through which the portlet is accessed. |
Producer Type |
The portlet producer type: Web or WSRP
|
Successful Invocations (%) |
The percentage of portlet invocations that succeeded: - Since Startup - Last 15 Minutes If Successful Invocations (%) is below 100%, check the diagnostic logs to establish why service requests are failing. See Viewing and Configuring Log Information. |
Invocations |
The number of invocations, per portlet: - Since Startup - Last 15 Minutes By sorting the table on this column, you can find the most frequently accessed portlet in WebCenter Portal. |
Average Time (ms) |
The average time each portlet takes to process requests, regardless of the result: - Since Startup - Last 15 Minutes Use this metric to detect non-performant portlets. If you use this metric with the Invocations metric, then you can prioritize which portlet to focus on. |
Maximum Time (ms) |
The maximum time taken to process portlet requests: - Successes - HTTP2xx - Redirects - HTTP3xx - Client Errors - HTTP4xx - Server Errors - HTTP5xx The breakdown of performance statistics by HTTP response code can help you identify which factors are driving up the total average response time. For example, failures due to portlet producer timeouts would adversely affect the total average response time. |
Table 17-10 Portlet - HTTP Response Code Statistics
Metric | Description |
---|---|
Portlet Name |
The name of the portlet being monitored. |
Invocations Count - Successes - Redirects - Client Errors - Server Errors |
The number of invocations, by type (HTTP response code): - Since Startup - Last 15 Minutes See Table 17-11. |
Average Time (ms) - Successes - Redirects - Client Errors - Server Errors |
The average time each portlet takes to process requests: - Since Startup - Last 15 Minutes Use this metric to detect non-functional portlets. If you use this metric with the Invocations metric, then you can prioritize which portlet to focus on. |
Table 17-11 HTTP Response Codes
HTTP Response and Error Code | Description |
---|---|
200 - Successful Requests |
Portlet requests that return any HTTP2xx response code, or which were successful without requiring an HTTP request to the remote producer, for example, a cache hit. |
300 - Unresolved Redirections |
Portlet requests that return any HTTP3xx response code. |
400 - Unsuccessful Request Incomplete |
Portlet requests that return any HTTP4xx response code. |
500 - Unsuccessful Server Errors |
Portlet requests that failed for any reason, including requests that return HTTP5xx response codes, or which failed due to an application-related error, timeout, bad content type response, or SOAP fault. |
17.1.7 Understanding WebLogic Server Metrics
Recent WebLogic Server performance is summarized on the home page for WebCenter Portal (Figure 17-9 and Table 17-12). If the chart indicates issues or incidents, you can navigate to more detailed information to diagnose the issue further.
Note:
To access the home page, see Navigating to the Home Page for WebCenter Portal.
Figure 17-9 Recent WebLogic Server Metric Summary on the Home Page

Description of "Figure 17-9 Recent WebLogic Server Metric Summary on the Home Page"
The charts report results from the last 100 WebLogic Server health checks. By default, metrics are recorded every five minutes, so data collected over approximately the last 8 hours (500 minutes) can display here. If the server started up recently, the chart displays data from the time the server started to the current time.
Note:
If required, you can customize the metric collection frequency to better suit your installation. For details, see Customizing Key Performance Metric Thresholds and Collection.
Table 17-12 Recent WebLogic Server Metrics on the Home Page
Metric | Description |
---|---|
Health |
Summarizes recent WebLogic Server health as reported by the Oracle WebLogic Server self-health monitoring feature. This metric considers recent server health, thread health, and JDBC health:
|
Incidents |
Number of times WebLogic Server metrics exceed threshold settings (that is, metrics such as CPU usage, memory usage, thread count, number of JDBC connections, session metrics, and so on). For example, if the metric data set contains 2 incidents where thread count exceeded the predefined threshold and the number of JDBC connections exceeded the threshold limit 3 times, then the number of incidents displayed is 5. When the number of incidents is greater than 0, an icon with a red cross displays. Click the Incidents link to drill down to the Recent WebLogic Server Metrics Page (Figure 17-10) and examine the Health Metrics table to diagnose the incidents further. |
You can click Health or Incidents to drill down to the Recent WebLogic Server Metrics Page (Figure 17-10). The metrics displayed on this page are described in the following topics:
Figure 17-10 Recent WebLogic Server Metrics Page

Description of "Figure 17-10 Recent WebLogic Server Metrics Page"
17.1.7.1 WebLogic Server Metrics Section
Metric | Description |
---|---|
General |
|
Up Since |
Date and time the server last started up. |
State |
Current lifecycle state of this server. For example, a server can be in a RUNNING state, in which it can receive and process requests, or in an ADMIN state, in which it can receive only administrative requests. |
Health |
Health status of the server, as reported by the Oracle WebLogic Server self-health monitoring feature. For example, the server can report if it is overloaded by too many requests, if it needs more memory resources, or if it will soon fail for other reasons. |
CPU Usage (%) |
Percentage of the CPU currently in use by the Java Virtual Machine (JVM). This includes the load that the JVM is placing on all processors in the host computer. For example, if the host uses multiple processors, the value represents a snapshot of the average load on all the processors. |
Heap Usage (MB) |
Size of the memory heap currently in use by the Java Virtual Machine (JVM), in megabytes. |
Java Vendor |
Name of the company that provided the current Java Development Kit (JDK) on which the server is running. |
Java Version |
Version of the JDK on which the current server is running. |
Performance |
|
Garbage Collection Rate (per min) |
Rate (per minute) at which the Java Virtual Machine (JVM) is invoking its garbage-collection routine. By default, this metric shows the rate recorded in the last five minutes. See Configuring the Frequency of WebLogic Server Health Checks. |
Average Garbage Collection Time (ms) |
Average length of time (in ms) that the Java Virtual Machine spent in each run of garbage collection. By default, this metric shows the average over the last five minutes. See Configuring the Frequency of WebLogic Server Health Checks. |
Active Execute Threads |
Number of active execute threads in the pool. |
Execute Threads Idle Count |
Number of idle threads in the pool. This count does not include standby threads or stuck threads. The count indicates threads that are ready to pick up new work when it arrives. |
Hogging Execute Threads |
Number of threads that are being held by a request right now. These threads will either be declared as stuck after a configured timeout or return to the pool. The self-tuning mechanism backfills if necessary. |
Active Sessions |
Number of active sessions for the application. |
Open JDBC Sessions |
Number of JDBC connections currently open. |
Incidents |
Number of times WebLogic Server metrics exceed threshold settings (that is, metrics such as CPU usage, memory usage, thread count, number of JDBC connections, session metrics, and so on). For example, if the metric data set contains 2 incidents where thread count exceeded the predefined threshold and the number of JDBC connections exceeded the threshold limit 3 times, then the number of incidents displayed is 5. When the number of incidents is greater than 0, an icon with a red cross displays. |
Health |
Summarizes recent health status, as reported by the Oracle WebLogic Server self-health monitoring feature. The Health charts report results from the last 100 performance checks. By default, metrics are recorded every five minutes so data collected over the last 500 minutes displays. If the server started up recently, the chart displays data from the time the server started to the current time.
|
WebLogic Server |
Reports recent WebLogic Server health checks. For example, if 10 out of the last 100 WebLogic Server health checks failed (not "OK"), WebLogic Server health is shown as 90%. |
Thread |
Reports recent thread health checks. For example, if 10 out of the last 100 WebLogic Server health checks report a thread health status other than "OK", WebLogic Server thread health is shown as 90%. Example thread health failures include stuck threads (for example, if all the threads in the default queue become stuck, the server health state changes to "CRITICAL").
|
JDBC |
Reports recent JDBC health checks. For example, the server can report too many JDBC connection requests. If 10 out of the last 100 WebLogic Server health checks report a JDBC health status other than "OK", WebLogic Server JDBC health is shown as 90%. |
17.1.7.2 Recent CPU and Memory Usage Section
This graph charts CPU and memory utilization for the Java Virtual Machine over the last 100 health checks. The time range starts with the earliest health check and ends with the time of the last health check.
From this performance graph, you can tell how much of the memory/CPU configured for the JVM is actually being used and whether the trend is increasing. This might reveal that the applications running inside the JVM need more memory than the JVM has been assigned, and that adding more memory to the JVM (assuming that there is sufficient memory at the host level) might improve performance. Similarly, you can assess whether additional CPU resources are required.
Metric | Description |
---|---|
CPU Usage (%) |
Percentage of the CPU currently in use by the Java Virtual Machine (JVM). This includes the load that the JVM is placing on all processors in the host computer. For example, if the host uses multiple processors, the value represents a snapshot of the average load on all the processors. |
Heap Usage (MB) |
Size of the memory heap currently in use by the Java Virtual Machine (JVM), in megabytes. |
17.1.7.3 Recent Session and Thread Usage Section
This graph charts the number of active sessions and active threads recorded over the last 100 health checks. The time range starts with the earliest health check and ends with the time of the last health check.
The number of active sessions and threads should rise and fall with the load on your system. If the graph shows a sudden rise, or the number of sessions or threads keeps increasing, investigate the issue further to understand what triggered the change in behavior.
Metric | Description |
---|---|
Active Sessions |
Number of active sessions for the application. |
Active Thread |
Number of active threads for the application. |
17.1.7.4 Recent JDBC Usage Section
This graph charts the number of open JDBC sessions recorded over the last 100 health checks. The time range starts with the earliest health check and ends with the time of the last health check.
The Current Active Connection Count metric across all the data sources belonging to the server is used to calculate the overall open JDBC session count displayed here.
Use this chart to determine the number of JDBC sessions being used and to see whether the system is leaking JDBC resources. You can use the information in this chart to assess whether JDBC configuration or the connection pool size needs to be adjusted.
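If you want to cross-check this value outside Fusion Middleware Control, a WLST (Jython) sketch along the following lines sums the current active connection count across the data sources of the connected managed server. The URL, credentials, and port are placeholders; the attribute read is ActiveConnectionsCurrentCount on each JDBCDataSourceRuntimeMBean:

```python
# WLST (Jython) sketch -- run with wlst.sh; URL and credentials are placeholders.
connect('weblogic', 'welcome1', 't3://managed-server-host:8888')

serverRuntime()  # switch to the runtime MBean tree of the connected server
total = 0
for ds in cmo.getJDBCServiceRuntime().getJDBCDataSourceRuntimeMBeans():
    # Per-data-source active connection count, summed for the overall figure
    print(ds.getName(), ds.getActiveConnectionsCurrentCount())
    total += ds.getActiveConnectionsCurrentCount()

print('Open JDBC connections:', total)
disconnect()
```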
17.1.7.5 Health Metrics Section
This table displays data from the last one hundred (100) WebLogic Server health metrics collected, as reported by the Oracle WebLogic Server self-health monitoring feature.
Metric | Description |
---|---|
Date Time |
Date and time of the WebLogic Server health check. |
Server Health |
Server health status, as reported by the Oracle WebLogic Server self-health monitoring feature. Successful health checks return "OK". Unsuccessful health checks report various failures; for example, the server can report if it is overloaded by too many requests, if it needs more memory resources, or if it will soon fail for other reasons. For more information, see Configure health monitoring in Oracle WebLogic Server Administration Console online help. |
Thread Health |
Thread health status, as reported by the Oracle WebLogic Server self-health monitoring feature. Successful health checks return "OK". Unsuccessful thread checks report various failures; for example, if all the threads in the default queue become stuck, the server health state changes to "CRITICAL". For more information, see Configure health monitoring in Oracle WebLogic Server Administration Console online help. |
JDBC Health |
JDBC health status, as reported by the Oracle WebLogic Server self-health monitoring feature. Successful health checks return "OK". Unsuccessful JDBC checks report various failures, for example, if the server reports too many JDBC connection requests or that more memory resources are required, server health state changes to "WARNING". For more information, see Configure health monitoring in Oracle WebLogic Server Administration Console online help. |
Server CPU (%) |
If you are using the Oracle JRockit JDK, this metric shows the percentage of the CPU currently in use by the Java Virtual Machine (JVM). This includes the load that the JVM is placing on all processors in the host computer. For example, if the host uses multiple processors, the value represents a snapshot of the average load on all the processors. |
Heap Usage (MB) |
Total heap memory (in MB) currently in use by the JVM. |
Average Garbage Collection Time (ms) |
Average length of time (in ms) that the Java Virtual Machine spent in each run of garbage collection. By default, this metric shows the average over the last five minutes. See Configuring the Frequency of WebLogic Server Health Checks. |
Garbage Collection Rate (per min) |
Rate (per minute) at which the Java Virtual Machine (JVM) is invoking its garbage-collection routine. By default, this metric shows the rate recorded in the last five minutes. See Configuring the Frequency of WebLogic Server Health Checks. |
Active Sessions |
Number of active sessions for the application. |
Active Execute Threads |
Number of active execute threads in the pool. |
Execute Threads Idle Count |
Number of idle threads in the pool. This count does not include standby threads or stuck threads. The count indicates threads that are ready to pick up new work when it arrives. |
Hogging Thread Count |
Number of threads that are being held by a request right now. These threads will either be declared as stuck after a configured timeout or return to the pool. The self-tuning mechanism backfills if necessary. |
Open JDBC Connections |
Number of JDBC connections currently open. |
17.1.8 Understanding Security Metrics
Some key security-related performance metrics are displayed for WebCenter Portal on the home page (Figure 17-11 and Table 17-13).
Note:
To access the home page, see Navigating to the Home Page for WebCenter Portal.
Figure 17-11 Security Metrics on the Home Page

Description of "Figure 17-11 Security Metrics on the Home Page"
If you compare Since Startup metrics with Recent History metrics you can determine whether performance has recently deteriorated, and if so, by how much.
Table 17-13 Security Metrics
Metric | Description |
---|---|
LDAP Cache Hit Ratio (%) |
Percentage of LDAP searches that result in a cache hit. |
Average LDAP Lookup Time (ms) |
Average time to complete an LDAP search request: - Since Startup - Last 15 Minutes (see Footnote 1) If LDAP searches are taking too long, it is most likely an issue on the LDAP server that is causing slow response times. |
Footnote 1
The last 10-15 minutes of data is used to calculate recent performance metrics. For details, see Understanding Oracle WebCenter Portal Metric Collection.
17.1.9 Understanding Page Response and Load Metrics
The page response chart on your application's home page (Figure 17-12) shows you how quickly WebLogic Server is responding to page requests and how many requests are being processed (its load).
The average page processing time (in ms) for all portals is calculated over a 15-minute period. The number of invocations per minute is also displayed to help you determine whether the average page processing time is increasing or decreasing. If slower page processing times are due to a large number of users accessing the system, an increase in invocations per minute will display on the graph. If the number of users has not increased (the invocations per minute graph is not increasing or fluctuating), then slower page processing times are most likely due to machine resource issues or lack of JVM resources (low memory, contention for database connections, and so on).
Click Table View to see detailed response and load values, recorded at 5 minute intervals.
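As a rough illustration of the reasoning above, the following sketch (illustrative only, with hypothetical inputs) encodes how the two charts can be read together:

```python
def interpret_page_response(avg_time_rising, invocations_rising):
    """Rough reading of the page response and load charts together."""
    if not avg_time_rising:
        return "Page processing time is stable."
    if invocations_rising:
        return "Slowdown coincides with increased load (more invocations per minute)."
    return ("Slowdown without increased load: check machine resources and JVM resources "
            "(low memory, contention for database connections, and so on).")

# Example: processing time is rising but the invocations-per-minute graph is flat.
print(interpret_page_response(avg_time_rising=True, invocations_rising=False))
```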
Note:
To access the home page, see Navigating to the Home Page for WebCenter Portal.
Figure 17-12 Page Response Metrics on the Home Page

Description of "Figure 17-12 Page Response Metrics on the Home Page"
If you compare Since Startup metrics with Recent History metrics (last 15 minutes), you can determine whether performance has recently deteriorated, and if so, by how much.
17.1.10 Understanding Portal Metrics
(WebCenter Portal only) You can view live performance metrics for individual portals through Fusion Middleware Control, as shown in Figure 17-13. The metrics displayed on this page are described in Table 17-14 and Metrics Common to all Tools and Services.
Note:
Metrics for the Home portal are not included.
To monitor these metrics through Fusion Middleware Control, see Viewing Performance Metrics Using Fusion Middleware Control.
The table at the top of this page summarizes the status and performance of individual portals. Use the table to quickly see which portals are up and running, and to review their individual and relative performances.
Statistics become available when a portal is created and are updated every time a member accesses and uses the portal.
You can filter the data displayed in the following ways:
-
Portal Name Filter - Enter a full or partial search term, and then press Enter to refresh the list with all portals for which a match is found in the display name. To display metrics for all portals, clear the search term and press Enter again.
-
Maximum Rows - Restrict the total number of portals displayed in the table.
-
Display - Display metrics for the most accessed portals, the slowest portals, or the portals experiencing the most errors. Depending on your selection, the table orders portals by:
- Number of Invocations (most accessed portals)
- Average Page Processing Time (slowest portals)
- Number of Errors (portals with most errors)
-
Duration - Display metric information collected since startup or in the last 15 minutes (Recent History).
The top five portals display in the chart.
Table 17-14 Portal Metrics
Metric | Description |
---|---|
Name |
Names of portals that match your filter criteria (if any). If you do not specify filter criteria, all the portals are listed. |
Status |
Current status of each portal:
|
Invocations |
Total number of portal invocations: - Since Startup - Last 15 Minutes |
Errors |
Number of errors recorded. |
Successful Invocations (%) |
Percentage of portal invocations that succeeded: - Since Startup - Last 15 Minutes If Successful Invocations (%) is below 100%, check the diagnostic logs to establish why portal requests are failing. See Viewing and Configuring Log Information. |
Page Throughput |
The average number of pages processed per minute for each portal: - Since Startup - Last 15 Minutes |
Average Time (ms) |
The average time (in ms) to display pages in the portal: - Since Startup - Last 15 Minutes |
Maximum Time (ms) |
Maximum time taken to display a page in the portal. |
Minimum Time (ms) |
Minimum time taken to display a page in the portal. |
Portal Metrics - Graphs
Use the graphs below the table to see information about portals:
-
Invocations - Graph showing the most active/popular portals, that is, portals recording the most invocations.
-
Page Throughput - Graph showing the average number of pages accessed per minute for each portal. Use this graph to identify portals with high (or low) page hit rates.
-
Average Processing Time - Graph showing the average page response time (in milliseconds). Use this graph to identify portals with the best (or worst) page performance.
-
Errors - Graph showing which portals are reporting the most errors. Use this graph to compare error rates.
To compare a different set of portals:
-
Specify the appropriate filtering criteria.
-
Select one or more portals in the table, and then click Display in Chart.
17.1.11 Understanding Tool and Service Metrics
This section includes the following topics:
17.1.11.1 Metrics Common to all Tools and Services
Fusion Middleware Control provides capabilities to monitor performance of tools and services used in WebCenter Portal in the following ways:
-
Services summary: Summary of performance metrics for each tool or service used in WebCenter Portal. Table 17-15 lists tools and services that use common performance metrics and Table 17-16 describes the common metrics.
-
Most popular operations and response time for individual operations. Table 17-17 describes these metrics.
-
Per operation metrics: Performance metrics for individual operations. Table 17-15 lists common performance metrics used to monitor performance of individual operations. Table 17-17 describes these metrics.
Table 17-15 Common Metrics for Tools and Services
Tool or Service | Services Summary (Since Startup and Last 15 Minutes) | Per Operation Metrics (Since Startup and Last 15 Minutes) |
---|---|---|
Announcements |
The performance metrics include:
|
The performance metrics include:
|
SOA Server |
The performance metrics include:
|
Not applicable |
Discussion Forums |
The performance metrics include:
|
The performance metrics include:
|
External Applications |
The performance metrics include:
|
The performance metrics include:
|
Events |
The performance metrics include:
|
The performance metrics include:
|
Import/Export |
The performance metrics include:
|
The performance metrics include:
|
Lists |
The performance metrics include:
|
The performance metrics include:
|
Mail |
The performance metrics include:
|
The performance metrics include:
|
Notes |
The performance metrics include:
|
The performance metrics include:
|
Pages |
The performance metrics include:
|
The performance metrics include:
|
People Connections |
The performance metrics include:
|
The performance metrics include:
|
RSS |
The performance metrics include:
|
Not available |
Search |
The performance metrics include:
|
The performance metrics include:
|
Table 17-16 describes metrics used for monitoring performance of all operations.
Table 17-16 Description of Common Metrics - Summary (All Operations)
Metric | Description |
---|---|
Status |
The current status of the tool or service:
If a particular tool or service is "Down" or "Unknown", refer to Troubleshooting Common Issues with Tools and Services for guidance on possible causes and actions. |
Successful Invocations (%) |
Percentage of service invocations that succeeded. Successful Invocations (%) equals the number of successful invocations divided by the invocation count: - Since Startup - Last 15 Minutes If Successful Invocations (%) is below 100%, check the diagnostic logs to establish why service requests are failing. See Viewing and Configuring Log Information. |
Invocations |
Number of service invocations per minute: - Since Startup - Last 15 Minutes This metric provides data on how frequently a particular tool or service is being invoked for processing of operations. Comparing this metric across services can help determine the most frequently used tools and services in the application. |
Average Time (ms) |
The average time taken to process operations associated with a tool or service. This metric can be used with the Invocations metric to assess the total time spent in processing operations. - Since Startup - Last 15 Minutes Use this metric to determine the overall performance of tools and services. If this metric is out-of-bounds (the average time for operations is increasing or higher than expected), click individual names to view more detailed metric data. |
Table 17-17 describes metrics used to monitor performance of each operation performed by a tool, service or component.
Table 17-17 Description of Common Metrics - Per Operation
Metric | Description |
---|---|
Most Popular Operations |
The number of invocations per operation (displayed on a chart). The highest value on the chart indicates which operation is used the most. The lowest value indicates which operation is used the least. |
Response Time |
The average time to process operations associated with a service since WebCenter Portal started up (displayed on a chart). The highest value on the chart indicates the worst performing operation. The lowest value indicates which operation is performing the best. |
Operation |
The operation being monitored. See Metrics Specific to a Particular Tool or Service. |
Invocations |
The number of invocations, per operation: - Since Startup - Last 15 Minutes This metric provides data on how frequently a particular tool or service is being invoked for processing of operations. Comparing this metric across services can help determine the most frequently used service in the application. |
Average Time (ms) |
The average time taken to process each operation: - Since Startup (this information is also displayed on the Response Time chart) - Recent History |
Maximum Time (ms) |
The maximum time taken to process each operation. |
17.1.11.2 Metrics Specific to a Particular Tool or Service
This section describes per operation metrics for all tools, services and components. This section includes the following topics:
To access live performance metrics for WebCenter Portal, see Viewing Performance Metrics Using Fusion Middleware Control.
17.1.11.2.1 BPEL Worklist Metrics
Performance metrics associated with worklists are described in Metrics Common to all Tools and Services.
To monitor these metrics through Fusion Middleware Control, see Viewing Performance Metrics Using Fusion Middleware Control.
17.1.11.2.2 Content Repository Metrics
Performance metrics associated with documents and Content Presenter (Figure 17-14 and Figure 17-15) are described in the following tables:
Figure 17-15 Content Repository Metrics - Per Operation
To monitor these metrics through Fusion Middleware Control, see Viewing Performance Metrics Using Fusion Middleware Control.
Table 17-18 Content Repository - Operations Monitored
Operation | Description | Performance Issues - User Action |
---|---|---|
Download |
Downloads one or more documents from a content repository. |
For specific causes, see Content Repository (Documents and Content Presenter) - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
Upload |
Uploads one or more documents to a content repository. |
For specific causes, see Content Repository (Documents and Content Presenter) - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
Search |
Searches for documents stored in a content repository. |
For specific causes, see Content Repository (Documents and Content Presenter) - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
Login |
Establishes a connection to the content repository and authenticates the user. |
For specific causes, see Content Repository (Documents and Content Presenter) - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
Delete |
Deletes one or more documents stored in a content repository. |
For specific causes, see Content Repository (Documents and Content Presenter) - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
List Folders |
Lists folders stored in a content repository. This operation is specific to Content Presenter. |
For specific causes, see Content Repository (Documents and Content Presenter) - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
Get Items |
Displays items, such as a document or image stored in a content repository. This operation is specific to Content Presenter. |
For specific causes, see Content Repository (Documents and Content Presenter) - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
Table 17-19 Content Repository Metrics - Summary (All Repositories)
Metric | Description |
---|---|
Status |
The current status of the documents tool:
|
Successful Invocations (%) |
The percentage of document invocations that succeeded (Upload, Download, Search, Login, Delete): - Since Startup - Last 15 Minutes If Successful Invocations (%) is below 100%, check the diagnostic logs to establish why service requests are failing. See Viewing and Configuring Log Information. |
Invocations |
The number of document invocations per minute (Upload, Download, Search, Login, Delete): - Since Startup - Last 15 Minutes This metric provides data on how frequently a particular tool or service is being invoked for processing of operations. Comparing this metric across services can help determine the most frequently used tool or service in the application. |
Average Time (ms) |
The average time taken to process operations associated with documents (Upload, Download, Search, Login, Delete): - Since Startup - Last 15 Minutes |
Most Popular Operations |
The number of invocations per operation (displayed on a chart). The highest value on the chart indicates which operation is used the most. The lowest value indicates which operation is used the least. |
Response Time |
The average time to process operations associated with documents since WebCenter Portal started up (displayed on a chart). The highest value on the chart indicates the worst performing operation. The lowest value indicates which operation is performing the best. |
Download Throughput (bytes per second) |
The rate at which documents are downloaded. |
Upload Throughput (bytes per second) |
The rate at which documents are uploaded. |
Table 17-20 Content Repository Metrics - Operation Summary Per Repository
Metric | Description |
---|---|
Status |
The current status of the content repository:
|
Successful Invocations (%) |
The percentage of document invocations that succeeded (Upload, Download, Search, Login, Delete) for this content repository: - Since Startup - Last 15 minutes If Successful Invocations (%) is below 100%, check the diagnostic logs to establish why service requests are failing. See Viewing and Configuring Log Information. |
Invocations |
The number of document invocations per minute (Upload, Download, Search, Login, Delete) for this content repository: - Since Startup - Last 15 minutes This metric provides data on how frequently a particular tool or service is being invoked for processing of operations. Comparing this metric across tools and services can help determine the most frequently used tools and services in the application. |
Average Time (ms) |
The average time taken to process operations associated with documents (Upload, Download, Search, Login, Delete) for this content repository: - Since Startup - Last 15 minutes |
Bytes Downloaded |
The volume of data downloaded from this content repository. |
Download Throughput (bytes per second) |
The rate at which documents are downloaded from this content repository. |
Bytes Uploaded |
The volume of data uploaded to this content repository. |
Upload Throughput (bytes per second) |
The rate at which documents are uploaded to this content repository. |
Maximum Time (ms) |
The maximum time to process operations associated with documents (Upload, Download, Search, Login, Delete) for this content repository. |
Table 17-21 Content Repository Metrics - Operation Detail Per Repository
Metric | Description |
---|---|
Invocations |
The number of invocations per document operation (Upload, Download, Search, Login, Delete): - Since Startup - Last 15 minutes This metric provides data on how frequently a particular service is being invoked for processing of operations. Comparing this metric across services can help determine the most frequently used services in the application. |
Average Processing Time (ms) |
The average time taken to process each operation associated with documents (Upload, Download, Search, Login, Delete): - Since Startup - Last 15 minutes |
17.1.11.2.3 Events Metrics
Performance metrics associated with events are described in Table 17-22 and Metrics Common to all Tools and Services.
To monitor these metrics through Fusion Middleware Control, see Viewing Performance Metrics Using Fusion Middleware Control.
Table 17-22 Events - Operations Monitored
Operation | Description | Performance Issues - User Action |
---|---|---|
Create Event |
Creates a portal event in the WebCenter Portal's repository. |
For specific causes, see Events - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
Update Event |
Updates a portal event stored in the WebCenter Portal's repository. |
For specific causes, see Events - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
Delete Event |
Deletes a portal event from the WebCenter Portal's repository. |
For specific causes, see Events - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
List Event |
Retrieves a list of events from the WebCenter Portal's repository. |
For specific causes, see Events - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
Search Event |
Searches for terms within event text. |
For specific causes, see Events - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
17.1.11.2.4 External Application Metrics
Performance metrics associated with external applications are described in Table 17-23 and Metrics Common to all Tools and Services.
Figure 17-18 External Application Metrics - Per Operation
To monitor these metrics through Fusion Middleware Control, see Viewing Performance Metrics Using Fusion Middleware Control.
Table 17-23 External Applications - Operations Monitored
Operation | Description | Performance Issues - User Action |
---|---|---|
Fetch Credentials |
Retrieves credentials for an external application. |
For specific causes, see External Applications - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
Store Credentials |
Stores user credentials for an external application. |
For specific causes, see External Applications - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
Fetch External Application |
Retrieves an external application. |
For specific causes, see External Applications - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
Automated Logins |
Logs a WebCenter Portal user in to an external application (using the automated login feature). |
For specific causes, see External Applications - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
17.1.11.2.5 Import and Export Metrics
Performance metrics associated with import and export (Figure 17-19) are described in Table 17-24 and Metrics Common to all Tools and Services. These metrics apply to WebCenter Portal only.
To monitor these metrics through Fusion Middleware Control, see Viewing Performance Metrics Using Fusion Middleware Control.
Table 17-24 Import/Export - Operations Monitored
Operation | Description | Performance Issues - User Action |
---|---|---|
Export |
Exports an entire WebCenter Portal application. |
For specific causes, see Import and Export - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
Import |
Imports an entire WebCenter Portal application. |
For specific causes, see Import and Export - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
17.1.11.2.6 List Metrics
(WebCenter Portal only) Performance metrics associated with lists (Figure 17-20) are described in Table 17-25 and Metrics Common to all Tools and Services.
To monitor these metrics through Fusion Middleware Control, see Viewing Performance Metrics Using Fusion Middleware Control.
Table 17-25 Lists - Operations Monitored
Operation | Description | Performance Issues - User Action |
---|---|---|
Create List |
Creates a list in the user session. The Save Data operation commits new lists to the MDS repository. |
For specific causes, see Lists - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
Copy List |
Copies a list and its data in the user session. The Save Data operation commits copied lists and list data to the MDS repository and the WebCenter Portal's repository (the database where list data is stored). |
For specific causes, see Lists - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
Delete List |
Deletes a list and its data in the user session. The Save Data operation commits list changes to the MDS repository and the WebCenter Portal's repository (the database where list data is stored). |
For specific causes, see Lists - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
Create Row |
Creates a row of list data in the user session. The Save Data operation commits list data changes to the WebCenter Portal's repository (the database where list data is stored). |
For specific causes, see Lists - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
Update Row |
Updates a row of list data in the user session. The Save Data operation commits list data changes to the WebCenter Portal's repository (the database where list data is stored). |
For specific causes, see Lists - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
Delete Row |
Deletes a row of list data in the user session. The Save Data operation commits list data changes to the WebCenter Portal's repository (the database where list data is stored). |
For specific causes, see Lists - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
Search |
Retrieves a list by its ID from the Metadata repository. |
For specific causes, see Lists - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
Save Data |
Saves all changes to lists and list data (in the user session) to the Metadata Services repository and the WebCenter Portal's repository (the database where list information is stored). |
For specific causes, see Lists - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
17.1.11.2.7 Mail Metrics
Performance metrics associated with mail (Figure 17-21) are described in Table 17-26 and Metrics Common to all Tools and Services.
To monitor these metrics through Fusion Middleware Control, see Viewing Performance Metrics Using Fusion Middleware Control.
Table 17-26 Mail - Operations Monitored
Operation | Description | Performance Issues - User Action |
---|---|---|
Login |
Logs a WebCenter Portal user into the mail server that is hosting mail services. |
For specific causes, see Mail - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
Logout |
Logs a WebCenter Portal user out of the mail server that is hosting mail services. |
For specific causes, see Mail - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
Receive |
Receives a mail. |
For specific causes, see Mail - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
Send |
Sends a mail. |
For specific causes, see Mail - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
Search |
Searches for mail that contains a specific term. |
For specific causes, see Mail - Issues and Actions. For information on common causes, see Understanding Some Common Performance Issues and Actions. |
17.1.11.2.8 Note Metrics
Performance metrics associated with notes (Figure 17-22) are described in Table 17-27 and Metrics Common to all Tools and Services.
To monitor these metrics through Fusion Middleware Control, see Viewing Performance Metrics Using Fusion Middleware Control.
Table 17-27 Notes - Operations Monitored
Operation | Description | Performance Issues - User Action |
---|---|---|
Create |
Creates a personal note. The Save Changes operation commits new notes to the MDS repository. |
For specific causes, see Notes - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
Update |
Updates a personal note. The Save Changes operation commits note updates to the MDS repository. |
For specific causes, see Notes - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
Find |
Retrieves a note from the MDS repository. |
For specific causes, see Notes - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
Delete |
Deletes a note from the MDS repository. |
For specific causes, see Notes - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
17.1.11.2.9 Page Operation Metrics
Performance metrics associated with the page operations (Figure 17-23) are described in Table 17-28 and Metrics Common to all Tools and Services.
Note:
The page operation metrics discussed in this section are different from the page request metrics discussed in Understanding Page Request Metrics. Page operation metrics monitor page-related operations, such as creating pages, whereas page request metrics monitor individual page view/display requests (they do not include page edit operations).
To monitor these metrics through Fusion Middleware Control, see Viewing Performance Metrics Using Fusion Middleware Control.
Table 17-28 Page Service - Operations Monitored
Operation | Description | Performance Issues - User Action |
---|---|---|
Create |
Creates a page in WebCenter Portal. |
For specific causes, see Page Services - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
Copy |
Copies a page. |
For specific causes, see Page Services - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
Delete |
Deletes a page. |
For specific causes, see Page Services - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
Search |
Searches for pages that contain a specific term. |
For specific causes, see Page Services - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
17.1.11.2.10 People Connection Metrics
Performance metrics associated with people connections are described in Table 17-29 and Metrics Common to all Tools and Services.
To monitor these metrics through Fusion Middleware Control, see Viewing Performance Metrics Using Fusion Middleware Control.
Table 17-29 People Connections - Operations Monitored
Operation | Description | Performance Issues - User Action |
---|---|---|
Get Profiles |
Retrieves profiles of a user. |
For specific causes, see People Connections - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
Get Activities |
Retrieves the activities based on the user filter options. |
For specific causes, see People Connections - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
Publish Activities |
Publishes an activity in the user session and saves it in WebCenter Portal. |
For specific causes, see People Connections - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
Get Messages |
Retrieves the messages of the user. |
For specific causes, see People Connections - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
Get Feedback |
Retrieves the feedback of the user. |
For specific causes, see People Connections - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
Get Connections |
Retrieves the connections of users. |
For specific causes, see People Connections - Issues and Actions. For common causes, see Understanding Some Common Performance Issues and Actions. |
17.1.11.2.11 RSS News Feed Metrics
Performance metrics associated with RSS news feeds (Figure 17-25) are described in Metrics Common to all Tools and Services.
To monitor these metrics through Fusion Middleware Control, see Viewing Performance Metrics Using Fusion Middleware Control.
17.1.11.2.12 Search Metrics
Performance metrics associated with search (Figure 17-26) are described in Table 17-30 and Metrics Common to all Tools and Services.
To monitor these metrics through Fusion Middleware Control, see Viewing Performance Metrics Using Fusion Middleware Control.
Table 17-30 Search - Search Sources
Operation | Description |
---|---|
Announcements |
Announcement text is searched. |
Documents |
Contents in files and folders are searched. |
Discussion Forums |
Forums and topics are searched. |
WebCenter Portal |
Contents saved in a portal, such as links, lists, notes, tags, and events are searched. |
Portal Events |
Portal events are searched. |
Links |
Objects to which links have been created are searched (for example, announcements, discussion forum topics, documents, and events). |
Lists |
Information stored in lists is searched. |
Notes |
Notes text, such as reminders, is searched. |
Elasticsearch |
Contents from discussions, tag clouds, notes, and other tools and services are searched. |
Pages |
Contents added to application, personal, public, wiki, and blog pages are searched. |
17.1.11.3 Troubleshooting Common Issues with Tools and Services
This section describes issues that you may have with individual tools and services and suggests actions you can take to address those issues.
This section includes the following topics:
17.1.11.3.1 Content Repository (Documents and Content Presenter) - Issues and Actions
If you are experiencing problems with the documents service and the status is Down, check the diagnostic logs to establish why this service is unavailable. Also, do one of the following:
-
For Content Server (Oracle WebCenter Content), verify that the back-end server is up and running.
-
For Content Server, verify that the socket connection is open for the client for which the service is not functioning properly. Check the list of IP addresses that are allowed to communicate with the Content Server through the Intradoc Server Port (IP Address Filter). For details, see Using Fusion Middleware Control to Modify Internet Configuration in Oracle Fusion Middleware Administering Oracle WebCenter Content.
-
(Functional check) Check logs on the back-end server. For Content Server, go to Content Server > Administration > Log files > Content Server Logs.
-
(Functional check) Search for entries in the diagnostic log where the module name starts with oracle.vcr, oracle.webcenter.content, oracle.webcenter.doclib, and oracle.stellent. Specifically, check the diagnostics log for the managed server on which WebCenter Portal is deployed, located at: DOMAIN_HOME/servers/managed_server_name/logs/<managed_server>-diagnostic.log
For example, the diagnostics log for WebCenter Portal is named WC_Portal-diagnostic.log.
See Viewing and Configuring Log Information.
17.1.11.3.2 External Applications - Issues and Actions
If you are experiencing problems with the External Applications service and the status is Down, check the diagnostic logs to establish why this service is unavailable. Some typical causes of failure include:
-
Credential store is not configured for the application.
-
The credential store that is configured (for example, Oracle Internet Directory) is down or not responding.
17.1.11.3.3 Events - Issues and Actions
If you are experiencing problems with events (portal events or personal events) and the status is Down, check the diagnostic logs to establish why this service is unavailable. Some typical causes of failure include:
-
WebCenter Portal's repository is not available (the database where event information is stored).
-
Network connectivity issues exist between the application and the WebCenter Portal's repository.
-
Connection configuration information associated with events is incorrect or no longer valid.
17.1.11.3.4 Import and Export - Issues and Actions
If you are experiencing import and export problems and the status is Down, check the diagnostic logs to establish why this service is unavailable.
17.1.11.3.5 Lists - Issues and Actions
If you are experiencing problems with lists and the status is Down, check the diagnostic logs to establish why this service is unavailable. Some typical causes of failure include:
-
MDS repository or WebCenter Portal's repository, in which the data associated with lists is stored, is not available.
-
Network connectivity issues exist between the application and the repository.
17.1.11.3.6 Mail - Issues and Actions
If you are experiencing problems with mail and the status is Down, check the diagnostic logs to establish why this service is unavailable. Some typical causes of failure include:
-
Mail server is not available.
-
Network connectivity issues exist between the application and the mail server.
-
Connection configuration information associated with mail server is incorrect or no longer valid.
17.1.11.3.7 Notes - Issues and Actions
If you are experiencing problems with notes, check if the MDS repository is unavailable or responding slowly (the repository where note information is stored).
17.1.11.3.8 Page Services - Issues and Actions
If you are experiencing problems with the page editing services and the status is Down, check the diagnostic logs to establish why this service is unavailable. Some typical causes of failure include:
-
WebCenter Portal's repository is not available (the database where page information is stored).
-
Network connectivity issues exist between the application and the WebCenter Portal's repository.
17.1.11.3.9 Portlets and Producers - Issues and Actions
If you are experiencing problems with a portlet producer and the status is Down, check the diagnostic logs to establish why this service is unavailable. Some typical causes of failure include:
-
Portlet producer server is down or not responding.
-
Connection configuration information associated with the portlet producer is incorrect or no longer valid.
-
Producer requests are timing out.
-
There may be a problem with a particular producer, or the performance issue may be due to one or more specific portlets from that producer.
17.1.11.3.10 People Connections - Issues and Actions
If you are experiencing problems with people connections and the status is Down, check the diagnostic logs to establish why this service is unavailable. Some typical causes of failure include:
-
The service is down or not responding.
-
WebCenter Portal's repository is not available (the database where people connection information is stored).
-
Network connectivity issues exist between the application and the WebCenter Portal's repository.
17.1.11.3.11 RSS News Feeds - Issues and Actions
If you are experiencing problems with RSS news feeds and the status is Down, check the diagnostic logs to establish why this service is unavailable. Some typical causes of failure include:
-
RSS services are not available.
-
A service being searched for activity data has failed, for example:
-
Unable to get discussions or announcement data - check the performance of discussions and announcements.
-
Unable to get list data - check the performance of lists.
17.1.11.3.12 Search - Issues and Actions
If you are facing problems with search (a service executor) and the status is Down, check the diagnostic logs to establish why this executor is unavailable. Some typical causes of failure include:
-
The repository of the executor is not available.
-
Network connectivity issues exist between the application and the repository of the executor.
-
Connection configuration information associated with the executor is incorrect or no longer valid.
-
A content repository being searched is currently unavailable.
17.2 Viewing Performance Metrics Using Fusion Middleware Control
Fusion Middleware Control monitors a wide range of performance metrics for WebCenter Portal.
Administrators can monitor the performance and availability of all the components and services that make up WebCenter Portal, and the application as a whole. These detailed metrics help you diagnose performance issues and, if monitored regularly, recognize trends as they develop so that you can prevent performance problems in the future.
Some key performance metrics display on the WebCenter Portal home page (Figure 17-27).
The charts at the top of the page enable you to see at a glance whether the WebCenter Portal application is performing as expected or running slowly. You can drill down to more detailed metrics to troubleshoot problem areas and take corrective action. For guidance on what to look out for, see Using Key Performance Metric Data to Analyze and Diagnose System Health.
This section describes how to navigate around WebCenter Portal metric pages and includes the following topics:
17.2.1 Monitoring Recent Performance Metrics for WebCenter Portal
To see how well WebCenter Portal or a particular portal is currently performing:
17.2.2 Monitoring Portal Metrics
To access performance metrics for portals created in WebCenter Portal:
17.2.4 Monitoring Service Metrics for WebCenter Portal
To access service metrics for the WebCenter Portal application:
17.3 Customizing Key Performance Metric Thresholds and Collection
This section includes the following topics:
17.3.1 Understanding Customization Options for Key Performance Metrics
You can fine-tune how Oracle WebCenter Portal collects and reports key performance metrics to best suit your installation in several ways:
-
Customize warning thresholds for key performance metrics
For example, you can specify that in your installation, page response times greater than 15 seconds must trigger a warning message and report an "out-of-bounds" condition in DMS. Out-of-bounds conditions also display "red" in performance charts to notify you that there is an issue.
For more information, see Configuring Thresholds for Key Metrics.
-
Customize how many samples to collect for key performance metrics
If the default sample size (100) is too large or too small for your installation you can configure a more suitable value.
For more information, see Configuring the Number of Samples Used to Calculate Key Performance Metrics.
-
Customize health check frequency
If your installation demands a more aggressive schedule you can check the system health more often. The default health check frequency is 5 minutes.
For details, see Configuring the Frequency of WebLogic Server Health Checks.
See also, Editing Thresholds and Collection Options for WebCenter Portal.
17.3.2 Understanding Default Metric Collection and Threshold Settings
You can configure metric collection options and metric threshold settings for WebCenter Portal through the metric_properties.xml
file. The default settings are shown in Example 17-1 and highlighted in bold.
Note:
All time thresholds are specified in milliseconds. Memory sizes are specified in bytes and CPU usage is specified as a percentage.
Example 17-1 Default Metric Collection and Threshold Settings (metric_properties.xml)
<registry>
  <global_setting>
    <thread_config>
      <thread component_type="oracle_webcenter" interval="5"/>
    </thread_config>
    <health_check_config>
      <health_check name="wlsHealthCheck" enabled="true" collect="1"/>
    </health_check_config>
    <metric_config>
      <metric name="pageResponseTime" type="time" threshold="10000" comparator="gt"/>
      <metric name="portletResponseTime" type="time" threshold="10000" comparator="gt"/>
      <metric name="wlsCpuUsage" type="number" threshold="80" comparator="gt"/>
      <metric name="wlsGcTime" type="number" threshold="undef" comparator="gt"/>
      <metric name="wlsGcInvPerMin" type="number" threshold="undef" comparator="gt"/>
      <metric name="wlsActiveSessions" type="number" threshold="undef" comparator="gt"/>
      <metric name="wlsExecuteIdleThreadCount" type="number" threshold="undef" comparator="gt"/>
      <metric name="wlsActiveExecuteThreads" type="number" threshold="undef" comparator="gt"/>
      <metric name="wlsHoggingThreadCount" type="number" threshold="0" comparator="gt"/>
      <metric name="wlsOpenJdbcConn" type="number" threshold="undef" comparator="gt"/>
      <metric name="wlsHeapSizeCurrent" type="number" threshold="undef" comparator="gt"/>
    </metric_config>
    <custom_param_config>
      <custom_param name="downloadTimeThreshold" value="500"/>
      <custom_param name="downloadThroughputThreshold" value="1024"/>
      <custom_param name="uploadTimeThreshold" value="3000"/>
      <custom_param name="uploadThroughputThreshold" value="180"/>
    </custom_param_config>
  </global_setting>
</registry>
For descriptions of all the settings in this file, refer to the following tables:
For information on how to modify the default settings, see Customizing Key Performance Metric Thresholds and Collection.
17.3.3 Configuring Thresholds for Key Metrics
You can customize the default warning thresholds for some key performance metrics to make them more suitable for your Oracle WebCenter Portal installation. Table 17-31 lists key performance metrics you can configure and their default thresholds (if any).
Out-of-the-box, thresholds are only pre-configured for page response (more than 10 seconds), portlet response (more than 10 seconds), and CPU usage (over 80%).
Note:
The value undef
means that a threshold is not defined.
You can change the threshold for any of the metrics listed in Table 17-31. For example, by default, pages that take longer than 10 seconds to display trigger a warning message, report an "out-of-bounds" condition in DMS, and show "red" in performance charts to immediately notify you when page responses are too slow. Some portal applications might consider 5 seconds to be an acceptable response time, in which case you can change the threshold to 5,000 (ms) so that your performance charts only show "red" if there really is a problem for you.
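For example, if 5 seconds is your acceptable page response time, a minimal sketch of the corresponding entry, using the same format and metric name as Example 17-1 (only the threshold value changes), is:

<metric_config>
  <metric name="pageResponseTime" type="time" threshold="5000" comparator="gt"/>
</metric_config>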
Table 17-31 Configurable Metric Thresholds
Metric Name | Description | Default Threshold Value | Comparator |
---|---|---|---|
pageResponseTime |
Number of milliseconds to render a page. |
10,000 ms |
gt |
portletResponseTime |
Number of milliseconds to render a portlet. |
10,000 ms |
gt |
wlsCpuUsage |
Percentage CPU usage of the WebLogic Server's JVM. |
80% |
gt |
wlsGcTime |
Average length of time (ms) the JVM spent in each run of garbage collection. The average shown is for the last five minutes. |
undef |
gt |
wlsGcInvPerMin |
Rate (per minute) at which the JVM is invoking its garbage-collection routine. The rate shown is for the last five minutes. |
undef |
gt |
wlsActiveSessions |
Number of active sessions on WebLogic Server. |
undef |
gt |
wlsExecuteIdleThreadCount |
Number of execute idle threads on WebLogic Server |
undef |
gt |
wlsActiveExecuteThreads |
Number of active execute threads on WebLogic Server. |
undef |
gt |
wlsHoggingThreadCount |
Number of hogging threads on WebLogic Server. |
undef |
gt |
wlsOpenJdbcConn |
Number of open JDBC connections on WebLogic Server. |
undef |
gt |
wlsHeapSizeCurrent |
JVM's current heap size on WebLogic Server. |
undef |
gt |
Metric thresholds are configured in metric_properties.xml
using the format:
<metric_config>
  <metric name="<metric_name>" type="<number/time/string>" threshold="<value>" comparator="<gt/lt/eq>"/>
  ...
</metric_config>
Table 17-32 describes each parameter.
Table 17-32 Key Performance Metric Threshold Configuration
<Metric> Parameter | Configurable | Description |
---|---|---|
name |
No |
Name of the metric. The metric name must exactly match the DMS sensor name as listed in Table 17-31. |
type |
Yes |
Specifies whether the metric is a number, time, or string value. |
threshold |
Yes |
(Only applies when type is number or time.) Specifies a numeric threshold value. If specified, you must also specify a comparator. For example, if portlet response times greater than 5 seconds are considered out-of-bounds:
<metric name="portletResponseTime" type="time" threshold="5000" comparator="gt"/>
Note: Time must be specified in milliseconds. |
Note: Time must be specified in milliseconds. |
comparator |
Yes |
Specify one of gt (greater than), lt (less than), or eq (equal to).
|
To edit one or more metric thresholds, follow the steps in Editing Thresholds and Collection Options for WebCenter Portal.
17.3.4 Configuring the Frequency of WebLogic Server Health Checks
Out-of-the-box, the general health of the WebLogic Server on which WebCenter Portal is deployed is checked every 5 minutes and the results are reported on the Understanding WebLogic Server Metrics page.
If your installation demands a more aggressive schedule you can check the system health more often.
Health check frequency is configured in metric_properties.xml
using the format:
<thread_config>
<thread component_type="oracle_webcenter" interval="<value>"/>
</thread_config>
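For example, to check system health every 2 minutes rather than the default 5 (the interval value here is illustrative; choose a value that suits your installation), the entry might look like this:

<thread_config>
  <thread component_type="oracle_webcenter" interval="2"/>
</thread_config>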
Table 17-33 describes each parameter.
Table 17-33 Health Check Frequency Configuration
<thread> Parameter | Default Value | Configurable | Description |
---|---|---|---|
component_type |
oracle_webcenter |
No |
For Oracle WebCenter Portal, the component_type is always oracle_webcenter. |
interval |
5 |
Yes |
Specifies the interval between health checks, in minutes. For example:
|
To change the frequency, follow the steps in Editing Thresholds and Collection Options for WebCenter Portal.
17.3.5 Configuring the Number of Samples Used to Calculate Key Performance Metrics
Oracle WebCenter Portal collects and reports recent performance for several key performance metrics (page, portlet, and WebLogic Server) based on a fixed number of data samples. Out-of-the-box, the last 100 samples of each metric type are used to calculate these key performance metrics, that is, 100 samples for page metrics, 100 samples for portlet metrics, and so on.
You can increase or decrease the sample set to suit your installation. If you decide to increase the number of samples, you must consider the additional memory cost of doing so, since all the key performance metric samples are maintained in memory. Oracle recommends that you specify a few hundred at most. See Understanding Oracle WebCenter Portal Metric Collection.
Note:
Since all "out-of-bounds" metrics are recorded in the managed server's diagnostic log, you can always scan the logs at a later date or time to see what happened in the past, that is, beyond the 'N' metric samples that are temporarily held in memory.
The server startup property WC_HEALTH_MAX_COLLECTIONS
determines the number of metric samples collected by Oracle WebCenter Portal. If the property is not specified, 100 samples are collected.
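For example, to collect 200 samples per metric type, you could add the following server startup argument; this mirrors the argument shown later in Editing Thresholds and Collection Options for WebCenter Portal:

-DWC_HEALTH_MAX_COLLECTIONS=200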
To customize the number of samples collected for key performance metrics:
17.3.6 Editing Thresholds and Collection Options for WebCenter Portal
To change metric thresholds and collection criteria for WebCenter Portal:
-
Copy the XML snippet in Example 17-1 and save it to a text file named
metric_properties.xml
. -
Edit metric collection parameters and/or metric thresholds in
metric_properties.xml
, as required.Note:
You must consider your machine resources, as well as the system topology and configuration when choosing suitable thresholds for your Oracle WebCenter Portal installation. As each installation is different, most metrics do not have default or recommended threshold settings.
A description of all the settings and their defaults (if any) are described in the following tables:
-
Copy the updated
metric_properties.xml
file to:-
Your DOMAIN_HOME.
-
Another suitable directory.
-
-
Configure the server startup argument
WEBCENTER_METRIC_PROPERTIES
to point to the full path of the properties file:-
Log in to WebLogic Server Administration Console.
-
Navigate to the managed server on which your application is deployed.
For WebCenter Portal, navigate to Environment, then Servers, and then
WC_Portal
. -
Click the Server Start tab.
-
In the Arguments text area, enter the
WEBCENTER_METRIC_PROPERTIES
argument and specify the full path of the properties file.For example:
-DWEBCENTER_METRIC_PROPERTIES=/scratch/mythresholds/metric_properties.xml
Note:
If you only specify the file name, Oracle WebCenter Portal looks for this file in your DOMAIN_HOME.
Separate multiple arguments with a space. For example:
-DWC_HEALTH_MAX_COLLECTIONS=200 -DWEBCENTER_METRIC_PROPERTIES=/scratch/mythresholds/metric_properties.xml
-
Restart the managed server.
-
17.4 Diagnosing and Resolving Performance Issues with Oracle WebCenter Portal
The performance metrics described in this chapter enable you to quickly assess the current status and performance of WebCenter Portal from Fusion Middleware Control. When performance is slow, further investigations may be required for you to fully diagnose and fix the issue. For guidance, see Using Key Performance Metric Data to Analyze and Diagnose System Health.
Some common performance issues and actions are described in this chapter:
For more detailed troubleshooting tips relating to performance, see Troubleshooting WebCenter Portal.
17.5 Tuning Oracle WebCenter Portal Performance
See Oracle WebCenter Portal Performance Tuning in Tuning Performance for information on tuning WebCenter Portal, for example, how to tune the system limit (open-files-limit), JDBC data sources, JVM arguments, session timeouts, page timeouts, connection timeouts, concurrency timeouts, caching, and more.
17.6 Monitoring Performance Using WebCenter Portal Performance Pack
WebCenter Portal Performance Pack is a performance diagnostics tool that can be integrated seamlessly into the development phase to get the most out of your deployment. It is available as an add-on that is part of Oracle WebCenter Portal. Using WebCenter Portal Performance Pack, you can quickly identify and address critical performance bottlenecks in your application. For information, see About WebCenter Portal Performance Pack in Using Oracle WebCenter Portal Performance Pack.
17.7 Improving Data Caching Performance
To enhance performance and scalability, WebCenter Portal uses Coherence by default for its data caching solution. However, the Oracle Coherence license included in WebCenter Portal is restricted, which means that by default only a Local caching scheme, without any distributed data caching, is supported. In a High-Availability (HA) deployment, the cached entries are not shared across JVMs/machines.
You can, however, use the distributed mode for better performance in a clustered environment if you have Coherence or WebLogic Suite licensing. This section explains how to set up a distributed cache and override WebCenter Portal's default caching configuration to improve performance, provided you have the appropriate license.
This section contains the following topics:
Note:
For more information about configuring Coherence, see Configuring and Managing Coherence Clusters in Administering Clusters for Oracle WebLogic Server.
17.7.1 Summary of Coherence Cache Types
The basic types of cache modes provided by Coherence are outlined in Table 17-34.
Table 17-34 Basic Cache Types
Cache Name | Description |
---|---|
Distributed |
Data is partitioned among all the machines of the cluster. For fault-tolerance, partitioned caches can be configured to keep each piece of data on one or more unique machines within a cluster. Distributed caches are the most commonly used caches in Coherence. |
Replicated |
Data is fully replicated to every member in the cluster. This cache offers the fastest "read" performance with linear performance scalability for "reads," but poor scalability for "writes" (because "writes" must be processed by every member in the cluster). Because data is replicated to all machines, adding servers does not increase aggregate cache capacity. |
Optimistic |
Similar to the replicated cache, but without any concurrency control. This implementation offers higher write throughput than a replicated cache. It also allows using an alternative underlying store for the cached data (for example, an MRU/MFU-based cache). However, if two cluster members are independently pruning or purging the underlying local stores, it is possible that a cluster member may have different store content than that held by another cluster member. |
Near |
A near cache is a hybrid cache; it typically fronts a distributed cache or a remote cache with a local cache. A near cache backed by a partitioned cache offers zero-millisecond local access for repeat data access, while enabling concurrency and ensuring coherency and fail-over, effectively combining the best attributes of replicated and partitioned caches. |
Local |
A local cache is a cache that is local to (completely contained within) a particular cluster node. While it is not a clustered service, the Coherence local cache implementation is often used in combination with various clustered cache services. |
For more information about the types of caches provided by Coherence, see Introduction to Coherence Caches in Developing Applications with Oracle Coherence guide.
17.7.2 Default Coherence Caches in WebCenter Portal
The default user-configurable Coherence cache entries for WebCenter Portal are shown in Table 17-35.
Table 17-35 Default Coherence Caches in WebCenter Portal
Cache Name | Purpose | Default Coherence Configuration |
---|---|---|
|
Cache for Application Space |
|
|
Cache for Space Properties |
|
|
Cache for Generic Site Resources |
|
|
Cache for People Profile |
|
|
Doc lib caches (Provisioned and configured) |
|
|
Cache for Page definitions |
|
The properties of the default Coherence configuration shown in Table 17-35 are described as follows:
Default Configuration | Eviction Policy | High Units | Expiration Delay |
---|---|---|---|
WebCenter_12HourCache |
Hybrid |
1000 |
12 hours |
WebCenter_60MinuteCache |
Hybrid |
1000 |
1 hour |
Where:
-
High Units is the maximum number of units that can be placed in the cache before pruning occurs.
-
Hybrid Eviction Policy chooses which entries to evict based on the combination (weighted score) of how often and how recently they were accessed. Those entries that are accessed least frequently and those that were not accessed for the longest period are evicted first.
-
Expiration Delay specifies the amount of time from the last update that entries will be kept by the cache before being marked as expired. Any attempt to read an expired entry will result in a reloading of the entry from the configured cache store. Expired values are periodically discarded from the cache.
Coherence can be deployed with a standalone application, as an application server library, as part of a Java EE module within an EAR or WAR file, or within the WebLogic Server context.
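If you have the required Coherence or WebLogic Suite licensing, a distributed cache override in a Coherence cache configuration file might look like the following sketch. This is illustrative only, not WebCenter Portal's shipped configuration: the cache name is a placeholder that you would map to the WebCenter Portal cache names in Table 17-35, and the backing-map settings mirror the WebCenter_12HourCache defaults (Hybrid eviction, 1000 high units, 12-hour expiry).

<cache-config xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config">
  <caching-scheme-mapping>
    <!-- Placeholder mapping: replace the cache name with the WebCenter Portal cache to override -->
    <cache-mapping>
      <cache-name>example.webcenter.cache</cache-name>
      <scheme-name>wc-distributed-scheme</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <distributed-scheme>
      <scheme-name>wc-distributed-scheme</scheme-name>
      <backing-map-scheme>
        <local-scheme>
          <!-- Mirrors the WebCenter_12HourCache defaults listed in Table 17-35 -->
          <eviction-policy>HYBRID</eviction-policy>
          <high-units>1000</high-units>
          <expiry-delay>12h</expiry-delay>
        </local-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
  </caching-schemes>
</cache-config>

Because data in a distributed scheme is partitioned across cluster members, cached entries are shared across JVMs/machines in an HA deployment, which the default restricted Local scheme does not provide.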