Deploying Portal Applications
This document covers the intricacies of deploying your WebLogic Portal application into a production environment. There are a number of different options for configuring your production portal application and domain. This document outlines some of those options but focuses primarily on best practices for production environments, including using an enterprise-quality database and a clustered environment for redundancy and scalability.
This document contains the following sections:
To bring your portal online in a production environment, you must first prepare your portal application. Typical preparation steps include modifying deployment descriptors for production, building the enterprise archive (EAR) with all its pre-compiled classes, and determining whether you want to compress that EAR into an archive or leave it exploded.
Similar to any J2EE application, a portal application has a number of deployment descriptors that you may want to tune for your production environment.
Within the portal application is the /META-INF directory, which contains a number of deployment descriptors, including application-config.xml, a portal-specific deployment descriptor that contains cache configuration, behavior tracking, campaign, and commerce tax settings. If these values differ between your production environment and your existing development settings, modify this file appropriately before building the portal application.
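For illustration, a cache tuning entry in this file might look like the following sketch. The element names, attribute names, and values shown here are assumptions for illustration only; verify them against the application-config.xml actually generated for your application.

```xml
<!-- Hypothetical sketch only: check element and attribute names
     against your application's generated application-config.xml. -->
<CacheManager>
    <!-- Production tuning example: a larger cache with a shorter
         time-to-live than typical development settings. -->
    <Cache Name="contentCache" MaxEntries="200" TimeToLive="60000" Enabled="true"/>
</CacheManager>
```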
Within any portal Web application is a /WEB-INF
directory that contains a number of deployment descriptors you may need to modify for your production environment.
web.xml is a J2EE standard deployment descriptor. Among other settings, it has a set of elements for configuring security for the Web application. You can read more about web.xml at http://download.oracle.com/docs/cd/E13222_01/wls/docs81/webapp/web_xml.html.

weblogic.xml is a standard WebLogic deployment descriptor for Web applications that has a number of important descriptor entries. Detailed information on this file can be found at http://download.oracle.com/docs/cd/E13222_01/wls/docs81/webapp/weblogic_xml.html.

Note: In a clustered production environment, it is important that you configure the <session-param> descriptor element in weblogic.xml to enable session replication across the cluster. Without this setting, a user's session state will not fail over if a server in the cluster is stopped. You may need to add the following block to weblogic.xml.
<session-descriptor>
<session-param>
<param-name>PersistentStoreType</param-name>
<param-value>replicated_if_clustered</param-value>
</session-param>
</session-descriptor>
If PersistentStoreType is not set, persistent session storage is disabled by default.
The other commonly modified element in weblogic.xml for production environments is the <jsp-descriptor>. A common production modification is to precompile JSPs. To do this, add the following parameters to weblogic.xml inside the existing <jsp-descriptor> section. For example:

<jsp-param>
<param-name>precompile</param-name>
<param-value>true</param-value>
</jsp-param>
<jsp-param>
<param-name>precompileContinue</param-name>
<param-value>true</param-value>
</jsp-param>
WebLogic Workshop has a number of additional deployment descriptors that are important if you are developing Web services. Information on these can be found on dev2dev in the WebLogic Workshop Internals document under "Application Customization" at: http://dev2dev.bea.com/products/wlworkshop81/articles/wlw_internals.jsp#9.
If you are going to use a particular Java Virtual Machine (JVM) in your production environment, it is a good idea to compile the EAR application with the JDK for that JVM. You can change the JVM for your WebLogic Workshop project by going to Tools > Application Properties, selecting WebLogic Server, and specifying the path to the JDK Home (root directory) you want to use.
To deploy a portal application to a production environment, you must first build the application in WebLogic Workshop to compile necessary classes in the portal application. There are two options:
You can build your portal application from the command line using the wlwBuild command. This can make it easier for you to automate the process of building your application. See http://download.oracle.com/docs/cd/E13226_01/workshop/docs81/doc/en/workshop/reference/commands/cmdWlwBuild.html.
This section provides the steps necessary to set up a cluster across which your portal application is deployed.
To deploy a portal application into production, it is necessary to set up an enterprise-quality database instance. PointBase is supported only for the design, development, and verification of applications. It is not supported for production server deployment.
Details on configuring your production database can be found in the Database Administration Guide at http://download.oracle.com/docs/cd/E13218_01/wlp/docs81/db/index.html.
Once you have configured your enterprise database instance, it is possible to install the required database DDL and DML from the command line as described in the Database Administration Guide. A simpler option is to create the DDL and DML from the domain Configuration Wizard when configuring your production environment, as this guide will show.
When configuring your production servers or cluster with the domain Configuration Wizard, you will need to deploy some JMS queues that are required by WebLogic Workshop-generated components that are deployed at run time. To find the JMS queue names you need, open the wlw-manifest.xml
file in the portal application's /META-INF
directory.
In the file, find the JMS queue JNDI names that are the defined values in elements named <con:async-request-queue>
and <con:async-request-error-queue>
. Record the JNDI names of the JMS queues found in those definitions for use when configuring your production system.
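Rather than reading the manifest by eye, you can pull the queue names out programmatically. The following sketch is illustrative only: the find_async_queues helper is hypothetical, and it assumes the JNDI name is the text content of each <con:async-request-queue> and <con:async-request-error-queue> element; verify that assumption against your own wlw-manifest.xml.

```python
# Sketch: extract JMS queue JNDI names from wlw-manifest.xml.
# Assumes the JNDI name is the text content of the
# <con:async-request-queue> and <con:async-request-error-queue>
# elements; verify against your own manifest.
import xml.etree.ElementTree as ET


def find_async_queues(manifest_path):
    """Return the JNDI names of the async request/error queues."""
    tree = ET.parse(manifest_path)
    names = []
    for elem in tree.getroot().iter():
        # Tags carry an XML namespace, so match on the local name only.
        local = elem.tag.rsplit('}', 1)[-1]
        if local in ('async-request-queue', 'async-request-error-queue'):
            if elem.text and elem.text.strip():
                names.append(elem.text.strip())
    return names
```

You could then call find_async_queues('META-INF/wlw-manifest.xml') and record the returned names for the Configuration Wizard's JMS queue setup.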
By clustering a portal application, you can attain high-availability and scalability for that application. Use this section to help you choose which cluster configuration you want to use.
When setting up an environment to support a production instance of a portal application, the most common configuration is to use the WebLogic Recommended Basic Architecture documented here: http://download.oracle.com/docs/cd/E13222_01/wls/docs81/cluster/planning.html#1090621.
Figure 1 shows a WebLogic Portal-specific version of the recommended basic architecture.
Figure 1 WebLogic Portal single cluster architecture
Note: WebLogic Portal does not support a split configuration architecture where EJBs and JSPs are split onto different servers in a cluster. The basic architecture provides significant performance advantages over a split configuration for Portal.
Even if you will be running a single server instance in your initial production deployment, this architecture allows you to easily configure new server instances if and when needed.
A multi-clustered architecture can be used to support a zero-downtime environment when your portal application needs to be accessible 24 hours a day, 365 days a year. While a portal application can run indefinitely in a single-cluster environment, deploying new components to that cluster or server results in some period of time when the portal is inaccessible. This is because HTTP requests cannot be handled while a new EAR application is being deployed to a WebLogic Server. Redeployment of a portal application also results in the loss of existing sessions.
A multi-cluster environment involves setting up two clusters, typically a primary cluster and secondary cluster. During normal operations, all traffic is directed to the primary cluster. When some new components (such as portlets) need to be deployed, the secondary cluster is used to handle requests while the primary is updated. The process for managing and updating a multi-clustered environment is more complex than with a single cluster and is addressed in Zero Downtime Architectures on page 43. If this environment is of interest you may want to review that section now.
Figure 2 WebLogic Portal multi-cluster architecture
You should determine the network layout of your domain before building your domain with the Configuration Wizard. Determine the number of managed servers you will have in your cluster—the machines they will run on, their listen ports, and their DNS addresses. Decide if you will use WebLogic Node Manager to start the servers. For information on Node Manager, see Configuring and Managing WebLogic Server at http://download.oracle.com/docs/cd/E13222_01/wls/docs81/adminguide/nodemgr.html.
WebLogic Portal must be installed on the cluster's administration server machine and on all managed server machines.
Create your new production environment with the domain Configuration Wizard. See Creating WebLogic Configurations Using the Configuration Wizard at http://download.oracle.com/docs/cd/E13196_01/platform/docs81/confgwiz/intro.html.
This section walks you through the creation of a production cluster environment for WebLogic Portal.
In addition, you can see a demo of how a production environment is configured.
hostname:port
of the managed server listen addresses. When you are finished, click Next. Warning: Exercise caution when loading the database, because the scripts delete any existing portal database objects from the database instance. You will see a large number of SQL statements being executed. It is normal for some statements to report errors, because the script attempts to drop objects that may not have been created yet.
/META-INF/wlw-manifest.xml
file, as described in Reading the wlw-manifest.xml File on page 5. These names will be something like <WEB_APP>.queue.AsyncDispacher
and <WEB_APP>.queue.AsyncDispacher_error
. For each queue, add a new JMS Distributed Queue with the Add button. Set the Name entry and JNDI name entry to the name listed in wlw-manifest.xml
. Set the Load balancing policy and Forward delay as appropriate for your application.
A pair of queue entries exists for each Web application (portal Web project) in your portal application. When you are finished, you should have one distributed queue for each queue listed in wlw-manifest.xml. In other words, if your enterprise application has three Web applications, you should have added six distributed queues, two for each Web application.
For information on starting WebLogic Server, see "Creating Startup Scripts" at http://download.oracle.com/docs/cd/E13222_01/wls/docs81/isv/startup.html.
At this point your administration server domain has been configured using the domain Configuration Wizard. Before you start the administration server to do additional configuration work, you may want to increase the default memory size allocated to the administration server. To accomplish this, you will need to modify your startWebLogic script in the domain's root directory and change the memory arguments. For example:
set MEM_ARGS=-Xms256m %memmax%
to set MEM_ARGS=-Xms512m -Xmx512m
MEM_ARGS="-Xms256m ${memmax}"
to MEM_ARGS="-Xms512m -Xmx512m"
The exact amount of memory you should allocate to the administration server will vary based on a number of factors such as the size of your portal application, but in general 512 megabytes of memory is recommended as a default.
This section provides instructions for deploying your portal application.
In addition, you can see a demo of how to deploy a portal application.
Note: In Table 1, most components are targeted to the administration server as well as the cluster. This is required, and it is the only supported configuration. There are several application design challenges specific to clustering that WebLogic Portal solves to ensure that portal applications perform properly and optimally in a cluster environment. The targeting scheme described above is part of the solution to those design challenges.
While you need to deploy your portal application to the administration server, the administration server is not typically used to serve pages for portal applications.
Now that you have configured your domain, including defining your managed servers, you can create individual domains to be used for your cluster using the domain Configuration Wizard. This is necessary because even though the managed servers are defined in the cluster's domain, when run on a different physical machine you still need a domain directory to run the managed servers.
WebLogic Portal must be installed on all managed servers.
In addition, you can see a demo of how to create a managed server.
This information will not typically be used, because you will bind this server to the administration server using the administration server's credentials.
Once you have created a domain for a managed server, you can reuse the same domain for your other managed server on the same machine by specifying different servername parameters to your startManagedWebLogic script, or create new managed domains using the domain Configuration Wizard.
There are numerous ways to start a managed server and bind it to your administration server, including using Node Manager. For your initial setup, you may want to use the startManagedWebLogic
script in the domain root directory. You can run this script by specifying the name of the managed server for this server instance and the URL of the administration server. Before starting the script, you should edit it and give the managed server more memory than it is allocated by default. This can be done by specifying a new MEM_ARGS
setting. For example, change the memory allocation to -Xms512m -Xmx512m
.
After starting a managed server, you can browse your portal application by going to the appropriate URL on the managed server instance. To provide your users a single point of entry to your cluster, as well as support session failover, you will need to configure a proxy server.
For instructions on configuring a proxy plugin for WebLogic, see "Configure Proxy Plugins" in Using WebLogic Server Clusters at http://download.oracle.com/docs/cd/E13222_01/wls/docs81/cluster/setup.html#684345.
There are no WebLogic Portal-specific configuration tasks when setting up a proxy plug-in.
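As a sketch, a minimal Apache HTTP Server proxy configuration might look like the following. The host names, ports, portal context path (/myPortal), and plug-in module file name are placeholders; consult the proxy plug-in documentation for the module matching your Web server version.

```apache
# Sketch: proxy all requests for the portal context path to the cluster.
# Host names, ports, and /myPortal are placeholders.
LoadModule weblogic_module modules/mod_wl_20.so

<IfModule mod_weblogic.c>
    # Comma-separated list of managed server listen addresses.
    WebLogicCluster managed1.example.com:7001,managed2.example.com:7001
</IfModule>

<Location /myPortal>
    SetHandler weblogic-handler
</Location>
```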
This section contains instructions for redeployment, partial redeployment, and iterative deployment of datasync data, such as user profile properties, user segments, content selectors, campaigns, discounts, and other property sets.
You can use the WebLogic Server Administration Console or weblogic.Deployer tool to redeploy an updated portal application to your production server. See "weblogic.Deployer Utility" in Deploying WebLogic Server Applications at http://download.oracle.com/docs/cd/E13222_01/wls/docs81/deployment/tools.html.
The following batch file is an example of how to use weblogic.Deployer to redeploy a portal application to production.
@echo off
echo Redeploys a Portal Web Project to a Server or Cluster
echo First Parameter is the name of the Server or Cluster
echo Second Parameter is the name of the Application
echo Third Parameter is the administrative username for the Portal Server
set SERVER=%1
set APPNAME=%2
set USERNAME=%3
echo server = %SERVER%
echo appname = %APPNAME%
echo username = %USERNAME%
java weblogic.Deployer -redeploy -username %USERNAME% -name %APPNAME% -targets %SERVER%
In certain situations you can reduce the time needed to redeploy individual pieces of a portal application by using the weblogic.Deployer tool.
If your updates are contained within a particular portal Web application, you can redeploy just that Web application and greatly reduce the time spent in redeployment. This is of use if you have new portlets and Page Flows, but no new EJBs, libraries, or modules (which are enterprise application scoped).
Because a portal Web application has a number of dependencies on WebLogic Workshop control classes, those need to be redeployed as well. The following batch file can be used to help simplify that process. You will need weblogic.Deployer in your classpath, which you can set up by running the <BEA_HOME>/weblogic81/common/bin/commEnv script.
@echo off
echo Redeploys a Portal Web Project to a Server or Cluster
echo First Parameter is the name of the Server or Cluster
echo Second Parameter is the name of the Application
echo Third Parameter is the name of the Portal Web Application
echo Fourth Parameter is the administrative username for the Portal Server
set SERVER=%1
set APPNAME=%2
set WEBAPPNAME=%3
set USERNAME=%4
echo server = %SERVER%
echo appname = %APPNAME%
echo webappname = %WEBAPPNAME%
echo username = %USERNAME%
set TARGETS=%APPNAME%@%SERVER%
set TARGETS=%TARGETS%,.workshop/%APPNAME%/EJB/TimerControl_-livsjc6qp6ws@%SERVER%
set TARGETS=%TARGETS%,.workshop/%APPNAME%/EJB/p13controls_k3cw9vg6497r@%SERVER%
set TARGETS=%TARGETS%,.workshop/%APPNAME%/EJB/MDBListener_-1x0154i4jz0he@%SERVER%
set TARGETS=%TARGETS%,.workshop/%APPNAME%/EJB/GenericStateless@%SERVER%
set TARGETS=%TARGETS%,.workshop/%APPNAME%/EJB/ProjectBeans@%SERVER%
java weblogic.Deployer -redeploy -username %USERNAME% -name %WEBAPPNAME% -targets %TARGETS%
This section provides instructions for updating portal application datasync data, such as user profile properties, user segments, content selectors, campaigns, discounts, and other property sets, which must be bootstrapped to the database in a separate deployment process.
Portal allows you to author a number of definition files, such as user profiles and content selectors, that must be managed carefully when moving from development to production and back.
Within WebLogic Workshop, portal definitions are created in a special Datasync Project, exposed in the WebLogic Workshop Application window as a /data
subdirectory. (On the file system, the directory exists in the application's /META-INF/data
directory.) This project can contain user profile property sets, user segments, content selectors, campaigns, discounts, catalog property sets, event property sets, and session and request property sets.
During development, all files created in the datasync project are stored in the META-INF/data directory of the portal application and exposed in WebLogic Workshop in the <portalApplication>/data
directory. To provide speedy access from runtime components to the definitions, a datasync facility provides an in-memory cache of the files. This cache intelligently polls for changes to definitions, loads new contents into memory, and provides listener-based notification services when content changes, letting developers preview datasync functionality in the development environment.
Datasync definition modifications are made not only by WebLogic Workshop developers, but also by business users and portal administrators, who can modify user segments, campaigns, placeholders, and content selectors with the WebLogic Administration Portal. In the development environment, both WebLogic Workshop and the WebLogic Administration Portal write to the files in the META-INF/data directory.
When deployed into a production system, portal definitions often need to be modifiable using the WebLogic Administration Portal. In most production environments, the portal application is deployed as a compressed EAR file, which limits the ability to write modifications to these files. In such an environment, all datasync assets must be loaded from the file system into the database so the application can be updated.
Figure 3 shows how the /data
directory from the updated portal application is put into a standalone JAR and bootstrapped to the database.
Figure 3 Loading updated datasync files to the database
Alternatively, some production environments deploy their portal applications as uncompressed EARs. In this case, the deployed portal application on the administration server is the primary store of datasync definitions. Work done in the WebLogic Administration Portal on any managed server is automatically synchronized with the primary store.
For both compressed and uncompressed EAR files, you can view and update datasync definitions using the Datasync Web Application.
Each portal application contains a Datasync Web Application located in datasync.war
in the application root directory. Typically, the URL to the Datasync Web application is http://<server>:<port>/<appName>DataSync. For example, http://localhost:7001/portalAppDataSync. You can also find the URL to your Datasync Web application by loading the WebLogic Server Administration Console and selecting Deployments > Applications > appName > *DataSync and clicking the Testing tab to view the URL.
The Datasync Web application allows you to view contents of the repository and upload new content, as shown in Figure 4.
Figure 4 Datasync Web application home page
Working with the Repository Browser - When working with the Data Repository Browser, you can either work with all the files in the repository using the icons on the left-hand side of the page, or drill down into a particular sub-repository, such as the repository that contains all Property Set definitions.
View Contents - To view the contents of a repository, click on the binoculars icon to bring up the window shown in Figure 5.
Figure 5 Browsing the Datasync repository
From this list, click a particular data item to see its contents, as shown in Figure 6.
As you can see in the previous figure, you can view the XML data for a particular content item.
To remove content from a repository, click on the trash can icon on the left side of the page.
When the application is deployed, if the JDBC repository is empty (no data), the files in the EAR are used to bootstrap (initialize) the database. The datasync assets are stored in the following tables: DATA_SYNC_APPLICATION, DATA_SYNC_ITEM, DATA_SYNC_SCHEMA_URI, and DATA_SYNC_VERSION. By default, the bootstrap operation happens only if the database is empty. When you want to make incremental updates, the Datasync Web application lets you load new definitions directly into the database. This can be done as part of redeploying a portal application, or independently using a special JAR file that contains your definitions, as shown in Figure 6, Data item contents, on page 23.
Upload new contents - In the Datasync Web application, there is a button on the left side that looks like a document with 1's and 0's called Bootstrap Data. When you click this icon, the following page appears, which lets you load data into the database.
Figure 7 Uploading new datasync data
When you bootstrap, you can choose a bootstrap source, which is either your deployed portal application or a stand-alone JAR file. For example, if you have an updated portal application that you have redeployed to your production environment, you can add any new definitions it contains to your portal. Alternatively, if you have authored new definitions that you want to load independently, you can create a JAR file with just those definitions and load them at any point.
Either way, when you update the data repository, you can choose to "Overwrite ALL data in the Master Data Repository," "Bootstrap only if the Master Data Repository is empty," or "Merge with Master Data Repository (latest timestamp wins)."
Bootstrapping from an EAR - If you are redeploying an existing EAR application and want to load any new definitions into the database, choose the Application Data (META-INF/data) as your bootstrap source, and then choose the appropriate Bootstrap Mode. To ensure you do not lose any information, you may want to follow the instructions in the section entitled Pulling Definitions from Production on page 25 to create a backup first. It is not possible to bootstrap definition data from an EAR file that is not deployed.
Creating a JAR file - To bootstrap new definition files independently of updates to your portal application, you can create a JAR file that is loaded onto the server that contains the files (content selectors, campaigns, user segments, and so on) that you want to add to the production system.
To do this, you can use the jar command from your META-INF/data
directory. For example:
jar -cvf myfiles.jar *
This example will create a JAR file called myfiles.jar
that contains all the files in your data directory, in the root of the JAR file. Then, you can bootstrap information from this JAR file by choosing Jar File on Server as your data source, specifying the full physical path to the JAR file and choosing the appropriate bootstrap mode. By running this process you can upgrade all the files that are packaged in your JAR. Controlling the contents of your JAR allows you to be selective in what pieces you want to update.
When creating the JAR file, the contents of the META-INF/data
directory should be in the root of the jar file. Do not jar the files into a META-INF/data
directory in the JAR file itself.
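Because a JAR file is a standard ZIP archive, you can quickly confirm the layout is correct by listing its entries and checking that none are nested under META-INF/data. The jar_layout_ok helper below is a hypothetical sketch, not part of the product:

```python
# Sketch: verify that datasync files sit at the root of the JAR
# rather than under a META-INF/data/ directory. A JAR file is a
# standard ZIP archive, so zipfile can read it directly.
import zipfile


def jar_layout_ok(jar_path):
    """Return True if no entry is nested under META-INF/data/."""
    with zipfile.ZipFile(jar_path) as jar:
        return all(not name.startswith('META-INF/data/')
                   for name in jar.namelist())
```

For example, jar_layout_ok('myfiles.jar') should return True for a JAR built with the jar command shown above, run from inside the META-INF/data directory.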
Validating Contents - After bootstrapping data, it is a good idea to validate the contents of what you loaded by using the View functionality of the Datasync Web application.
Developers and testers may be interested in bringing definitions that are being modified in a production environment back into their development domains. As the modified files are stored in the database, Portal provides a mechanism for exporting XML from the database back into files.
One approach is to use the browse capability of the Datasync web application to view all XML stored in the database in a web browser. This information can then be cut and pasted into a file.
A better alternative is to use the DataRepositoryQuery Command Line Tool, which allows you to fetch particular files from the database using an FTP-like interface.
The DataRepositoryQuery Command Line Tool supports a basic, FTP-style interface for querying the data repository on a server.
The command line class is com.bea.p13n.management.data.DataRepositoryQuery
. In order to use it, you must have the following in your CLASSPATH: p13n_ejb.jar
, p13n_system.jar
, and weblogic.jar
.
Run the class with the argument help
to print basic usage help.
set classpath=c:\bea\weblogic81\p13n\lib\p13n_system.jar;
c:\bea\weblogic81\p13n\lib\p13n_ejb.jar;
C:\bea\weblogic81\server\lib\weblogic.jar
java com.bea.p13n.management.data.DataRepositoryQuery help
Several optional command arguments are used for connecting to the server. The default values are usually adequate for the samples provided by BEA, but in real deployments these options will be necessary.
Only one of -app
or -url
may be used, as they represent two alternate ways to describe how to connect to a server.
The URL is the most direct route to the server, but it must point to the DataRepositoryQuery
servlet in the Datasync Web application. This servlet should have the URI of DataRepositoryQuery
, but you also need to know the hostname, port, and the context-root
used to deploy datasync.war
in your application. So the URL might be something like http://localhost:7001/datasync/DataRepositoryQuery
if datasync.war was deployed with a context-root
of datasync
.
The -app
option allows you to be a bit less specific. All you need to know is the hostname, port number, and the name of the application. If there is only one datasync.war
deployed, you do not even need to know the application name. The form of the -app
description is appname@host:port
, but you can leave out some pieces if they match the default of a single application on localhost port 7001.
The -app
option can be slow, as it has to perform many queries to the server, but it will print the URL that it finds, so you can use that with the -url
option on future invocations.
The following examples assume that CLASSPATH is set as previously described and that the default username/password of weblogic/weblogic is valid:
Find the application named p13nBase running on localhost port 7001:
java com.bea.p13n.management.data.DataRepositoryQuery -app p13nBase
Find the application named p13nBase running on snidely port 7501:
java com.bea.p13n.management.data.DataRepositoryQuery -app p13nBase@snidely:7501
Find the single application running on localhost port 7101:
java com.bea.p13n.management.data.DataRepositoryQuery -app @7101
Find the single application running on snidely port 7001:
java com.bea.p13n.management.data.DataRepositoryQuery -app @snidely
Find the single application running on snidely port 7501:
java com.bea.p13n.management.data.DataRepositoryQuery -app @snidely:7501
In each of the examples, the first line of output will be something like this:

Using url: http://snidely:7001/myApp/datasync/DataRepositoryQuery
The easiest way to use the tool is in shell mode. To use this mode, you just invoke DataRepositoryQuery
without any arguments (other than those needed to connect as described previously).
In this mode, the tool will start a command shell (you will see a drq>
prompt) where you can interactively type commands, similar to how you would use something like ftp
.
Alternatively, you can supply a single command (and its arguments), and DataRepositoryQuery
will run that command and exit.
The HELP command lists the commands you can use; HELP <command> gives help on a specific command. Commands are not case-sensitive, but arguments such as filenames and data item URIs may be.
Where multiple URIs are allowed (indicated by uri(s)
in the help), you can use simple wildcards, or you can specify multiple URIs. The result of the command includes all matching data items.
Options in square brackets ([]
) are optional and have the following meanings:
The following example retrieves all assets from the repository as files:
java com.bea.p13n.management.data.DataRepositoryQuery -app mget
When working with a production server with an uncompressed EAR, the only difference from development mode is that there is no poller thread.
When updating definition files using the WebLogic Administration Portal, the files are updated on the administration server in the deployed uncompressed EAR directory automatically. This means that the WebLogic Administration Portal can be used from any managed server in the cluster, but the primary store always resides on the administration server. If the deployable EAR directory is read-only, the WebLogic Administration Portal cannot be used to modify files.
Making sure you are not overwriting files - When working with an uncompressed EAR file in production, special care needs to be taken when working with definition files. When you redeploy your application to your production environment, the existing definition files are replaced. If you have administrators updating definitions using the WebLogic Administration Portal, their changes will be lost upon redeploying an updated application.
Copying back to development - To prevent overwriting any changes done by administrators to definition files when redeploying a new portal application, you must first copy all the definition files from the administration server back to development manually or using the Datasync Web application.
There are a number of general concepts to think about when iteratively deploying datasync definitions into a production system. In general, adding new datasync definitions to a production system is a routine process that you can do at any time. However, removing or making destructive modifications to datasync definitions can have unintended consequences if you are not careful.
When removing or making destructive modifications to datasync definitions, you should first consider whether there are other components that are linked to those components. There are several types of bindings that might exist between definitions. For some of these bindings, it is very important to understand that they may have been defined on the production server using the WebLogic Administration Portal and may not be known by the developers.
One example is two datasync definitions bound together, such as a campaign that is based on a user property defined in a user property set. If you remove the property set or the specific property, that campaign will no longer execute properly. In this case, you should update any associated datasync definitions before removing the property set or property.
A second scenario is that you have defined an entitlement rule that is bound to a datasync definition. For example, you might have locked down a portlet based on a dynamic role that checks if a user has a particular user property value. In this case, you should update that dynamic role before removing the property set or property.
A third scenario is that there are in-page bindings between datasync items and Portal JSP tags. An example is a <pz:contentSelector> tag that references a content selector. Update the content selector tag in the production environment before you remove the content selector. This is one type of binding that is only configured in WebLogic Workshop at development time rather than in the WebLogic Administration Portal.
A good guideline for developers is to not remove or make significant changes to existing datasync definitions that are in production. Instead, create new definitions with the changes that are needed. This can be accomplished by creating new versions of, for example, campaigns where there is no chance that they are being used in unanticipated ways. Additionally, do datasync bootstraps of the production system's existing datasync definitions back into development on a regular basis.
When you remove a property set, any existing values stored locally by the portal in the database are NOT removed automatically. Examine the PROPERTY_KEY and PROPERTY_VALUE tables to clean up the data if desired.
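As an illustrative sketch only: assuming PROPERTY_VALUE references PROPERTY_KEY through a PROPERTY_KEY_ID column and PROPERTY_KEY carries the property set and property names (these column names are assumptions; verify them against your installed WebLogic Portal schema before running anything), the orphaned data for a removed property set could be examined and cleaned up like this:

```sql
-- Hypothetical column names; confirm against your WebLogic Portal schema.
-- 1. Examine which rows would be affected.
SELECT pk.PROPERTY_KEY_ID
  FROM PROPERTY_KEY pk
 WHERE pk.PROPERTY_SET_NAME = 'removedPropertySet';

-- 2. Delete values first (they reference keys), then the keys themselves.
DELETE FROM PROPERTY_VALUE
 WHERE PROPERTY_KEY_ID IN (SELECT PROPERTY_KEY_ID
                             FROM PROPERTY_KEY
                            WHERE PROPERTY_SET_NAME = 'removedPropertySet');
DELETE FROM PROPERTY_KEY
 WHERE PROPERTY_SET_NAME = 'removedPropertySet';
```

Back up the database and run the SELECT first; only delete once you have confirmed the rows belong to the property set you removed.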
The previous sections provided instructions for deploying file-based portal enterprise applications and updating datasync definitions. The WebLogic Portal Propagation Utility lets you propagate application LDAP and database data from one server to another. The Propagation Utility lets you propagate the following data:
The propagation utility is a portal Web application packaged inside an enterprise application archive (.ear). The propagation utility is deployed on both the domain containing the LDAP and database data (the source server) and the domain that will receive that data (destination server).
Figure 8 shows the propagation utility interface.
Figure 8 The WebLogic Portal Propagation Utility
The propagation utility is a self-contained application that does not require you to open or configure it using WebLogic Workshop.
The main use case for the propagation utility is moving data from a staging environment to a production environment. However, another valid use case is moving data from production back to staging in order to simulate the current production environment on staging.
In a clustered environment, propagate only from the source administration server to the destination administration server. The data is then automatically propagated from the destination administration server to the managed servers in the cluster.
Database Requirements - You must propagate between the same type of database, and you must be able to simultaneously connect to the source and destination databases. You must also have a non-transactional (non-XA) database driver for your database installed on the source server.
The following sections show you how to download, deploy, and use the propagation utility.
Contact BEA Support for the latest version of the Propagation Utility.
The propagation utility download contains the following files:
This section shows you how to set up and deploy the propagation utility. Table 2 provides an overview of the configuration necessary on both the source and destination servers. Detailed instructions follow the table.
In the following installation steps, starting with step 6, the order for setting up JDBC, JMS, and deploying the propagation application is arbitrary; you can perform those steps in any order.
To install the WebLogic Portal Propagation Utility:
Extract pdef.zip to the domain root directory on both the source and destination servers. For example, if your domain directory on both servers is /myDomain, extract pdef.zip into that directory on both servers. The /myDomain/pdef/ directory is created automatically.
In the top-level <portal-propagation> element of both files, change the default values of the source-data-source-name and destination-data-source-name attributes. Make a note of the values you enter; you will use them in later steps to configure the JDBC database connections on the source server.
<portal-propagation
xmlns="http://www.bea.com/portal/xsd/propagation/2.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.bea.com/portal/xsd/propagation/2.0 portal-propagation_2_0.xsd"
source-data-source-name="srcDataSource"
destination-data-source-name="destDataSource"
insert-new-records="true"
update-existing-records="true">
Note: The engine that propagates portal database data needs a non-XA connection pool and data source on the source server for both the source and destination databases. Do not attempt to reuse the existing WebLogic Portal connection pool and data source for propagation, because the server will use a transaction manager and attempt to set up the database propagation as a distributed transaction. Propagation will then fail because the propagation engine needs to manage the transaction itself.
Your connection pools are configured. Now configure your data sources on the source server to correspond to the connection pools.
Enter the value of the source-data-source-name attribute you entered in the /pdef/pdef_81.xml file. For example, enter srcDataSource.

Figure 9 Deselect Honor Global Transactions to make the data source non-XA
Enter the value of the destination-data-source-name attribute you entered in the /pdef/pdef_81.xml file. For example, enter destDataSource.

Connection pool and data source setup is complete. Create connection pools and data sources on any servers that will serve as source servers. For example, set up connection pools and data sources on the production server if you want to propagate back to a staging server.
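For reference, in a WebLogic 8.1 domain the non-XA choice shows up in config.xml as a <JDBCDataSource> element (a <JDBCTxDataSource> element would honor global transactions). The sketch below uses placeholder pool names, URL, user, and targets; adapt them to your environment:

```xml
<!-- Illustrative non-XA pool for the source database; names, URL,
     and targets below are placeholders, not values from this guide. -->
<JDBCConnectionPool Name="propagationSrcPool"
    DriverName="oracle.jdbc.driver.OracleDriver"
    URL="jdbc:oracle:thin:@dbhost:1521:SRCDB"
    Properties="user=WEBLOGIC"
    Targets="myServer"/>

<!-- JDBCDataSource (rather than JDBCTxDataSource) gives a non-XA,
     non-global-transaction data source, matching the deselected
     Honor Global Transactions setting in the console. -->
<JDBCDataSource Name="srcDataSource"
    JNDIName="srcDataSource"
    PoolName="propagationSrcPool"
    Targets="myServer"/>
```

A matching pool and data source pair (for example, destDataSource pointed at the destination database) is needed for the destination connection, as described in the steps above.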
Note: You may see a list of existing JMS queues you have configured. You must still create the JMS queues for the propagation utility because it runs in its own Web application.
PortalPropagation.queue.AsyncDispatcher_error
PortalPropagation.queue.AsyncDispatcher
Browse to PortalPropagationEntApp.ear (in the directory to which you extracted the propagation utility). When PortalPropagationEntApp.ear appears in the list, select the option button next to it and click Continue, then complete the deployment of PortalPropagationEntApp.ear.
.Note: Windows only - If you receive the following type of exception while trying to deploy:
java.lang.InternalError: IO error while trying to compute name from: <path>

it probably means the path to your server exceeds 254 characters. As a workaround, shorten the path to the domain receiving the exception. For example, if your domain path is C:\bea\user_projects\domains\destinationDomain, move the domain to something like C:\destinationDomain.
Help with the Portlets - The propagation portlets are self-documented. For information on using a portlet, click the help icon on the portlet title bar.
The Portal Library contains books, pages, layouts, portlets, desktops, and other types of portal-specific resources. Using the WebLogic Administration Portal, administrators can create, modify, entitle, and arrange these resources to shape the portal desktops that end users access.
Figure 10 shows an image of the portal resource tree in the WebLogic Administration Portal. The library contains the global set of portlets and other resources, while the Portals node contains instances of those resources, such as the colorsSample desktop and its pages, books, and portlets.
Figure 10 Portal resources library
Each of these resources is partially defined in the portal database so it can be easily modified at run time. The majority of resources are created by an administrator, either from scratch or by creating a new desktop from an existing .portal template file that was created in WebLogic Workshop.

However, portlets themselves are created by developers and initially exist as XML files. In production, any existing .portlet files in a portal application are automatically read into the database so they are available to the WebLogic Administration Portal.
The following section addresses the lifecycle and storage mechanisms around portlets, since their deployment process is an important part of portal administration and management.
During development, .portlet files are stored as XML in any existing portal Web application in the Portal EAR. As a developer creates new .portlet files, a file poller thread monitors changes and loads the development database with the .portlet information.
In a production environment, .portlet files are loaded when the portal Web application that contains them is redeployed on the administration server. This redeployment timing ensures that the content of the portlet, such as a JSP or Page Flow, is available at the same time as the .portlet file is available in the Portal Library. The administration server acts as the master responsible for updating the database, which avoids the contention that would arise if every server in the production cluster tried to write the new portlet information into the database at the same time. When deploying new portlets to a production environment, target the portal application for redeployment on the administration server.
When a portlet is loaded into the database, the portlet XML is parsed and a number of tables are populated with information about the portlet, including PF_PORTLET_DEFINITION, PF_MARKUP_DEFINITION, PF_PORTLET_INSTANCE, PF_PORTLET_PREFERENCE, L10N_RESOURCE, and L10N_INTERSECTION.
PF_PORTLET_DEFINITION is the master record for the portlet and contains rows for properties that are defined for the portlet, such as the definition label, the forkable setting, edit URI, help URI, and so on. The definition label and Web application name are the unique identifying records for the portlet. Portlet definitions refer to the rest of the actual XML for the portlet that is stored in PF_MARKUP_DEF.
PF_MARKUP_DEF contains stored tokenized XML for the .portlet file. This means that the .portlet XML is parsed into the database and its properties are replaced with tokens. For example, here is a snippet of a tokenized portlet:
<netuix:portlet $(definitionLabel) $(title) $(renderCacheable) $(cacheExpires)>
These tokens are replaced by values from the master definition table in PF_PORTLET_DEFINITION, or by a customized instance of the portlet stored in PF_PORTLET_INSTANCE.
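The token-substitution step described above can be sketched with a small illustrative program. This is not the actual WebLogic Portal implementation, just a model of how $(name) tokens in the stored markup could be replaced by values drawn from the definition or instance tables:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TokenDemo {
    // Replace each $(name) token in the template with the matching
    // property value; unknown tokens collapse to an empty string.
    static String expand(String template, Map<String, String> props) {
        Matcher m = Pattern.compile("\\$\\(([^)]+)\\)").matcher(template);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            String value = props.getOrDefault(m.group(1), "");
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        // Hypothetical attribute values standing in for rows from
        // PF_PORTLET_DEFINITION or PF_PORTLET_INSTANCE.
        Map<String, String> props = new HashMap<>();
        props.put("definitionLabel", "definitionLabel=\"myPortlet\"");
        props.put("title", "title=\"My Portlet\"");
        props.put("renderCacheable", "renderCacheable=\"true\"");
        props.put("cacheExpires", "cacheExpires=\"300\"");

        String template =
            "<netuix:portlet $(definitionLabel) $(title) $(renderCacheable) $(cacheExpires)>";
        System.out.println(expand(template, props));
    }
}
```

Running TokenDemo prints the fully expanded <netuix:portlet ...> element with all four tokens filled in from the map.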
The following four types of portlet instances are recorded in the database for storing portlet properties:
PF_PORTLET_INSTANCE contains properties for portlet attributes such as DEFAULT_MINIMIZED, TITLE_BAR_ORIENTATION, and PORTLET_LABEL.
If a portlet has portlet preferences defined, those are stored in the PF_PORTLET_PREFERENCE table.
Finally, portlet titles can be internationalized. Those names are stored in the L10N_RESOURCE table, which is linked to PF_PORTLET_DEFINITION through L10N_INTERSECTION.
If a portlet is removed from a newly deployed portal application but has already been defined in the production database, it is marked with IS_PORTLET_FILE_DELETED in the PF_PORTLET_DEFINITION table. The portlet then appears grayed out in the WebLogic Administration Portal, and if it is still contained in a desktop instance, user requests for it return a message saying the portlet is unavailable.
One limitation of redeploying a portal application to a WebLogic cluster is that during redeployment users cannot access the site. For enterprise environments where it is not possible to schedule down time to update a portal application with new portlets and other components, a multi-cluster configuration lets you keep your portal application up and running during redeployment.
The basis for a multi-clustered environment is the notion that you have a secondary cluster to which user requests are routed while you update the portal application in your primary cluster.
For normal operations, all traffic is sent to the primary cluster, as shown in Figure 11. Traffic is not sent to the secondary cluster under normal conditions because the two clusters cannot use the same session cache. If traffic were sent to both clusters and one cluster failed, a user in the middle of a session on the failed cluster would be routed to the other cluster, and the user's session cache would be lost.
Figure 11 During normal operations, traffic is sent to the primary cluster
Step 1 - All traffic is routed to the secondary cluster, then the primary cluster is updated with a new Portal EAR, as shown in Figure 12. This EAR has a new portlet, which is loaded into the database.
Figure 12 Traffic is routed to the secondary cluster; the primary cluster is updated
Routing requests to the secondary cluster is a gradual process. Requests in progress on the primary cluster must first be allowed to finish, over a period of time, until no active requests remain. At that point, you can update the primary cluster with the new portal application.
Step 2 - All traffic is routed back to the primary cluster, and the secondary cluster is updated with the new EAR, as shown in Figure 13. Because the database was updated when the primary cluster was updated, the database is not updated when the secondary cluster is updated.
Figure 13 Traffic is routed back to the primary cluster; the secondary cluster is updated
Even though the secondary cluster does not receive traffic under normal conditions, you must still update it with the current portal application. When you next update the portal application, the secondary cluster will temporarily receive requests, and the current application must be available.
In summary, to upgrade a multi-clustered portal environment, you switch traffic away from your primary cluster to a secondary one that is pointed at the same portal database instance. You can then update the primary cluster and switch users back from the secondary. This switch can happen instantaneously, so the site experiences no down time. However, in this situation, any existing user sessions will be lost during the switches.
A more advanced scenario is a gradual switchover, where you switch new sessions to the secondary cluster, and after the primary cluster has no existing user sessions you upgrade it. Gradual switchovers can be managed using a variety of specialized hardware and software load balancers. For both scenarios, there are several general concepts that should be understood before deploying applications, including the portal cache and the impact of using a single database instance.
When you configure multiple clusters for your portal application, they will share the same database instance. This database instance stores configuration data for the portal. This can become an issue, because when you upgrade the primary cluster it is common to make changes to portal configuration information in the database. These changes are then picked up by the secondary cluster where users are working.
For example, redeploying a portal application with a new portlet to the primary cluster will add that portlet configuration information to the database. This new portlet will in turn be picked up on the secondary cluster. However, the new content (JSP pages or Page Flows) that is referenced by the portlet is not deployed on the secondary cluster.
Portlets are invoked only when they are part of a desktop, so having them available to the secondary cluster has no immediate effect on the portal that users see. However, adding a new portlet to a desktop with the WebLogic Administration Portal immediately affects the desktop that users see on the secondary cluster. In that case, the portlet would show up, but its contents would not be found.
To handle this situation you have several options. First, you can delay adding the portlet to any desktop instances until all users are back on the primary cluster. Another option is to entitle the portlet in the library so that it will not be viewable by any users on the secondary cluster. Then add the portlet to the desktop, and once all users have been moved back to the primary cluster, remove or modify that entitlement.
A special case to be aware of is if you are updating an existing portlet's content URI to a new location that is not yet deployed. For this reason, updating the content URI of a portlet should be done with care or as part of a multi-phase update.
Another important consideration when running two portal clusters simultaneously against the same database is the portal cache.
WebLogic Portal provides facilities for a sophisticated cluster-aware cache. This cache is used by a number of different portal frameworks to cache everything from markup definitions to portlet preferences. Additionally, developers can define their own caches using the portal cache framework. The portal cache is configured in the WebLogic Administration Portal under Configuration Settings / Service Administration / Cache Manager. For any cache, you can enable or disable it, set a time to live, set a maximum size, flush the entire cache, or invalidate a specific key.
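As a minimal sketch of the semantics the Cache Manager exposes (time to live, maximum size with eviction, flush, and per-key invalidation), consider the illustrative cache below. It only models those operations; it is not the portal cache API, which is additionally cluster-aware and configured administratively:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative TTL + max-size cache modeling the Cache Manager settings.
public class SimpleCache<K, V> {
    private final long ttlMillis;
    private final int maxEntries;
    private final Map<K, Entry<V>> store;

    private static final class Entry<V> {
        final V value;
        final long expiresAt;
        Entry(V value, long expiresAt) { this.value = value; this.expiresAt = expiresAt; }
    }

    public SimpleCache(long ttlMillis, int maxEntries) {
        this.ttlMillis = ttlMillis;
        this.maxEntries = maxEntries;
        // Access-ordered map: evict the least-recently-used entry
        // once maxEntries is exceeded (models the max-size setting).
        this.store = new LinkedHashMap<K, Entry<V>>(16, 0.75f, true) {
            @Override protected boolean removeEldestEntry(Map.Entry<K, Entry<V>> e) {
                return size() > SimpleCache.this.maxEntries;
            }
        };
    }

    public synchronized void put(K key, V value) {
        store.put(key, new Entry<>(value, System.currentTimeMillis() + ttlMillis));
    }

    public synchronized V get(K key) {
        Entry<V> e = store.get(key);
        if (e == null) return null;
        // Expired entries are dropped lazily on read (models time to live).
        if (System.currentTimeMillis() > e.expiresAt) { store.remove(key); return null; }
        return e.value;
    }

    public synchronized void invalidate(K key) { store.remove(key); } // per-key invalidation
    public synchronized void flush() { store.clear(); }               // flush entire cache

    public static void main(String[] args) {
        SimpleCache<String, String> cache = new SimpleCache<>(60_000, 100);
        cache.put("markup:myPortlet", "<netuix:portlet ...>");
        System.out.println(cache.get("markup:myPortlet"));
        cache.invalidate("markup:myPortlet");
        System.out.println(cache.get("markup:myPortlet")); // null after invalidation
    }
}
```

In the real portal cache, the invalidation step is also broadcast to the other managed servers in the cluster, which is exactly the mechanism the following paragraphs warn does not span separate clusters.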
When a portal framework asset that is cached is updated, it will typically write something to the database and automatically invalidate the cache across all machines in the cluster. This process keeps the cache in sync for users on any managed server.
When operating a multi-clustered environment for application redeployment, special care needs to be taken with regard to the cache. The cache invalidation mechanism does not span both clusters, so it is possible to make changes on one cluster that are written to the database but not picked up immediately on the other cluster. Because this situation could lead to system instability, it is recommended that the caches be disabled on both clusters during the user migration window. This is especially important when you have a gradual switchover between clusters rather than a hard switch that drops existing user sessions.