49 Upgrade Considerations
Coherence has many features, several of which must be used correctly to ensure that an application can be successfully upgraded.
This chapter includes the following sections:
- Application Recompilation
Whenever an application upgrades its Coherence version (even for a minor patch update), it is recommended to recompile the application code against the new Coherence version.
- Serialization
Every class that will be sent over the network between different JVMs in the application must be serializable, using either the default Coherence serialization (Java) or POF serialization. This applies to both cluster members and Coherence client processes.
- Cache Keys and Values
You must take into consideration how changes to cache keys and values may affect serialization.
- Entry Processors
If an application uses custom entry processors, then these need to be compatible across versions. This includes serialization evolvability, ideally using POF serialization.
- Filters
If an application uses custom filter implementations, they must be compatible across versions. This includes serialization evolvability, ideally using POF serialization.
- Aggregators
If an application uses custom aggregators, then these should be compatible across versions. This includes serialization evolvability, ideally using POF serialization.
- Value Extractors
If an application uses custom extractors, these should be serialization compatible. Adding or removing fields must be done in an evolvable way, ideally using POF evolvability.
- Use of Lambdas
Lambdas used in Coherence APIs must be treated like any other class in a Coherence application. Typically, a lambda used as a Coherence API method parameter means the lambda will be serialized and sent to the server to be executed.
- Topics
Coherence topics, like caches, store serialized data on the server. The classes used for values published to topics should be serialization compatible and evolvable.
- Cache Loaders and Cache Stores
Applications that use cache loaders or cache stores to access an external data source must still be able to load data from and write data to the external data source.
- Coherence Clients
Client application code (both Extend and gRPC) must be written to have downward and upward compatible functionality. Although Coherence itself guarantees clients can connect to Coherence clusters on different versions (including different major versions), application code must be written to support this.
- Cache Configuration Changes
There are a number of things that can be changed in a cache configuration file as part of an upgrade. There are also things that cannot be changed without a full cluster shutdown.
- Operational Configuration Changes
There are a number of Coherence configuration items that can be changed in the Operational Configuration file (or override file), some of which can be changed in a rolling update and some of which cannot.
- Security and SSL/TLS
Security in Coherence has a number of features.
- Persistence
Applications that use Coherence persistence should ensure that the classes used for cache keys and values are serialization compatible and evolvable.
- Federation
Federation copies data between different Coherence clusters. The classes used in federated cache keys and values must be evolvable across the versions so that data can successfully be sent between different clusters.
- Executor Service
The executor service that is part of the Coherence Concurrent module executes tasks remotely in the Coherence cluster. These tasks are serialized and sent to other cluster members for execution, so they must be evolvable like any other class sent over the network in Coherence.
Parent topic: Building Upgradable Coherence Applications
Application Recompilation
Whenever an application upgrades its Coherence version (even for a minor patch update), it is recommended to recompile the application code against the new Coherence version.
Ideally, recompiling the application should also include a full continuous integration build and test run. Coherence APIs are guaranteed to be downward-compatible for patch releases, but they may not be binary compatible. This means that an application should not require code changes for a minor Coherence patch upgrade, but it may require recompilation to work correctly.
Application developers should always consult the release notes when upgrading Coherence. In particular, check for deprecations so that application code can be changed to use alternatives. Deprecated code is not removed in a patch but may be removed in a major Coherence upgrade.
Parent topic: Upgrade Considerations
Serialization
Every class that will be sent over the network between different JVMs in the application must be serializable, using either the default Coherence serialization (Java) or POF serialization. This applies to both cluster members and Coherence client processes.
When upgrading to a new version of an application where classes have changed, there are a number of rules to consider depending on how the classes are used, which are detailed in the following sections. For example, Cache Keys and Values.
Although Java serialization is the default serializer used in Coherence, applications that require seamless upgrades should use Coherence POF instead. Java serialization is not downward-compatible without significant effort and is not properly evolvable. POF, on the other hand, can be written to be downward- and upward-compatible, and is fully evolvable as well as platform independent.
You cannot change the serializer used by a cache service in a rolling upgrade. For example, an application cannot change from Java serialization to POF serialization.
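The tolerance that evolvable serialization provides can be illustrated with a deliberately simplified sketch in plain Java streams (this is NOT the Coherence POF API; all class and field names here are invented): a reader built for an older version of a class can cope with data written by a newer version, as long as the fields it understands come first and trailing data it does not understand can be ignored.

```java
import java.io.*;

// Simplified illustration of evolvable serialization (not the Coherence POF
// API): the "v1" reader understands only the first two fields and never reads
// the trailing field that the "v2" writer added, so old code tolerates new data.
public class EvolvableDemo {
    // "v2" writer: name and age, plus an email field added in version 2.
    static byte[] writeV2(String name, int age, String email) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bytes);
            out.writeUTF(name);
            out.writeInt(age);
            out.writeUTF(email); // new in v2
            return bytes.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // "v1" reader: knows only name and age; trailing bytes are simply unread.
    static String readV1(byte[] data) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
            String name = in.readUTF();
            int age = in.readInt();
            return name + ":" + age;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(readV1(writeV2("alice", 42, "alice@example.com"))); // alice:42
    }
}
```

Coherence POF achieves this more robustly than field ordering alone: each field is written with a numeric identifier and the data carries a version, so unknown fields can be skipped and even preserved for round-tripping.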
- Adding or Removing Classes
Take care when adding or removing classes during upgrades.
- Enum Classes
If you use enum types in applications, then the same rules that apply to normal classes also apply to enum classes, plus some additional considerations.
Parent topic: Upgrade Considerations
Adding or Removing Classes
Take care when adding or removing classes during upgrades.
If a new class is added that can be serialized and sent over the network to another JVM in the application, then you must ensure that this class will not be sent to any JVM that does not have the new class during the rolling upgrade.
If a class is removed that can be serialized and sent over the network to another JVM in the application, then you must ensure that instances of this class will not be sent to any JVM running the new version, which no longer has the class, during the rolling upgrade.
A general solution for this is to upgrade in two phases, if possible:
- In the first upgrade, add the new classes but do not add any functionality that uses those classes.
- In the second upgrade, add the rest of the code.
For example, in an application where the application code runs in storage disabled cluster members or Extend clients, and the storage enabled members are separate, you would upgrade the storage enabled members first, because no application code would yet be using the new classes, and then upgrade the application members.
Parent topic: Serialization
Enum Classes
If you use enum types in applications, then the same rules that apply to normal classes also apply to enum classes, plus some additional considerations.
Do not change the order of the enum values between versions, as some code may rely on an enumeration value's ordinal, which will change if the values are re-ordered.
For example, if the Day enum began with SUNDAY (as in the following example):
public enum Day {
SUNDAY, MONDAY, TUESDAY, WEDNESDAY,
THURSDAY, FRIDAY, SATURDAY
}
Then, moving SUNDAY to the end of the values (as in the following example) might result in a breaking change:
public enum Day {
MONDAY, TUESDAY, WEDNESDAY,
THURSDAY, FRIDAY, SATURDAY, SUNDAY
}
You should also not remove existing values from the enum type because one of those values may be received from a Coherence member still running the old version of the application. Removing a value will also change the ordinals of any remaining values, which will be a breaking change.
You should take care when adding new values to an enum. The new values must always be added to the end of the list of values for the enum. Existing applications will not be able to deserialize the new enum value so it is important that all Coherence members have been upgraded to the new code before the application starts to use the new value.
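The effect of re-ordering can be shown directly by placing two hypothetical versions of the Day enum side by side:

```java
// Hypothetical "old" (V1) and "new" (V2) versions of the same enum, shown
// side by side to demonstrate how moving SUNDAY shifts every ordinal.
public class EnumOrdinalDemo {
    enum DayV1 { SUNDAY, MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY }
    enum DayV2 { MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY }

    public static void main(String[] args) {
        // In V1, MONDAY has ordinal 1; in V2 it has ordinal 0.
        System.out.println(DayV1.MONDAY.ordinal()); // 1
        System.out.println(DayV2.MONDAY.ordinal()); // 0
        // Any serialized form or comparison based on ordinals now disagrees
        // between members running different versions.
    }
}
```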
Parent topic: Serialization
Cache Keys and Values
You must take into consideration how changes to cache keys and values may affect serialization.
Key Classes
Changes can be made to cache key classes as long as their serialized form remains the same, and their equals and hashCode methods are compatible. Instances of both the new version of a key class and the old version of that class that represent the same cache key value must serialize to the equivalent Coherence Binary value.
See Using Portable Object Format.
Fields can be added if:
- The new fields are transient (that is, they are not included in the serialized binary representation of the key)
- The new fields are not part of the equals and hashCode methods.
If using POF, the POF identifier for the key class cannot be changed; that is, the value used for the type-id element in the POF configuration must remain the same across versions. The POF identifier is stored as part of the serialized binary value, so changing it would change the serialized form of the key data.
Additionally, if using POF, you also cannot change the POF identifiers for fields in the key. The identifier for each field is stored as part of the serialized binary value, so changing it would change the serialized form of the key data.
Methods can be added to or removed from a key class, as these do not affect the serialized form of the key.
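As a sketch of these rules, consider a hypothetical key class (not a Coherence API type, and the field names are invented) that adds a display-name field safely: the new field is transient and excluded from equals and hashCode, so key identity and the serialized form are unchanged.

```java
import java.util.Objects;

// Hypothetical cache key: "id" is the identifying, serialized state.
// "displayName" is a field added in a newer version; it is transient and
// excluded from equals/hashCode, so old and new instances with the same id
// still represent the same cache key.
public class OrderKey {
    private final String id;
    private transient String displayName; // new field, not part of identity

    public OrderKey(String id) { this.id = id; }

    public void setDisplayName(String name) { this.displayName = name; }

    @Override
    public boolean equals(Object o) {
        return o instanceof OrderKey && Objects.equals(id, ((OrderKey) o).id);
    }

    @Override
    public int hashCode() { return Objects.hash(id); }
}
```

A new-version instance with the display name set and an old-version instance without it compare equal and hash identically, which is exactly the compatibility the rules above require.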
Value Classes
Classes used for cache values should be written to be evolvable. Upward and downward serialization evolvability can be achieved by using POF serialization.
Serialization compatibility is not only important for the key and value classes themselves, but also the types of any non-transient fields in keys and values must also be compatible.
Parent topic: Upgrade Considerations
Entry Processors
If an application uses custom entry processors, then these need to be compatible across versions. This includes serialization evolvability, ideally using POF serialization.
An entry processor must return the same result type as the previous version (or a compatible subclass of that type). An entry processor may be executed during an upgrade, so a caller on the old version of the application expects to receive the same result type.
If the new version of an entry processor introduces new fields, the code in the process method must allow for those fields to be null (or some other default set during deserialization). If an old version of the application executed the entry processor, then it will not have been able to set those new fields, so if the processor executes on a storage member with the new version of the class, the fields will be missing when deserialized.
A new version of an entry processor should still populate fields of the old version if those fields are mandatory for the old version to execute without failure.
Introducing a new entry processor class can only be done if it can be guaranteed that the processor will not be executed until all the storage enabled cluster members have been upgraded. If a new version of the application tries to invoke an entry processor and that invoke call targets a storage member running the old version of the code, then the caller will receive an exception with a root cause of ClassNotFoundException.
Removing an existing entry processor class from an application should only be done when it can be guaranteed no parts of the application will try to invoke that entry processor.
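The field-defaulting advice above can be sketched in plain Java (a hypothetical processor class with invented names; in a real application the fallback would typically be applied where the object is deserialized, for example in a POF readExternal method):

```java
// Hypothetical new-version entry processor logic: "multiplier" was added in
// version 2. Payloads serialized by the old version will not carry it, so the
// processing logic falls back to a safe default instead of failing.
public class AdjustPriceProcessor {
    private final int delta;          // present in both versions
    private final Integer multiplier; // added in v2; null when old data is read

    public AdjustPriceProcessor(int delta, Integer multiplier) {
        this.delta = delta;
        this.multiplier = multiplier;
    }

    public int process(int price) {
        // Treat a missing multiplier as 1 so old-version payloads still work.
        int m = (multiplier == null) ? 1 : multiplier;
        return price * m + delta;
    }
}
```

With the default in place, a payload from an old-version caller (multiplier absent) and one from a new-version caller both execute correctly on an upgraded member.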
Parent topic: Upgrade Considerations
Filters
If an application uses custom filter implementations, they must be compatible across versions. This includes serialization evolvability, ideally using POF serialization.
Filters can be used to register for events using a MapListener, so they must have suitable implementations of equals and hashCode. If these methods change so that a new implementation does not equal an old implementation, then the application may not receive the correct events or may receive no events.
If a new filter implementation introduces new fields, the filter methods must be able to execute when that field is not set. If an old version of the application executes a method using the old version of the filter, when it is deserialized for execution on a cluster member running the new version, those fields will not be set. The new version should allow for this or should set suitable default values in its constructor or when deserialized.
Introducing new filter classes can only be done if it can be guaranteed that the new filter class will not be used until all of the storage enabled cluster members have been upgraded. If a new version of the application tries to execute a cache query, add a MapListener, or perform any other operation involving the new filter class, that operation will fail if it executes on a cluster member that is not yet upgraded.
Custom filter implementations can only be removed once it can be guaranteed that no existing versions of the application will execute methods requiring those filters during the upgrade.
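A minimal sketch of why filter equality matters (a hypothetical filter class, not the Coherence Filter interface): listener registrations are effectively keyed by filter equality, so an equal instance built later, possibly by an upgraded member, must still find the original registration.

```java
import java.util.HashMap;
import java.util.Map;

// Registrations stored keyed by the filter: if a new version of the filter
// class changed equals/hashCode, lookups against registrations made by the
// old version would silently miss.
public class FilterRegistryDemo {
    static final class AgeFilter {
        final int minAge;
        AgeFilter(int minAge) { this.minAge = minAge; }
        @Override public boolean equals(Object o) {
            return o instanceof AgeFilter && ((AgeFilter) o).minAge == minAge;
        }
        @Override public int hashCode() { return minAge; }
    }

    public static void main(String[] args) {
        Map<AgeFilter, String> listeners = new HashMap<>();
        listeners.put(new AgeFilter(18), "adult-listener");
        // A separately constructed but equal filter still finds the listener.
        System.out.println(listeners.get(new AgeFilter(18))); // adult-listener
    }
}
```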
Parent topic: Upgrade Considerations
Aggregators
If an application uses custom aggregators, then these should be compatible across versions. This includes serialization evolvability, ideally using POF serialization.
The new version of the aggregator should also return the same result type as previous versions. If code in the old version of the application invokes the aggregator and the aggregator runs on a storage member running the new version, the calling code will expect to receive the same result type (or a compatible subclass of that result type).
The aggregator's partial result type must also remain compatible across versions. When the aggregator runs the reduce phase, where the results from different members are collected and reduced to a single result, the member running the reduction could receive partial results from storage members running different versions of the aggregator.
If the new version of an aggregator class introduces new fields, the aggregator must still work if those fields are not set during deserialization, or it should set suitable default values in the constructor or when deserialized. If during upgrade the aggregator is invoked from an old version of the application and executes on a storage member running the new version, the new fields will not be present when deserializing.
Introducing a new aggregator class can only be done if it can be guaranteed that the aggregator will not be executed until all the storage enabled cluster members have been upgraded. If a new version of the application tries to invoke an aggregator and that invoke call targets a storage member running the old version of the code, then the caller will receive an exception with a root cause of ClassNotFoundException.
Custom aggregator implementations can only be removed once it can be guaranteed that no existing versions of the application will execute methods requiring those aggregators during the upgrade.
Parent topic: Upgrade Considerations
Value Extractors
If an application uses custom extractors, these should be serialization compatible. Adding or removing fields must be done in an evolvable way, ideally using POF evolvability.
Extractors that are used to extract a value to be returned to a calling application should return the same type as the previous version.
Value extractors are used to define indexes in Coherence. A specific index is identified from a Map keyed by the value extractor. If the implementation of a value extractor changes such that it is no longer equal to the previous version, then indexes may not be used, or even the wrong index may be used.
Introducing a new value extractor class can only be done if it can be guaranteed that the value extractor will not be executed until all the storage enabled cluster members have been upgraded. If a new version of the application tries to invoke a value extractor and that invoke call targets a storage member running the old version of the code, then the caller will receive an exception with a root cause of ClassNotFoundException.
Custom value extractor implementations can only be removed once it can be guaranteed that no existing versions of the application will execute methods requiring those value extractors during the upgrade.
Parent topic: Upgrade Considerations
Use of Lambdas
Lambdas used in Coherence APIs must be treated like any other class in a Coherence application. Typically, a lambda used as a Coherence API method parameter means the lambda will be serialized and sent to the server to be executed.
Coherence uses lambdas in two ways, dynamic lambdas and static lambdas. For dynamic lambdas, both the lambda state and the code are serialized and sent to the server to be executed. For static lambdas, only the state is serialized and the lambda code must exist on the server already.
To fully support rolling upgrades, you should use dynamic lambdas. See Processing Entries Using Lambda Expressions.
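The state-capture half of this can be illustrated in plain Java (this uses standard Java serialization of a lambda, not Coherence's dynamic lambda mechanism; the interface and method names are invented): the variables a lambda captures travel with it when it is serialized.

```java
import java.io.*;
import java.util.function.Function;

public class SerializableLambdaDemo {
    // A lambda is serializable only if its target type is Serializable.
    interface SerFunction<T, R> extends Function<T, R>, Serializable {}

    // Round-trip any serializable object through Java serialization.
    @SuppressWarnings("unchecked")
    static <T> T roundTrip(T obj) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            new ObjectOutputStream(bytes).writeObject(obj);
            return (T) new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray())).readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        int bonus = 7; // captured state: serialized along with the lambda
        SerFunction<Integer, Integer> f = x -> x + bonus;
        Function<Integer, Integer> copy = roundTrip(f);
        System.out.println(copy.apply(10)); // 17
    }
}
```

With static lambdas only this captured state is sent and the lambda's code must already exist on the server, which is why dynamic lambdas (which also ship the code) are the safer choice during a rolling upgrade.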
Parent topic: Upgrade Considerations
Topics
Coherence topics, like caches, store serialized data on the server. The classes used for values published to topics should be serialization compatible and evolvable.
During a rolling upgrade, a topic continues to work without loss of published elements. If the upgrade involves upgrading application JVMs containing topic subscribers, then uncommitted elements will be resent to new subscribers as the old application members are shut down. Coherence guarantees at-least-once delivery, so application code that processes messages should always be written to be idempotent to allow for messages being received more than once. See Using Portable Object Format.
Parent topic: Upgrade Considerations
Cache Loaders and Cache Stores
Applications that use cache loaders or cache stores to access an external data source must still be able to load data from and write data to the external data source.
Writing evolvable database schemas is far beyond the scope of this documentation, but if a database schema is updated as part of an application upgrade, there may still be cluster members running the old version of the code trying to read from or write to the new database schema.
Cache stores are application code and hence can interact with anything, not just a database, so any changes to the external data sources used by cache stores must be downward and upward compatible.
Parent topic: Upgrade Considerations
Coherence Clients
Client application code (both Extend and gRPC) must be written to have downward and upward compatible functionality. Although Coherence itself guarantees clients can connect to Coherence clusters on different versions (including different major versions), application code must be written to support this.
Client application code must also be written to survive disconnection, which will happen during a rolling upgrade. Coherence will reconnect a client automatically on the next request during a rolling upgrade, but any request that was in-flight when the client was disconnected may throw an exception on the client. In-flight requests may actually complete on the server, so applications that re-try failed requests must make sure these requests are idempotent, that is, they can safely be replayed more than once. For example, a put request from a client that is in progress on the storage member when the proxy is killed may throw an exception on the client, but the put will succeed on the storage member. If the client retries the put, the put will happen twice. This may be fine, but side effects of the put will occur twice; for example, applications that perform other processing based on events would receive two events for that entry.
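One common way to make such retries safe is to tag each logical operation with a client-generated request ID and have the handling side ignore duplicates. The sketch below is plain Java with an in-memory map standing in for the cache; all names are invented and this is not a Coherence API:

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of idempotent request handling: the handler remembers which request
// IDs it has already applied, so a retried put is applied (and fires its
// side-effect event) exactly once.
public class IdempotentPutDemo {
    private final Map<String, String> store = new ConcurrentHashMap<>();
    private final Set<String> applied = ConcurrentHashMap.newKeySet();
    private int eventCount; // how many "entry updated" events have fired

    // Returns true if the put was applied, false if it was a duplicate retry.
    public synchronized boolean put(String requestId, String key, String value) {
        if (!applied.add(requestId)) {
            return false; // already applied: the retry is a no-op
        }
        store.put(key, value);
        eventCount++; // the side effect happens only on the first application
        return true;
    }

    public int events() { return eventCount; }
}
```

Calling put twice with the same requestId stores the value once and fires one event, which is the behavior a retrying client needs from the server side.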
Parent topic: Upgrade Considerations
Cache Configuration Changes
There are a number of things that can be changed in a cache configuration file as part of an upgrade. There are also things that cannot be changed without a full cluster shutdown.
A few examples to consider:
- Adding Caches - A new version of an application may require a new cache to be created. If the cache configuration file in the old version does not have a mapping for the new cache, the upgrade could fail.
- Removing Caches - If a cache mapping is removed, then this can cause an exception when a cluster member using the new version of the configuration receives a cache create request from a member running the old version of the application.
- Changing Cache Types - It is not always possible to change the type of a cache in a rolling upgrade. For example, you cannot change from a legacy replicated cache to a distributed cache. Some cache types can be changed; for example, it would be possible to use a near cache on a new version of the application, as this is local to the new JVM.
Adding Caches
If a new cache is added to a cache configuration file and that cache mapping maps to a totally new scheme and service name for the new version of the application, then this will work in a rolling upgrade. Only cluster members with the new configuration will start the new service and have the new cache. Existing members will not receive requests for the new cache.
The problem occurs when a new cache is added to the application and that maps to an existing service. When application code requests the new cache, the request to create that cache will go to all members running the same cache service. This will include members running the old version but they will have no mapping for that cache and therefore will throw an exception.
To make adding or removing caches easier, use wildcard names instead of fixed cache names in the cache configuration file.
In Example 49-1, the cache configuration file has mappings that are inflexible and difficult to upgrade. It only supports two cache names: foo and bar. If an application needs to add another cache where storage enabled members use this configuration, then the upgrade will not work.
Example 49-1 A cache configuration file with fixed mappings
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
coherence-cache-config.xsd">
<caching-scheme-mapping>
<cache-mapping>
<cache-name>foo</cache-name>
<scheme-name>storage</scheme-name>
</cache-mapping>
<cache-mapping>
<cache-name>bar</cache-name>
<scheme-name>storage</scheme-name>
</cache-mapping>
  </caching-scheme-mapping>
</cache-config>
In Example 49-2, the cache configuration file uses wildcard mappings. It has a single mapping that uses the wildcard character * to map all cache names to the scheme named storage-scheme. Storage enabled members with this configuration support any cache name.
Example 49-2 A cache configuration file using wildcard mappings
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
coherence-cache-config.xsd">
<caching-scheme-mapping>
<cache-mapping>
<cache-name>*</cache-name>
<scheme-name>storage-scheme</scheme-name>
</cache-mapping>
  </caching-scheme-mapping>
</cache-config>
However, in applications where caches have to map to different schemes with different configurations, a single wildcard mapping is not enough. In this case, an application would need to use wildcard mappings with prefixes.
In Example 49-3, the cache configuration file has two mappings, for foo-* and bar-*. This means any cache name that starts with foo- will map to the foo-storage scheme: foo-, foo-1, foo-abc, and so on. The same applies for cache names prefixed with bar-. As long as a new version of the application introduces new cache names that match one of the existing mapping names, the upgrade will work.
Example 49-3 A cache configuration file using wildcard mappings with prefixes
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
coherence-cache-config.xsd">
<caching-scheme-mapping>
<cache-mapping>
<cache-name>foo-*</cache-name>
<scheme-name>foo-storage</scheme-name>
</cache-mapping>
<cache-mapping>
<cache-name>bar-*</cache-name>
      <scheme-name>bar-storage</scheme-name>
</cache-mapping>
  </caching-scheme-mapping>
</cache-config>
Removing Caches
Caches cannot easily be removed from a cache configuration during an upgrade.
During the upgrade process, when a cache service on a new member joins the cluster, it receives cache creation messages for all the cache names that exist for that service on the existing cluster senior member. If this includes a cache name that does not have a mapping in the new member's cache configuration, then there will be an exception.
If a cache is to be removed from a cache configuration file, that cache must have been destroyed on the existing cluster members before the upgrade starts. The only way to destroy a cache is in application code. Destroying the cache on the old cluster members must not cause the existing application to fail, so that version of the application must be able to run without the cache and must not have code that would try to recreate the cache after it is destroyed.
Changing Cache Types
It is not possible to change the basic underlying type of a cache during an upgrade. For example, replicated caches are deprecated, but it is not possible to change a cache from a replicated to a distributed cache without a full cluster shutdown.
The following changes to cache type are allowed in an upgrade:
- NearCache - It should be possible to change a cache from or to a near cache, as this is a local configuration on the Coherence JVM.
- ViewScheme - It should be possible to change from or to a view scheme, as this is a local view over a distributed cache.
- ReadWriteBackingMap - Changing the backing map for a cache from or to a read/write backing map should be possible, with some caveats. If the new version uses a cache store, then only data sent to new members will be written to the external data source during the upgrade; any data changes that happen on existing members will not.
- Backing Map Type Changes - It should be possible to change the type of the backing map configuration in an upgrade, for example, changing from a local scheme to Caffeine, or to Elastic Data, or vice versa. The backing map configures how an individual cluster member stores the cache data, so there are no compatibility issues with other members during the upgrade.
- Backing Map Configuration - Changing some of the backing map configuration that is only local to that JVM is also possible, for example, changes to high or low units, unit calculator, expiry delay, and so on.
Parent topic: Upgrade Considerations
Operational Configuration Changes
There are a number of Coherence configuration items that can be changed in the Operational Configuration file (or override file), some of which can be changed in a rolling update and some of which cannot.
Parent topic: Upgrade Considerations
Security and SSL/TLS
Security in Coherence has a number of features:
- SSL/TLS changes for cluster members
- SSL/TLS changes for Extend and gRPC clients
- Identity assertion
- Storage access authorization
See Introduction to Oracle Coherence Security in Securing Oracle Coherence.
SSL/TLS Changes for Cluster Members
Any changes to the SSL/TLS configuration in Coherence must be compatible with the existing cluster members and clients when upgrading.
It is not possible to change cluster communication from non-SSL/TLS to SSL/TLS (or vice-versa) in a rolling upgrade. The new members will not be able to form a cluster with the existing members.
Changing from one-way to two-way authentication is only possible if the existing members can supply a valid certificate to the new members.
Changing hostname verification can only be done if the existing members will still be verified during the upgrade.
See Using SSL to Secure TCMP Communication in Securing Oracle Coherence.
SSL/TLS Changes for Extend and gRPC Clients
Similarly to cluster membership, changes to the SSL/TLS configuration for Extend or gRPC proxies should be made in such a way that existing clients can still connect to the new proxies. It is not possible to change a proxy from non-SSL/TLS to SSL/TLS (or vice-versa), as existing clients will not be able to connect to the new proxies. One solution is to introduce new proxies running the changed configuration alongside the existing proxy configurations, so that each Coherence server runs two proxies, one SSL/TLS and one non-SSL/TLS. Existing and new clients can then connect to the relevant proxy. After the upgrade, a further upgrade can remove the old proxy configuration.
Alternatively, if clients can be shut down during the upgrade, then there is no problem with any change to the SSL/TLS configuration on the proxies.
SSL/TLS changes in Extend or gRPC clients should also be done in such a way that the new clients can still connect to the existing proxies, or ensure that new clients are configured to only connect to the new proxies during the upgrade. See Using SSL to Secure Extend Client Communication in Securing Oracle Coherence.
Identity Assertion
Coherence Extend can be configured to use identity assertion as a mechanism to pass tokens from the client that are verified on the server when a connection is made. Any changes to the identity assertion code, either on the clients or proxies must be done in a compatible way so that old clients can still connect to new proxies and new clients can connect to old proxies.
The tokens sent by the client are serialized on the client and deserialized on the server. The updated token classes used must be serialization compatible and still be recognized on the existing cluster members.
Removing the use of identity assertion is possible in an upgrade, provided it is removed by upgrading the server-side proxies first.
Alternatively, if clients can be shut down during the upgrade then there is no problem with any change to the identity assertion configuration and code on the proxies. See Securing Extend Client Connections in Securing Oracle Coherence.
Storage Access Authorization
If using the Storage Access Authorization feature to authorize operations on caches on the storage enabled cluster members, any changes must be compatible. See Authorizing Access to Server-Side Operations in Securing Oracle Coherence.
Parent topic: Upgrade Considerations
Persistence
Applications that use Coherence persistence should ensure that the classes used for cache keys and values are serialization compatible and evolvable.
If a cluster member tries to read persistence files stored by a different version of the application, this may fail if the data is not compatible. The actual issues may not be immediately visible, as the serialized binary data will be read from files and loaded into the cache in its serialized binary form, but the application will later fail when it tries to deserialize that data.
For more information, see:
- Persisting Caches in Administering Oracle Coherence
In particular, consider the following areas when performing a rolling upgrade:
Java Version
If you want servers from different Coherence versions to run in the same cluster, then you must use the same major Java version before and after the upgrade.
Serialization Format
The general rules for Java serialization compatibility apply to persistence as well. See Serialization.
If you use Coherence POF serialization and there are any changes to the cache data classes (keys, values, or both), then you can maintain serialization compatibility by implementing the EvolvablePortableObject interface. See Evolvable Portable User Types in Developing Remote Clients for Oracle Coherence.
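The idea behind POF evolvability can be sketched in standalone form with plain java.io (this is an illustration of the versioning pattern, not the Coherence POF API): each payload carries a data version, and a reader built for an older version reads the fields it knows about and tolerates trailing fields written by a newer version.

```java
import java.io.*;

public class EvolvableDemo {
    // A version-2 writer: version marker, the original v1 field, then a
    // field added in v2.
    public static byte[] writeV2(String name, int priority) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(buf);
            out.writeInt(2);         // data version
            out.writeUTF(name);      // field present since v1
            out.writeInt(priority);  // field added in v2
            return buf.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // A version-1 reader: reads only the fields it knows about. Real POF
    // evolvability additionally preserves the unread remainder so it
    // survives a read-modify-write cycle on an older member.
    public static String readAsV1(byte[] data) {
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(data))) {
            in.readInt();        // data version (2 here, newer than this reader)
            return in.readUTF(); // the v1 field; trailing v2 bytes are ignored
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(readAsV1(writeV2("order-42", 7))); // prints "order-42"
    }
}
```

Classes serialized this way can be read by an older application version, which is what allows persistence files written after the upgrade to remain loadable if a rollback is needed.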
If the serialization format changes, for example, from Java to POF, then you cannot load previously created snapshots or archives.
Changes to Persistence Location
If you want to change the persistence location, then you need to shut down the server and move or copy all the persistence directories to the new location. Next, upgrade the server and update the persistence configuration to use the new location before starting the server. The following are the persistence locations you can change:
- <active-directory>
- <event-directory>
- <snapshot-directory>
- <trash-directory>
For example, if you want to change from the oldEnvironment to the newEnvironment, then before starting the upgraded server, you need to copy or move the contents of the directory /oldEnv to /newEnv. Then, update the persistence configuration accordingly.
Example 49-4 shows a persistence configuration where the persistence locations remain set to /oldEnv.
Example 49-4 Persistence Locations for oldEnvironment
<persistence-environments>
<persistence-environment id="oldEnvironment">
<persistence-mode>on-demand</persistence-mode>
<active-directory>/oldEnv</active-directory>
<snapshot-directory>/oldEnv</snapshot-directory>
<trash-directory>/oldEnv</trash-directory>
</persistence-environment>
</persistence-environments>
In Example 49-5, the persistence configuration has been updated to reflect the new persistence location, /newEnv.
Example 49-5 Persistence Locations for newEnvironment
<persistence-environments>
<persistence-environment id="newEnvironment">
<persistence-mode>on-demand</persistence-mode>
<active-directory>/newEnv</active-directory>
<snapshot-directory>/newEnv</snapshot-directory>
<trash-directory>/newEnv</trash-directory>
</persistence-environment>
</persistence-environments>
See Creating Persistence Environments in Administering Oracle Coherence.
Any distributed cache configuration change rules for rolling upgrade are also applicable to persistence. See Cache Configuration Changes.
Parent topic: Upgrade Considerations
Federation
Federation copies data between different Coherence clusters. The classes used for federated cache keys and values must be evolvable so that data can be sent successfully between clusters running different versions.
For general information on using federation in Coherence, see Federating Caches Across Clusters in Administering Oracle Coherence.
In addition to the general upgrade considerations, such as the JDK upgrade, lambdas, and serialization compatibility, you must also consider aspects specific to federation.
Upgrading Federated Clusters using a Rolling Update
Where a rolling update is possible, federated clusters can be upgraded on a rolling basis as well. For guidance on performing the upgrade, see Performing a Rolling Restart. Because federation includes multiple Coherence clusters, you should upgrade clusters one at a time, until all clusters are upgraded.
Upgrading Federated Clusters when a Rolling Update is not Feasible
If a rolling update is not feasible, then you must shut down the old cluster and create a new cluster with the upgraded version. You can then use federation's replicateAll() operation to replicate the data from the existing cluster to the upgraded cluster. See FederationManager MBean in Managing Oracle Coherence.
The prerequisites for replicateAll() are:
- Serialization format: Ensure that the serialization format has not changed and remains compatible across the upgrade. See Serialization.
- Changes to partition count: Federation tracks changes by partition and can resend changes that may have been missed when a partition migrates from one cluster member to another. While federation between two clusters with differing partition counts is possible, some of this tracking is disabled in that case. When changing the partition count in federated clusters, use replicateAll() to repopulate the restarted cluster, and afterward take manual steps to verify that all of the cache data was federated to the upgraded cluster, for example by checking the cache sizes after replicateAll() has completed. See Workarounds to Migrate a Persistent Service to a Different Partition Count in Administering Oracle Coherence.
Migrating from a Distributed Scheme to a Federated Scheme
Coherence persistence can be used to facilitate migrating a distributed cache service to a federated cache service. See Using Federation in Administering Oracle Coherence.
Parent topic: Upgrade Considerations
Executor Service
The executor service that is part of the Coherence Concurrent module executes tasks remotely in the Coherence cluster. These tasks are serialized and sent to other cluster members for execution, so they must be evolvable like any other class sent over the network in Coherence.
For more information, see Using Executors.
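The serialization requirement for tasks can be shown with a standalone sketch in plain Java (no Coherence API; RemoteTask is a hypothetical name): a lambda only becomes serializable when its target type extends Serializable, and the round trip below stands in for the trip a task makes to another cluster member.

```java
import java.io.*;
import java.util.function.Supplier;

public class SerializableTaskDemo {
    // A serializable functional interface: lambdas assigned to it pick up
    // the Serializable bound and survive serialization.
    public interface RemoteTask<T> extends Supplier<T>, Serializable {}

    // Serialize and deserialize a task, as the wire transfer would.
    @SuppressWarnings("unchecked")
    public static <T> T roundTrip(T task) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
                out.writeObject(task); // NotSerializableException for a plain lambda
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(buf.toByteArray()))) {
                return (T) in.readObject();
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        RemoteTask<String> task = () -> "executed";
        // The deserialized copy stands in for the task as received remotely:
        System.out.println(roundTrip(task).get()); // prints "executed"
    }
}
```

Because the receiving member deserializes the task with its own version of the application classes, any state captured by the task is subject to the same evolvability rules as cache keys and values.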
Parent topic: Upgrade Considerations