10 Tuning TCMP Behavior
This chapter includes the following sections:
- Overview of TCMP Data Transmission
  TCMP is an IP-based protocol that is used to discover cluster members, manage the cluster, provision services, and transmit data.
- Throttling Data Transmission
  The speed at which data is transmitted is controlled using the <flow-control> and <traffic-jam> elements.
- Bundling Packets to Reduce Load
  Multiple small packets can be bundled into a single larger packet to reduce the load on the network switching infrastructure.
- Changing Packet Retransmission Behavior
  Packets that have not been acknowledged are retransmitted based on the packet publisher's configured resend interval.
- Configuring the Size of the Packet Buffers
  Packet buffers can be configured to control how many packets the operating system is requested to buffer.
- Adjusting the Maximum Size of a Packet
  The maximum and preferred packet sizes can be adjusted to optimize the efficiency and throughput of cluster communication.
- Changing the Packet Speaker Volume Threshold
  The packet speaker is responsible for sending packets on the network when the packet-publisher detects that a network send operation is likely to block.
- Configuring the Incoming Message Handler
  The incoming message handler assembles packets into logical messages and dispatches them to the appropriate Coherence service for processing.
- Changing the TCMP Socket Provider Implementation
  Coherence uses a combination of UDP/IP multicast and UDP/IP unicast for TCMP communication between cluster service members. Additional socket provider implementations are available and can be specified as required.
- Changing Transport Protocols
  Coherence uses a TCP/IP message bus for reliable point-to-point communication between data service members. Additional transport protocol implementations are available and can be specified as required.
- Enabling CRC Validation for TMB
  TCP/IP Message Bus (TMB) includes a cyclic redundancy check (CRC) implementation that validates network messages that are sent on the wire. CRC is commonly used to detect and handle message corruption in networks.
Parent topic: Using Coherence Clusters
Overview of TCMP Data Transmission
The TCMP protocol is very tunable to take advantage of specific network topologies, or to add tolerance for low-bandwidth and high-latency segments in a geographically distributed cluster. Coherence comes with a pre-set configuration. Some TCMP attributes are dynamically self-configuring at run time, but can also be overridden and locked down for deployment purposes. TCMP behavior should always be changed based on performance testing. Coherence includes a datagram test that is used to evaluate TCMP data transmission performance over the network. See Performing a Network Performance Test in Administering Oracle Coherence.
TCMP data transmission behavior is configured within the tangosol-coherence-override.xml file using the <packet-publisher>, <packet-speaker>, <incoming-message-handler>, and <outgoing-message-handler> elements. See Operational Configuration Elements.
See also Understanding TCMP.
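For orientation, the following skeleton sketches where these elements reside within an operational override file. The element contents are omitted here; the sections that follow show the specific settings that go inside each element.
<?xml version='1.0'?>
<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <packet-speaker><!-- packet speaker settings --></packet-speaker>
      <packet-publisher><!-- packet publisher settings --></packet-publisher>
      <incoming-message-handler><!-- incoming message handler settings --></incoming-message-handler>
      <outgoing-message-handler><!-- outgoing message handler settings --></outgoing-message-handler>
   </cluster-config>
</coherence>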
Parent topic: Tuning TCMP Behavior
Throttling Data Transmission
The speed at which data is transmitted is controlled using the <flow-control> and <traffic-jam> elements. These elements can help achieve the greatest throughput with the least amount of packet failure. The throttling settings discussed in this section are typically changed when dealing with slow networks or small packet buffers.
This section includes the following topics:
- Adjusting Packet Flow Control Behavior
- Disabling Packet Flow Control
- Adjusting Packet Traffic Jam Behavior
Parent topic: Tuning TCMP Behavior
Adjusting Packet Flow Control Behavior
Flow control is used to dynamically throttle the rate of packet transmission to a given cluster member based on point-to-point transmission statistics which measure the cluster member's responsiveness. Flow control stops a cluster member from being flooded with packets while it is incapable of responding.
Flow control is configured within the <flow-control>
element. There are two settings that are used to adjust flow control behavior:
- <pause-detection> – This setting controls the maximum number of packets that are resent to an unresponsive cluster member before determining that the member is paused. When a cluster member is marked as paused, packets addressed to it are sent at a lower rate until the member resumes responding. Pauses are typically due to long garbage collection intervals. The value is specified using the <maximum-packets> element and defaults to 16 packets. A value of 0 disables pause detection.
- <outstanding-packets> – This setting is used to define the number of unconfirmed packets that are sent to a cluster member before packets addressed to that member are deferred. The value may be specified as either an explicit number by using the <maximum-packets> element, or as a range by using both the <maximum-packets> and <minimum-packets> elements. When a range is specified, this setting is dynamically adjusted based on network statistics. The maximum value should always be greater than 256 packets and defaults to 4096 packets. The minimum value should always be greater than 16 packets and defaults to 64 packets.
To adjust flow control behavior, edit the operational override file and add the <pause-detection>
and <outstanding-packets>
elements as follows:
<?xml version='1.0'?>
<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <packet-publisher>
         <packet-delivery>
            <flow-control>
               <pause-detection>
                  <maximum-packets>32</maximum-packets>
               </pause-detection>
               <outstanding-packets>
                  <maximum-packets>2048</maximum-packets>
                  <minimum-packets>128</minimum-packets>
               </outstanding-packets>
            </flow-control>
         </packet-delivery>
      </packet-publisher>
   </cluster-config>
</coherence>
Parent topic: Throttling Data Transmission
Disabling Packet Flow Control
To disable flow control, edit the operational override file and add an <enabled> element, within the <flow-control> element, that is set to false. For example:
<?xml version='1.0'?>
<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <packet-publisher>
         <packet-delivery>
            <flow-control>
               <enabled>false</enabled>
            </flow-control>
         </packet-delivery>
      </packet-publisher>
   </cluster-config>
</coherence>
Parent topic: Throttling Data Transmission
Adjusting Packet Traffic Jam Behavior
A packet traffic jam occurs when the number of pending packets that are enqueued by client threads for the packet publisher to transmit on the network grows to a level that the packet publisher considers intolerable. Traffic jam behavior is configured within the <traffic-jam>
element. There are two settings that are used to adjust traffic jam behavior:
- <maximum-packets> – This setting controls the maximum number of pending packets that the packet publisher tolerates before determining that it is clogged and must slow down client requests (requests from local non-system threads). When the configured maximum packets limit is exceeded, client threads are forced to pause until the number of outstanding packets drops below the specified limit. This setting prevents most unexpected out-of-memory conditions by limiting the size of the resend queue. A value of 0 means no limit. The default value is 8192.
- <pause-milliseconds> – This setting controls the number of milliseconds that the publisher pauses a client thread that is trying to send a message when the publisher is clogged. The publisher does not allow the message to go through until the clog is gone, and repeatedly sleeps the thread for the duration specified by this property. The default value is 10.
Specifying a packet limit that is too low, or a pause that is too long, may result in the publisher transmitting all pending packets and being left without packets to send. A warning is periodically logged if this condition is detected. Ideal values ensure that the publisher is never left without work to do, but at the same time prevent the queue from growing uncontrollably. The pause should be set short (10ms or under) and the limit on the number of packets should be set high (that is, greater than 5000).
When the <traffic-jam> element is used with the <flow-control> element, the setting operates in a point-to-point mode, only blocking a send if the recipient has too many packets outstanding. It is recommended that the <traffic-jam> element's <maximum-packets> subelement value be greater than the <maximum-packets> value for the <outstanding-packets> element. When <flow-control> is disabled, the <traffic-jam> setting takes all outstanding packets into account.
To adjust the enqueue rate behavior, edit the operational override file and add the <maximum-packets>
and <pause-milliseconds> elements as follows:
<?xml version='1.0'?>
<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <packet-publisher>
         <traffic-jam>
            <maximum-packets>8192</maximum-packets>
            <pause-milliseconds>10</pause-milliseconds>
         </traffic-jam>
      </packet-publisher>
   </cluster-config>
</coherence>
Parent topic: Throttling Data Transmission
Bundling Packets to Reduce Load
Multiple small packets can be bundled into a single larger packet to reduce the load on the network switching infrastructure. Packet bundling is configured within the <packet-bundling> element and includes the following settings:
- <maximum-deferral-time> – This setting specifies the maximum amount of time to defer a packet while waiting for additional packets to bundle. A value of zero results in the algorithm not waiting, and only bundling the readily accessible packets. A value greater than zero causes some transmission deferral while waiting for additional packets to become available. This value is typically set below 250 microseconds to avoid a detrimental throughput impact. If the units are not specified, nanoseconds are assumed. The default value is 1us (microsecond).
- <aggression-factor> – This setting specifies the aggressiveness of the packet deferral algorithm. Whereas the <maximum-deferral-time> element defines the upper limit on the deferral time, the <aggression-factor> influences the average deferral time. The higher the aggression value, the longer the publisher may wait for additional packets. The factor may be expressed as a real number; values between 0.0 and 1.0 often allow for high packet utilization while keeping latency to a minimum. The default value is 0.
The default packet-bundling settings are minimally aggressive, allowing bundling to occur without adding a measurable delay. The benefit of more aggressive bundling depends on the network infrastructure and the application object's typical data sizes and access patterns.
To adjust packet bundling behavior, edit the operational override file and add the <maximum-deferral-time> and <aggression-factor>
elements as follows:
<?xml version='1.0'?>
<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <packet-publisher>
         <packet-delivery>
            <packet-bundling>
               <maximum-deferral-time>1us</maximum-deferral-time>
               <aggression-factor>0</aggression-factor>
            </packet-bundling>
         </packet-delivery>
      </packet-publisher>
   </cluster-config>
</coherence>
Parent topic: Tuning TCMP Behavior
Changing Packet Retransmission Behavior
A negative acknowledgment (NACK) packet indicates that the packet was received incorrectly and causes the packet to be retransmitted. Negative acknowledgment is determined by inspecting packet ordering for packet loss. Negative acknowledgment causes a packet to be resent much quicker than relying on the publisher's resend interval. See Disabling Negative Acknowledgments.
This section includes the following topics:
- Changing the Packet Resend Interval
- Changing the Packet Resend Timeout
- Configuring Packet Acknowledgment Delays
Parent topic: Tuning TCMP Behavior
Changing the Packet Resend Interval
The packet resend interval specifies the minimum amount of time, in milliseconds, that the packet publisher waits for a corresponding ACK packet, before resending a packet. The default resend interval is 200
milliseconds.
To change the packet resend interval, edit the operational override file and add a <resend-milliseconds>
element as follows:
<?xml version='1.0'?>
<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <packet-publisher>
         <packet-delivery>
            <resend-milliseconds>400</resend-milliseconds>
         </packet-delivery>
      </packet-publisher>
   </cluster-config>
</coherence>
Parent topic: Changing Packet Retransmission Behavior
Changing the Packet Resend Timeout
The packet resend timeout interval specifies the maximum amount of time, in milliseconds, that a packet continues to be resent if no ACK packet is received. After this timeout expires, a determination is made as to whether the recipient is considered terminated. This determination takes additional data into account, such as whether other nodes are still able to communicate with the recipient. The default value is 300000 milliseconds. For production environments, the recommended value is the greater of 300000 and two times the maximum expected full GC duration.
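For example, if the maximum expected full GC pause is 3.5 minutes (210,000 milliseconds), the recommended timeout is the greater of 300,000 and 2 x 210,000 = 420,000 milliseconds, which is the value used in the example below.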
Note:
The default death detection mechanism is the TCP-ring listener, which detects failed cluster members before the resend timeout interval is ever reached. See Configuring Death Detection.
To change the packet resend timeout interval, edit the operational override file and add a <timeout-milliseconds>
element as follows:
<?xml version='1.0'?>
<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <packet-publisher>
         <packet-delivery>
            <timeout-milliseconds>420000</timeout-milliseconds>
         </packet-delivery>
      </packet-publisher>
   </cluster-config>
</coherence>
Parent topic: Changing Packet Retransmission Behavior
Configuring Packet Acknowledgment Delays
The amount of time the packet publisher waits before sending ACK and NACK packets can be changed as required. The ACK and NACK packet delay intervals are configured within the <notification-queueing>
element using the following settings:
- <ack-delay-milliseconds> – This element specifies the maximum number of milliseconds that the packet publisher delays before sending an ACK packet. The ACK packet may be transmitted earlier if multiple batched acknowledgments fill the ACK packet. This value should be set substantially lower than the remote member's packet delivery resend timeout to allow ample time for the ACK to be received and processed before the resend timeout expires. The default value is 16.
- <nack-delay-milliseconds> – This element specifies the number of milliseconds that the packet publisher delays before sending a NACK packet. The default value is 1.
To change the ACK and NACK delay intervals, edit the operational override file and add the <ack-delay-milliseconds>
and <nack-delay-milliseconds>
elements as follows:
<?xml version='1.0'?>
<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <packet-publisher>
         <notification-queueing>
            <ack-delay-milliseconds>32</ack-delay-milliseconds>
            <nack-delay-milliseconds>1</nack-delay-milliseconds>
         </notification-queueing>
      </packet-publisher>
   </cluster-config>
</coherence>
Parent topic: Changing Packet Retransmission Behavior
Configuring the Size of the Packet Buffers
This section includes the following topics:
- Understanding Packet Buffer Sizing
- Configuring the Outbound Packet Buffer Size
- Configuring the Inbound Packet Buffer Size
Parent topic: Tuning TCMP Behavior
Understanding Packet Buffer Sizing
Packet buffer size can be configured based on either the number of packets or based on bytes using the following settings:
- <maximum-packets> – This setting specifies the number of packets (based on the configured packet size) that the datagram socket is asked to size itself to buffer. See the java.net.SocketOptions#SO_SNDBUF and java.net.SocketOptions#SO_RCVBUF properties for additional details. Actual buffer sizes may be smaller if the underlying socket implementation cannot support more than a certain size. See Adjusting the Maximum Size of a Packet.
- <size> – Specifies the requested size of the underlying socket buffer in bytes.
The operating system only treats the specified packet buffer size as a hint and is not required to allocate the specified amount. In the event that less space is allocated than requested, Coherence issues a warning and continues to operate with the constrained buffer, which may degrade performance. See Socket Buffer Sizes in Administering Oracle Coherence.
Large inbound buffers can help insulate the Coherence network layer from JVM pauses that are caused by the Java Garbage Collector. While the JVM is paused, Coherence cannot dequeue packets from any inbound socket. If the pause is long enough to cause the packet buffer to overflow, the packet reception is delayed as the originating node must detect the packet loss and retransmit the packet(s).
Parent topic: Configuring the Size of the Packet Buffers
Configuring the Outbound Packet Buffer Size
The outbound packet buffer is used by the packet publisher when transmitting packets. When making changes to the buffer size, performance should be evaluated both in terms of throughput and latency. A large buffer size may allow for increased throughput, while a smaller buffer size may allow for decreased latency.
To configure the outbound packet buffer size, edit the operational override file and add a <packet-buffer>
element within the <packet-publisher>
node and specify the packet buffer size using either the <size>
element (for bytes) or the <maximum-packets>
element (for packets). The default value is 32 packets. The following example demonstrates specifying the packet buffer size based on the number of packets:
<?xml version='1.0'?>
<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <packet-publisher>
         <packet-buffer>
            <maximum-packets>64</maximum-packets>
         </packet-buffer>
      </packet-publisher>
   </cluster-config>
</coherence>
Parent topic: Configuring the Size of the Packet Buffers
Configuring the Inbound Packet Buffer Size
The multicast listener and unicast listener each have their own inbound packet buffer.
To configure an inbound packet buffer size, edit the operational override file and add a <packet-buffer>
element (within either a <multicast-listener>
or <unicast-listener>
node, respectively) and specify the packet buffer size using either the <size>
element (for bytes) or the <maximum-packets>
element (for packets). The default value is 64 packets for the multicast listener and 1428 packets for the unicast listener.
The following example specifies the packet buffer size for the unicast listener and is entered using bytes:
<?xml version='1.0'?>
<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <unicast-listener>
         <packet-buffer>
            <size>1500000</size>
         </packet-buffer>
      </unicast-listener>
   </cluster-config>
</coherence>
The following example specifies the packet buffer size for the multicast listener and is entered using packets:
<?xml version='1.0'?>
<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <multicast-listener>
         <packet-buffer>
            <maximum-packets>128</maximum-packets>
         </packet-buffer>
      </multicast-listener>
   </cluster-config>
</coherence>
Parent topic: Configuring the Size of the Packet Buffers
Adjusting the Maximum Size of a Packet
The maximum and preferred packet sizes can be adjusted to optimize the efficiency and throughput of cluster communication.
Note:
When specifying a packet size larger than 1024 bytes on Microsoft Windows, a registry setting must be adjusted to allow for optimal transmission rates. The COHERENCE_HOME/bin/optimize.reg registration file contains the registry settings. See Datagram size (Microsoft Windows) in Administering Oracle Coherence.
Packet size is configured within the <packet-size>
element and includes the following settings:
- <maximum-length> – Specifies the packet size, in bytes, which all cluster members can safely support. This value must be the same for all members in the cluster. A low value can artificially limit the maximum size of the cluster. This value should be at least 512. The default value is 64KB.
- <preferred-length> – Specifies the preferred size, in bytes, of the DatagramPacket objects that are sent and received on the unicast and multicast sockets. This value can be larger or smaller than the <maximum-length> value, and need not be the same for all cluster members. The ideal value is one which fits within the network MTU, leaving enough space for either the UDP or TCP packet headers, which are 32 and 52 bytes respectively. This value should be at least 512. A default value is automatically calculated based on the local node's MTU. An MTU of 1500 is used if the MTU cannot be obtained and is adjusted for the packet headers (1468 for UDP and 1448 for TCP). A worked example of this calculation follows this list.
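For example, applying the same calculation to a network with jumbo frames enabled (an MTU of 9000), the default preferred length would be 9000 - 32 = 8968 bytes for UDP, or 9000 - 52 = 8948 bytes for TCP.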
To adjust the packet size, edit the operational override file and add the <maximum-length>
and <preferred-length>
elements as follows:
<?xml version='1.0'?>
<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <packet-publisher>
         <packet-size>
            <maximum-length>49152</maximum-length>
            <preferred-length>1500</preferred-length>
         </packet-size>
      </packet-publisher>
   </cluster-config>
</coherence>
Parent topic: Tuning TCMP Behavior
Changing the Packet Speaker Volume Threshold
The packet speaker is responsible for sending packets on the network when the packet-publisher detects that a network send operation is likely to block.
Note:
The packet speaker is not used for TCMP/TMB, which is the default protocol for data communication.
When the packet load is relatively low, it may be more efficient for the speaker's operations to be performed on the publisher's thread. When the packet load is high, using the speaker allows the publisher to continue preparing packets while the speaker transmits them on the network.
The packet speaker is configured using the <volume-threshold>
element to specify the minimum number of packets which must be ready to be sent for the speaker daemon to be activated. If the value is unspecified (the default), it is set to match the packet buffer.
To specify the packet speaker volume threshold, edit the operational override file and add the <volume-threshold>
element as follows:
<?xml version='1.0'?>
<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <packet-speaker>
         <enabled>true</enabled>
         <volume-threshold>
            <minimum-packets>32</minimum-packets>
         </volume-threshold>
      </packet-speaker>
   </cluster-config>
</coherence>
Parent topic: Tuning TCMP Behavior
Configuring the Incoming Message Handler
The incoming message handler assembles packets into logical messages and dispatches them to the appropriate Coherence service for processing. The incoming message handler is configured within the <incoming-message-handler> element.
This section includes the following topics:
- Changing the Time Variance
- Disabling Negative Acknowledgments
Changing the Time Variance
The <maximum-time-variance>
element specifies the maximum time variance between sending and receiving broadcast messages when trying to determine the difference between a new cluster member's system time and the cluster time. The smaller the variance, the more certain one can be that the cluster time is closer between multiple systems running in the cluster; however, the process of joining the cluster is extended until an exchange of messages can occur within the specified variance. Normally, a value as small as 20 milliseconds is sufficient; but, with heavily loaded clusters and multiple network hops, a larger value may be necessary. The default value is 16
.
To change the maximum time variance, edit the operational override file and add the <maximum-time-variance>
element as follows:
<?xml version='1.0'?>
<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <incoming-message-handler>
         <maximum-time-variance>16</maximum-time-variance>
      </incoming-message-handler>
   </cluster-config>
</coherence>
Parent topic: Configuring the Incoming Message Handler
Disabling Negative Acknowledgments
Negative acknowledgments can be disabled for the incoming message handler. When disabled, the handler does not notify the packet sender if packets were received incorrectly. In this case, the packet sender waits the specified resend timeout interval before resending the packet. See Changing Packet Retransmission Behavior.
To disable negative acknowledgment, edit the operational override file and add a <use-nack-packets>
element that is set to false
. For example:
<?xml version='1.0'?>
<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <incoming-message-handler>
         <use-nack-packets>false</use-nack-packets>
      </incoming-message-handler>
   </cluster-config>
</coherence>
Parent topic: Configuring the Incoming Message Handler
Changing the TCMP Socket Provider Implementation
Coherence uses a combination of UDP/IP multicast and UDP/IP unicast for TCMP communication between cluster service members. Additional socket provider implementations are available and can be specified as required. Socket providers for TCMP are configured within the <unicast-listener> element.
This section includes the following topics:
- Using the TCP Socket Provider
- Using the SDP Socket Provider
- Using the SSL Socket Provider
Parent topic: Tuning TCMP Behavior
Using the TCP Socket Provider
The TCP socket provider is a socket provider which, whenever possible, produces TCP-based sockets. This socket provider creates DatagramSocket
instances which are backed by TCP. When used with the WKA feature (multicast disabled), TCMP functions entirely over TCP without the need for UDP.
Note:
If this socket provider is used without the WKA feature (multicast enabled), TCP is used for all unicast communications, while multicast is utilized for group based communications.
The TCP socket provider uses up to two TCP connections between each pair of cluster members. No additional threads are added to manage the TCP traffic as it is all done using nonblocking NIO based sockets. Therefore, the existing TCMP threads handle all the connections. The connections are brought up on demand and are automatically reopened as needed if they get disconnected for any reason. Two connections are utilized because it reduces send/receive contention and noticeably improves performance. TCMP is largely unaware that it is using a reliable protocol and as such still manages guaranteed delivery and flow control.
To specify the TCP socket provider, edit the operational override file and add a <socket-provider>
element that includes the tcp
value. For example:
<?xml version='1.0'?>
<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <unicast-listener>
         <socket-provider system-property="coherence.socketprovider">tcp</socket-provider>
      </unicast-listener>
   </cluster-config>
</coherence>
The coherence.socketprovider
system property can also be used to specify the socket provider instead of using the operational override file. For example:
-Dcoherence.socketprovider=tcp
Parent topic: Changing the TCMP Socket Provider Implementation
Using the SDP Socket Provider
The SDP socket provider is a socket provider which, whenever possible, produces SDP-based sockets provided that the JVM and underlying network stack supports SDP. This socket provider creates DatagramSocket
instances which are backed by SDP. When used with the WKA feature (multicast disabled), TCMP functions entirely over SDP without the need for UDP.
Note:
If this socket provider is used without the WKA feature (multicast enabled), SDP is used for all unicast communications, while multicast is utilized for group based communications.
To specify the SDP socket provider, edit the operational override file and add a <socket-provider>
element that includes the sdp
value. For example:
<?xml version='1.0'?>
<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <unicast-listener>
         <socket-provider system-property="coherence.socketprovider">sdp</socket-provider>
      </unicast-listener>
   </cluster-config>
</coherence>
The coherence.socketprovider
system property can also be used to specify the socket provider instead of using the operational override file. For example:
-Dcoherence.socketprovider=sdp
Parent topic: Changing the TCMP Socket Provider Implementation
Using the SSL Socket Provider
The SSL socket provider is a socket provider which only produces SSL protected sockets. This socket provider creates DatagramSocket
instances which are backed by SSL/TCP or SSL/SDP. SSL is not supported for multicast sockets; therefore, the WKA feature (multicast disabled) must be used for TCMP to function with this provider.
The default SSL configuration allows for easy configuration of two-way SSL connections, based on peer trust where every trusted peer resides within a single SSL keystore. More elaborate configuration can be defined with alternate identity and trust managers to allow for Certificate Authority trust validation. See Using SSL to Secure TCMP Communication in Securing Oracle Coherence.
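As with the TCP and SDP providers shown above, the SSL socket provider is specified by name in the <socket-provider> element. The following is a minimal sketch that assumes the predefined ssl provider and the default SSL configuration:
<?xml version='1.0'?>
<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <unicast-listener>
         <socket-provider system-property="coherence.socketprovider">ssl</socket-provider>
      </unicast-listener>
   </cluster-config>
</coherence>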
Parent topic: Changing the TCMP Socket Provider Implementation
Changing Transport Protocols
Coherence uses a TCP/IP message bus for reliable point-to-point communication between data service members. Additional transport protocol implementations are available and can be specified as required.
This section includes the following topics:
- Overview of Changing Transport Protocols
- Changing the Shared Transport Protocol
- Changing Transport Protocols Per Service Type
Parent topic: Tuning TCMP Behavior
Overview of Changing Transport Protocols
Coherence supports multiple transport protocols. By default, all data services use the transport protocol that is configured for the unicast listener. This configuration results in a shared transport instance. You can also explicitly specify a transport protocol for a service, which results in a service-specific transport instance. A service-specific transport instance can result in higher performance but at the cost of increased resource consumption. In general, a shared transport instance consumes fewer resources than service-specific transport instances.
Coherence supports the following reliable transport protocols:
- datagram – UDP protocol
- tmb (default) – TCP/IP message bus protocol
- tmbs – TCP/IP message bus protocol with SSL support. TMBS requires the use of an SSL socket provider. See Using the SSL Socket Provider.
- sdmb – Socket Direct Protocol (SDP) message bus.
- sdmbs – SDP message bus with SSL support. SDMBS requires the use of an SSL socket provider. See Using the SSL Socket Provider.
Parent topic: Changing Transport Protocols
Changing the Shared Transport Protocol
You can override the default transport protocol that is used for reliable point-to-point communication. All data services use a shared instance of the specified transport protocol.
To specify the shared transport protocol, edit the operational override file and add a <reliable-transport>
element within the <unicast-listener>
element. For example:
<?xml version='1.0'?>
<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <unicast-listener>
         <reliable-transport system-property="coherence.transport.reliable">tmbs</reliable-transport>
      </unicast-listener>
   </cluster-config>
</coherence>
The value can also be set using the coherence.transport.reliable
system property.
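For example, to set the shared transport protocol from the command line:
-Dcoherence.transport.reliable=tmbs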
Parent topic: Changing Transport Protocols
Changing Transport Protocols Per Service Type
You can change the transport protocol that a data service uses for reliable point-to-point communication. A transport instance is created for the service and is not shared with other services. Use service-specific transport instances sparingly for select, high priority services.
To change the transport protocol per service type, override the reliable-transport
initialization parameter of a service definition that is located in an operational override file. The following example changes the transport protocol for the DistributedCache
service:
<?xml version='1.0'?>
<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <services>
         <service id="3">
            <init-params>
               <init-param id="26">
                  <param-name>reliable-transport</param-name>
                  <param-value system-property="coherence.distributed.transport.reliable">tmbs</param-value>
               </init-param>
            </init-params>
         </service>
      </services>
   </cluster-config>
</coherence>
The reliable-transport initialization parameter can be set for the DistributedCache, ReplicatedCache, OptimisticCache, Invocation, Proxy, and FederatedCache services. Refer to the tangosol-coherence.xml file that is located in the coherence.jar file for the correct service ID and initialization parameter ID to use when overriding the reliable-transport parameter for a service.
Each service also has a system property that sets the transport protocol, respectively:
- coherence.distributed.transport.reliable (also used for the FederatedCache service)
- coherence.replicated.transport.reliable
- coherence.optimistic.transport.reliable
- coherence.invocation.transport.reliable
- coherence.proxy.transport.reliable
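For example, the following system property changes the transport protocol for the Invocation service to the datagram protocol listed above:
-Dcoherence.invocation.transport.reliable=datagram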
Parent topic: Changing Transport Protocols
Enabling CRC Validation for TMB
TCP/IP Message Bus (TMB) includes a cyclic redundancy check (CRC) implementation that validates network messages that are sent on the wire. CRC is commonly used to detect and handle message corruption in networks.
The CRC implementation calculates a check value that is based on the message body and adds the value to the message header. An additional check value is then calculated based on the resulting message header and is likewise added to the message header before sending the message. When TMB receives a message, the check values are recalculated and validated against the values that were sent with the message. If CRC validation fails, indicating packet corruption, then the bus connection is migrated.
CRC is not enabled by default. Enabling CRC does impact performance, especially for cache operations that mutate data (approximately 5% performance impact for put
, invoke
, and similar operations).
To enable CRC validation, set the following system property on all cluster members:
-Dcom.oracle.common.internal.net.socketbus.SocketBusDriver.crc=true
Parent topic: Tuning TCMP Behavior