User Guide
Transaction management provides coordination for the completion of transactions, whether the transaction is successful or not. Application programmers can request the execution of remote services within a transaction, or users at remote domains can request the execution of local services within a transaction. Domains transaction management coordinates the mapping of remote transactions to local transactions, and the sane termination (commitment or rollback) of these transactions.
In the BEA Tuxedo system, a transaction tree is a two-level tree where the root is the group coordinating a global transaction and the branches are other groups on other machines that are involved in the transaction. Each group performs its part of the global transaction independently from the parts done by other groups. Each group, therefore, implicitly defines a transaction branch. If this BEA Tuxedo transaction branch is controlled by the TMA OSI TP domain, it may contain multiple actual OSI TP transaction branches. The TMA OSI TP gateway controls the mapping between the single BEA Tuxedo transaction root and the many OSI TP transaction branches. The BEA Tuxedo system uses Transaction Manager Servers (TMS) to coordinate the completion of the global transaction and make sure each branch completes.
Domains transaction management can be summarized as follows:
In the Open Group DTP Model, a Transaction Manager (TM) can construct transaction trees by defining either tightly-coupled or loosely-coupled relationships with a Resource Manager (RM). The coupling of the relationship is determined by the way the local service is defined in the DMCONFIG file.
A tightly-coupled relationship is one in which the same transaction identifier, XID, is used by all processes participating in the same global transaction and accessing the same RM. This relationship maximizes data sharing between processes; XA-compliant RMs expect to share locks for resources used by processes having the same XID.
The BEA Tuxedo system achieves the tightly-coupled relationship through the group concept; that is, work done by a group on behalf of a given global transaction belongs to the same transaction branch; all the processes are given the same XID. In a loosely-coupled relationship, the TM generates a transaction branch for each part of the work in support of the global transaction. The RM handles each transaction branch separately; there is no sharing of data or of locks between the transaction branches. Deadlocks between transaction branches can occur. A deadlock results in the rollback of the global transaction. In the BEA Tuxedo system, when different groups participate in the same global transaction, each group defines a transaction branch; this results in a loosely-coupled relationship. The TMA OSI TP instantiation is user configurable and can provide a tightly-coupled integration that solves this deadlock problem by minimizing the number of transaction branches required in the interoperation between two domains.
Following are diagrams showing loosely-coupled and tightly-coupled integrations and an explanation of each diagram.
Figure 2-1 Example of a Tightly-Coupled Integration
The transaction tree for the tightly-coupled integration shown in Figure 2-1 eliminates the possibility of intra-transaction deadlock by minimizing the number of transaction branches required in the interoperation between two domains. Application A makes three requests (r1, r2, and r3) to a remote Domain B. OSI TP sends all three requests mapped to the same OSI TP transaction, T1. On Domain B, the TMA OSI TP gateway checks the COUPLING flag for the remote service and discovers that service B is defined with COUPLING=TIGHT. In this case all requests for service B belong to the same BEA Tuxedo system transaction. Each request for service B is added to the previous requests and all have the same BEA Tuxedo XID, indicated by T2. Resources in group G1 are not isolated, and changes made by any instantiation of service B for this transaction are "seen" by the others. Request r4 is mapped to identifier T2 on Domain B, but the Tuxedo domain generates a new branch in its transaction tree (r4: B to A'). This is a new transaction branch on Domain A; therefore, the gateway generates a new mapping, T3, to a new BEA Tuxedo system transaction. The gateway group on Domain A also coordinates group G4, so the hierarchical nature of inter-domain communication is fully enforced with this mapping; group G4 cannot commit before group G1.
Figure 2-2 Example of a Loosely-Coupled Integration
The transaction tree for the loosely-coupled integration shown in Figure 2-2 shows group G0 in Domain A coordinating the global transaction started by the client. In this case application A sends three requests (r1, r2, and r3) to a remote Domain B. As in the tightly-coupled case, all three branches are represented by OSI TP transaction T1. On Domain B, the TMA OSI TP gateway checks the COUPLING flag for the remote service and sees that service B is defined with COUPLING=LOOSE. In this case, the TMA OSI TP gateway generates three BEA Tuxedo system transactions: T2, T3, and T4. Any changes made to G1 are isolated. For example, changes made by service B cannot be "seen" by service B'. When B calls back to A', a new transaction, T5, is generated.
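The coupling behavior described above is selected per service in the DMCONFIG file. The following is a minimal sketch only; the service names, logical domain name, and section placement are hypothetical and must be adapted to your own configuration:

```text
*DM_LOCAL_SERVICES
# Hypothetical entries; COUPLING selects how OSI TP transaction
# branches are mapped to BEA Tuxedo transactions.
SVCB    LDOM=DOMB   COUPLING=TIGHT   # all requests in a global transaction share one Tuxedo XID
SVCC    LDOM=DOMB   COUPLING=LOOSE   # each request gets its own Tuxedo transaction branch
```

With COUPLING=TIGHT, repeated requests to SVCB within one global transaction see each other's changes; with COUPLING=LOOSE, each request to SVCC is isolated in its own branch and intra-transaction deadlock becomes possible.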
A global transaction in a single BEA Tuxedo application follows a two-level transaction tree, but a global transaction across domains follows a more complex transaction tree. There are two reasons for this:
The commitment protocol across domains must be hierarchical to handle the complex transaction tree structure. For example, a loop-back service request is made from one domain (Domain A) to another domain (Domain B) and then comes back to be processed in the original domain. The service in Domain B requests another service in Domain A. The transaction tree has two branches at the network level: a branch b1 from A to B and a branch b2 from B to A. Domain A cannot commit the work done on branch b2 before receiving commit instructions from B.
The TMA OSI TP instantiation optimizes GTRID mapping by optionally implementing a tightly-coupled relationship. In TMA OSI TP, service requests issued on behalf of the same global transaction are mapped to the same network transaction branch. Therefore, incoming service requests can be mapped to a single BEA Tuxedo transaction. However, the hierarchical structure of inter-domain communication and the inter-domain transaction tree must still be maintained. See Figure 2-1 for an example of a tightly-coupled relationship.
The optimization that TMA OSI TP introduces applies only to a single domain. When two or more domains are involved in the transaction, the network transaction tree contains at least one branch per domain interaction. Therefore, across domains, the network transaction tree remains loosely-coupled. There will be as many branches as there are domains involved in the transaction (even if all branches access the same resource manager instance). Domains gateway groups implement a loosely-coupled relationship because they generate different transaction branches for inter-domain transactions.
See Figure 2-2 for an example of a loosely-coupled relationship. Notice that the gateway still must perform mappings between a BEA Tuxedo transaction and a network transaction, and that the hierarchical nature of the communication between domains must be strictly enforced. Figure 2-2 shows that requests r1, r2, and r3 are mapped to a single TMA OSI TP transaction branch. Therefore, on Domain B only one BEA Tuxedo transaction needs to be generated; request r4 is mapped to an identifier on Domain B, but TMA OSI TP generates a new branch in its transaction tree (r4: B to A'). This is a new transaction branch on Domain A, and therefore, the gateway generates a mapping to a new BEA Tuxedo transaction. The graph shows that gateway group GW on Domain A also coordinates group G4. Hence, the hierarchical nature of inter-domain communication is fully enforced with this mapping: group G4 cannot commit before group G1.
OSI TP can recover an entire transaction or individual dialogues if one or more failures occur during the second phase of the two-phase commit process. Failures that occur before the second phase of commitment cause the transaction to roll back automatically. Three types of failure can occur after the second phase of commitment begins. For these types of failures, the following transaction recovery actions can occur:
The TMA OSI TP software uses typed buffers to transmit and receive data. Full buffer translation is supported for the following buffer types:
Note: Null X_OCTET buffers are not supported.
The following sections introduce procedures that TMA OSI TP follows to process and convert data buffers.
XATMI (X/Open Application Transaction Manager Interface) mappings to OSI TP are defined in the XATMI ASE (Application Service Element). BEA TMA OSI TP supports this combination. Interoperability using TMA OSI TP requires that remote systems support the XATMI ASE. Therefore, Tuxedo-specific buffer types, such as STRING, VIEW, FML, and CARRAY, may need to be converted into XATMI standard types. BEA TMA OSI TP gateways perform these layout conversions implicitly.
Abstract Syntax Notation 1 (ASN.1) is an international standard that provides a canonical representation to deal with data representation differences such as byte order, word length, and character sets. The local gateway (GWOSITP) encodes input from the local client program. It produces an ASN.1 encoded record that is sent to the remote service. When a reply is received, it is decoded before being returned to the client. Similarly, when remote requests for local services are received by the local gateway, they are decoded from the ASN.1 format. Replies are then encoded for return to the remote client.
The following terms are used to describe input and output data:

- Buffers: Input or output data as it exists inside the local BEA Tuxedo region. This includes all the buffer types that BEA Tuxedo software supports, both BEA Tuxedo ATMI buffer types and X/Open XATMI buffer types.
- Records: Input or output data as it is transmitted across the network in ASN.1-encoded format.

These terms make it easier to understand how TMA OSI TP handles input and output data.
The TMA OSI TP gateway processes buffers from local programs in the following manner.
The TMA OSI TP gateway processes records from remote programs in the following manner.
The TMA OSI TP product provides four configuration parameters you can use to map buffers and records. These parameters are optional.
The following buffer configuration parameters are specified in the DM_LOCAL_SERVICES and/or the DM_REMOTE_SERVICES sections of the domain configuration file (DMCONFIG), as appropriate.
- INBUFTYPE: Identifies the type, and in some cases the format, of a buffer received from a Tuxedo client or server. This restricts the buffer types accepted by this service to a single buffer type.
- OUTBUFTYPE: Identifies the type, and in some cases the format, of a buffer to be sent to a Tuxedo client or server. Use this parameter to specify the type and format to which the incoming message is translated.
- INRECTYPE: Identifies the type, and in some cases the format, of a record to be sent to a remote gateway. This parameter is used for buffer and record translation.
- OUTRECTYPE: Identifies the type, and in some cases the format, of a record received from a remote gateway.
The definitions of these four parameters depend on whether the service requests originate locally or remotely. The following sections describe these parameters in relation to where the service request originates.
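As a sketch of how the four parameters fit together for a locally originated call, consider the following DM_REMOTE_SERVICES entry. The service, domain, and view names are hypothetical; only the parameter keywords come from this guide:

```text
*DM_REMOTE_SERVICES
RSVC    RDOM=REMDOM
        INBUFTYPE="VIEW:inview"       # buffer accepted from the local client
        INRECTYPE="X_COMMON:inview"   # record sent to the remote service
        OUTRECTYPE="X_COMMON:outview" # record received from the remote service
        OUTBUFTYPE="VIEW:outview"     # buffer returned to the local client
```

For remotely originated calls the same keywords appear in DM_LOCAL_SERVICES, with their meanings reversed as described in the sections that follow.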
This section describes in more detail how TMA OSI TP handles service calls that originate locally, within the immediate BEA Tuxedo region. It also explains how the INBUFTYPE, INRECTYPE, OUTRECTYPE, and OUTBUFTYPE parameters can be used to manage the conversion of buffers and records that flow between local client programs and remote services.
In the following figure, a local BEA Tuxedo client program issues a service call that a local TMA OSI TP gateway routes to a remote server through TMA OSI TP.
Figure 2-3 How Parameters Are Mapped During Locally Originated Calls
In this situation, the four configuration parameters that are shown in the figure have the following meanings:

- The INBUFTYPE parameter describes the BEA Tuxedo input buffer that the local client program provides to the TMA OSI TP gateway through BEA Tuxedo software. When this parameter is specified, the data type and subtype are verified.
- The INRECTYPE parameter describes the input record that is sent to the service on the remote system.
- The OUTRECTYPE parameter describes the output record that is received from the service on the remote system.
- The OUTBUFTYPE parameter describes the BEA Tuxedo output buffer that is returned to the local client program.

The following sections provide detailed information explaining how to use the INBUFTYPE and INRECTYPE parameters for service calls that originate locally (where local client programs call remote services).
The INBUFTYPE parameter is used to specify the request buffer type that is provided to a local TMA OSI TP gateway when a local client program issues a service request. Tuxedo uses this information to restrict the buffer type from the local client to only the types defined by the INBUFTYPE parameter.
The INRECTYPE parameter is used to specify the type, and in some cases the format, of the request record that a particular remote service requires. The TMA OSI TP gateway uses this information to convert BEA Tuxedo request buffers into records that remote services can process.

The INRECTYPE parameter may be omitted if the request buffer is identical, in type and structure, to the request record the remote service expects.

You must specify the INRECTYPE parameter when one of the cases described in the following table is true.
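As a concrete sketch of such a conversion (service, domain, and view names are hypothetical), a local client's VIEW request buffer can be converted to the X_C_TYPE record a remote service expects:

```text
*DM_REMOTE_SERVICES
RSVC    RDOM=REMDOM
        INBUFTYPE="VIEW:empv"       # what the local client sends
        INRECTYPE="X_C_TYPE:empv"   # what the remote service requires
```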
The following sections provide detailed information explaining how to use the OUTRECTYPE and OUTBUFTYPE parameters for service calls that originate locally (where local client programs call remote services and receive output from those services).
The OUTBUFTYPE parameter is used to specify the type, and in some cases the structure, of the reply buffer that a local client program expects. The TMA OSI TP gateway uses this information to map reply records from remote services to the appropriate kinds of reply buffers; the incoming record is mapped to the type and subtype defined by the OUTBUFTYPE parameter.

The OUTRECTYPE parameter is used to specify the type, and in some cases the format, of the reply record that a particular remote service returns to the local TMA OSI TP gateway.
The OUTBUFTYPE parameter may be omitted if the remote service returns a reply record that is identical, in type and structure, to the reply buffer the local client program expects.

You must specify the OUTBUFTYPE parameter when one of the cases described in the following table is true.
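For the reply path of the same locally originated call, a sketch might look as follows (names are hypothetical):

```text
*DM_REMOTE_SERVICES
RSVC    RDOM=REMDOM
        OUTRECTYPE="X_COMMON:repv"  # reply record arriving from the remote service
        OUTBUFTYPE="VIEW:repv"      # reply buffer the local client expects
```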
This section describes how TMA OSI TP handles service calls that originate on remote computers, outside the local BEA Tuxedo region. It also explains how the INRECTYPE, INBUFTYPE, OUTBUFTYPE, and OUTRECTYPE parameters can be used to manage the conversion of buffers and records that flow between remote client programs and local services.
In the following figure, a remote client program issues a service request that a remote TMA OSI TP gateway routes to the local TMA OSI TP gateway. The gateway receives the request from the network and passes the request to a local BEA Tuxedo server.
Figure 2-4 How Parameters Are Mapped During Remotely Originated Calls
In this situation, the four configuration parameters that are shown in the figure have the following meanings:

- The OUTRECTYPE parameter describes the record that the remote client sends to the TMA OSI TP gateway.
- The OUTBUFTYPE parameter describes the BEA Tuxedo buffer that is provided to the local server.
- The INBUFTYPE parameter describes the BEA Tuxedo buffer that the local server returns to the TMA OSI TP gateway.
- The INRECTYPE parameter describes the record that the local TMA OSI TP gateway returns to the remote client program.

This section provides detailed information explaining how to use the INRECTYPE and INBUFTYPE parameters for service calls that originate on remote systems (where remote client programs call local services).
The INBUFTYPE parameter is used to specify the type, and in some cases the structure, of the reply buffer that the TMA OSI TP gateway expects from a local server. This restricts the buffer types accepted by this service to a single buffer type. Because the gateway determines the type of buffer automatically at runtime, this parameter is described here for conceptual completeness only.
The INRECTYPE parameter is used to specify the type, and in some cases the format, of the reply record that the local TMA OSI TP gateway sends to the remote client. You can omit the INRECTYPE parameter if the local server program sends a reply buffer that is identical in type and structure to the reply record the remote client expects.

You must specify the INRECTYPE parameter when one of the cases described in the following table is true.
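For a remotely originated call, the equivalent reply-path sketch goes in DM_LOCAL_SERVICES (service and view names are hypothetical):

```text
*DM_LOCAL_SERVICES
LSVC    INBUFTYPE="VIEW:repv"       # reply buffer returned by the local server
        INRECTYPE="X_C_TYPE:repv"   # reply record sent back to the remote client
```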
This section provides detailed information explaining how to use the OUTBUFTYPE and OUTRECTYPE parameters for service calls that originate on remote computers (where remote client programs call local services and receive output from those services).
The OUTBUFTYPE parameter specifies the request buffer type that the local TMA OSI TP gateway provides to the local server. The TMA OSI TP gateway uses this information to convert request records from remote clients into buffers that local server programs can process.

The OUTRECTYPE parameter is used to specify the type, and in some cases the format, of the request record a particular remote client program sends to the TMA OSI TP gateway. The TMA OSI TP gateway maps the incoming record to the type and subtype defined by the OUTRECTYPE parameter.
The OUTBUFTYPE parameter may be omitted if the local service's request buffer is identical, in type and structure, to the request record the remote client program provides.

You must specify the OUTBUFTYPE parameter when one of the cases described in the following table is true.
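The request path of a remotely originated call can be sketched in the same way (names are hypothetical):

```text
*DM_LOCAL_SERVICES
LSVC    OUTRECTYPE="X_COMMON:reqv"  # request record arriving from the remote client
        OUTBUFTYPE="VIEW:reqv"      # request buffer handed to the local server
```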
The following figure shows all the possibilities for mapping buffers to records. The TMA OSI TP gateway is responsible for mapping buffers to records based on information it finds in the TMA OSI TP configuration. This mapping occurs for Tuxedo client requests and Tuxedo server responses.
Figure 2-5 Buffer to Record Mappings
Following are explanations of the mapping possibilities shown in the figure above and some suggestions for setting the INRECTYPE parameter. The INBUFTYPE parameter is only used for verification purposes and is not discussed here.
- CARRAY input buffers can be copied to X_OCTET input records. A CARRAY buffer contains raw data that is not converted or translated. The TMA OSI TP gateway automatically sends the CARRAY Tuxedo buffer as X_OCTET to the remote system; there is no need to set the INRECTYPE parameter.
- STRING input buffers can be mapped to X_OCTET input records. No data conversion or translation is performed. The STRING buffer is copied left to right, up to and including the first NULL character encountered. However, before sending the data, the gateway removes the NULL terminator. The TMA OSI TP gateway automatically sends the STRING buffer as X_OCTET; there is no need to set the INRECTYPE parameter.
- XML input buffers can be mapped to X_OCTET input records. The XML buffer is copied to an X_OCTET buffer. There is no translation or conversion performed on the data. This can be useful when passing XML data to systems that do not support XML data types.
- VIEW input buffers can be mapped to X_COMMON or X_C_TYPE input records. Specify the desired record type and the name of the VIEW definition with the INRECTYPE parameter. The TMA OSI TP gateway translates the VIEW into the correct X_COMMON or X_C_TYPE input record.
- VIEW, X_COMMON, or X_C_TYPE input buffers can be mapped to X_COMMON or X_C_TYPE input records, in any combination. However, in this situation, the data structure that the remote service expects (designated as X_COMMON 'B' in Figure 2-5) differs from the data structure the client program uses (designated as VIEW 'A' in Figure 2-5). Consequently, you must specify the record type, X_C_TYPE or X_COMMON. Also, you must create a VIEW definition for the input data structure that the remote service expects, and set the INRECTYPE parameter to that record type and view name. Refer to the BEA Tuxedo online documentation for more detailed information about FML translation.

Note: If the source and target VIEW names are different, FML fields must be specified for all VIEW-to-VIEW conversions that TMA OSI TP performs (for example, VIEW=V10.V --> X_C_TYPE=V10.V does not require FML mapping fields). In other words, any VIEW that is to be used in a VIEW-to-different-VIEW, VIEW-to-FML, or FML-to-VIEW conversion must be defined with appropriate FML fields (no dashes in the FBNAME column of the VIEW definition). In order for the FML fields to match, you must compile the VIEWs without the -n option specified.
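The FBNAME requirement above can be illustrated with a small view definition. The view name, member names, and FML field names below are hypothetical, and the FML field names in the FBNAME column must also exist in an FML field table:

```text
VIEW v10
#TYPE    CNAME    FBNAME    COUNT   FLAG    SIZE    NULL
long     empid    EMPID     1       -       -       -
string   name     NAME      1       -       32      -
END
```

Compiling this file with the Tuxedo view compiler, viewc, without the -n option preserves the FML field mapping so the gateway can perform VIEW-to-FML and cross-VIEW conversions.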
The following figure shows all the possibilities for mapping records to buffers. The TMA OSI TP gateway is responsible for mapping records to buffers, based on information it finds in the TMA OSI TP configuration. This mapping occurs for remote client requests and remote server responses.
Figure 2-6 Record to Buffer Mappings
Following are explanations of the mapping possibilities shown in the figure above and some suggestions for setting the OUTBUFTYPE parameter (for service calls that originate locally). These suggestions use the OUTBUFTYPE parameter, which controls data translation.
- X_OCTET output records can be copied to CARRAY or X_OCTET output buffers. A CARRAY buffer contains raw data that is not converted or translated. Set the OUTBUFTYPE parameter to either CARRAY or X_OCTET; the OUTRECTYPE parameter does not need to be set.
- X_OCTET output records can also be copied to STRING output buffers. This creates a string that goes through no conversion and no translation. When going from X_OCTET to STRING, a NULL value is added at the end for the Tuxedo application. The resultant buffer is the length of the original X_OCTET buffer. Since all characters are copied, if the X_OCTET buffer contains null characters, they affect the buffer when it is later handled as a STRING. The OUTBUFTYPE parameter should be set to STRING.
- X_OCTET output records can be mapped to XML output buffers. No translation or conversion is performed on the data. This can be useful when passing XML buffers from systems that do not support XML buffer types to a Tuxedo domain that does support XML data types.
- X_C_TYPE and X_COMMON output records can be mapped to VIEW output buffers. In this situation, the data structure that the remote service returns is identical to the data structure the local client program expects. There is no need to create a new VIEW definition. The OUTRECTYPE parameter can be set to VIEW:viewname, for greater type checking, but it is not mandatory.
- X_C_TYPE and X_COMMON output records can be mapped to VIEW output buffers, in any combination. However, in this situation, the data structure that the remote service returns (designated as X_C_TYPE 'B' in Figure 2-6) differs from the data structure the client program expects (designated as VIEW 'A' in Figure 2-6). To facilitate the conversion process, perform the following tasks:
  - Identify the X_C_TYPE (or X_COMMON) record 'B' with the OUTRECTYPE parameter. (By doing this, you override the value the TMA OSI TP requester automatically detects.)
  - Identify the VIEW 'A' with the OUTBUFTYPE parameter.

  Note: FML field definitions may be required to map VIEW 'B' to VIEW 'A'.

- X_COMMON or X_C_TYPE output records can be mapped to FML output buffers. To facilitate the conversion process, you must perform the following tasks:
  - Identify the record type with the OUTRECTYPE parameter. (By doing this, you override the value the TMA OSI TP requester automatically detects.)
  - Set OUTBUFTYPE to FML or FML32 in the DMCONFIG file, followed by a colon (:). (Example: OUTBUFTYPE="FML32:")

  Note: FML fields must be specified for all FML-to-VIEW conversions that TMA OSI TP performs. In other words, any VIEW that is to be used in an FML-to-VIEW conversion must be defined with appropriate FML fields (no dashes in the FBNAME column of the VIEW definition). In order for the FML fields to match, you must compile the VIEWs without the -n option specified.
Following are some examples of special cases and considerations for buffer conversion.

- If the source type (INBUFTYPE or OUTRECTYPE) is STRING, CARRAY, X_OCTET, or XML, the conversion type must be STRING, CARRAY, X_OCTET, or XML.
- If INBUFTYPE = "FML" or "FML32", or if the buffer being sent to a remote service is of type "FML" or "FML32", then:
  - INRECTYPE and OUTRECTYPE cannot be "FML" or "FML32".
  - For INBUFTYPE and OUTBUFTYPE, configure buffer type FML as follows: INBUFTYPE="FML:" OUTBUFTYPE="FML32:"

  Note: A colon is required at the end of the keywords FML and FML32.

- INBUFTYPE and OUTRECTYPE must be configured to the same type/subtype as the incoming buffer type/subtype.
- The dec_t type cannot be used in views participating in a conversion when either the source or the destination buffer is FML or FML32, or when the source and destination are two different views.
- The dec_t type cannot be used in views participating in conversion to NATIVE A buffers.
- Replace fields of type int with fields of type long or short.
- Specify "NONE" in the default Null column of the source View file.

In the following example, incoming buffer X_COMMON:v10 gets converted to VIEW:v12 before the request is sent to the service.
Listing 2-1 Conversion to View32
*DM_REMOTE_SERVICES
TOUPPER12
RDOM=DALNT2
LDOM=DALNT19220
PRIO=66
RNAME="TOUPPER12"
INBUFTYPE="X_COMMON:v10"
INRECTYPE="VIEW:v12"
In the following example, incoming buffer VIEW:v12 gets converted to FML before the request is sent to the local service, and FML gets converted to X_C_TYPE:v16 before the reply is returned to the remote client.
*DM_LOCAL_SERVICES
OUTRECTYPE="VIEW:v12"
OUTBUFTYPE="FML:"
INBUFTYPE="FML:"
INRECTYPE="X_C_TYPE:v16"
For more information about FML, refer to the BEA Tuxedo Online Documentation at http://download.oracle.com/docs/cd/E13203_01/tuxedo.
BEA TMA OSI TP supports Tuxedo XML buffer types. XML buffers are treated like X_OCTET buffers because they are only passed through to the network. No data conversion is performed on the data.
XML buffers are passed to the remote domain as an XML data type unless converted to X_OCTET. If the remote domain does not support the XML data type, a decoding error occurs on the remote domain.