2 Using Batch Runtime
This chapter contains the following topics:
- Configuration Files
- Setting Environment Variables
- Configuring Batch Runtime in MP Mode
- Creating a Script
- Controlling Script Behavior
- Different Behaviors from z/OS
- Using Files
- Submitting a Job Using INTRDR Facility
- Submitting a Job With EJR
- User-Defined Entry/Exit
- Batch Runtime Logging
- Using Batch Runtime With a Job Scheduler
- Executing an SQL Request
- Simple Application on COBOL-IT / BDB
- Native JCL Job Execution
- Native JCL Test Mode
- Network Job Entry (NJE) Support
- File Catalog Support
- Launching REXX EXECs
- COBOL Program Access to Oracle and TimesTen Database
2.1 Configuration Files
The configuration files are located in the CONF directory of Batch Runtime.
2.1.1 BatchRT.conf
This file contains variable definitions.
These variables must be set before using Batch Runtime.
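The exact contents of BatchRT.conf depend on the installation; as a minimal sketch (assuming the file is sourced as a Korn shell fragment), entries are plain variable assignments. All paths below are placeholders; the variable names are the ones documented later in this chapter.

```shell
# Hypothetical excerpt from CONF/BatchRT.conf -- paths are placeholders;
# the variable names are those documented in the tables of this chapter.
export MT_ROOT=/opt/art/Batch_RT        # Batch Runtime install directory
export MT_LOG=${MT_ROOT}/log            # logs directory (without TuxJES)
export MT_TMP=/tmp/batchrt              # temporary internal files
export DATA=/data/app                   # permanent application files
export TMP=/tmp/app                     # temporary application files
```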
2.1.2 Messages.conf
This file contains the messages used by Batch Runtime.
The messages may be translated into a local language.
2.1.3 FunctionReturnCode.conf
This file contains internal codes associated with a message.
2.1.4 ReturnCode.conf
This file contains return codes associated with a message and returned to the KSH script.
2.1.5 Writer.conf
This file contains usage information and samples for writers.
Users can add their own writers to this file.
2.2 Setting Environment Variables
Some variables (such as ORACLE_SID, COBDIR, LIBPATH, COBPATH, …) are shared between different components and are not described in this document. For more information, see the Rehosting Workbench Installation Guide.
2.2.1 Environment Variables for EJR
The following table lists the environment variables that are used in the KSH scripts and must be defined before using the software.
Table 2-1 KSH Script Environment Variables
Variable | Usage |
---|---|
DATA | Directory for permanent files. |
TMP | Directory for temporary application files. |
SYSIN | Directory where the sysin files are stored. |
MT_JOB_NAME | Name of the job, managed by the Batch Runtime. |
MT_JOB_PID | PID (process id) of the job, managed by the Batch Runtime. |
Note:
For DATA and TMP, the full path can only contain [a-zA-Z0-9_/.].
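For instance, a job environment could be prepared as follows; the directory names are illustrative only, and the check at the end verifies that they respect the character restriction above.

```shell
# Illustrative setup for the EJR variables of Table 2-1; the paths are
# placeholders and must respect the [a-zA-Z0-9_/.] restriction.
export DATA=/data/permanent
export TMP=/data/tmp
export SYSIN=/data/sysin

# Sanity check: reject any character outside the allowed set.
for v in "${DATA}" "${TMP}" ; do
  case "${v}" in
    *[!a-zA-Z0-9_/.]*) echo "invalid path: ${v}" ;;
    *) echo "path OK: ${v}" ;;
  esac
done
```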
Table 2-2 Oracle Tuxedo Application Runtime for Batch Environment Variables
Variable | Usage |
---|---|
JESDIR
|
Directory where TuxJES is installed. |
PROCLIB
|
Directory for PROC and INCLUDE files, used during the conversion phase. |
MT_ACC_FILEPATH
|
File concurrency access directory that contains the files AccLock and AccWait. These files must be created empty before running Batch Runtime. |
MT_COBOL
|
Depending on the used COBOL, must contain:
|
MT_CTL_FILES
|
Directory where the control files (CTL) used by the function m_DBTableLoad are stored (sqlldr with ORACLE, load and export with UDB). |
MT_DSNUTILB_LOADUNLOAD
|
Indicates the working mode of When it is set to " When it is set to other values than "yes", " |
MT_DB_DEFAULT_SCHEMA
|
Indicates the default schema for the database when MT_DSNUTILB_LOADUNLOAD is set to "yes". The default value is "DEFSCHEMA". This variable is used to specify the schema for the COBOL programs "schema-table-L" and "schema-table-U".
|
MT_DB
|
Depending on the target data base, must contain:
|
MT_DB_LOGIN
|
Database connection user. |
MT_FROM_ADDRESS_MAIL
|
From-Address used by the function m_SendMail when the option “-f” is omitted. |
MT_FTP_TEST
|
Variable used by the function m_Ftp to perform the transfer or not (test mode). |
MT_GENERATION
|
A mandatory environment variable that indicates the directory used by the GDG technical functions.
The default is directory GENERATION_FILE. To manage GDG files in a database, you need to set the value to GENERATION_FILE_DB and configure MT_GDG_DB_ACCESS appropriately. If the value is specified as NULL or with an incorrect directory name, an error occurs when this environment variable is used.
|
MT_KSH
|
Path of the used " Note:
|
MT_LOG
|
Logs directory (without TuxJes). |
MT_ROOT
|
Directory where the Batch Runtime application has been installed. (See the BatchRT.conf configuration file) |
MT_SMTP_PORT
|
Port used by the functions m_Smtp and m_SendMail (25 by default). |
MT_SMTP_SERVER
|
Server used by the functions m_Smtp and m_SendMail (localhost by default). |
MT_SORT
|
Depending on the used SORT, must contain:
- “SORT_MicroFocus” for Micro Focus Sort Utility
- “SORT_SyncSort” for SyncSort Sort Utility
- “SORT_CIT” for citsort utility |
MT_SYSOUT |
Sysout directory (without TuxJes). |
MT_TMP
|
Directory for temporary internal files |
MT_EXCI
|
EXCI Interface (Default is Oracle Tuxedo). |
MT_JESDECRYPT
|
MT_JESDECRYPT must be set to jesdecrypt object file.
|
MT_EXCI_XA
|
Name of the resource manager for |
MT_EXCIGRPNAME
|
|
See Also:
For more information, see BatchRT.conf.
Note:
For the following environment variables, the full path can only contain [a-zA-Z0-9_/.]:
JESDIR
MT_KSH
MT_LOG
MT_REFINEDIR
MT_SYSOUT
MT_TMP
The following table lists optional environment variables used by Batch Runtime:
Table 2-3 Oracle Tuxedo Application Runtime for Batch Environment Variables (Optional)
Variable | Usage |
---|---|
MT_ACC_WAIT
|
Retry interval (seconds) to acquire file lock when a job tries to access a file that is locked by other jobs. |
MT_ACC_MAXWAIT
|
Maximum wait time (seconds) to acquire a file lock. If the lock is not acquired within this time, the relevant file operation fails. |
MT_CATALOG_DB_LOGIN
|
Variable used with valid database login information to access the database file catalog. Its format is the same as MT_DB_LOGIN.
Note: It takes precedence over MT_DB_LOGIN when accessing the file catalog. If the file catalog database is the same as the data database, configuring MT_DB_LOGIN only is sufficient; otherwise, both must be configured.
|
MT_CLEANUP_EMPTY_SYSOUT
|
Controls whether empty SYSOUT files are cleaned up at the end of job execution.
The default value is |
MT_CONFIG_FILE
|
A variable used to specify a configuration file to use instead of the default configuration file BatchRT.conf under "ejr/CONF".
|
MT_CPU_MON_STEP
|
A variable used to enable CPU time usage monitoring of steps for all jobs. Set MT_CPU_MON_STEP=yes to enable it.
If |
MT_DB_LOGIN2
|
Used with valid database login information to access database. If MT_DB_LOGIN2 has a non-null value, BatchRT uses runb2 (which supports parallel Oracle and DB2 access). |
MT_DB_SQL_PREPROCESS
|
Specifies which DB preprocessor is executed before SQL is executed. The built-in DB2-to-Oracle SQL Converter is "${JESDIR}/tools/sql/oracle/BatchSQLConverter.sh ".
|
MT_DB2_SYSTEM_MAPPING
|
Specifies a full file path. The file is used to store the mapping from " <DB SYSTEM NAME>:<DB TYPE>:<Connection String>
This file is accessed when " Note: When "DB SYSTEM " is specified in a nested way, only the outer setting takes effect. For example, in the following case, only "ORA01 " takes effect (and "ORA02 " is ignored).
|
MT_DSNTIAUL
|
If it is configured to "Y", Batch Runtime provides the DSNTIAUL utility to unload data from Oracle Database tables. This utility has the same functionality as the DSNTIAUL utility on mainframe with DB2. If it is configured to "N" or if it is not configured, Batch Runtime executes the SQL statement and writes the output to a specific file in plain text. The default value is "Y".
|
MT_EJRLOG
|
If it is configured to "Y ", BatchRT generates an EJR log file and writes every phase's log to it. If it is configured to "N ", BatchRT does not generate the EJR log file. The default value is "Y ".
|
MT_EXCI_PGM_LIST
|
A list of executable programs. The programs are invoked by runbexci instead of runb. For each program in this list, whether or not -n is specified by m_ProgramExec, the program is invoked only by runbexci.
The default value is empty. Programs are separated by commas. For example:
|
MT_FTP_PASS
|
Sets the ftp password stored in the JES security profile and used at runtime. |
MT_GDG_DB_ACCESS
|
A variable used with valid database login information to access Oracle Database for GDG management. For example, user/password@sid. Mandatory if MT_GENERATION is set to GENERATION_FILE_DB .
|
MT_GDG_DB_BUNCH_OPERATION
|
If configured to "Y ", the GDG changes are committed using a single database access.
If configured to " |
MT_GDG_USEDCB
|
A variable used to enable the DCB support function for GDG.
|
MT_META_DB
|
The database used for the file catalog and GDG meta data. The default is null
If |
MT_REFINEDIR
|
The full install path of Workbench refine , which will be invoked to convert a JCL job to a KSH job. For example:
|
MT_REFINEDISTRIB
|
The value of environment variable REFINEDISTRIB , which is used when Workbench converts a JCL job. For example:
|
MT_RUNB_SIGNAL_TO_TRAP
|
Contains all signals, caused by the running user application, that Batch Runtime will handle. The default value is all the supported signals. For example:
|
MT_SYS_IO_REDIRECT
|
In BatchRT.conf this item is used to make runb redirect SYSIN and SYSOUT for COBOL program run by m_ProgramExec .
|
MT_SYSLOG
|
In EJR mode, if it is configured to "Y ", BatchRT generates a SYSLOG file. If it is configured to "N ", BatchRT does not generate the SYSLOG file. The default value is "Y ".
|
MT_SYSLOG_MILLISECOND
|
If it is configured to "Y", hour, minute, second, and millisecond are used for Step Start Time and Step End Time in the SYSLOG file. If it is configured to "N", only hour, minute, and second are used (millisecond is not used). The default value is "N".
|
MT_USERENTRYEXIT
|
Controls whether user entry/exit function is enabled or not.
The default value is |
MT_UTILITY_LIST_UNSUPPORT
|
A list of executable programs that do not exist but that should not cause jobs to fail. When m_ProgramExec invokes a nonexistent program, the job continues if that program is specified in this list. For example:
|
MT_WB_HOSTNAME
|
The host name (or IP address) of the machine where Workbench is installed, which is invoked to convert a JCL job to a KSH job. The value of MT_WB_HOSTNAME is null if Workbench is on the localhost. A user name can optionally be added. For example:
|
MT_SORT_BY_EBCDIC
|
If configured to " If configured to " The default value is " |
MT_SIGN_EBCDIC
|
If configured to "Y ", for numeric DISPLAY items with included signs, the signs are to be interpreted according to the EBCDIC convention.
If configured to " The default value is " |
MT_PROG_RC_ABORT
|
This environment variable controls dataset termination operations. Any return code greater than or equal to MT_PROG_RC_ABORT is considered an abort; any code less than MT_PROG_RC_ABORT is considered a commit.
The default value is 1. |
2.2.2 Environment Variables for Native JCL
The following table lists the environment variables that are used by Native JCL Batch Runtime and must be defined before using the software:
Table 2-4 Oracle Tuxedo Application Runtime for Native JCL Batch Environment Variables
Variable | Usage |
---|---|
JESDIR | Directory where TuxJES is installed. |
DATA | Directory for permanent files. |
TMP | Directory for temporary application files. |
PROCLIB | Directory used for search of PROC and INCLUDE files. |
MT_ACC_FILEPATH | File concurrency access directory that contains the files AccLock and AccWait. These files must be created empty before you run Batch Runtime. |
MT_COBOL | Depending on the used COBOL, must contain: |
MT_DB | Depending on the target data base, must contain: |
MT_DB_LOGIN | Database connection information. |
MT_LOG | Logs directory. |
MT_TMP | Directory for temporary internal files. |
The following table lists optional environment variables used by Native JCL Batch Runtime.
Table 2-5 Oracle Tuxedo Application Runtime for Native JCL Batch Environment Variables (Optional)
Variable | Usage |
---|---|
DSNUTILB_PARALLEL_NUM
|
Sets the number of threads used to insert records into tables in the load process of the DSNUTILB utility. The default value is 5.
|
JESTRACE
|
Controls the log output level. Its value can be one of the following: ERROR, WARN, INFO, DEBUG, and DUMP. The default value is INFO.
|
MT_VOLUME_DEFAULT
|
When MT_VOLUME_DEFAULT is set to a non-empty value, the catalog feature is enabled. It is used as the volume value if no volume is specified when a new dataset is created. If MT_VOLUME_DEFAULT is not set, the catalog feature is disabled.
|
MT_E2A_FILE
|
Identifies a customer-specified EBCDIC-to-ASCII mapping table file.
|
MT_ACC_WAIT
|
Retry interval (seconds) to acquire a file lock when a job tries to access a file locked by another job. The default value is 5. |
MT_ACC_MAXWAIT
|
Maximum wait time (seconds) to acquire a file lock. If the lock is not acquired within this time, the relevant file operation fails. The default value is 0. |
MT_DB_LOGIN2
|
Used with valid database login information to access database. If MT_DB_LOGIN2 has a non-null value, BatchRT uses parallel Oracle and DB2 access.
|
MT_DB2_SYSTEM_MAPPING
|
Specifies a full-path file name. The file is used to store the mapping from "DB SYSTEM " to "DB connection credential string". The file format is:
When |
MT_DSNTIAUL
|
If it is configured to "Y" or if it is not configured, Batch Runtime provides the DSNTIAUL utility to unload data from Oracle Database tables. This utility has the same functionality as the DSNTIAUL utility on mainframe with DB2. If it is configured to "N", Batch Runtime executes the SQL statement and writes the output to a specific file in plain text. The default value is "Y".
|
MT_SORT
|
Depending on the used SORT. It must contain
If not specified, it depends on
|
MT_SIGN_EBCDIC
|
Identifies how numeric DISPLAY items with included signs are interpreted:
|
MT_SORT_BY_EBCDIC
|
Identifies how record-sequential ASCII files are sorted:
|
MT_RUNB_SIGNAL_TO_TRAP
|
Lists all signals, caused by the running user application, that BatchRT will handle.
Values are separated by spaces, for example:
|
MT_SYS_IO_REDIRECT
|
Identifies whether SYSIN and SYSOUT are redirected for utilities that ARTCOBRUN runs.
The default value is |
MT_PROG_RC_ABORT
|
Any return code greater than or equal to MT_PROG_RC_ABORT is considered an abort; any code less than MT_PROG_RC_ABORT is considered a commit.
The default value is 1. |
2.3 Configuring Batch Runtime in MP Mode
Batch Runtime (EJR) must be specially configured to work correctly in MP mode if users want either to use EJR to run jobs that may share resources (normally files) from different machines, or to configure an MP mode TuxJES domain and submit jobs from any node through the utility provided by TuxJES.
In the latter case, a job submitted from node A may be run by node B, and the execution sequence is totally random. Similarly, jobs submitted from different nodes may share resources.
This section clarifies the details of configuring Batch Runtime (EJR) to support MP mode.
- All the resources should be put on shared storage (NFS), which should have the same mount point on all machines in the domain, to ensure any file has the same path from the view of each node, because any job submitted from one machine may be run by another machine. For example, if users prefer to store all files under the environment variable DATA described in the above section, ${DATA} should point to the shared root directory where the files are located and have the same value on all machines.
- MT_ACC_FILEPATH should be located on shared storage (NFS), which should have the same mount point on all machines in the domain, since the control files for file locking are put in this directory. In addition, users need to make sure the AccLock and AccWait files under this directory can be read and written by the effective user of the process running the jobs.
- NLM (Network Lock Manager) needs to be enabled on the NFS server and on all machines in the domain, since some shared resources located on NFS need to be locked to prevent jobs from corrupting them. This configuration is not directly related to Batch Runtime but is closely related to MP mode.
- The ARTJESADM server should be configured and started on each node in the MP domain so that other nodes can check whether a job on a given node is running. This is part of the file lock mechanism in Batch Runtime. If either the ARTJESADM server on a node or the node itself dies abnormally, the file lock owned by the job running on that node is not released automatically; in this case, the utility artjescleanlock can be used to release the inactive file lock. For details of artjescleanlock, see Using Tuxedo Job Enqueueing Service (TuxJES).
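A pre-flight check along these lines can catch the most common misconfiguration, a shared directory that is not visible under the same path on every node. The default paths below are placeholders; the script only reports problems, it is not part of the product.

```shell
# Sketch of an MP-mode sanity check; DATA and MT_ACC_FILEPATH are assumed
# to point at the shared NFS locations described above (placeholders here).
DATA=${DATA:-/shared/data}
MT_ACC_FILEPATH=${MT_ACC_FILEPATH:-/shared/acc}

# The shared directories must be visible on this node.
for d in "${DATA}" "${MT_ACC_FILEPATH}" ; do
  [ -d "${d}" ] || echo "WARNING: shared directory not visible: ${d}"
done
# The empty lock files must already exist and be accessible.
for f in AccLock AccWait ; do
  [ -f "${MT_ACC_FILEPATH}/${f}" ] || echo "WARNING: missing lock file: ${f}"
done
```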
2.4 Creating a Script
This section contains the following topics:
- General Structure of a Script
- Script Example
- Defining and Using Symbols
- Creating a Step That Executes a Program
- Application Program Abend Execution
- Creating a Procedure
- Using a Procedure
- Modifying a Procedure at Execution Time
2.4.1 General Structure of a Script
Oracle Tuxedo Application Runtime for Batch normalizes Korn shell script formats by proposing a script model where the different execution phases of a job are clearly identified.
Oracle Tuxedo Application Runtime for Batch scripts respect a specific format that allows the definition and the chaining of the different phases of the KSH (JOB).
Within Batch Runtime, a phase corresponds to an activity or a step on the source system.
A phase is identified by a label and delimited by the next phase.
At the end of each phase, the JUMP_LABEL variable is updated to give the label of the next phase to be executed.
In the following example, the last functional phase sets JUMP_LABEL to END_JOB: this label allows a normal termination of the job (exits from the phase loop).
The mandatory parts of the script (the beginning and end parts) are shown in bold and the functional part of the script (the middle part) in normal style as shown in the table below. The optional part of the script must contain the labels, branching and end of steps as described below. The items of the script to be modified are shown in italics.
Table 2-6 Script Structure
Script | Description |
---|---|
#!/usr/bin/ksh | - |
m_JobBegin -j JOBNAME -s START -v 2.00 | m_JobBegin is mandatory and must contain at least the following options: |
while true ;do | The "while true; do" loop provides a mechanism to simulate the movement from one step to the next. |
m_PhaseBegin | m_PhaseBegin enables parameters to be initialized at the beginning of a step. |
case ${CURRENT_LABEL} in | The case statement enables a branching to the current step. |
(START) | The start label (used in the -s option of m_JobBegin). |
JUMP_LABEL=STEP1 | JUMP_LABEL is mandatory in all steps and gives the name of the next step. |
;; | ;; ends a step and is mandatory. |
(STEP1) | A functional step begins with (LABEL); where LABEL is the name of the step. |
m_* | A typical step continues with a series of calls to Batch Runtime functions. |
JUMP_LABEL=STEP2 | There is always a branching to the next step (JUMP_LABEL=). |
;; | And always the ;; at the end of each step. |
(PENULTIMATESTEP) | - |
m_* | The last functional step has the same format as the others, except… |
JUMP_LABEL=END_JOB | For the label, which must point to END_JOB. The _ is necessary, because the character is forbidden on z/OS. |
(END_JOB) | This step enables the processing loop to be broken. |
(*) | This is a catch-all step that picks up branching to unknown steps. |
m_PhaseEnd done | m_PhaseEnd manages the end of a step, including file management depending on disposition and return codes. |
m_JobEnd | m_JobEnd manages the end of a job, including clearing up temporary files and returning the completion code to the job caller. |
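Stripped of the m_* runtime calls (replaced here by echo stand-ins, for illustration only), the skeleton above boils down to this self-contained, runnable loop:

```shell
#!/bin/ksh
# Minimal sketch of the phase-loop mechanism. Real scripts call
# m_JobBegin/m_PhaseBegin/m_PhaseEnd/m_JobEnd; plain echoes stand in here.
JUMP_LABEL=START
while true ; do
  CURRENT_LABEL=${JUMP_LABEL}
  case ${CURRENT_LABEL} in
  (START)
    echo "entering START"
    JUMP_LABEL=STEP1          # branch to the next step
    ;;
  (STEP1)
    echo "entering STEP1"
    JUMP_LABEL=END_JOB        # last functional step points to END_JOB
    ;;
  (END_JOB)
    break                     # normal termination: leave the phase loop
    ;;
  (*)
    echo "Unknown label: ${JUMP_LABEL}"
    break                     # catch-all for branching to unknown steps
    ;;
  esac
done
echo "loop left at ${CURRENT_LABEL}"
```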
2.4.2 Script Example
Listing 2‑1 Korn shell Script Example
#!/bin/ksh
#@(#)--------------------------------------------------------------
#@(#)-
m_JobBegin -j METAW01D -s START -v 2.00 -c A
while true ;
do
m_PhaseBegin
case ${CURRENT_LABEL} in
(START)
# -----------------------------------------------------------------
# 1) 1st Step: DELVCUST
# Delete the existing file.
# 2) 2nd Step: DEFVCUST
# Allocates the Simple Sample Application VSAM customers file
# -----------------------------------------------------------------
#
# -Step 1: Delete...
JUMP_LABEL=DELVCUST
;;
(DELVCUST)
m_FileAssign -d OLD FDEL ${DATA}/METAW00.VSAM.CUSTOMER
m_FileDelete ${DD_FDEL}
m_RcSet 0
#
# -Step 2: Define...
JUMP_LABEL=DEFVCUST
;;
(DEFVCUST)
# IDCAMS DEFINE CLUSTER IDX
m_FileBuild -t IDX -r 266 -k 1+6 ${DATA}/METAW00.VSAM.CUSTOMER
JUMP_LABEL=END_JOB
;;
(ABORT)
break
;;
(END_JOB)
break
;;
(*)
m_RcSet ${MT_RC_ABORT} "Unknown label : ${JUMP_LABEL}"
break
;;
esac
m_PhaseEnd
done
m_JobEnd
#@(#)--------------------------------------------------------------
2.4.3 Defining and Using Symbols
To define a symbol, use the m_SymbolSet function as shown in the listing below. To use a symbol, use the following syntax: $[symbol]
Note:
The use of brackets ([]) instead of braces ({}) is to clearly distinguish symbols from standard Korn shell variables.
(STEP00)
m_SymbolSet VAR=40
JUMP_LABEL=STEP01
;;
(STEP01)
m_FileAssign -d SHR FILE01 ${DATA}/PJ01DDD.BT.QSAM.KBSTO0$[VAR]
m_ProgramExec BAI001
2.4.4 Creating a Step That Executes a Program
A step (also called a phase) is generally a coherent set of calls to Batch Runtime functions that enables the execution of a functional (or technical) activity.
The most frequent steps are those that execute an application or utility program. These kinds of steps are generally composed of one or several file assignment operations followed by the execution of the desired program. All the file assignment operations must precede the program execution operation, as shown in the listing below.
Listing 2‑3 Application Program Execution Step Example
(STEPPR15)
m_FileAssign -d SHR INFIL ${DATA}/PJ01DDD.BT.QSAM.KBPRO099
m_FileAssign -d MOD OUTFIL ${DATA}/PJ01DDD.BT.QSAM.KBPRO001
m_OutputAssign -c “*” SYSOUT
m_FileAssign -i LOGIN
IN-STREAM DATA
_end
m_FileAssign -d MOD LOGOUT ${DATA}/PJ01DDD.BT.QSAM.KBPRO091
m_ProgramExec BPRAB001 "20071120"
JUMP_LABEL=END_JOB
;;
2.4.5 Application Program Abend Execution
The ABEND routines ILBOABN0, CEE3ABD, and ART3ABD can be called from a running program to force it to abort and return the abend code to the KSH script. For example, ILBOABN0 is supplied as both source and a binary gnt file. It can be directly called by any user-defined COBOL program.
(STEPPR15)
m_ProgramExec USER
JUMP_LABEL=END_JOB
;;
Listing 2‑5 USER.cbl Example
PROCEDURE DIVISION.
PROGRAM-BEGIN.
DISPLAY "USER: HELLO USER".
MOVE 2 TO RT-PARAM.
CALL "ILBOABN0" USING RT-PARAM.
DISPLAY "USER: CAN'T REACH HERE WHEN ILBOABN0 IS CALLED".
PROGRAM-DONE.
...
2.4.6 Creating a Procedure
Oracle Tuxedo Application Runtime for Batch offers a set of functions to define and use "procedures". These procedures generally follow the same principles as z/OS JCL procedures.
The advantages of procedures are:
- Write a set of tasks once and use it several times.
- Make this set of tasks dynamically modifiable.
Procedures can be of two types:
- In-stream Procedures: Included in the calling script, this kind of procedure can be used only in the current script.
- External Procedures: Coded in a separate source file, this kind of procedure can be used in multiple scripts.
The following topics describe how to create the above procedures:
2.4.6.1 Creating an In-Stream Procedure
Unlike the z/OS JCL convention, an in-stream procedure must be written after the end of the main JOB; that is, all the in-stream procedures belonging to a job must appear after the call to the function m_JobEnd.
An in-stream procedure in a Korn shell script always starts with a call to the m_ProcBegin function, followed by all the tasks composing the procedure, and terminates with a call to the m_ProcEnd function. The following listing is an example:
Listing 2‑6 In-stream Procedure Example
m_ProcBegin PROCA
JUMP_LABEL=STEPA
;;
(STEPA)
m_FileAssign -c “*” SYSPRINT
m_FileAssign -d SHR SYSUT1 ${DATA}/PJ01DDD.BT.DATA.PDSA/BIEAM00$[SEQ]
m_FileAssign -d MOD SYSUT2 ${DATA}/PJ01DDD.BT.QSAM.KBIEO005
m_FileLoad ${DD_SYSUT1} ${DD_SYSUT2}
JUMP_LABEL=ENDPROC
;;
(ENDPROC)
m_ProcEnd
2.4.6.2 Creating an External Procedure
External procedures do not require the use of the m_ProcBegin and m_ProcEnd functions; simply code the tasks that are part of the procedure, as shown in the listing below. Begin the procedure with:
JUMP_LABEL=FIRSTSTEP
;;
(FIRSTSTEP)
and end it with:
JUMP_LABEL=ENDPROC
;;
(ENDPROC)
Listing 2‑7 External Procedure Example
JUMP_LABEL=PR2STEP1
;;
(PR2STEP1)
m_FileAssign -d SHR INFIL ${DATA}/PJ01DDD.BT.QSAM.KBPRI001
m_FileAssign -d MOD OUTFIL ${DATA}/PJ01DDD.BT.QSAM.KBPRO001
m_OutputAssign -c “*” SYSOUT
m_FileAssign -d SHR LOGIN ${DATA}/PJ01DDD.BT.SYSIN.SRC/BPRAS002
m_FileAssign -d MOD LOGOUT ${DATA}/PJ01DDD.BT.QSAM.KBPRO091
m_ProgramExec BPRAB002
JUMP_LABEL=ENDPROC
;;
(ENDPROC)
2.4.7 Using a Procedure
The use of a procedure inside a Korn shell script is made through a call to the m_ProcInclude function.
As described in Script Execution Phases, during the Conversion Phase, a Korn shell script is expanded by including the procedure's code each time a call to the m_ProcInclude
function is encountered. It is necessary that after this operation, the resulting expanded Korn shell script still respects the rules of the general structure of a script as defined in the General Structure of a Script.
A procedure, either in-stream or external, can be used in any place inside a calling job provided that the above principles are respected, as shown in the listing below.
…
(STEPPR14)
m_ProcInclude BPRAP009
JUMP_LABEL=STEPPR15
…
2.4.8 Modifying a Procedure at Execution Time
The execution of the tasks defined in a procedure can be modified in two different ways:
- Modifying symbols and/or parameters
- Symbols can be used inside a procedure and the values of these symbols can be specified when calling the procedure.
Listing 2‑9 Defining Procedure Example
m_ProcBegin PROCE
JUMP_LABEL=STEPE
;;
(STEPE)
m_FileAssign -d SHR SYSUT1 ${DATA}/DATA.IN.PDS/DTS$[SEQ]
m_FileAssign -d MOD SYSUT2 ${DATA}/DATA.OUT.PDS/DTS$[SEQ]
m_FileLoad ${DD_SYSUT1} ${DD_SYSUT2}
JUMP_LABEL=ENDPROC
;;
(ENDPROC)
m_ProcEnd
Listing 2‑10 Calling Procedure Example
(COPIERE)
m_ProcInclude PROCE SEQ="1"
JUMP_LABEL=COPIERF
;;
2.4.8.1 Using Overrides for File Assignments
As specified in Best Practices, this way of coding procedures is provided mainly for supporting Korn shell scripts resulting from z/OS JCL translation and it is not recommended for Korn shell scripts newly written for the target platform.
The overriding of a file assignment is made using the m_FileOverride function, which specifies a replacement for the assignment present in the procedure. The call to the m_FileOverride function must follow the call to the procedure in the calling script.
The following listing shows how to replace the assignment of the logical file SYSUT1 using the m_FileOverride function.
Listing 2‑11 m_FileOverride Function Example
m_ProcBegin PROCE
JUMP_LABEL=STEPE
;;
(STEPE)
m_FileAssign -d SHR SYSUT1 ${DATA}/DATA.IN.PDS/DTS$[SEQ]
m_FileAssign -d MOD SYSUT2 ${DATA}/DATA.OUT.PDS/DTS$[SEQ]
m_FileLoad ${DD_SYSUT1} ${DD_SYSUT2}
JUMP_LABEL=ENDPROC
;;
(ENDPROC)
m_ProcEnd
Listing 2‑12 m_FileOverride Procedure Call:
(COPIERE)
m_ProcInclude PROCE SEQ="1"
m_FileOverride -i -s STEPE SYSUT1
Overriding test data
_end
JUMP_LABEL=COPIERF
;;
2.5 Controlling Script Behavior
This section contains the following topics:
2.5.1 Conditioning the Execution of a Step
This section contains the following topics:
2.5.1.1 Using m_CondIf, m_CondElse, and m_CondEndif
The m_CondIf, m_CondElse, and m_CondEndif functions can be used to condition the execution of one or several steps in a script. The behavior is similar to the z/OS JCL statement constructs IF, THEN, ELSE, and ENDIF.
The m_CondIf function must always have a relational expression as a parameter, as shown in the listing below. These functions can be nested up to 15 times.
Listing 2‑13 m_CondIf, m_CondElse, and m_CondEndif Example
…
(STEPIF01)
m_FileAssign -d SHR INFIL ${DATA}/PJ01DDD.BT.QSAM.KBIF000
m_FileAssign -d MOD OUTFIL ${DATA}/PJ01DDD.BT.QSAM.KBIF001
m_ProgramExec BAX001
m_CondIf "STEPIF01.RC,LT,5"
JUMP_LABEL=STEPIF02
;;
(STEPIF02)
m_FileAssign -d SHR INFIL ${DATA}/PJ01DDD.BT.QSAM.KBIF001
m_FileAssign -d MOD OUTFIL ${DATA}/PJ01DDD.BT.QSAM.KBIF002
m_ProgramExec BAX002
m_CondElse
JUMP_LABEL=STEPIF03
;;
(STEPIF03)
m_FileAssign -d SHR INFIL ${DATA}/PJ01DDD.BT.QSAM.KBIF000
m_FileAssign -d MOD OUTFIL ${DATA}/PJ01DDD.BT.QSAM.KBIF003
m_ProgramExec BAX003
m_CondEndif
2.5.1.2 Using m_CondExec
The m_CondExec function is used to condition the execution of a step. m_CondExec must have at least one condition as a parameter and can have several conditions at the same time. In case of multiple conditions, the step is executed only if all the conditions are satisfied.
A condition can be of three forms:
- Relational expression testing previous return codes:
m_CondExec 4,LT,STEPEC01
- EVEN: Indicates that the step is to be executed even if a previous step terminated abnormally:
m_CondExec EVEN
- ONLY: Indicates that the step is to be executed only if a previous step terminated abnormally:
m_CondExec ONLY
The m_CondExec function must be the first function called inside the concerned step, as shown in the listing below:
…
(STEPEC01)
m_FileAssign -d SHR INFIL ${DATA}/PJ01DDD.BT.QSAM.KBIF000
m_FileAssign -d MOD OUTFIL ${DATA}/PJ01DDD.BT.QSAM.KBIF001
m_ProgramExec BACC01
JUMP_LABEL=STEPEC02
;;
(STEPEC02)
m_FileAssign -d SHR INFIL ${DATA}/PJ01DDD.BT.QSAM.KBIF001
m_FileAssign -d MOD OUTFIL ${DATA}/PJ01DDD.BT.QSAM.KBIF002
m_ProgramExec BACC02
JUMP_LABEL=STEPEC03
;;
(STEPEC03)
m_CondExec 4,LT,STEPEC01 8,GT,STEPEC02 EVEN
m_FileAssign -d SHR INFIL ${DATA}/PJ01DDD.BT.QSAM.KBIF000
m_FileAssign -d MOD OUTFIL ${DATA}/PJ01DDD.BT.QSAM.KBIF003
2.5.2 Controlling the Execution Flow
The script's execution flow is determined, and can be controlled, in the following ways:
- The start label specified by the m_JobBegin function: this label is usually the first label in the script, but can be changed to any label present in the script if the user wants to start the script execution from a specific step.
- The value assigned to the JUMP_LABEL variable in each step: this assignment is mandatory in each step, but its value is not necessarily the label of the following step.
- The usage of the m_CondExec, m_CondIf, m_CondElse, and m_CondEndif functions: see Conditioning the Execution of a Step.
- The return codes and abnormal ends of steps.
2.5.3 Changing Default Error Messages
If the Batch Runtime administrator wishes to change the default messages (to change the language, for example), this can be done through a configuration file whose path is specified by the environment variable MT_DISPLAY_MESSAGE_FILE.
This file uses a CSV-like format with a semicolon as the separator. Each record in this file describes a message and is composed of six fields:
- Message identifier.
- Functions that can display the message (can be a generic name using '*').
- Level of display.
- Destination of display.
- Reserved for future use.
- Message to be displayed.
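Putting the six fields together, a record could look like the line below. The identifier, function pattern, level, destination, and wording are all invented for illustration; the real identifiers and field values are product-defined.

```
MSG_0042;m_File*;ERROR;STDOUT;;File not found
```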
2.6 Different Behaviors from z/OS
On z/OS, before a job is executed, JES checks its syntax. If any error is found, JES reports it and runs no part of the job. For example, if a JCL statement applies “NEW” to generation(0) of a GDG, because NEW is not allowed to be applied to existing files, JES reports this error and does not run the job.
However, in ART for Batch, the JCL job is first converted to a ksh job by Oracle Tuxedo ART Workbench, and ART for Batch only checks the ksh script syntax of the converted job. Syntax errors, if any, are detected only when the offending statement runs; as a result, statements after the wrong statement are not executed, while statements before it are executed unaffected.
Parent topic: Using Batch Runtime
2.7 Using Files
This section contains the following topics:
- Creating a File Definition
- Assigning and Using Files
- Concurrent File Accessing Control
- Using Generation Data Group (GDG)
- Using an In-Stream File
- Using a Set of Concatenated Files
- Using an External “sysin”
- Deleting a File
- RDB Files
- Using an RDBMS Connection
Parent topic: Using Batch Runtime
2.7.1 Creating a File Definition
Files are created using the m_FileBuild or the m_FileAssign function.
Four file organizations are supported:
- Sequential file
- Line sequential file
- Relative file
- Indexed file
You must specify the file organization for the file being created. For indexed files, the length and the primary key specifications must also be mentioned.
Parent topic: Using Files
2.7.1.1 m_FileBuild Examples
- Definition of a line sequential file
m_FileBuild -t LSEQ ${DATA}/PJ01DDD.BT.VSAM.ESDS.KBIDO004
- Definition of an indexed file with a record length of 266 bytes and a key starting at the first byte with a size of 6 bytes.
m_FileBuild -t IDX -r 266 -k 1+6 ${DATA}/METAW00.VSAM.CUSTOMER
Parent topic: Creating a File Definition
2.7.1.2 m_FileAssign Examples
- Definition of a new sequential file with a record length of 80 bytes.
m_FileAssign -d NEW -t SEQ -r 80 ${DATA}/PJ01DDD.BT.VSAM.ESDS.KBIDO005
Parent topic: Creating a File Definition
2.7.2 Assigning and Using Files
When using Batch Runtime, a file can be used either by a Batch Runtime function (for example: m_FileSort, m_FileRename, etc.) or by a program, such as a COBOL program.
In both cases, before being used, a file must first be assigned.
Files are assigned using the m_FileAssign function that:
- Specifies the DISP mode (Read or Write)
- Specifies if the file is a generation file
- Defines an environment variable linking the logical name of the file (IFN) with the real path to the file (EFN).
The environment variable defined via the m_FileAssign function is named DD_IFN. This naming convention is used because it is the one used by Micro Focus COBOL to map internal file names to external file names.
Once a file is assigned, it can be passed as an argument to any of the Batch Runtime functions handling files by using the ${DD_IFN} variable.
For COBOL programs, the link is made implicitly by Micro Focus COBOL. COBOL-IT is compatible with Micro Focus COBOL regarding DD assignment.
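The DD_IFN convention can be pictured with a simplified sketch of what an assignment such as `m_FileAssign -d SHR INFIL ${DATA}/PJ01DDD.BT.QSAM.KBIDI001` does to the environment (a simplification: the real function also records the disposition and other attributes):

```shell
# Simplified sketch: m_FileAssign exports a DD_<IFN> variable holding the EFN
DATA=${DATA:-/tmp}
export DD_INFIL="${DATA}/PJ01DDD.BT.QSAM.KBIDI001"   # IFN "INFIL" -> EFN path
echo "$DD_INFIL"
```

Batch Runtime functions then receive ${DD_INFIL}, and the COBOL runtime resolves the internal file name INFIL through the same variable.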
Listing 2‑15 Example of File Assignment
(STEPCP01)
m_FileAssign -d SHR INFIL ${DATA}/PJ01DDD.BT.QSAM.KBIDI001
m_FileAssign -d SHR OUTFIL ${DATA}/PJ01DDD.BT.VSAM.KBIDU001
m_FileLoad ${DD_INFIL} ${DD_OUTFIL}
…
Listing 2‑16 Example of Using a File by a COBOL Program
(STEPCBL1)
m_FileAssign -d OLD INFIL ${DATA}/PJ01DDD.BT.QSAM.KBIFI091
m_FileAssign -d MOD OUTFIL ${DATA}/PJ01DDD.BT.QSAM.KBIFO091
m_ProgramExec BIFAB090
…
Parent topic: Using Files
2.7.2.1 About DD DISP=MOD
ART/BatchRT keeps the behavior of DISP=MOD consistent with the mainframe; that is, DISP=MOD behaves the same on the target operating system as it does on z/OS. Currently, BatchRT depends on the following two kinds of COBOL compile/runtime environments:
Note:
For VSAM data sets, DISP=MOD is always treated as DISP=OLD (the file exists), and DISP=NEW (the file does not exist) behaves the same as on z/OS.
Parent topic: Assigning and Using Files
2.7.2.1.1 Micro Focus COBOL
For Micro Focus COBOL, a new file handler (ARTEXTFH.gnt) is added to BatchRT. In order to get the correct behavior of DISP=MOD, user COBOL programs must be compiled with this file handler using the following option:
CALLFH("ARTEXTFH")
If this compile option is not specified, a write operation with mode "open output" in the COBOL program erases the existing file contents, which is not the expected behavior.
It is suggested to always add this compiler option when you compile COBOL programs. The following table lists the behavior of the APIs that support DDN.
Table 2-7 Micro Focus COBOL DISP=MOD Behavior

API | DISP=MOD allowed? (INPUT) | DISP=MOD allowed? (OUTPUT) | Read output file allowed? | Result of writing output file
---|---|---|---|---
m_FileRepro | YES | YES | No such requirement | Appended
m_FilePrint | YES | YES | No such requirement | Appended
m_FileSort | YES | YES | No such requirement | Appended
m_ProgramExec: COBOL Program | YES | YES | YES | Appended
m_ProgramExec: Other Program | YES | YES | YES | Written, but erases existing contents
All other APIs which support DDN | YES | YES | No such requirement | No such requirement
INPUT means an input file; only read operations occur on an INPUT file. Specifying DISP=MOD for an INPUT file is not meaningful, because no data is written to it, but it is allowed; for an INPUT file, DISP=MOD always acts as DISP=OLD.
OUTPUT means an output file; both read and write operations occur on an OUTPUT file. All data written to an OUTPUT file is appended to the original file regardless of the open mode in the COBOL program: "open output" or "open extend."
Parent topic: About DD DISP=MOD
2.7.2.1.2 COBOL-IT
For COBOL-IT, there is no file-handler-level support for DISP=MOD (as there is for Micro Focus COBOL), so there is no special requirement for compiling COBOL programs. The following table lists the behavior of the APIs that support DDN.
Table 2-8 COBOL-IT DISP=MOD Behavior

API | DISP=MOD allowed? (INPUT) | DISP=MOD allowed? (OUTPUT) | Read output file allowed? | Result of writing output file
---|---|---|---|---
m_FileRepro | NO | YES | No such requirement | Appended
m_FilePrint | NO | YES | No such requirement | Appended
m_FileSort | NO | YES | No such requirement | Appended
m_ProgramExec: COBOL Program | NO | YES | YES | Appended
m_ProgramExec: Other Program | NO | YES | YES | Written, but erases existing contents
All other APIs which support DDN | NO | YES | No such requirement | No such requirement
INPUT means an input file; only read operations occur on an INPUT file. Specifying DISP=MOD for an INPUT file is not meaningful, and it is not allowed in COBOL-IT; if an INPUT file is assigned as DISP=MOD, its contents cannot be read.
OUTPUT means an output file; both read and write operations occur on an OUTPUT file. All data written to an OUTPUT file is appended to the original file regardless of the open mode in the COBOL program: "open output" or "open extend."
Parent topic: About DD DISP=MOD
2.7.3 Concurrent File Accessing Control
Batch Runtime provides a lock mechanism to prevent one file from being written simultaneously in two jobs.
To enable the concurrent file access control, do the following:
- Use environment variable MT_ACC_FILEPATH to specify a directory for the lock files required by the concurrent access control mechanism.
- Create two empty files, AccLock and AccWait, under the directory specified in step 1. Make sure the effective user executing jobs has read/write permission to these two files.

Note:
- The file names AccLock and AccWait are case sensitive.
- When accessing generation files, the GDG rather than an individual generation file is locked. That is, a GDG is locked as a whole.
- The following two lines in ejr/CONF/BatchRT.conf should be commented out:
${MT_ACC_FILEPATH}/AccLock
${MT_ACC_FILEPATH}/AccWait
Parent topic: Using Files
2.7.4 Using Generation Data Group (GDG)
Oracle Tuxedo Application Runtime for Batch allows you to manage Generation Data Group (GDG) files either based on files or based on a database (DB). In the file-based management mode, Batch Runtime manages GDG files in separate "*.gens" files, where one "*.gens" file corresponds to one GDG. In the DB-based management mode, ART for Batch allows users to manage GDG information in an Oracle database or a DB2 database.
- GDG Management Functionalities
- File-Based Management
- DB-Based Management
- Support for Data Control Block (DCB)
Parent topic: Using Files
2.7.4.1 GDG Management Functionalities
Note:
Copying or renaming a GDG is not supported.
- Defining and/or Redefining a GDG
- Adding Generation Files in a GDG
- Referring to an Existing Generation File in a GDG
- Deleting Generation Files in a GDG
- Deleting a GDG
- Cataloging a GDG
- Committing a GDG
Parent topic: Using Generation Data Group (GDG)
2.7.4.1.1 Defining and/or Redefining a GDG
It is required to define a GDG before using it.
A GDG file is defined and/or redefined through m_GenDefine. The operation of defining or redefining a GDG is committed immediately and cannot be rolled back.
As shown in the listing below: the first line defines a GDG and sets its maximum number of generations to 15; the second line redefines the same GDG with a maximum of 30 generations; the third line defines a GDG without the "-s" option (its maximum number of generations is set to 9999); the fourth line defines a GDG implicitly and sets its maximum number of generations to 9999; the fifth line defines a GDG using the model file $DATA/FILE, which can be either a GDG file or a normal file.
Listing 2‑17 Example of Defining and Redefining GDG Files
m_GenDefine -s 15 ${DATA}/PJ01DDD.BT.FILE1
m_GenDefine -s 30 -r ${DATA}/PJ01DDD.BT.FILE1
m_GenDefine ${DATA}/PJ01DDD.BT.FILE2
m_FileAssign -d NEW,CATLG -g +1 SYSUT2 ${DATA}/PJ01DDD.BT.FILE3
m_FileAssign -d NEW,CATLG -g +1 -S $DATA/FILE FILE1 $DATA/GDG
Parent topic: GDG Management Functionalities
2.7.4.1.2 Adding Generation Files in a GDG
To add a new generation file (GDS) into a GDG, call m_FileAssign with the "-d NEW/MOD,…" and "-g +n" parameters. GDS file types can only be LSEQ or SEQ.
There are four key points when adding generation files in a GDG.
- Multiple generation files (GDS) can be added in one job or step, discontinuously and out of order. See Listing 2‑18 for an example.
- One generation number (GenNum) can be added only once in a job. Listing 2‑19 shows an incorrect usage.
- The filename of a newly created GDS is generated from the generation number specified in m_FileAssign, in the format <current GDS number> + <GenNum>. See Listing 2‑20 for an example.
- In a job, if multiple generation files (GDS) are newly created, the GDS with the maximum RGN becomes the current GDS after the job finishes. See Listing 2‑21 for an example.
The four listings below elaborate these key points individually.
Listing 2‑18 Example of Adding Multiple Generation Files Discontinuously and Disorderedly
(STEP1)
m_FileAssign -d NEW,KEEP,KEEP -g +1 SYSUT1 "$DATA/GDG1"
m_FileAssign -d MOD,KEEP,KEEP -g +5 SYSUT2 "$DATA/GDG1"
(STEP2)
m_FileAssign -d NEW,KEEP,KEEP -g +9 SYSUT1 "$DATA/GDG1"
m_FileAssign -d NEW,KEEP,KEEP -g +2 SYSUT2 "$DATA/GDG1"
The above example adds the following GDS files to GDG.
$DATA/GDG1.Gen.0001
$DATA/GDG1.Gen.0002
$DATA/GDG1.Gen.0005
$DATA/GDG1.Gen.0009
Listing 2‑19 Example of Adding One Generation Number Multiple Times in a Job (Incorrect Usage)
(STEP1)
m_FileAssign -d NEW,KEEP,KEEP -g +1 SYSUT1 "$DATA/GDG1"
m_FileAssign -d NEW,KEEP,KEEP -g +5 SYSUT2 "$DATA/GDG1"
(STEP2)
m_FileAssign -d NEW,KEEP,KEEP -g +4 SYSUT1 "$DATA/GDG1"
m_FileAssign -d NEW,KEEP,KEEP -g +5 SYSUT2 "$DATA/GDG1"
The above example shows an incorrect usage, where generation number (+5) is added twice.
Listing 2‑20 Example of Listing GDS Filenames
m_FileAssign -d NEW,KEEP,KEEP -g +1 SYSUT1 "$DATA/GDG1"
m_FileAssign -d MOD,KEEP,KEEP -g +5 SYSUT2 "$DATA/GDG1"
In the above example, suppose $DATA/GDG1
has three GDS numbered as 1, 2, and 4, respectively. The corresponding GDS files are listed as below:
$DATA/GDG1.Gen.0001
$DATA/GDG1.Gen.0002
$DATA/GDG1.Gen.0004
After the above job runs, $DATA/GDG1
has five GDS
numbered as 1, 2, 4, 5, and 9, respectively. The corresponding GDS
files are listed as below.
$DATA/GDG1.Gen.0001
$DATA/GDG1.Gen.0002
$DATA/GDG1.Gen.0004
$DATA/GDG1.Gen.0005
$DATA/GDG1.Gen.0009
Listing 2‑21 Example of Defining the Current GDS
(STEP1)
m_FileAssign -d NEW,KEEP,KEEP -g +1 SYSUT1 "$DATA/GDG1"
m_FileAssign -d MOD,KEEP,KEEP -g +5 SYSUT2 "$DATA/GDG1"
(STEP2)
m_FileAssign -d NEW,KEEP,KEEP -g +2 SYSUT3 "$DATA/GDG1"
In the above example, the GDS whose RGN equals +5 becomes the current GDS, meaning its RGN becomes 0 after job finishes successfully.
Parent topic: GDG Management Functionalities
2.7.4.1.3 Referring to an Existing Generation File in a GDG
To refer to an existing generation file (GDS) in a GDG, call m_FileAssign with the "-d OLD/SHR/MOD,…" and "-g 0", "-g all", or "-g -n" parameters. "-g 0" refers to the current generation, "-g all" refers to all generation files, and "-g -n" refers to the generation file that is the nth generation counting backward from the current generation (generation 0).
When using a relative generation number (RGN) to reference a GDS, note that "relative generation number" means the position relative to the newest GDS, whose generation number is 0.
For example, if GDG1 contains six GDS numbered 1, 4, 6, 7, 9, and 10, respectively, the mapping of GN to RGN is listed as below:

GN | 1 | 4 | 6 | 7 | 9 | 10
---|---|---|---|---|---|---
RGN | -5 | -4 | -3 | -2 | -1 | 0
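The GN/RGN arithmetic above can be illustrated with a small, hypothetical shell helper (not part of Batch Runtime):

```shell
# Map a relative generation number to its GN, given the ascending GN list.
# RGN 0 is the newest GDS; -n counts backward from it.
gn_for_rgn() {
  rgn=$1; shift                    # remaining arguments: the GN list
  eval "echo \${$(( $# + rgn ))}"  # pick the positional parameter counted from the end
}
gn_for_rgn  0 1 4 6 7 9 10   # prints 10
gn_for_rgn -1 1 4 6 7 9 10   # prints 9
gn_for_rgn -4 1 4 6 7 9 10   # prints 4
```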
In the following job, RGN=-1 references the GDS whose GN equals 9, and RGN=-4 references the GDS whose GN equals 4.
Listing 2‑22 Example of Referencing Existing Generation Files
(STEP1)
m_FileAssign -d SHR,KEEP,KEEP -g -1 SYSUT1 "$DATA/GDG1"
m_FileAssign -d SHR,KEEP,KEEP -g -4 SYSUT2 "$DATA/GDG1"
If "DELETE" is specified in the DISPOSITION field of m_FileAssign, the corresponding GDS will be deleted after the current step finishes, resulting in a change of the mapping between GN and RGN. The changed mapping becomes visible in the next step.
For example, if GDG1 contains six GDS numbered 1, 4, 6, 7, 9, and 10, respectively, the mapping of GN to RGN is listed as below.

GN | 1 | 4 | 6 | 7 | 9 | 10
---|---|---|---|---|---|---
RGN | -5 | -4 | -3 | -2 | -1 | 0

In the following job, RGN=-1 references the GDS whose GN equals 9, and RGN=-4 references the GDS whose GN equals 4.
You can run a job as below.
Listing 2‑23 Example of Referencing Existing Generation Files with DELETE Specified
(STEP1)
m_FileAssign -d OLD,DELETE,DELETE -g -1 SYSUT1 "$DATA/GDG1"
m_FileAssign -d OLD,DELETE,DELETE -g -4 SYSUT2 "$DATA/GDG1"
(STEP2)
m_FileAssign -d OLD,DELETE,DELETE -g -1 SYSUT1 "$DATA/GDG1"
m_FileAssign -d OLD,DELETE,DELETE -g -2 SYSUT2 "$DATA/GDG1"
In the above example, after STEP1 finishes, the mapping of GN and RGN becomes the one as below.
GN | 1 | 6 | 7 | 10
---|---|---|---|---
RGN | -3 | -2 | -1 | 0
In STEP2, the GDS pointed by SYSUT1 (the GDS whose GN is 7) and the GDS pointed by SYSUT2 (the GDS whose GN is 6) are deleted.
After STEP2 finishes, the mapping of GN and RGN becomes the one as below:
GN | 1 | 10
---|---|---
RGN | -1 | 0
Parent topic: GDG Management Functionalities
2.7.4.1.4 Deleting Generation Files in a GDG
ART for Batch supports deleting generation files, whether newly added or currently existing, through the disposition of the DD specified for m_FileAssign.
- Deleting Newly Added GDS (See Listing 2‑24 for an example)
- Deleting Existing GDS (See Listing 2‑25 for an example)
Listing 2‑24 Deleting Newly Added GDS
(STEP1)
m_FileAssign -d NEW,DELETE,DELETE -g +1 SYSUT1 "$DATA/GDG1"
m_FileAssign -d NEW,DELETE,DELETE -g +5 SYSUT2 "$DATA/GDG1"
(STEP2)
m_FileAssign -d NEW,DELETE,DELETE -g +1 SYSUT1 "$DATA/GDG1"
m_FileAssign -d NEW,DELETE,DELETE -g +5 SYSUT2 "$DATA/GDG1"
In the above example, eventually, no GDS is added to
GDG1
.
Listing 2‑25 Deleting Existing GDS
(STEP1)
m_FileAssign -d NEW,DELETE,DELETE -g -1 SYSUT1 "$DATA/GDG1"
m_FileAssign -d NEW,DELETE,DELETE -g -3 SYSUT2 "$DATA/GDG1"
(STEP2)
m_FileAssign -d NEW,DELETE,DELETE -g -1 SYSUT3 "$DATA/GDG1"
m_FileAssign -d NEW,DELETE,DELETE -g -3 SYSUT4 "$DATA/GDG1"
Note:
Removing all of a GDG's GDS does not remove the GDG itself; it just results in a GDG that contains 0 GDS.
Parent topic: GDG Management Functionalities
2.7.4.1.5 Deleting a GDG
You can delete a GDG as a whole by calling m_FileDelete
with the GDG base name, as shown in the listing below. In this way, all the GDG's GDS will be deleted accordingly. The operation of deleting GDG is committed immediately and cannot be rolled back.
Listing 2‑26 Deleting a GDG
m_FileDelete ${DATA}/PJ01DDD.BT.GDG
Parent topic: GDG Management Functionalities
2.7.4.1.6 Cataloging a GDG
Only the GDG base can be cataloged; its GDS cannot be cataloged individually.
The [-v volume] option specified in m_FileAssign is ignored.
Note:
A GDG is cataloged once it is defined.
Parent topic: GDG Management Functionalities
2.7.4.1.7 Committing a GDG
All GDGs having changes in the current step are committed, regardless of whether the current step finishes successfully.
Committing a GDG updates the information in the GDG management system, such as the Oracle database or the file (*.gens), and commits the temporary generation files; however, committing a GDG does not change the mapping relationship between GN and RGN, meaning that in one step of a job, an RGN always references the same GDS.
For example, GDG1
has six GDS numbered as 1, 4, 6,
7, 9, and 10, respectively.
Listing 2‑27 Example of Committing a GDG
(STEP1)
m_FileAssign -d NEW,KEEP,KEEP -g +1 SYSUT1 "$DATA/GDG1"
m_FileAssign -d NEW,KEEP,KEEP -g +2 SYSUT2 "$DATA/GDG1"
m_FileAssign -d NEW,KEEP,KEEP -g -1 SYSUT3 "$DATA/GDG1"
(STEP2)
m_FileAssign -d NEW,KEEP,KEEP -g -1 SYSUT4 "$DATA/GDG1"
In STEP1, the mapping of GN and RGN (both in job and in GDG management system) becomes the one as below. SYSUT3 references to the GDS whose GN is 9.
GN | 1 | 4 | 6 | 7 | 9 | 10 | 11 | 12 |
---|---|---|---|---|---|---|---|---|
RGN | -5 | -4 | -3 | -2 | -1 | 0 | 1 | 2 |
In STEP2, the mapping of GN and RGN in GDG management system becomes the one as below.
GN | 1 | 4 | 6 | 7 | 9 | 10 | 11 | 12 |
---|---|---|---|---|---|---|---|---|
RGN | -7 | -6 | -5 | -4 | -3 | -2 | -1 | 0 |
However, the mapping of GN and RGN in the currently running job is not changed; in the below example, SYSUT4 still references the GDS whose GN is 9 rather than the GDS whose GN is 11.
GN | 1 | 4 | 6 | 7 | 9 | 10 | 11 | 12 |
---|---|---|---|---|---|---|---|---|
RGN | -5 | -4 | -3 | -2 | -1 | 0 | 1 | 2 |
Parent topic: GDG Management Functionalities
2.7.4.2 File-Based Management
This section contains the following topics:
2.7.4.2.1 Configuration
The MT_GENERATION variable specifies the way of managing GDG files. To manage GDGs in *.gens files, you need to set the value to GENERATION_FILE.
Parent topic: File-Based Management
2.7.4.2.2 Concurrency Control and Authorization
In the file-based GDG management mechanism, one GDG file can only be accessed by one job at any time; that is, a single GDG cannot be accessed by multiple jobs simultaneously. To access a GDG file, the file lock must be acquired by the existing internal function mi_FileConcurrentAccessReservation.
The file-based GDG management mechanism uses a file *.gens (* represents the GDG base name) to control concurrency and authorization. User access checking depends on whether the *.gens file can be accessed or not.
Parent topic: File-Based Management
2.7.4.3 DB-Based Management
Note:
To enable this function, MT_GENERATION must be set to GENERATION_FILE_DB, MT_DB must be set to DB_ORACLE or DB_DB2LUW (or set MT_META_DB to DB_ORACLE or DB_DB2LUW), and MT_GDG_DB_ACCESS must be set to a valid database connection string to access the Oracle or DB2 database.
- Database Tables
- Generation File Naming Rule
- Configuration Variables
- External Shell Scripts
- Concurrency Control and Authorization
- Exception Handling
Parent topic: Using Generation Data Group (GDG)
2.7.4.3.1 Database Tables
The following table shows the general definition of each GDG managed by Batch Runtime. In this table, each row represents a GDG. All GDG files share a single GDG_DETAIL table.
Table 2-9 GDG_DEFINE

Name | Type | Description
---|---|---
GDG_BASE_NAME | VARCHAR(1024) | Full path name of the GDG. It cannot be a relative path. The length of GDG_BASE_NAME is limited to 1024, i.e. the minimum of PATH_MAX on different UNIX platforms.
GDG_MAX_GEN | INT | Maximum number of generation files. It contains the upper limit of generations specified by m_GenDefine.
GDG_CUR_GEN | INT | GDG current generation number.

Primary Key: GDG_BASE_NAME
The following table shows the detailed information of all the GDG generation files. In this table, each row represents a generation file of a GDG.
Table 2-10 GDG_DETAIL

Name | Type | Description
---|---|---
GDG_BASE_NAME | VARCHAR(1024) | Full path of the GDG principal name.
GDG_REL_NUM | INT | Relative generation number of a generation file.
GDG_ABS_NUM | INT | Absolute generation number of a generation file.
GDG_JOB_ID | VARCHAR(8) | The ID of the job that creates the file.
GDG_JOB_NAME | VARCHAR(32) | The name of the job that creates the file.
GDG_STEP_NAME | VARCHAR(32) | The name of the step that creates the file.
GDG_CRE_TIME | TIMESTAMP | The timestamp when the file is created.

Primary Key: GDG_BASE_NAME + GDG_ABS_NUM
GDG_FILE_NAME (the physical generation file name) is not stored in table GDG_DETAIL since it can be constructed from GDG_BASE_NAME in GDG_DEFINE and GDG_ABS_NUM in GDG_DETAIL.
Note:
To back up GDG information, you need to back up two database tables: GDG_DEFINE and GDG_DETAIL.
Parent topic: DB-Based Management
2.7.4.3.2 Generation File Naming Rule
Table 2-11 Generation File Naming Rule

Condition | File Name | Description
---|---|---
GDG_REL_NUM > 0 | ${GDG_BASE_NAME}.Gen.${GDG_ABS_NUM}.tmp | Uncommitted
GDG_REL_NUM <= 0 | ${GDG_BASE_NAME}.Gen.${GDG_ABS_NUM} | Committed
Parent topic: DB-Based Management
2.7.4.3.3 Configuration Variables
MT_GENERATION
This variable specifies the way of managing GDG files. To manage GDG files in a database, you need to set the value to GENERATION_FILE_DB and configure MT_GDG_DB_ACCESS appropriately.
MT_GDG_DB_ACCESS
This variable is used along with MT_GENERATION when it is set to GENERATION_FILE_DB, and must be set with a valid database login account. For accessing an Oracle DB, it should be specified in the format userid/password@sid, for example, scott/password@orcl.
MT_GDG_DB_BUNCH_OPERATION
Used along with MT_GENERATION when set to GENERATION_FILE_DB. It indicates how to commit GDG changes to the database during the commit phase. If configured to "Y", the GDG changes are committed using a single database access. If configured to "N", the GDG changes are committed using one or more database accesses.
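Putting the three variables together, a DB-based configuration might look like this; the credentials are placeholders, not real values:

```shell
# Example settings for DB-based GDG management (placeholder credentials)
export MT_GENERATION=GENERATION_FILE_DB
export MT_GDG_DB_ACCESS='scott/password@orcl'   # userid/password@sid for Oracle
export MT_GDG_DB_BUNCH_OPERATION=Y              # commit GDG changes in one DB access
```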
Parent topic: DB-Based Management
2.7.4.3.4 External Shell Scripts
You can use the following two external shell scripts to create and drop the GDG database tables automatically.
CreateTableGDG.sh
Description
Creates tables GDG_DEFINE and GDG_DETAIL in the database.
Usage
CreateTableGDG.sh <DB_LOGIN_PARAMETER>
Sample
CreateTableGDG.sh scott/password@orcl
DropTableGDG.sh
Description
Drops tables GDG_DEFINE and GDG_DETAIL from the database.
Usage
DropTableGDG.sh <DB_LOGIN_PARAMETER>
Sample
DropTableGDG.sh scott/password@orcl
Parent topic: DB-Based Management
2.7.4.3.5 Concurrency Control and Authorization
The DB-based GDG management mechanism maintains the same concurrency control behavior as the file-based mechanism, but has a different *.ACS (* represents the GDG base name) file format. In the DB-based mechanism, you do not need to lock the tables mentioned in Database Tables, because any job that accesses the rows corresponding to a GDG must first acquire the file lock of that GDG; that is, no concurrency control is needed at the database access level. You cannot access the database if you do not have access permission (read or write) to the corresponding *.ACS file. If you need to modify a GDG file, you must have write permission to the generation files and the directory holding them, and MT_GDG_DB_ACCESS must be configured correctly to have appropriate permissions to the tables mentioned in Database Tables.
You can only copy a DB-based GDG management description entirely and replace the file name.
Parent topic: DB-Based Management
2.7.4.3.6 Exception Handling
There are four kinds of information in the DB-based GDG management mechanism:
- GDG_DEFINE
- *.ACS file
- GDG_DETAIL
- Physical files on disk
This information must be kept consistent for a GDG file.
Batch Runtime checks the consistency from GDG_DEFINE to the physical files when a GDG file is accessed for the first time in a job. If exceptions happen and result in inconsistency among this information, Batch Runtime terminates the current job and reports an error.
This behavior is different from the existing file-based mechanism, which does not check consistency but only reports exceptions encountered in the process.
Parent topic: DB-Based Management
2.7.4.4 Support for Data Control Block (DCB)
Both file-based GDG and DB-based GDG support Data Control Block (DCB).
Parent topic: Using Generation Data Group (GDG)
2.7.4.4.1 Defining .dcb File
A .dcb file can have two values: "-t <file type>" and "-r <record length>".
-t <file type>
-t <file type> must be LSEQ or SEQ in the m_FileAssign that creates the first generation file. If you don't specify any file type in the job ksh file, LSEQ is used by default.
-r <record length>
For a SEQ file, the value is mandatory and must be a number or "number1-number2".
For an LSEQ file, the value is optional. Once set, this value must be a number or "number1-number2".
Note:
If a GDG is created by m_GenDefine rather than m_FileAssign, the .dcb file will not exist until the first generation file is created by m_FileAssign -g +1.
Once the .dcb file is created, its contents will not be changed by any other m_FileAssign statement afterwards, unless such an m_FileAssign creates the first generation file again.
Parent topic: Support for Data Control Block (DCB)
2.7.4.4.2 Creating .dcb file
A .dcb file is created for a GDG data set when the first generation file is created by m_FileAssign -g +1.
Note:
If a GDG is created by m_GenDefine rather than m_FileAssign, the .dcb file will not exist until the first generation file is created by m_FileAssign -g +1.
Once the .dcb file is created, its contents will not be changed by any other m_FileAssign statement afterwards, unless such an m_FileAssign creates the first generation file again.
Parent topic: Support for Data Control Block (DCB)
2.7.4.4.3 Deleting .dcb file
If a GDG is deleted by m_FileDelete, the corresponding .dcb file is deleted automatically.
However, if all generation files in one GDG are deleted while the GDG itself exists, the corresponding .dcb file is not deleted.
Parent topic: Support for Data Control Block (DCB)
2.7.5 Using an In-Stream File
To define and use a file whose data is written directly inside the Korn shell script, use the m_FileAssign function with the -i parameter. By default, the string _end is the "end" delimiter of the in-stream flow, as shown in the listing below:
Listing 2‑28 In-stream Data Example
(STEP1)
m_FileAssign -i INFIL
data record 1
data record 2
…
_end
Parent topic: Using Files
2.7.6 Using a Set of Concatenated Files
To use a set of files as a concatenated input (which in z/OS JCL was coded as DD cards where only the first one contains a label), use the m_FileAssign function with the -C parameter as shown in Listing 2-29 below:
Listing 2‑29 Using a Concatenated Set of Files Example
(STEPDD02)
m_FileAssign -d SHR INF ${DATA}/PJ01DDD.BT.QSAM.KBDDI002
m_FileAssign -d SHR -C ${DATA}/PJ01DDD.BT.QSAM.KBDDI001
m_ProgramExec BDDAB001
Parent topic: Using Files
2.7.7 Using an External “sysin”
An external "sysin" is used through the m_UtilityExec function:
m_FileAssign -d OLD SYSIN ${SYSIN}/SYSIN/MUEX07
m_UtilityExec
Parent topic: Using Files
2.7.8 Deleting a File
Files (including generation files) can be deleted using the m_FileDelete function:
m_FileDelete ${DATA}/PJ01DDD.BT.QSAM.KBSTO045
Parent topic: Using Files
2.7.9 RDB Files
In a migration project from z/OS to UNIX/Linux, some permanent data files may be converted to relational tables. See the File-to-Oracle chapter of the Oracle Tuxedo Application Runtime Workbench.
When a file is converted to a relational table, this change has an impact on the components that use it. Specifically, when such a file is used in a z/OS JCL, the converted Korn shell script corresponding to that JCL should be able to handle operations that involve this file.
In order to keep the translated Korn shell script as standard as possible, this change is not handled in the translation process. Instead, all the management of this type of file is performed at execution time within Batch Runtime.
In other words, if the z/OS JCL contained a file copy operation involving the converted file, it is translated to a standard Batch Runtime file copy operation (an m_FileLoad operation).
The management of a file converted to a table is made possible through an RDB file. An RDB file is a file that has the same name as the file converted to a table but with an additional suffix: .rdb.
Each time a file-related function is executed by Batch Runtime, it checks whether the files involved were converted to tables (by testing the presence of a corresponding .rdb file). If one of the files concerned has been converted to a table, the function performs the required intermediate operations (such as unloading and reloading the table to a file) before performing the final action.
All of this management is transparent to the end-user.
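The presence test Batch Runtime performs can be pictured with a trivial shell check. This is a simplification for illustration: the real runtime also drives the unload/reload steps, and the file path below is a placeholder:

```shell
# Hypothetical sketch of the .rdb presence test made for each file
is_rdb_converted() {
  [ -f "$1.rdb" ]          # a sibling .rdb file marks a table-backed file
}
file="${DATA:-/tmp}/PJ01DDD.BT.QSAM.KBSTO045"
if is_rdb_converted "$file"; then
  echo "table-backed: unload/reload needed"
else
  echo "plain file"
fi
```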
Parent topic: Using Files
2.7.10 Using an RDBMS Connection
When executing an application program that needs to connect to the RDBMS, the -b option must be used when calling the m_ProgramExec function.
Connection and disconnection (as well as the commit and rollback operations) are handled implicitly by Batch Runtime and can be defined using the following two methods:
- Set the environment variable MT_DB_LOGIN before booting the TuxJES system. Note: In this case, all executing jobs use this variable.
- Set its value in the TuxJES security configuration file for different users.

The MT_DB_LOGIN value must use the following form: dbuser/dbpasswd[@ssid] or "/".
Note:
"/" should be used when the RDBMS is configured to allow UNIX authentication rather than RDBMS authentication for the database connection user. Please check with the database administrator whether "/" should be used or not.
Make sure the executable program can be found in $PATH.
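For illustration, the two forms look like this; the values are placeholders only:

```shell
# Two hedged examples of MT_DB_LOGIN; pick one per your DBA's guidance
export MT_DB_LOGIN='/'                        # UNIX authentication
# export MT_DB_LOGIN='dbuser/dbpasswd@ssid'   # explicit RDBMS account (placeholder)
```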
The -b option must also be used if the main program executed does not directly use the RDBMS but one of its sub-programs does, as shown in the listing below:
(STEPDD02)
m_FileAssign -d MOD OUTF ${DATA}/PJ01DDD.BT.QSAM.REPO001
m_ProgramExec -b DBREP001
The m_ProgramExec function may submit three types of files.
- Generated code files (
.gnt
file extension) compiled from COBOL source code file.Make sure that the
.gnt
files can be found in$COBPATH
(for Micro Focus COBOL) or$COB_LIBRARY_PATH
(for COBOL-IT). - Callable shared library (
.so
file extension) compiled from C source code file.Make sure the callable shared library file can be found at
$COBPATH
(for Micro Focus COBOL) or$COB_LIBRARY_PATH
(for COBOL-IT), or at system library file search path likeLIBPATH
,LD_LIBRARY_PATH
, and so on.This type of file must have an entry function whose name is equal to the file name.
For example, callable shared library file
ProgA.so
must contain a function declared by one of the following:-
ProgA(short* arglen, char* argstr)
: if you need parameters -
ProgA()
: if you do not need parameters
-
- Any other types of executable program (such as system utilities, shell scripts, and third party utilities)
m_ProgramExec determines the deliverable type of the program in the following sequence: COBOL program (.gnt), C program in a callable shared library (.so), and other executable programs. Once a COBOL program is executed, m_ProgramExec does not execute other programs with the same name. For example, once ProgA.gnt is executed, ProgA.so or other programs named ProgA are not executed.
For .gnt and .so files, m_ProgramExec launches the runb program to run them. ART provides runb for the following combinations:
- $JESDIR/ejr_mf_ora for the combination of Micro Focus COBOL and an Oracle database
- $JESDIR/ejr_mf_db2 for the combination of Micro Focus COBOL and a DB2 database
- $JESDIR/ejr_cit_ora for the combination of COBOL-IT and an Oracle database
- $JESDIR/ejr_cit_db2 for the combination of COBOL-IT and a DB2 database
If you do not use one of the above four combinations, go to $JESDIR/ejr and run make.sh to generate your personalized runb.
The runb program, a runtime compiled with the database libraries, runs the runbatch program.
The runbatch program is in charge of:
- connecting to the database (if necessary)
- running the user program
- committing or rolling back (if necessary)
- disconnecting from the database (if necessary)
Parent topic: Using Files
2.8 Submitting a Job Using INTRDR Facility
The INTRDR facility allows you to submit the contents of a sysout to TuxJES (see the Using Tuxedo Job Enqueueing Service (TuxJES) documentation). If TuxJES is not present, the command "nohup EJR" is used.
Example:
m_FileAssign -d SHR SYSUT1 ${DATA}/MTWART.JCL.INFO
m_OutputAssign -w INTRDR SYSUT2
m_FileRepro -i SYSUT1 -o SYSUT2
In this example, the contents of the file ${DATA}/MTWART.JCL.INFO (ddname SYSUT1) are copied into the file (ddname SYSUT2) defined with the option -w INTRDR, and then this file (ddname SYSUT2) is submitted.
Note:
The output file must contain valid ksh syntax.
An INTRDR job generated by a COBOL program can be submitted automatically in real time. Once a COBOL program closes INTRDR, the INTRDR job is submitted immediately, without waiting for the current step to finish. To enable this feature, the file handler ARTEXTFH.gnt needs to be linked to the COBOL program.
- For Micro Focus COBOL, add the following compile option.
CALLFH("ARTEXTFH")
- For COBOL-IT, add the following compile options.
flat-extfh=ARTEXTFH
flat-extfh-lib="<fullpath of ARTEXTFH.gnt>"
ARTEXTFH.gnt is placed at "${MT_ROOT}/COBOL_IT/ARTEXTFH.gnt".
Otherwise, INTRDR jobs are submitted after the current step finishes.
Note:
If the batch job script generated at runtime is in JCL language, it cannot be submitted by INTRDR.
Parent topic: Using Batch Runtime
2.9 Submitting a Job With EJR
When using Batch Runtime, TuxJES can be used to launch jobs (see the Using Tuxedo Job Enqueueing Service (TuxJES) documentation), but a job can also be executed directly using the EJR spawner.
Before performing this type of execution, ensure that the entire context is correctly set. This includes environment variables and directories required by Batch Runtime.
Example of launching a job with EJR
# EJR DEFVCUST.ksh
For a complete description of the EJR spawner, please refer to the Oracle Tuxedo Application Runtime for Batch Reference Guide.
Parent topic: Using Batch Runtime
2.10 User-Defined Entry/Exit
Batch Runtime allows you to add custom pre- or post-actions for public APIs. For each m_* function (* represents any function name), you can provide m_*_Begin and m_*_End functions and put them in the ejr/USER_EXIT directory. They are invoked automatically when a job execution enters or leaves an m_* API.
Whether an m_* API calls its user-defined entry/exit functions depends on the existence of m_*_Begin and m_*_End under ejr/USER_EXIT.
mi_UserEntry and mi_UserExit are called at the entry and exit points of each external API. The arguments to these APIs consist of the name of the function in which they are called and the original argument list of that function. You do not need to modify these two APIs; you only need to provide your custom entry/exit functions for the m_* external APIs. mi_UserEntry and mi_UserExit are placed under ejr/COMMON.
Note:
In a user entry/exit function, you are not allowed to use any function provided by ART for Batch; however, a return statement returns a value to the caller, and ART for Batch checks the return code to determine whether the user entry/exit function succeeded. Return code 0 continues the job; a non-zero value terminates the job.
You are advised not to call exit in a user entry/exit function. In the framework, exit is aliased to an internal function, mif_ExitTrap, which is ultimately invoked if exit is called in a user entry/exit function. If exit 0 is called, the framework does nothing and the job continues. However, if exit is called with a non-zero value, a global variable is set and may terminate the current job.
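As a sketch, a user exit for m_FileAssign could look like the following. The file would be ejr/USER_EXIT/m_FileAssign_Begin; the logged text is illustrative, and the function deliberately does nothing but echo its arguments.

```shell
# Hypothetical user exit, stored as ejr/USER_EXIT/m_FileAssign_Begin.
# Batch Runtime calls it automatically before m_FileAssign runs.
# Returning 0 lets the job continue; a non-zero return code terminates it.
m_FileAssign_Begin()
{
    echo "entering m_FileAssign with: $*"
    return 0
}

# Standalone demonstration of the call the framework would make:
m_FileAssign_Begin -d SHR SYSUT1 some.file
```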
Parent topic: Using Batch Runtime
2.10.1 Configuration
You should include only one function in a single file, with the file named the same as the function, for example, m_*_Begin or m_*_End. Further, you should put all such files under ejr/USER_EXIT.
You are not allowed to provide custom entry/exit functions for any mi_-prefixed function provided by Batch Runtime.
Parent topic: User-Defined Entry/Exit
2.11 Batch Runtime Logging
This section contains the following topics:
2.11.1 General Introduction
This section contains the following topics:
Parent topic: Batch Runtime Logging
2.11.1.1 Log Message Format
Each log message defined in CONF/Messages.conf is composed of six fields, as listed in Table 2-12 below.
Table 2-12 Log Message Format
Field | Content |
---|---|
1 | Message identifier |
2 | Functions that can display the message (generic name using *) |
3 | Level of display. Default value: 4 |
4 | Destination of display (u,e,o).
|
5 | Header flag (0,1,b). Default value: 0
|
6 | The message to be displayed with possible dynamic values |
The levels of these messages are set to 4 by default. You can specify the message level of Batch Runtime to control whether these messages are printed in the job log.
Parent topic: General Introduction
2.11.1.2 Log Message Level
Table 2-13 Log Message Level
Level | Message |
---|---|
1 | FATAL only |
2 | Previous level and errors |
3 | Previous level and information |
4 | Previous level and file information log |
5 | Previous level and high level functions |
6 | Previous level and technical functions |
7 | Same as level 3 and high level functions which correspond to the -d regexp option
|
8 | Same as 7 and technical level functions which correspond to the -d regexp option
|
9 | Reserved |
Parent topic: General Introduction
2.11.1.3 Log Level Control
The default level for displaying messages in the job log is 3. You can choose one of the following ways to change the level:
- Use the -V option of EJR
- Use the environment variable MT_DISPLAY_LEVEL
The display level set by EJR can override the level set by MT_DISPLAY_LEVEL.
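The precedence can be illustrated with a small sketch; effective_level is a hypothetical helper written for this example, not part of EJR, which applies the same rule internally.

```shell
# Sketch of the display-level precedence described above: the value passed
# with EJR -V, when present, overrides MT_DISPLAY_LEVEL; the default is 3.
effective_level()
{
    # $1: level passed with EJR -V (may be empty)
    if [ -n "$1" ]; then
        echo "$1"
    else
        echo "${MT_DISPLAY_LEVEL:-3}"
    fi
}

MT_DISPLAY_LEVEL=5
effective_level ""    # prints 5: taken from MT_DISPLAY_LEVEL
effective_level 7     # prints 7: the -V value wins
unset MT_DISPLAY_LEVEL
effective_level ""    # prints 3: built-in default
```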
Parent topic: General Introduction
2.11.1.4 Log File Structure
For each launched job, Batch Runtime produces a log file containing information for each step that was executed. This log file has the following structure, as shown in the listing below:
Listing 2‑31 Log File Example
JOB Jobname BEGIN AT 20091212/22/09 120445
BEGIN PHASE Phase1
Log produced for Phase1
.......
.......
.......
END PHASE Phase1 (RC=Xnnnn, JOBRC=Xnnnn)
BEGIN PHASE Phase2
Log produced for Phase2
.......
.......
.......
END PHASE Phase2 (RC=Xnnnn, JOBRC=Xnnnn)
..........
..........
BEGIN PHASE END_JOB
..........
END PHASE END_JOB (RC=Xnnnn, JOBRC=Xnnnn)
JOB ENDED WITH CODE (C0000)
Or
JOB ENDED ABNORMALLY WITH CODE (S990)
When not using TuxJES, the log file is created under the ${MT_LOG} directory with the following name: <Job name>_<TimeStamp>_<Job id>.log
For more information, see Using Tuxedo Job Enqueueing Service (TuxJES).
Parent topic: General Introduction
2.11.2 Log Header
Batch Runtime logging functionality provides an informative log header in front of each log line, in the following format:
YYYYmmdd:HH:MM:SS:TuxSiteID:JobID:JobName:JobStepName
You can configure the format of the log header, but this should not impact the configuration or behavior of the existing specific message headers: types 0, 1, and b.
The following table shows the variables you can use for specifying the general log header:
Table 2-14 Variables for Specifying General Log Header
Variable | Description |
---|---|
MTI_SITE_ID
|
If the job is submitted from TuxJES, it is the logical machine ID configured for the machine by TuxJES, otherwise it's empty. |
MTI_JOB_ID
|
If the job is submitted from TuxJES, it is the job ID assigned by JES. |
MTI_JOB_NAME
|
Name of the job assigned by m_JobBegin in the job script.
|
MTI_STEP_NAME
|
Name of the current executing job step. |
MTI_SCRIPT_NAME
|
Name of the job script. |
MTI_PROC_NAME |
Name of the proc when the code included from a PROC by m_ProcInclude is executing; empty otherwise.
|
Parent topic: Batch Runtime Logging
2.11.2.1 Configuration
MT_LOG_HEADER is a configuration variable added in CONF/BatchRT.conf, for example:
MT_LOG_HEADER='$(date '+%Y%m%d:%H%M%S'):${MTI_SITE_ID}:${MTI_JOB_NAME}:${MTI_JOB_ID}:${MTI_JOB_STEP}: '
If MT_LOG_HEADER is not a null string, its contents are evaluated as a shell statement to obtain the real value printed as the log header; otherwise, this feature is disabled.
Note:
The string configured in MT_LOG_HEADER is treated as a shell statement in the source code and is interpreted by the "eval" command to generate the corresponding string used as the log header:
Syntax inside: eval mt_MessageHeader=\"${MT_LOG_HEADER}\"
To configure this variable, you need to comply with the following rules:
- MT_LOG_HEADER must be a valid shell statement for "eval", and must be quoted by single quotation marks.
- All the variables used in MT_LOG_HEADER must be quoted by "${}". For example: ${MTI_JOB_STEP}
- All the command lines used in MT_LOG_HEADER must be quoted by "$()". For example: $(date '+%Y%m%d:%H%M%S')
You can modify the above examples according to your format needs, using only the variables listed in Table 2-14.
This configuration variable is commented out by default; you need to uncomment it to enable this feature.
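The evaluation step can be tried standalone. In the sketch below the MTI_* variables are given sample values by hand; in a real job Batch Runtime sets them.

```shell
# Standalone demonstration of the eval step described above.
# In a real job the MTI_* variables are set by Batch Runtime.
MTI_SITE_ID=SITE1
MTI_JOB_NAME=JOB01
MTI_JOB_ID=00042
MTI_JOB_STEP=STEP01

MT_LOG_HEADER='$(date '\''+%Y%m%d'\''):${MTI_SITE_ID}:${MTI_JOB_NAME}:${MTI_JOB_ID}:${MTI_JOB_STEP}: '

# Same expansion the framework performs internally:
eval mt_MessageHeader=\"${MT_LOG_HEADER}\"
echo "${mt_MessageHeader}sample message"
```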
Parent topic: Log Header
2.11.3 File Information Logging
The logging system can log detailed file information in the job log, as well as the information when a file is assigned to a DD and when it is released.
File assignment information is logged in the following functions:
m_FileAssign
File release information is logged in the following functions:
m_PhaseEnd
File information is logged in the following functions:
m_FileBuild
m_FileClrData
m_FileConcatenate
m_FileCopy
m_FileDelete
m_FileEmpty
m_FileExist
m_FileLoad
m_FileRename
m_FilePrint
m_FileRepro
The following topic describes the configurations required to log detailed file information:
Parent topic: Batch Runtime Logging
2.11.3.1 Configuration
This section contains the following files:
Parent topic: File Information Logging
2.11.3.1.1 Messages.conf
The following message identifiers are defined in CONF/Messages.conf to support using mi_DisplayFormat to write file assignment and file information log messages (see Log Message Format).
FileAssign;m_FileAssign;4;ueo;0;%s
FileRelease;m_PhaseEnd;4;ueo;0;%s
FileInfo;m_File*;4;ueo;0;%s
Note:
CONF/Messages.conf is not configurable. Do not edit this file.
The string "%s" at the end of each identifier represents what is written to the log file. You can configure its value using the following variables defined in CONF/BatchRT.conf. For more information, see Table 2-12 of the topic Log Message Format.
- MT_LOG_FILE_ASSIGN (for FileAssign)
- MT_LOG_FILE_RELEASE (for FileRelease)
- MT_LOG_FILE_INFO (for FileInfo)
Parent topic: Configuration
2.11.3.1.2 BatchRT.conf
Three configuration variables should be defined in CONF/BatchRT.conf to determine the detailed file information format. With the placeholders listed in Table 2-15, you can configure file log information more flexibly.
Table 2-15 Placeholders
Placeholder | Description | Value and Sample |
---|---|---|
<%DDNAME%>
|
DD Name for the file being operated | SYSOUT1
|
<%FULLPATH%>
|
Full path for the file being operated | /local/simpjob/work/TEST001.Gen.000000001 |
<%FILEDISP%> |
DISP for the file being operated | SHR or NEW
|
Table 2-16 Configuration Variables in CONF/BatchRT.conf
Name | Value and Sample | Available Placeholder |
---|---|---|
MT_LOG_FILE_ASSIGN | FileAssign: DDNAME=(<%DDNAME%>); FILEINFO=($(ls -l --time-style=+'%Y/%m/%d %H:%M:%S' --no-group <%FULLPATH%>)); FILEDISP=(<%FILEDISP%>) | <%DDNAME%>, <%FULLPATH%>, <%FILEDISP%> |
MT_LOG_FILE_RELEASE | FileRelease: DDNAME=(<%DDNAME%>); FILEINFO=($(ls -l --time-style=+'%Y/%m/%d %H:%M:%S' --no-group <%FULLPATH%>)); FILEDISP=(<%FILEDISP%>) | <%DDNAME%>, <%FULLPATH%>, <%FILEDISP%> |
MT_LOG_FILE_INFO | FILEINFO=($(ls -l --time-style=+'%Y/%m/%d %H:%M:%S' --no-group <%FULLPATH%>)) (used for m_File* operations such as FileCopy source, FileCopy destination, FileDelete, etc.) | <%FULLPATH%> |
To configure strings for these MT_LOG_FILE_* variables, replace the placeholders with the corresponding values (simple string replacement). The result is treated as a shell statement and is interpreted by the "eval" command to generate the corresponding string written to the log:
Syntax inside: eval mt_FileInfo=\"${MT_LOG_FILE_INFO}\"
To configure these variables, you need to comply with the following rules:
- After the placeholders are replaced, MT_LOG_FILE_* must be a valid shell statement for "eval", and must be quoted by single quotation marks.
- Only the placeholders listed in Table 2-15 can be used in MT_LOG_FILE_*.
- All the command lines used in MT_LOG_FILE_* must be quoted by "$()". For example: $(ls -l --time-style=+'%Y/%m/%d %H:%M:%S' --no-group <%FULLPATH%>)
If the level of the FileInfo message is equal to or less than the message level specified for Batch Runtime and MT_LOG_FILE_* is set to a null string, the FileInfo message is not displayed in the job log. If MT_LOG_FILE_* is set to an incorrect command that makes the file information invisible, the FileInfo message is not displayed in the job log either, but the job execution is not impacted.
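The substitute-then-eval sequence can also be tried standalone. In the sketch below, the sed command stands in for the framework's internal placeholder replacement, and a simplified ls -ld replaces the full sample command.

```shell
# Standalone sketch of the placeholder replacement + eval step described
# above, using a temporary file as the <%FULLPATH%> target.
MT_LOG_FILE_INFO='FILEINFO=($(ls -ld <%FULLPATH%>))'

tmpfile=$(mktemp)
# The framework first replaces the <%...%> placeholders (plain string
# replacement), then evaluates the result:
stmt=$(printf '%s\n' "$MT_LOG_FILE_INFO" | sed "s#<%FULLPATH%>#$tmpfile#")
eval mt_FileInfo=\"${stmt}\"
echo "$mt_FileInfo"
rm -f "$tmpfile"
```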
Note:
You can customize these variables according to your format needs, but make sure the command is valid; otherwise, the file information will not be logged.
Parent topic: Configuration
2.12 Using Batch Runtime With a Job Scheduler
Entry points are provided in some functions (m_JobBegin, m_JobEnd, m_PhaseBegin, m_PhaseEnd) in order to insert specific actions to be performed in relation with the selected job scheduler.
Parent topic: Using Batch Runtime
2.13 Executing an SQL Request
An SQL request may be executed using the function m_ExecSQL.
Note:
The environment variable MT_DB_LOGIN must be set (database connection user login).
The SYSIN file must contain the SQL requests, and the user has to verify their contents against the target database.
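A job step using m_ExecSQL typically looks like the fragment below. The two stub definitions exist only so that the sketch runs standalone; in a real job script both functions are provided by Batch Runtime and must not be redefined. The dataset name is illustrative.

```shell
# Stubs standing in for the real Batch Runtime functions, so this
# sketch can run on its own; do NOT define these in a real job script.
m_FileAssign() { echo "m_FileAssign $*"; }
m_ExecSQL()    { echo "m_ExecSQL"; }

export MT_DB_LOGIN="/"    # UNIX authentication, as described above
DATA=/tmp

# (STEP01) run the SQL requests contained in the SYSIN file
m_FileAssign -d SHR SYSIN ${DATA}/PJ01DDD.BT.SQL.REQUEST
m_ExecSQL
```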
Parent topic: Using Batch Runtime
2.14 Simple Application on COBOL-IT / BDB
Batch COBOL programs compiled by COBOL-IT can access the indexed ISAM files that are converted from mainframe VSAM files through the ART Workbench. VSAM files can be stored in BDB through COBOL-IT.
To enable this function in Batch Runtime, do the following at runtime:
- Compile COBOL programs with the COBOL-IT compiler, specifying bdb:yes.
- Set DB_HOME correctly because it is required by BDB; DB_HOME points to the location where BDB puts its temporary files.
- Set the following environment variables before ART for Batch launches a job:
export COB_EXTFH_INDEXED=BDBEXTFH
export COB_EXTFH_LIB=/path_to_Cobol-IT/lib/libbdbextfh.so # For example, export COB_EXTFH_LIB=/opt/cobol-it-64/lib/libbdbextfh.so
- Unset the COB_ENABLE_XA environment variable before booting the TuxJES system.
unset COB_ENABLE_XA
Note:
It is required to set COB_ENABLE_XA when you use COBOL-IT with ART CICS Runtime.
Parent topic: Using Batch Runtime
2.15 Native JCL Job Execution
This section contains the following topics:
- General Introduction
- Configurations
- Using JES Client to Manage JCL Jobs
- Supporting Range for JCL Statements and Utilities
Parent topic: Using Batch Runtime
2.15.1 General Introduction
Oracle Tuxedo ART Batch Runtime allows users to manage native JCL jobs without pre-conversion by ART Workbench.
Parent topic: Native JCL Job Execution
2.15.2 Configurations
It is required to set up TuxJES to use a database to manage jobs. See Setting up TuxJES as an Oracle Tuxedo Application (Using Database) for more information.
It is required to add the following setting to the jesconfig file: JOBLANG=JCL.
All the environment variables described in Setting Environment Variables are also available for native JCL execution.
Parent topic: Native JCL Job Execution
2.15.3 Using JES Client to Manage JCL Jobs
The usage is the same as KSH jobs:
- Submitting a JCL Job
- Printing Jobs
- Holding/Releasing/Canceling/Purging a JCL Job
- JCL Engine's Debug Trace File
Parent topic: Native JCL Job Execution
2.15.3.1 Submitting a JCL Job
You can use the -I option to submit a JCL job with the following usage:
- artjesadmin -I JCLScriptName (in the shell command line)
- submitjob -I JCLScriptName (in the artjesadmin console)
Also, you can specify an env file when submitting a job, as below:
- artjesadmin -o "-e <envfile_path>" -I JCLScriptName
- submitjob -o "-e <envfile_path>" -I JCLScriptName (in the artjesadmin console)
All the items in this env file should conform to the format "<NAME>=<VALUE>", such as
, such as
DATA=/home/testapp/data
PROCLIB=${PROCLIB}:${APPDIR}/proc
JESTRACE=DEBUG
Parent topic: Using JES Client to Manage JCL Jobs
2.15.3.2 Printing Jobs
[-t JCL|KSH]
is used as a filter with the following usage:
- Print all jobs:
printjob
- Print JCL jobs:
printjob -t JCL
- Print KSH jobs:
printjob -t KSH
The column, job type, is added to the results with one of the following values:
- JCL for JCL jobs
- KSH for KSH jobs
Before the conversion phase completes, the JCL job name and class are null, and the priority is displayed as 0.
Parent topic: Using JES Client to Manage JCL Jobs
2.15.3.3 Holding/Releasing/Canceling/Purging a JCL Job
The usage is the same as KSH jobs.
Parent topic: Using JES Client to Manage JCL Jobs
2.15.3.4 JCL Engine's Debug Trace File
The JCL conversion log is $JESROOT/<JOBID>/LOG/<JOBID>.trace.
This trace file is only for debugging purposes.
Parent topic: Using JES Client to Manage JCL Jobs
2.15.4 Supporting Range for JCL Statements and Utilities
Table 2-17 lists the supported JCL statements. Table 2-18 lists the supported utilities.
Table 2-17 Supported JCL Statement
Item | Sub-Item |
---|---|
DD Statement | Param DISP |
Param DUMMY | |
Param SYSOUT=CLASS | |
Param DCB/RECFM/LRECL | |
Param DSN | |
Param DSN (backward reference) | |
Param DDNAME (forward reference) | |
Param LIKE | |
Temporary DD, &&TEMP | |
Instream DD | |
Concatenate DD | |
Dataset MOD (append) | |
Dataset catalog feature, and Param VOLUMN | |
Dataset expire feature | |
Duplicate DD in a single step | |
Dataset exclusive (DD lock and unlock) | |
Special DD JOBLIB | |
Special DD STEPLIB | |
EXEC Statement | Param PARM |
Param COND | |
Execute utility | |
Execute COBOL program | |
Execute COBOL program with param | |
JOB Statement | Param CLASS |
Param COND | |
Param TYPERUN(COPY/HOLD/JCLHOLD/SCAN) | |
Param MSGCLASS | |
Param RESTART(*/stepname/stepname.procstepname) | |
JCLLIB Statement | - |
SET Statement | - |
IF/ELSE/ENDIF Statement | - |
INCLUDE Statement | - |
OUTPUT Statement | - |
JCL Variable | - |
PROC/PEND Statement | - |
PROC Invocation | Symbol override |
PARM override | |
COND override | |
DD override |
Table 2-18 Supported Utilities
Utility | Description |
---|---|
DSNTEP2 | Execute SQL DML |
DSNTEP4 | Execute SQL DML |
DSNTIAUL | Execute SQL DML |
DSNTIAD | Execute SQL DML |
FTP | FTP Client |
ICETOOL | Perform Multiple Purpose Dataset Operations |
IEBUPDTE | Create or Modify PS/PDS |
IEHLIST | List Entries in PDS/PDSE |
IEHPROGM | Modify System Control Data |
IKJEFT01 | Launch Application |
IKJEFT1A | |
IKJEFT1B | |
PKZIP | Compress Files into ZIP format |
PKUNZIP | Uncompress Files in ZIP format |
IEBDG | Create Dataset based on description |
IEBGENER | Copy PS or member in PDS |
IEBPTPCH | Prints or punches all, or selected portions of a sequential or partitioned data set or PDSE in a single step |
IEBCOPY | Copy or Merge members in PDS/PDSE |
FILEAID | Consolidates the functions of many standard IBM utilities |
IEFBR14 | Null Utility |
SORT | Sort, merge or filter datasets |
ICEGENER | Alias of IEBGENER |
ICEMAN | Alias of SORT |
IDCNOGFL | Alias of IDCAMS |
IEBFR14 | Alias of IEFBR14 |
IDCAMS | Generate and modify VSAM and Non-VSAM datasets |
REXEC | Execute commands on remote host |
DFSRRC00 | Invoke a BMP program in IMS |
DSNUTILB | DB2 utility used to load/unload table |
Parent topic: Native JCL Job Execution
2.16 Native JCL Test Mode
This section contains the following topics:
Parent topic: Using Batch Runtime
2.16.1 General Introduction
Native JCL Test Mode is a running mode of native JCL. This mode helps you analyze defects in user jobs/procs, find gaps in the use of the Native JCL feature, and detect environment dependency issues that block jobs from running.
Parent topic: Native JCL Test Mode
2.16.2 Configurations
Configure as follows to use Native JCL Test Mode.
- Environment Variables Configurations (Mandatory)
- Native JCL Configuration File Configurations (Optional)
Parent topic: Native JCL Test Mode
2.16.2.1 Environment Variables Configurations (Mandatory)
Set required environment variables before you submit a test mode job. The following table lists all the environment variables you must set.
Table 2-19 Environment Variables Required for Native JCL Test Mode
Environment Variable | Description |
---|---|
JESDIR
|
Directory where TuxJES is installed |
JESROOT
|
Root directory for JES2 system |
DATA
|
Directory for permanent files |
PROCLIB
|
Directory for PROC and INCLUDE files
|
MT_COBOL
|
Specifies COBOL. Use one of the following values:
|
MT_DB
|
Specifies target database. Use one of the following values:
|
MT_LOG
|
Directory for logs |
MT_TMP
|
Directory for temporary internal files |
MT_SORT
|
Specifies SORT. Use one of the following values:
|
Parent topic: Configurations
2.16.2.2 Native JCL Configuration File Configurations (Optional)
Configure the additional utilities that you need to use in the native JCL configuration file. This configuration file is located at ${JESDIR}/jclexec/conf/JCLExecutor.conf; its ADDUTILITYLIST item is used to define the additional utility list.
For example, if you would like to define the MYUTILITY utility, you should specify ADDUTILITYLIST=MYUTILITY; if you would like to define multiple utilities, you should specify ADDUTILITYLIST=MYUTILITY1,MYUTILITY2,…, using a comma (',') to separate utilities.
Parent topic: Configurations
2.16.3 Using Client to Manage Test Mode
You can use the artjclchk tool to launch test mode for a job or a group of jobs. The command line syntax for the artjclchk tool is as follows:
artjclchk -d <destdir> [-i <job_file|job_dir>] [-p <parallel number>] [-r]
-d <destdir>
Specifies the destination directory to save output report files. There are three types of output report files; all of them are generated here. See Test Mode Report Files for more information.
-i <job_file|job_dir>
Specifies jobs to be analyzed in test mode. You can specify an individual job or a directory; if you specify a directory, all jobs under this directory are analyzed.
-p <parallel number>
Specifies the number of jobs that can be processed concurrently.
-r
Specifies to generate category report and summary report.
- If you specify
-r
but do not specify-i
, this command generates category report and summary report for every individual report under-d
directory. - If you specify
-r
and specify-i
, this command generates individual reports for all jobs that-i
specifies, and then generates category report and summary report only for these individual reports. - If you do not specify
-r
, category report and summary report are not generated.
Note:
If you run the artjclchk tool twice with the same -i and -d option values, results from the second run will replace results from the first run.
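Putting the options together, a typical run could look like the sketch below. The stub function only makes the sketch runnable here; in a real environment artjclchk is the ART tool itself, and the paths are examples.

```shell
# Stub standing in for the real artjclchk tool so this sketch runs
# standalone; remove it in a real environment.
artjclchk() { echo "artjclchk $*"; }

# Analyze every job under jobs/ with 4 parallel workers, and generate the
# category and summary reports in addition to the individual reports:
mkdir -p /tmp/jclreports
artjclchk -d /tmp/jclreports -i jobs/ -p 4 -r
```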
Parent topic: Native JCL Test Mode
2.16.4 Test Mode Report Files
There are three types of report files that
artjclchk
generates.
- Individual Report File
An individual report file is a job-specific report file. artjclchk generates an individual report file for each job; anything found in the job is reported in this file.
- Category Report File
A category report file is organized according to the type of information. artjclchk generates a category report file for each type of information; any occurrence falling in the category, together with job location and line number, is reported in this file.
- Summary Report File
A summary report file is a simplified version of a category report, also generated by artjclchk. Unlike a category report file, a summary report file only records the issues and issue occurrences. A summary report file has the same name as the corresponding category report file but without "Occurences".
The following topics describe each of the reports in detail:
2.16.4.1 Individual Report File
An individual report file is a job-specific report file. artjclchk generates an individual report file for each job; anything found in the job is reported in this file.
This file is named in the format <JOBFILENAME>.rpt; fields in each line are separated by commas. See the following tables for these fields.
- Table 2-20 lists fields for JCL elements
- Table 2-21 lists fields for IKJEFTxx utilities
- Table 2-22 lists fields for other utilities
Table 2-20 Report Fields for JCL Elements
Field | Value | Description |
---|---|---|
TYPE
|
PROC
|
Identifies a PROC issue
|
INCLUDE
|
Identifies an INCLUDE issue
|
|
STATEMENT
|
Identifies a JCL statement issue | |
PARAM
|
Identifies a JCL parameter issue | |
SYMBOL
|
Identifies a JCL symbol issue | |
UTILITY
|
Identifies a utility issue | |
PROGRAM
|
Identifies a program issue | |
DATASET
|
Identifies a dataset issue | |
DD
|
Identifies a DD issue | |
STEP
|
Identifies a STEP issue
|
|
INTERNAL
|
Identifies an internal issue, such as memory fault, I/O defect, etc. | |
STATUS
|
FOUND NOTFOUND
|
Identifies if PROC , INCLUDE , PROGRAM , and DATASET objects are found or not found
|
SUPPORTED UNSUPPORTED
|
Identifies if STATEMENT , PARAM , and UTILITY objects are supported or not supported
|
DEFINED UNDEFINED
|
Identifies if SYMBOL object is defined or not defined
|
|
IGNORED
|
Identifies STATEMENT or PARAM is recognized but ignored
|
|
INVALID
|
Identifies STATEMENT or PARAM is unrecognized
|
|
ERROR
|
Identifies a system/internal error is met | |
NAME
|
Object name |
The object name. It can be PROC name, INCLUDE name, PROGRAM name, UTILITY name, PARAM name, DATASET name, or other object name
|
FILE
|
File location |
Identifies file location |
LINE
|
Line location |
Identifies line location |
JCL
|
JCL
|
Identify it is a JCL issue, not a utility issue |
DETAIL
|
Detailed information |
The detailed description of this issue |
Table 2-21 Report Fields for IKJEFTxx Utilities
Field | Value | Description |
---|---|---|
TYPE
|
COMMAND
|
Identifies a utility command issue |
PARAM
|
Identifies a utility parameter issue | |
UTILITY
|
Identifies a utility issue | |
PROGRAM
|
Identifies a program issue | |
DD
|
Identifies a DD issue | |
STEP
|
Identifies a STEP issue
|
|
INTERNAL
|
Identifies an internal issue, such as memory fault, I/O defect, etc. | |
STATUS
|
FOUND NOTFOUND
|
Identifies if PROGRAM object is found or not found
|
SUPPORTED UNSUPPORTED
|
Identifies if COMMAND , PARAM , and UTILITY objects are supported or not supported
|
|
IGNORED
|
Identifies COMMAND or PARAM is recognized but ignored
|
|
INVALID
|
Identifies COMMAND or PARAM is unrecognized
|
|
ERROR
|
Identifies a system/internal error is met | |
NAME
|
Object name |
The object name. It can be PROC name, INCLUDE name, PROGRAM name, UTILITY name, or other object name
|
FILE
|
File location |
Identifies file location |
LINE
|
Line location |
Identifies line location. The FILE and LINE locations are related to JCL job itself, for example, the STEP location where the current utility is launched.
|
UTILITY
|
<UTILITYNAME>
|
Identifies the name of the utility that generates the report line, for example, IKJEFT01
|
DETAIL
|
Detailed information | The detailed description of this issue |
Table 2-22 Report Fields for Other Utilities
Field | Value | Description |
---|---|---|
TYPE
|
COMMAND
|
Identifies a utility command issue |
PARAM
|
Identifies a utility parameter issue | |
DD
|
Identifies a DD issue | |
INTERNAL
|
Identifies an internal issue, such as memory fault, I/O defect, etc. | |
STATUS
|
SUPPORTED
|
Identifies if COMMAND and PARAM objects are supported or not supported
|
IGNORED
|
Identifies COMMAND or PARAM is recognized but ignored
|
|
INVALID
|
Identifies COMMAND or PARAM is unrecognized
|
|
ERROR
|
Identifies a system/internal error is met | |
NAME
|
Object name | The object name. It can be PROC name, INCLUDE name, PROGRAM name, UTILITY name, or other object name
|
FILE
|
File location | Identifies file location |
LINE
|
Line location | Identifies line location. The FILE and LINE locations are related to JCL job itself, for example, the STEP location where the current utility is launched.
|
UTILITY
|
<UTILITYNAME>
|
Identifies the name of the utility that generates the report line, for example, IEBGENER , SORT , and PKZIP
|
DETAIL
|
Detailed information | The detailed description of this issue |
Parent topic: Test Mode Report Files
2.16.4.2 Category Report File
A category report file is organized according to the type of information. artjclchk generates a category report file for each type of information; any occurrence falling in the category, together with job location and line number, is reported in this file.
The following reports will be generated:
- Missing Item Report
This is the category report file for missing items. This report file is named in the format "Missing_Item_<DATETIME>_Occurences.csv". See Table 2-23 for its columns.
- Unsupported Item Report
This is the category report file for unsupported items. This report file is named in the format "Unsupported_Item_<DATETIME>_Occurences.csv". See Table 2-24 for its columns.
- Ignored Item Report
This is the category report file for ignored items. This report file is named in the format "Ignored_Item_<DATETIME>_Occurences.csv". See Table 2-25 for its columns.
- Suspicious Code Defect Report
This is the category report file for suspicious code defects. This report file is named in the format "CodeDefect_<DATETIME>_Occurences.csv". See Table 2-26 for its columns.
- Missing Dataset Report
This is the category report file for missing datasets. This report file is named in the format "Missing_Dataset_<DATETIME>_Occurences.csv". See Table 2-27 for its columns.
- Internal Error Report
This is the category report file for internal errors. See Table 2-28 for its columns.
- Supported Utility Report
This is the category report file for supported utilities. This report file is named in the format "Supported_Utility_<DATETIME>_Occurences.csv". See Table 2-29 for its columns.
Table 2-23 Category Report File: Missing Item Report
Column Name | Description |
---|---|
Name | Name of the item which is missing. |
Type | Item type. It can be PROC , INCLUDE or PROGRAM .
|
File Location | Called file. |
Line Number | Corresponding line number. |
Table 2-24 Category Report File: Unsupported Item Report
Column Name | Description |
---|---|
Name | Name of the item which is unsupported. |
Type | Item type. It can be PROC , INCLUDE or PROGRAM .
|
JCL/Utility | JCL or utility name. |
File Location | Called file. |
Line Number | Corresponding line number. |
Description | Description for the error. |
Table 2-25 Category Report File: Ignored Item Report
Column Name | Description |
---|---|
Name | Name of the item which is ignored. |
Type | Item type. It can be STATEMENT , COMMAND or PARAM .
|
JCL/Utility | JCL or utility name. |
File Location | Called file. |
Line Number | Corresponding line number. |
Table 2-26 Category Report File: Suspicious Code Defect Report
Column Name | Description |
---|---|
Statement | Invalid statement. |
Type | Item type. |
JCL/Utility | JCL or utility name. |
File Location | Called file. |
Line Number | Corresponding line number. |
Description | Description for the error. |
Note:
The detected location may not be the real root cause, but only the location where parsing failed. The actual code defect may be located in earlier lines; when you check it, pay attention to the context where the error is reported.
Table 2-27 Category Report File: Missing Dataset Report
Column Name | Description |
---|---|
Dataset | The missing dataset name. |
File Location | Called file. |
Line Number | Corresponding line number. |
DD | DD name. |
Table 2-28 Category Report File: Internal Error Report
Column Name | Description |
---|---|
Error | The parameter which is unsupported, ignored, or invalid. |
Type | Item type. |
File Location | Called file. |
Line Number | Corresponding line number. |
Description | Description for the error. |
Table 2-29 Category Report File: Supported Utility Report
Column Name | Description |
---|---|
Name | Utility name. |
Parent topic: Test Mode Report Files
2.16.4.3 Summary Report File
A summary report file is a simplified version of a category report file; it is also generated by artjclchk. Unlike a category report file, a summary report file records only the issues and their occurrence counts. A summary report file has the same name as the corresponding category report file, but without "Occurences".
- Missing Item Report
This report file is named in the format "Missing_Item_<DATETIME>.csv". See Table 2-30 for its columns.
Note:
This is the simplified version of the "Missing Item Report" category report file.
- Unsupported Item Report
This report file is named in the format "Unsupported_Item_<DATETIME>.csv". See Table 2-31 for its columns.
Note:
This is the simplified version of the "Unsupported Item Report" category report file.
- Ignored Item Report
This report file is named in the format "Ignored_Item_<DATETIME>.csv". See Table 2-32 for its columns.
Note:
This is the simplified version of the "Ignored Item Report" category report file.
- Suspicious Code Defect Report
This report file is named in the format "CodeDefect_<DATETIME>.csv". See Table 2-33 for its columns.
Note:
This is the simplified version of the "Suspicious Code Defect Report" category report file.
- Missing Dataset Report
This report file is named in the format "Missing_Dataset_<DATETIME>.csv". See Table 2-34 for its columns.
Note:
This is the simplified version of the "Missing Dataset Report" category report file.
- Internal Error Report
This report file is named in the format "Internal_Error_<DATETIME>.csv". See Table 2-35 for its columns.
Note:
This is the simplified version of the "Internal Error Report" category report file.
- Supported Utility Report
This report file is named in the format "Supported_Utility_<DATETIME>.csv". See Table 2-36 for its columns.
Note:
This is the simplified version of the "Supported Utility Report" category report file.
Table 2-30 Summary Report File: Missing Item Report
Column Name | Description |
---|---|
Name | Name of the item which is missing. |
Type | Item type. It can be PROC, INCLUDE or PROGRAM. |
Occurrences | Repeated times. |
Table 2-31 Summary Report File: Unsupported Item Report
Column Name | Description |
---|---|
Name | Name of the item which is unsupported. |
Type | Item type. It can be UTILITY, STATEMENT, COMMAND or PARAM. |
JCL/Utility | JCL or utility name. |
Description | Description for the error. |
Occurrences | Repeated times. |
Table 2-32 Summary Report File: Ignored Item Report
Column Name | Description |
---|---|
Name | Name of the item which is ignored. |
Type | Item type. It can be STATEMENT, COMMAND or PARAM. |
JCL/Utility | JCL or utility name. |
Occurrences | Repeated times. |
Table 2-33 Summary Report File: Suspicious Code Defect Report
Column Name | Description |
---|---|
Statement | Invalid statement. |
Type | Item type. |
JCL/Utility | JCL or utility name. |
Description | Description for the error. |
Occurrences | Repeated times. |
Table 2-34 Summary Report File: Missing Dataset Report
Column Name | Description |
---|---|
Dataset | The missing dataset name. |
DD | DD name. |
Occurrences | Repeated times. |
Table 2-35 Summary Report File: Internal Error Report
Column Name | Description |
---|---|
Error | The parameter which is unsupported, ignored, or invalid. |
Type | Item type. |
Description | Description for the error. |
Occurrences | Repeated times. |
Table 2-36 Summary Report File: Supported Utility Report
Column Name | Description |
---|---|
Name | Utility name. |
Occurrences | Repeated times. |
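The summary reports above are plain comma-separated files, so they can be post-processed with standard tools. The sketch below is illustrative only: the file name and rows are made-up sample data, and it assumes the column layout of Table 2-31, where Occurrences is the last column.

```shell
# Create a made-up sample summary report (illustrative data only)
cat > /tmp/Unsupported_Item_SAMPLE.csv <<'EOF'
Name,Type,JCL/Utility,Description,Occurrences
IDCAMS,UTILITY,JOB1,unsupported parameter,3
IEBCOPY,UTILITY,JOB2,unsupported statement,2
EOF

# Total occurrences across all unsupported items (skip the header row)
awk -F, 'NR > 1 { sum += $NF } END { print sum }' /tmp/Unsupported_Item_SAMPLE.csv
```

For the sample data above, the command prints the total of the Occurrences column.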
Parent topic: Test Mode Report Files
2.17 Network Job Entry (NJE) Support
This section contains the following topics:
- General Introduction
- Configurations
- NJE Job Sample
Parent topic: Using Batch Runtime
2.17.1 General Introduction
With NJE support, users can implement the following functionalities in Batch Runtime exactly as they do in JCL jobs.
- /* ROUTE XEQ
- /* XEQ
- /* XMIT
Using the m_JobSetExecLocation API of Batch Runtime, users can develop KSH jobs with NJE support. For example, users can:
- Specify the server group on which the job will be executed.
- In a job, transmit an in-stream job to another server group and make it run on that server group.
Parent topic: Network Job Entry (NJE) Support
2.17.2 Configurations
This section contains the following topics:
- Job Execution Server Group
- ON/OFF Setting of NJE Support
- Environment Variable MT_TMP in MP Mode
- Queue EXECGRP
Parent topic: Network Job Entry (NJE) Support
2.17.2.1 Job Execution Server Group
When specifying the server group name (the job execution group passed to the m_JobSetExecLocation API), ensure the following:
- The specified server group must exist in the ubbconfig file of the JES domain.
- At least one ARTJESINITIATOR server must be deployed in that server group.
Parent topic: Configurations
2.17.2.2 ON/OFF Setting of NJE Support
Table 2-37 Configurations in <APPDIR>/jesconfig
Name | Value | Default Value |
---|---|---|
NJESUPPORT | ON: Enable NJE support. OFF: Disable NJE support. | OFF |
If NJE support is disabled in jesconfig, the statement m_JobSetExecLocation <SvrGrpName> is ignored by TuxJES, and the job may be executed by any ARTJESINITIATOR in any server group.
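For example, the following line in <APPDIR>/jesconfig would enable NJE support (a minimal sketch of the setting documented in Table 2-37):

```shell
# Enable NJE support in <APPDIR>/jesconfig (the default is OFF)
NJESUPPORT=ON
```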
Parent topic: Configurations
2.17.2.3 Environment Variable MT_TMP in MP Mode
In MP mode, MT_TMP needs to be configured on NFS, and all the nodes in the Tuxedo domain should share the same value of MT_TMP. MT_TMP can be configured in the file $MT_ROOT/CONF/BatchRT.conf, or exported as an environment variable before tlisten is started on each node.
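For example (the NFS mount point below is an assumption), every node could export the same value before tlisten starts:

```shell
# Same value on every node in the domain; the path must be an NFS share
# visible to all nodes (illustrative path)
export MT_TMP=/nfs/shared/mt_tmp
```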
Parent topic: Configurations
2.17.2.4 Queue EXECGRP
If NJESUPPORT is enabled in jesconfig, a new queue named EXECGRP must be created in the existing queue space JES2QSPACE. If EXECGRP is not created, no jobs can be processed by JES.
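The queue can be created with the standard Tuxedo qmadmin utility. The session below is only a sketch: the queue-space device path is an assumption, and qcreate interactively prompts for queue-ordering and threshold parameters that are not shown here; answer them as you did for the other JES2QSPACE queues.

```
qmadmin ${APPDIR}/QUE
> qopen JES2QSPACE
> qcreate EXECGRP
  ...answer the qcreate prompts...
> quit
```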
Parent topic: Configurations
2.17.3 NJE Job Sample
Listing 2‑32 Sample of Specifying Job Execution Server Group (XEQ)
m_JobBegin -j SAMPLEJCL -s START -v 2.0 -c R
m_JobSetExecLocation "ATLANTA"
while true ;
do
m_PhaseBegin
case ${CURRENT_LABEL} in
(START)
# XEQ ATLANTA
JUMP_LABEL=STEP01
;;
(STEP01)
m_OutputAssign -c "*" SYSPRINT
m_FileAssign -i SYSIN
m_FileDelete ${DATA}/GBOM.J.PRD.ABOMJAW1.ABEND02
m_RcSet 0
_end
m_UtilityExec
JUMP_LABEL=END_JOB
;;
(END_JOB)
break
;;
(*)
m_RcSet ${MT_RC_ABORT:-S999} "Unknown label : ${CURRENT_LABEL}"
break
;;
esac
m_PhaseEnd
done
m_JobEnd
Listing 2‑33 Sample of Transmitting and Submitting a Job to Another Server Group (XMIT)
m_JobBegin -j JOBA -s START -v 2.0
while true;
do
m_PhaseBegin
case ${CURRENT_LABEL} in
(START)
m_FileAssign -i -D _DML_XMIT_TEST1 SYSIN
m_JobBegin -j TEST1 -s START -v 2.0 -c B
m_JobSetExecLocation "ATLANTA"
while true ;
do
m_PhaseBegin
case ${CURRENT_LABEL} in
(START)
JUMP_LABEL=STEP01
;;
(STEP01)
m_OutputAssign -c "*" SYSPRINT
m_FileAssign -i SYSIN
m_FileDelete ${DATA}/GBOM.J.PRD.ABOMJAW1.ABEND02
m_RcSet 0
_end
m_UtilityExec
JUMP_LABEL=END_JOB
;;
(END_JOB)
break
;;
(*)
m_RcSet ${MT_RC_ABORT:-S999} "Unknown label : ${CURRENT_LABEL}"
break
;;
esac
m_PhaseEnd
done
m_JobEnd
_DML_XMIT_TEST1
m_ProgramExec artjesadmin -i ${DD_SYSIN}
JUMP_LABEL=END_JOB
;;
(END_JOB)
break
;;
(*)
m_RcSet ${MT_RC_ABORT:-S999} "Unknown label : ${CURRENT_LABEL}"
break
;;
esac
m_PhaseEnd
done
m_JobEnd
In the above sample, job TEST1 will be submitted by the current job and executed by the ARTJESINITIATOR which belongs to JES's Tuxedo server group ATLANTA.
Parent topic: Network Job Entry (NJE) Support
2.18 File Catalog Support
This section contains the following topics:
- General Introduction
- Database Table
- Configuration Variables
- External Shell Scripts
- External Dependency
Parent topic: Using Batch Runtime
2.18.1 General Introduction
With file catalog support in Batch Runtime, users can access datasets under volumes. A volume is a dataset carrier and exists as a folder; each dataset should belong to a volume.
The file catalog contains the mapping from each dataset to its volume. When an existing, cataloged file is referenced, as on the mainframe, the file catalog is queried to find the volume in which the file is located, and the file is then accessed there.
If the file catalog functionality is disabled, the behavior of Batch Runtime remains the same as it was without this functionality.
Parent topic: File Catalog Support
2.18.2 Database Table
The following table describes the database table that Batch Runtime uses to manage the file catalog. Each row represents one file-to-volume mapping.
Table 2-38 Batch Runtime Catalog
Name | Type | Description |
---|---|---|
FILENAME | VARCHAR(256) | The file name. It cannot contain any slash. |
VOLUME | VARCHAR(256) | The volume name. It cannot contain any slash. |
VOLUME_ATTR | CHAR(1) | Reserved. |
EXPDT_DATE | CHAR(7) | Expiration date of the file. |
CREATE_DATE | CHAR(7) | The date when the file is created. |
FILE_TYPE | VARCHAR(8) | File organization. |
JOB_ID | VARCHAR(8) | The ID of the job that creates the entry. |
JOB_NAME | VARCHAR(32) | The name of the job that creates the entry. |
STEP_NAME | VARCHAR(32) | The name of the step that creates the entry. |
Primary Key: PK_ART_BATCH_CATALOG
Parent topic: File Catalog Support
2.18.3 Configuration Variables
Four configuration variables must be added in BatchRT.conf or set as environment variables:
MT_USE_FILE_CATALOG
If it is set to yes (MT_USE_FILE_CATALOG=yes), the file catalog functionality is enabled; otherwise, the functionality is disabled.
MT_VOLUME_DEFAULT
If no volume is specified when a new dataset is created, Batch Runtime uses the volume defined by MT_VOLUME_DEFAULT. MT_VOLUME_DEFAULT contains only one volume, for example, MT_VOLUME_DEFAULT=volume1.
MT_DB_LOGIN
This variable contains the database access information. For Oracle, its value is "username/password@sid" (for example, "scott/password@gdg001"). For Db2, its value is "your-database USER your-username USING your-password" (for example, "db2linux USER db2svr USING db2svr").
MT_CATALOG_DB_LOGIN
This variable contains the file catalog database access information. Its format is the same as MT_DB_LOGIN. Since the file catalog is stored in a database, Batch Runtime must access it through MT_DB_LOGIN or MT_CATALOG_DB_LOGIN; MT_CATALOG_DB_LOGIN takes precedence over MT_DB_LOGIN for file catalog access. If the file catalog database is the same as the data database, configuring only MT_DB_LOGIN is sufficient; otherwise, both must be configured.
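Putting the four variables together, a BatchRT.conf fragment might look like the following sketch. All values shown here, including the user names, passwords, connect identifiers, and the volume name, are illustrative assumptions.

```shell
MT_USE_FILE_CATALOG=yes                  # enable file catalog support
MT_VOLUME_DEFAULT=volume1                # volume used when none is specified
MT_DB_LOGIN=scott/password@orcl          # data database (Oracle format)
MT_CATALOG_DB_LOGIN=scott/password@catdb # catalog database; takes precedence for catalog access
```

If the file catalog lives in the same database as the data, MT_CATALOG_DB_LOGIN can be omitted.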
Parent topic: File Catalog Support
2.18.4 External Shell Scripts
You can use CreateTableCatalog[Oracle|Db2].sh
or
DropTableCatalog[Oracle|Db2].sh
to create or drop the
new database table.
2.18.4.1 Description
Creates table ART_BATCH_CATALOG
in database.
Parent topic: External Shell Scripts
2.18.4.2 Usage
CreateTableCatalog[Oracle|Db2].sh <DB_LOGIN_PARAMETER>
Parent topic: External Shell Scripts
2.18.4.3 Sample
CreateTableCatalogOracle.sh scott/password@orcl
Parent topic: External Shell Scripts
2.18.4.4 DropTableCatalog[Oracle|Db2].sh
This section contains the following topics:
- Description
- Usage
- Sample
Parent topic: External Shell Scripts
2.18.4.4.1 Description
Drops table ART_BATCH_CATALOG
from database.
Parent topic: DropTableCatalog[Oracle|Db2].sh
2.18.4.4.2 Usage
DropTableCatalog[Oracle|Db2].sh <DB_LOGIN_PARAMETER>
Parent topic: DropTableCatalog[Oracle|Db2].sh
2.18.4.4.3 Sample
DropTableCatalogOracle.sh scott/password@orcl
Parent topic: DropTableCatalog[Oracle|Db2].sh
2.18.5 External Dependency
To use file catalog functionality in Batch Runtime, File Converter and JCL Converter in ART Workbench should enable catalog functionality. For more information, please refer to Oracle Tuxedo Application Rehosting Workbench User Guide.
Parent topic: File Catalog Support
2.19 Launching REXX EXECs
This section contains the following topics:
- Setting MT_REXX_PATH
- Launching REXX EXECs
- TSO Batch Commands
2.19.1 Setting MT_REXX_PATH
MT_REXX_PATH has no default value. It should be set to the main path where all REXX EXECs are located. Place REXX programs in the proper subdirectories under ${MT_REXX_PATH}. These subdirectories correspond to the PDSs on the mainframe where the REXX programs live.
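As a sketch of this layout (all paths, the PDS name, and the member name below are made up for illustration), a PDS such as USER.REXX.EXEC becomes one subdirectory, with one file per former member:

```shell
# Choose a demo location for MT_REXX_PATH (illustrative path)
export MT_REXX_PATH=/tmp/art_rexx_demo
# One subdirectory per mainframe PDS
mkdir -p "${MT_REXX_PATH}/USER.REXX.EXEC"
# One file per PDS member
printf '/* REXX */\nSAY "HELLO"\n' > "${MT_REXX_PATH}/USER.REXX.EXEC/HELLO"
ls "${MT_REXX_PATH}/USER.REXX.EXEC"
```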
Parent topic: Launching REXX EXECs
2.19.2 Launching REXX EXECs
If SYSEXEC is defined, BatchRT treats programs run by m_ProgramExec as REXX EXECs. The SYSEXEC DD specifies where to find the target REXX programs.
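Inside a KSH job, this could look like the following step fragment. This is only a sketch: the step label, the PDS subdirectory name, and the EXEC name are assumptions, and the fragment requires the Batch Runtime m_* functions to run.

```shell
(STEPRX)
  # SYSEXEC tells BatchRT where to look up REXX EXECs
  m_FileAssign -d SHR ${MT_REXX_PATH}/USER.REXX.EXEC SYSEXEC
  # With SYSEXEC defined, HELLO is resolved and run as a REXX EXEC
  m_ProgramExec HELLO
  JUMP_LABEL=END_JOB
  ;;
```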
Parent topic: Launching REXX EXECs
2.19.3 TSO Batch Commands
All relevant REXX files (REXX APIs and TSO commands) are located in the Batch_RT/tools/rexx directory. The directory structure is as follows:
- lib
  - outtrap.rex
  - regis_tso_cmd
- tso
  - DELETE
  - LISTDS
  - RENAME
  - libTSO.so
Batch_RT/tools/rexx/tso is where the TSO commands are located. REXX APIs should be put in the Batch_RT/tools/rexx/lib directory.
Note:
TSO commands provide a lock mechanism. Files accessed by TSO commands in a REXX program are locked when the commands start. All file locks are released once the TSO commands finish executing.
Parent topic: Launching REXX EXECs
2.20 COBOL Program Access to Oracle and TimesTen Database
ART for Batch supports the following scenarios:
- A COBOL program accesses both the Oracle database and the TimesTen database
- A COBOL program accesses only the TimesTen database
The following topics describe the procedure to access the Oracle and TimesTen databases:
- Setting Environment Variables
- Programming in COBOL
- Preprocessing COBOL Programs
- Examples
Parent topic: Using Batch Runtime
2.20.1 Setting Environment Variables
To enable this feature, you need to set the following environment variables:
Table 2-39 Required Environment Variables
Scenario | MT_TT_CONN | MT_DB | MT_DB_LOGIN | MT_DB_LOGIN2 |
---|---|---|---|---|
Oracle database and TimesTen database | Required. Specifies the TimesTen connection name. | Required. Specifies ORA. | Required. Specifies the Oracle connection credential. | Required. Specifies the TimesTen connection credential. |
TimesTen database only | Required. Specifies the TimesTen connection name. Note: You only need to specify a pseudo connection name, which indicates that you are using only TimesTen; this name is not actually used in your COBOL program. | Required. Specifies ORA. | Required. Specifies the TimesTen connection credential. | Not required. |
Also, you need to set the following environment variables:
- LD_LIBRARY_PATH should include the TimesTen library and the bundled Instant Client library.
- TNS_ADMIN should be set to the directory containing the file tnsnames.ora, in which both the TimesTen and Oracle TNS information should be included. For more information, see the Oracle TimesTen In-Memory Database documentation.
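As an illustration, tnsnames.ora in the TNS_ADMIN directory could contain both kinds of entries. The entry names, host, port, and service values below are assumptions; consult the TimesTen documentation for the exact connect-descriptor syntax for your instance.

```
# Oracle database entry (illustrative)
oradb = (DESCRIPTION =
  (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
  (CONNECT_DATA = (SERVICE_NAME = orcl)))

# TimesTen direct-mode entry (illustrative)
ttdb = (DESCRIPTION =
  (CONNECT_DATA = (SERVICE_NAME = sampledb)(SERVER = timesten_direct)))
```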
Parent topic: COBOL Program Access to Oracle and TimesTen Database
2.20.2 Programming in COBOL
EXEC SQL statements against the Oracle database do not need to specify a connection name; however, all operations against TimesTen must specify the TimesTen connection name as it is set in MT_TT_CONN.
Parent topic: COBOL Program Access to Oracle and TimesTen Database
2.20.3 Preprocessing COBOL Programs
In this feature, all COBOL programs accessing the Oracle database or the TimesTen database should be preprocessed by TimesTen Pro*COBOL.
Parent topic: COBOL Program Access to Oracle and TimesTen Database
2.20.4 Examples
The following examples are provided:
- Example for Setting Environment Variables
- Example for COBOL Programs Accessing to Oracle Database
- Example for COBOL Programs Accessing to TimesTen Database
- Example for Preprocessing COBOL Programs
- Example for Compiling COBOL Programs (CIT)
Parent topic: COBOL Program Access to Oracle and TimesTen Database
2.20.4.1 Example for Setting Environment Variables
Listing 2‑34 Example for Setting Environment Variables
export MT_TT_CONN=TTNAME1
export MT_DB=ORA
export MT_DB_LOGIN=user1/pass1@oracle
export MT_DB_LOGIN2=user2/pass2@tt
export TIMESTEN_HOME=/opt/TimesTen/tt
export LD_LIBRARY_PATH=${TIMESTEN_HOME}/lib:${TIMESTEN_HOME}/ttoracle_home/instantclient_11_2:${LD_LIBRARY_PATH}
Parent topic: Examples
2.20.4.2 Example for COBOL Programs Accessing to Oracle Database
Listing 2‑35 Example for COBOL Programs Accessing to Oracle Database
EXEC SQL
SELECT B
INTO :H-VALUE-B
FROM ORATBL01
END-EXEC.
Parent topic: Examples
2.20.4.3 Example for COBOL Programs Accessing to TimesTen Database
Listing 2‑36 Example for COBOL Programs Accessing to TimesTen Database
EXEC SQL DECLARE TTNAME1 DATABASE END-EXEC.
EXEC SQL
AT TTNAME1
SELECT NAME
INTO :H-VALUE-NAME
FROM TTTBL01
END-EXEC.
Parent topic: Examples
2.20.4.4 Example for Preprocessing COBOL Programs
Listing 2‑37 Example for Preprocessing COBOL Programs
export LD_LIBRARY_PATH=${TIMESTEN_HOME}/ttoracle_home/instantclient_11_2
${TIMESTEN_HOME}/ttoracle_home/instantclient_11_2/sdk/procob sqlcheck=syntax lname=SELECTTT.lis iname=SELECTDB.pco oname=SELECTDB.cob
Parent topic: Examples
2.20.4.5 Example for Compiling COBOL Programs (CIT)
Listing 2‑38 Example for Compiling COBOL Programs (CIT)
cobc -fthread-safe -m -g -G -fmf-gnt SELECTDB.cob -w -fixed -ffcdreg -lcitextfh -t SELECTDB.lst -conf=cit.conf
Parent topic: Examples