1 Development Overview

This chapter describes ASAP's application architecture, its client/server and multithreaded design, and the API libraries used to develop ASAP applications.

Application architecture

Figure 1-1 outlines how ASAP components interrelate. This figure displays all ASAP application processes (for example, SRP and SARM) and their associated databases.

ASAP contains several processes, and each process has its own database. Because of this close coupling of a process to its database, ASAP can be distributed efficiently in a network environment.

Figure 1-1 ASAP Processes and Databases


The database engine:

ASAP uses SQL Server as its database engine. A single SQL Server can contain one or more ASAP databases. In a completely distributed environment, each ASAP database may reside on a separate SQL Server, on a separate machine. This ability to distribute and scale the ASAP application transparently is a fundamental feature of ASAP's design.

The ASAP control server:

The ASAP Control Server manages ASAP's overall operation. The Control Server:

  • Starts and stops ASAP applications.

  • Distributes ASAP across many machines.

  • Maintains process and performance statistics about each application.

  • Provides event notification, logging, alarming, and paging facilities that the applications use.

  • Monitors the behavior of application clients and application servers.

  • Issues system alarms to the proper alarm centers if an ASAP application terminates unexpectedly.

Client/server architecture

ASAP consists of a set of multithreaded UNIX Client/Server processes that communicate with each other and the associated database servers.

The Client/Server architecture has several advantages over traditional program architectures:

  • Client/Server applications, such as ASAP, are easily distributed across several, possibly heterogeneous, platforms. The applications have a scalable architecture that you can expand to meet future requirements.

  • Application size and complexity are reduced significantly because common services are handled in a single location, by the server. This simplifies client applications, reduces duplicate code, and makes application maintenance easier.

  • Client/Server architecture enables applications to be developed with distinct components. These components can be changed or replaced without affecting other parts of the application. Such components may be supplied as part of the base product or developed by individual customers to suit their own requirements.

  • Client/Server architecture facilitates communication between varied applications. Client applications that use different communication protocols from the server cannot communicate directly with it; instead they must communicate through a "gateway" server that understands both protocols.

Multithreaded architecture

ASAP's multithreaded architecture makes more efficient use of the available resources than single-threaded architecture in the following ways:

You can combine several concurrent tasks into a single server process as threads of execution within that process. This means that there are fewer operating system processes running. Because of this, the operating system performs less process context-switching. Thread context-switching within a process is much more efficient than process context-switching performed by the operating system.

Since several tasks run concurrently as threads within the server process, the threads communicate with one another through internal messages that use conventional process memory and semaphores as notification mechanisms. This method of communication is much more efficient than the inter-process communication that would be required if each task were run as its own operating system process.

When the operating system allows a process to execute for a period of time on the CPU, the multithreaded server's internal scheduler schedules the various threads within the process over the duration of that period. Therefore, whenever a thread either explicitly yields the processor to another thread, or performs I/O activity (for example, disk and/or network access), the thread scheduler puts that thread to sleep and resumes the next thread in order of priority. As soon as the original thread's results are ready, the thread scheduler resumes that thread.

Due to the improved performance of multithreaded applications, most contemporary operating systems and database servers employ this architecture.

Open Server/Open Client ASAP components

ASAP is composed of three Client/Server systems: the RDBMS server, the Open server, and the Open client. Figure 1-2 outlines the Client/Server composition of ASAP and the methods of communication between components.

Figure 1-2 ASAP Client/Server Composition

RDBMS Server

The RDBMS Server is a stand-alone database engine. You can extend the RDBMS Server only by adding SQL definitions and data to it.

Open Client

Open Client applications use the Open Client libraries (Client library and Database library) to communicate with the Open Server and the RDBMS Server. To communicate with these servers, the Open Client application uses either SQL or RPCs. The previous figure outlines how the Open Client communicates with the servers. In turn, the Open Client can receive results, messages, and notifications from these servers.

Open Server

Open Server applications are built using the Open Server API and Open Client API. They are multithreaded processes containing both event-driven threads (for example, threads handling a client connection) and background or service threads that perform non-event-related tasks. They may receive language commands and RPCs from client processes and RPCs from RDBMS Servers, and they may send RPCs to other Open Servers (for example, SARM to NEP) or to RDBMS Servers (for database access).

The Gateway Server application

A Gateway Server application is an application that incorporates both client and server functionality. Such an application uses the client component to communicate with other servers and databases, and uses the server component to receive requests and RPCs from client processes and databases. ASAP consists of Gateway applications (for example, SARM, SRP, and NEP) together with some client processes and databases. In this manual, every gateway application is referred to as an ASAP application server. The Client/Server configuration appears in Figure 1-3.

The client API, provided by Sybase, can co-exist in a multithreaded application process; that is, it is fully re-entrant and does not invoke any blocking routines. In particular, the client API can co-exist within a server application, resulting in a gateway application that possesses both client and server characteristics.

Figure 1-3 Gateway Server Application


Multithreaded environment

One of the principal issues to consider when using the ASAP API is the multithreaded environment that runs under UNIX. In UNIX, many processes run concurrently. Process execution, resource allocation, and CPU usage, for example, are controlled by the UNIX process scheduler.

A multithreaded application using the Open Server API is one in which there are many Open Server execution threads within the UNIX process. All the Open Server threads share the resources of the host operating system process. The Open Server library that is linked to each Open Server application provides concurrency by periodically suspending the running thread and resuming another. This thread-context switch happens so quickly that the threads appear to run continuously from the point of view of the client application communicating with the Open Server.

Open Server thread scheduler

The Open Server library provides a specialized thread called the thread scheduler. The thread scheduler performs context switches between the threads in the server. A thread has an execution context that includes its stack and register environment. The thread scheduler saves the execution context of the running thread, selects another thread to resume, restores its context, and runs it.

An Open Server application is a UNIX process whose execution is controlled by the UNIX process scheduler. The UNIX process scheduler context-switches the UNIX processes. Within the Open Server process, the thread scheduler context-switches the threads.

With context-switching, CPU resources are used more effectively. Any thread that performs I/O or other time-consuming operations is suspended and another thread runs while the first thread awaits the results of its operation.

Overall, the elements of the process now communicate extremely quickly. There is no data movement at all, only pointers passing within the UNIX process. Because the resources are managed more effectively, the application itself executes more quickly. Therefore, the application is a collection of threads within an Open Server, instead of a number of UNIX processes.

Open Server threads within the process have their own stack and register environments. The threads do, however, share the resources of the operating system process that is executing the server. For example, the standard input and standard output are the same for every thread in the server. If threads regularly write to the standard output, you must be careful not to interleave the output of several threads.

Programming in a multithreaded environment

Consider the following when programming in a multithreaded environment.

  • Make all code re-entrant – Open Server threads execute the same code image, so the program must not modify itself. If a thread resumes code that was changed while the thread was suspended, the results are difficult to predict and likely to be unrecoverable. In a multithreaded environment, you must take into account the possibility of concurrency within a single UNIX process and not only between UNIX processes.

    For more information about mutexes, refer to the subsection "Mutually exclusive semaphores (mutexes)" in the section "Inter-thread communication" of this chapter.

  • Protect shared resources – Make sure to protect shared resources such as global data, file descriptors, devices, and so on. While updating a shared global data item, do not call a routine that could suspend the thread unless you have taken steps to prevent other threads from accessing the data (such as a mutex). Otherwise, another thread could be working with inconsistent data. Watch for program logic that assumes it has sole access to a resource.

  • Avoid static variables in routines that could be executed by more than one thread – If a thread accesses a static variable that might be accessed by other threads, the variable's value may have been changed since it was last referenced by that thread. It is safer to use automatic variables because each thread has its own stack. When static variables must be used, protect access to them by using the technique for protecting variables as shared resources, outlined in the previous point.

  • Be careful with certain UNIX system calls and C library routines – The standard C library was not written to handle a multithreaded process explicitly. Therefore, certain functions may maintain a global variable, invisible to you, in order to save some information internal to the function. An example is strtok(). After the initial call, strtok() maintains a pointer to its current position within the string. If two threads are using strtok() at the same time, it is possible that a strtok() call on one string could return the address of some part of the other string, or worse, crash the server. A re-entrant alternative is shown in the sketch after this list.

  • Avoid blocking UNIX system calls – UNIX system calls such as sleep() and read() can block the entire UNIX process, not just the individual thread. Generally, the Open Server API and the ASAP API provide non-blocking versions of these system calls.

  • Be aware of linking in third-party libraries – Some third-party libraries may include blocking calls that could affect Open Server operation. Examples of such libraries include LU6.2 API libraries and X.25 libraries.

  • Be aware of the thread stack size – Each thread within an Open Server has its own stack environment. If a particular thread exceeds its stack size, the server terminates. Therefore, avoid deeply recursive routines that can produce excessive stack depth (for example, traversal of unbalanced binary trees) and avoid large automatic (stack) variables.
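
The strtok() hazard described above can be avoided with the re-entrant variant strtok_r(), which keeps its position in a caller-supplied pointer instead of hidden static state. The following standalone sketch illustrates the difference; the token-parsing routine and data are purely illustrative and are not part of the ASAP API.

#include <stdio.h>
#include <string.h>

/* Illustrative only: parse a comma-separated list of fields.
 * strtok_r() keeps its parsing position in the caller-supplied 'saveptr',
 * so two threads tokenizing different strings cannot interfere with each
 * other, unlike strtok(), which hides its position in static storage. */
static void parse_fields(char *line)
{
    char *saveptr;                      /* per-call (per-thread) state */
    char *tok = strtok_r(line, ",", &saveptr);

    while (tok != NULL) {
        printf("field: %s\n", tok);
        tok = strtok_r(NULL, ",", &saveptr);
    }
}

int main(void)
{
    char buf[] = "CLLI01,ADD_SUB,4165551234";   /* hypothetical sample data */
    parse_fields(buf);
    return 0;
}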

Multithreaded procedural server initialization

In the initialization of the ASAP application server, the ASAP API installs an SRV_START Open Server event handler, which is called by the Open Server library to start the server process. The server process is single-threaded while the SRV_START handler is executing. The server can spawn service threads, create message queues and mutexes, among other things, from within this start handler. Such spawned threads do not begin executing until the handler has finished.

It is not possible to send or receive thread messages, to lock mutexes or to perform any network input/output operations within this handler.

Figure 1-4 Multithreaded Procedural Server Initialization


During this handler execution, the application-supplied appl_initialize() API routine is called. In appl_initialize(), the application registers one or more initialization routines, by means of the ASC_reg_init_func() routine, to be executed before the application threads start up. Such routines generally load static data from the database into memory tables and therefore cannot be called directly from appl_initialize(), because appl_initialize() runs as part of the SRV_START handler, where network input/output is not possible.

To control the multithreaded server initialization, the ASAP API initially locks an initialization mutex and spawns an initialization thread, which is allowed to run when the SRV_START handler finishes. All other application threads (except the Sleep Manager and Sleep Wakeup Handler threads) are blocked waiting on the initialization mutex. This includes all connections received by this application server.

The initialization thread processes each registered initialization routine in series. Once all such routines have been completed, the initialization thread unlocks the initialization mutex, allowing the application threads to begin execution with the assurance that all the necessary initialization has been performed.
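
The following sketch shows the shape of this registration step. The prototypes of appl_initialize() and ASC_reg_init_func() shown here (a void initialization callback and a CS_RETCODE return) are assumptions made for illustration; consult the ASAP API reference for the real signatures, and treat load_static_tables() as a purely hypothetical application routine.

#include <cspublic.h>   /* CS_RETCODE, CS_SUCCEED (Sybase common types) */

/* Assumed prototype, for illustration only; see the ASAP API reference. */
extern CS_RETCODE ASC_reg_init_func(void (*init_func)(void));

/* Hypothetical application routine: loads static reference data from the
 * database into in-memory tables. It performs network I/O, so it must not
 * run inside the SRV_START handler; it runs later in the initialization
 * thread, before the initialization mutex is released. */
static void load_static_tables(void)
{
    /* ... SELECT reference data and build in-memory lookup tables ... */
}

/* Called by the ASAP API from within the SRV_START handler.
 * The return type and empty argument list are assumptions. */
CS_RETCODE appl_initialize(void)
{
    /* Register the routine; it executes after SRV_START completes and
     * before the application threads are allowed to run. */
    ASC_reg_init_func(load_static_tables);

    return CS_SUCCEED;
}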

Inter-process communication

The Open Server and Open Client APIs provide inter-process communication facilities that are used by ASAP applications. The principal inter-process communication facilities provided by the APIs are RPCs, registered procedures, and language requests.

Application server driver threads handle communication between servers.

RPCs and registered procedures

The RPC/Registered Procedure mechanism is the predominant inter-process communication mechanism used in ASAP.

A Remote Procedure Call (RPC) is a function that is executed in an RDBMS Server, or a C function that is executed in an application server, on behalf of an application client or server process. To execute the procedure, the sending server or client first establishes a network connection with the destination server and passes the appropriate information to the destination server in name-value pair format. The sending server or client then executes the procedure or function, waits for data rows, return status, or text messages from the receiving server, and, finally, ends the execution of the procedure.

Registered procedures provide more efficient processing than RPCs but do not allow options in the procedure call. Registered procedures do, however, give servers the ability to notify clients whenever a registered procedure executes.

This method of communication is fast and efficient, especially when a network connection is already established for the sender to transmit its RPC/Registered Procedure and receive the associated results.
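
As an illustration of the client side of this exchange, the fragment below sends an RPC using Sybase Client-Library calls (ct_command(), ct_param(), ct_send(), and ct_results()). The procedure name and parameter are hypothetical, and the fragment assumes an already-allocated CS_COMMAND on an open connection; ASAP applications would normally go through the ASAP RPC API rather than calling Client-Library directly.

#include <ctpublic.h>
#include <string.h>

/* Send the hypothetical RPC "asap_sample_rpc" with one character parameter,
 * then drain the results. 'cmd' must already be allocated with
 * ct_cmd_alloc() on an open connection to the destination server. */
static CS_RETCODE send_sample_rpc(CS_COMMAND *cmd, CS_CHAR *order_id)
{
    CS_DATAFMT fmt;
    CS_INT     restype;
    CS_RETCODE ret;

    /* Name the remote procedure to execute. */
    if (ct_command(cmd, CS_RPC_CMD, "asap_sample_rpc",
                   CS_NULLTERM, CS_NO_RECOMPILE) != CS_SUCCEED)
        return CS_FAIL;

    /* Describe and attach one input parameter (a name-value pair). */
    memset(&fmt, 0, sizeof(fmt));
    strcpy(fmt.name, "@order_id");
    fmt.namelen   = CS_NULLTERM;
    fmt.datatype  = CS_CHAR_TYPE;
    fmt.maxlength = (CS_INT)strlen(order_id);
    fmt.status    = CS_INPUTVALUE;

    if (ct_param(cmd, &fmt, (CS_VOID *)order_id,
                 (CS_INT)strlen(order_id), 0) != CS_SUCCEED)
        return CS_FAIL;

    /* Transmit the RPC to the destination server. */
    if (ct_send(cmd) != CS_SUCCEED)
        return CS_FAIL;

    /* Consume rows, return status, and messages from the destination. */
    while ((ret = ct_results(cmd, &restype)) == CS_SUCCEED) {
        if (restype == CS_ROW_RESULT)
            ct_cancel(NULL, cmd, CS_CANCEL_CURRENT);  /* discard rows here */
    }
    return (ret == CS_END_RESULTS) ? CS_SUCCEED : CS_FAIL;
}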

Executing functions:

The following procedure describes how to execute functions in Oracle.

To execute a function in Oracle:

  1. Log in to the database by typing:

    sqlplus $<server_user>/$<server_password>

    where:

    <server_user> – your user name for logging in to the server

    <server_password> – your password

  2. Execute the function by typing:

    variable retval number;
    exec :retval := <stored_procedure>(<parameter_name(s)>);

    where:

    <stored_procedure> – the name of the function

    <parameter_name(s)> – the name of each column identified in the applicable database table

Language requests

To facilitate larger data volume transfers, the sending application client or server transmits a language request to the receiving server process. For these large data volumes, language requests are more efficient than RPCs.

Application server driver threads

In most application servers that communicate with other application servers, a special thread called a driver thread is spawned on the sender side. This driver thread handles the communication between the servers. Any thread within the server that wishes to communicate with the other server interfaces with the driver thread and leaves it to the driver thread to perform the actual RPC/Registered Procedure call. If the call fails, the driver thread marks the receiving server as down.

Figure 1-5 illustrates that on the receiver side, an RPC/Registered Procedure handler routine interprets the incoming RPC/Registered Procedure and takes the appropriate action.

Figure 1-5 Application Server Driver Threads


If a driver thread has been idle for a period of time (for example, one minute), it checks the integrity of the RPC connection in case the other server process has died. To do so, the driver executes a "heartbeat" RPC called kick_start.

The heartbeat RPC checks the integrity of the network connection. If the connection is down, the driver thread closes its end of the network connection. Closing the connection ensures that the other application server can start up again. Another detection technique uses callbacks that are triggered when the connection between applications is broken.

If the driver thread does not close the network connection, the driver thread will not release its end of the connection and the UNIX kernel will not free up the socket associated with the network connection. Consequently, when the other server tries to start up and listen on its master network port, it will be unable to do so because the socket upon which it is supposed to listen is already in operation (in other words, it has not yet been released). Therefore, the other application server will not be able to start up.

All driver threads within ASAP employ this mechanism to validate the network connections. ASAP assumes that the RDBMS Server will not terminate unexpectedly and, therefore, does not check network connections to the RDBMS Server.

Inter-thread communication

Within each multithreaded application server, there might be many threads of execution, and these threads may need to communicate with each other. The APIs provide a number of tools to allow for this "inter-thread" communication.

Mutually exclusive semaphores (mutexes)

A Mutual Exclusion Semaphore (mutex) is a logical object, much like a semaphore under UNIX. The Open Server API allows only one thread at a time to lock a given mutex.

Threads that share global variables, structures, tables, and so forth, may only access these resources, whether to read or update them, if they first lock the mutex associated with them. While the mutex is locked, the locking thread has exclusive access to the resource. Other threads waiting to access that resource can only do so when the current thread is finished using it. Once that thread is finished with the resource, it unlocks the mutex, granting access to the next thread in line. The new thread must lock the mutex again before accessing the resource. A thread may block waiting on a mutex.

Mutexes are primarily used to protect resources from multiple concurrent updates by more than one thread. Therefore, if there is a resource within an application server that could be updated by more than one thread at a time, you should protect it using a mutex. Such protection is generally encapsulated in a single function which manages the mutex updates.
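
A minimal sketch of this encapsulation pattern follows. It uses the Open Server mutex routines srv_createmutex(), srv_lockmutex(), and srv_unlockmutex(); the exact argument lists shown are assumptions based on typical usage, and the shared counter and wrapper functions are hypothetical, not part of the ASAP API.

#include <ospublic.h>   /* Open Server types: SRV_OBJID, SRV_PROC, CS_INT */

/* Hypothetical shared resource: a counter updated by several threads. */
static CS_INT    orders_in_progress = 0;
static SRV_OBJID order_count_mutex;    /* created once, at start-up */

/* Called once from the SRV_START handler (single-threaded at that point).
 * srv_createmutex() argument order is assumed here. */
void init_order_counter(void)
{
    srv_createmutex("order_count_mutex", CS_NULLTERM, &order_count_mutex);
}

/* Single function that encapsulates all updates to the shared counter,
 * as recommended above. Any thread may call it.
 * srv_lockmutex()/srv_unlockmutex() argument lists are assumed. */
void add_orders_in_progress(SRV_PROC *srvproc, CS_INT delta)
{
    /* Block until the mutex is granted (SRV_M_WAIT). While it is held,
     * no other thread can enter this critical section. */
    srv_lockmutex(order_count_mutex, SRV_M_WAIT, srvproc);

    orders_in_progress += delta;       /* the protected update */

    srv_unlockmutex(order_count_mutex);
}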

Thread message queues

Message queues let threads communicate with each other and are often used to send data to other threads within an application server.

The message itself resides in memory shared by the sending and receiving threads. The thread that puts the message into a queue and the thread that reads it must agree on the message format. Be careful that the sending thread does not overwrite the message memory before the reading thread has received the message.

There are two main types of inter-thread messages employed within ASAP: asynchronous thread messages and synchronous thread messages. These two types of messages are explained in the following subsections.

Asynchronous thread messages:

Asynchronous thread messages are the most common types of messages passed between threads in ASAP. With these messages, the sender allocates memory for the message structure, fills in the message details, sends the message to the receiver, and continues normal operation.

The receiver receives the message, takes the appropriate action on its contents, and then frees the memory area allocated to the message. No synchronization is needed between the sending and receiving threads. The only prerequisite for this type of communication is that the receiver must have a thread message queue. Figure 1-6 illustrates this process flow.

Figure 1-6 Asynchronous Thread Messages


Synchronous thread messages:

In some cases, the sender may need to wait for the receiver to act on the message before continuing. For instance, it may need to read a status field entered in the message by the receiver. When this happens, the sender must wait until the receiver updates the relevant information in the message before referencing it. In this case, the sender usually declares the message on its stack, fills in the message details, and sends the message to the receiver.

Immediately after sending the message to the receiver, the sender thread goes to sleep and only wakes up when the receiver updates the message and explicitly wakes up the sender. When the receiver gets the message, it reads it and may update certain fields before waking up the thread that was sleeping on this message. At this point, both the sender and receiver continue with normal operations. This process flow appears in the following illustration.

Use the synchronous message method if a message sender requires return parameters or a message status.

Note:

Both the sending and receiving threads must agree on both the message format and the message type (synchronous or asynchronous). If the sender sends an asynchronous message and the receiver expects a synchronous one, then the memory allocated for the message will never be freed. Conversely, if the sender sends a synchronous message and the receiver expects an asynchronous one, the sender will stay asleep and the receiver will try to free the message on the sender's stack. This will lead to unpredictable results.

Figure 1-7 Synchronous Thread Messages


Functions and configuration variables:

There are four functions that must be used when dealing with thread message queues within ASAP: ASC_createmsgq, ASC_deletemsgq, ASC_putmsgq, and ASC_getmsgq. The use of these functions must be consistent throughout the application.
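
A sketch of the asynchronous pattern from Figure 1-6, expressed with these four calls, follows. The ASC_* prototypes shown here (a queue identified by name and a message passed by pointer) are assumptions for illustration only, as are the message structure, queue name, and thread routines; the real signatures are documented in the ASAP API reference.

#include <stdlib.h>
#include <string.h>

/* Hypothetical message format; sender and receiver must agree on it. */
typedef struct {
    int  event_type;
    char order_id[32];
} WorkMsg;

/* Assumed prototypes, for illustration only (see the ASAP API reference). */
extern int   ASC_createmsgq(char *queue_name);
extern int   ASC_putmsgq(char *queue_name, void *msg);
extern void *ASC_getmsgq(char *queue_name);
extern int   ASC_deletemsgq(char *queue_name);

/* Sender side: allocate the message on the heap, fill it in, queue it,
 * and continue. The sender never touches the message again. */
void notify_order_event(int event_type, const char *order_id)
{
    WorkMsg *msg = malloc(sizeof(*msg));

    if (msg == NULL)
        return;
    msg->event_type = event_type;
    strncpy(msg->order_id, order_id, sizeof(msg->order_id) - 1);
    msg->order_id[sizeof(msg->order_id) - 1] = '\0';

    ASC_putmsgq("ORDER_EVENT_Q", msg);      /* asynchronous: no wait */
}

/* Receiver thread body: read each message, act on it, then free it,
 * because ownership passed to the receiver along with the message. */
void order_event_thread(void)
{
    ASC_createmsgq("ORDER_EVENT_Q");        /* done once, by the receiver */

    for (;;) {
        WorkMsg *msg = (WorkMsg *)ASC_getmsgq("ORDER_EVENT_Q");
        /* ... take the appropriate action on msg ... */
        free(msg);                          /* receiver frees async messages */
    }
}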

Message queue statistics may also be dumped to a file.

System monitoring tool:

The ASAP Utility Script (asap_utils) is a menu-driven script that provides access from UNIX to a set of monitoring utilities for ASAP. You can access sysmon through the Real-time System Monitoring option (109) of the asap_utils menu. You can also monitor multiple servers at the same time by selecting the Real-time System Monitoring option (109) again. Once the data collection time period has passed, sysmon output files are created in the ASAP system's diagnostic file directory.

Note:

Option 101, View Server Msg Queue Statistics, available in previous versions of ASAP, as well as the configuration parameter DIAG_MSGQUEUES and the RPC diag_msgqueues, have been replaced by the functionality available from option 109, Real-time System Monitoring.

The system monitoring tool is not available to C++ SRPs.

Sample message queue statistics

----------------------
Tuning - Message Queue
----------------------
Description             Count   Total       Min   Max       Average  Mean     Deviation
-----------             -----   -----       ---   ---       -------  ----     ---------
ASDL Provision Queue
message read wait time  1056    5995.5      0.9   62.2      5.7      15.1     10.2
messages sent (count)   1056
queue idle-time (ms)    1056    5971905.0   0.0   114374.2  5655.2   23775.0  19062.4
queue size (count)      1056    0.0         0.0   0.0       0.0      0.0      0.0
...

Group Manager Msg Q
message read wait time  2469    38896.8     2.5   290.5     15.8     61.5     48.0
messages sent (count)   2469
queue idle-time (ms)    2469    18707440.6  0.0   71880.3   7576.9   18294.2  119980.0
queue size (count)      2469    27184.0     0.0   27.1      7.2      1.5      4.5
...

Device-oriented threads and socketpair messaging

Device-oriented threads typically watch for input from one or more devices using the ASC_poll() API call. Such a thread cannot wait on the internal message queue to receive inter-thread messages from other threads. Instead, the thread creates a socketpair and publishes one end of the socketpair as the socket device to which the other threads can write messages. The thread adds the other end of the socketpair to the list of devices that are polled for input. If any thread writes to the write endpoint of the socketpair, the device-oriented thread detects this through ASC_poll() and then reads the message from the read endpoint. Only asynchronous messages are supported with socketpair messaging.

Both the sending and receiving threads must agree on the format of the message.

This messaging technique is used internally in the ASAP core development. There are no explicit API calls provided as this is an internal feature of the ASAP API. Socketpair messaging is commonly used in applications interfacing with other external systems.
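
The sketch below shows the underlying UNIX mechanism with the standard socketpair(), write(), and read() calls. In a real NEP the device-oriented thread would hand the read end to ASC_poll() rather than reading directly as shown here, and the message structure and function names are hypothetical.

#include <sys/socket.h>
#include <unistd.h>

/* Hypothetical message passed between threads over the socketpair. */
typedef struct {
    int  code;
    char text[64];
} DevMsg;

static int dev_msg_fds[2];   /* [0] = read end (polled), [1] = write end */

/* Device-oriented thread: create the socketpair and publish the write end.
 * The read end is added to the set of devices the thread polls
 * (through ASC_poll() in a real NEP). */
int init_device_msg_channel(void)
{
    return socketpair(AF_UNIX, SOCK_STREAM, 0, dev_msg_fds);
}

/* Any other thread: post an asynchronous message to the device thread. */
void post_device_msg(const DevMsg *msg)
{
    if (write(dev_msg_fds[1], msg, sizeof(*msg)) != (ssize_t)sizeof(*msg)) {
        /* handle a short write or error as appropriate for the application */
    }
}

/* Device-oriented thread: called when polling reports the read end ready. */
void read_device_msg(DevMsg *msg)
{
    if (read(dev_msg_fds[0], msg, sizeof(*msg)) != (ssize_t)sizeof(*msg)) {
        /* handle a short read or error as appropriate for the application */
    }
}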

Notes for C++ compilation

The UNUSED macro:

This macro takes a single argument and is used in function definitions where one of the parameters is not used. It prevents the C++ compiler from warning about the unused parameter. Refer to the example below.

CS_INT example_func(int arg1, int arg2, int UNUSED(arg3))
{
    CS_INT arg4;
    /* ... */
    arg1++;
    arg4 = arg2 + arg1;
    /* ... */
    return arg4;
}

In this example, the C++ compiler accepts example_func without warning that arg3 is unused.
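
The ASAP headers supply the UNUSED macro. One common way to define such a macro, shown here purely as an illustration and not necessarily as the ASAP definition, is to drop the parameter name under C++ so the parameter is unnamed and generates no warning:

/* Illustrative definition only; the ASAP header's version may differ.
 * In C++, UNUSED(arg3) expands to nothing, leaving the parameter unnamed,
 * so the compiler cannot warn that it is never used. In C, the name is
 * kept because unnamed parameters are not allowed. */
#ifdef __cplusplus
#define UNUSED(x)
#else
#define UNUSED(x) x
#endif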

Library architecture

This section outlines ASAP's library architecture and includes the following topics:

  • ASAP API Development Structure

  • API Library Structures

  • ASAP API Application Development

  • SRP Server Application Structure

  • Generic NEP Application Structure

  • Multi-Protocol NEP Structure

ASAP API development structure

The ASAP development API builds upon the Open Client and Open Server libraries. ASAP client applications use the ASAP Client API which interfaces with the Open Client. ASAP server applications use the ASAP Server API to communicate with both the Open Client and Open Server libraries.

Figure 1-8 summarizes the API hierarchy.

API library structures

A detailed API structure emphasizing the application APIs appears in Figure 1-9. This figure illustrates the APIs that can be linked to an ASAP application. In general, each application links to a subset of the outlined libraries. For more information about each of the API libraries, refer to the subsections that follow.

Figure 1-9 API Library Structures

Common API library – libasc

The Common API library, libasc, provides a set of API routines common to both the client and server application libraries: libclient and libcontrol. This library is linked to both application clients and servers.

The libasc library provides you with considerable functionality, including:

  • Diagnostic routines common to both clients and servers

  • System event generation

  • Application configuration parameter determination

  • Network connection management

  • Performance parameter generation

  • Registered Procedures API

  • Remote Procedure Calls API

Client application API library – libclient

The Client Application API library, libclient, has an application client template to which you can add application-dependent functionality. This library, which is employed by every ASAP application client, provides routines specific to client applications.

The primary advantage of libclient is that it enables clients to integrate easily into the ASAP model. This allows the Control Server to start each client in the same manner as it starts server applications.

To use this library, you must define the following functions:

  • appl_initialize() – The application client initialization routine

  • appl_cleanup() – The application client termination cleanup routine
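
A minimal client skeleton built on these two entry points might look like the following. The return types and empty argument lists are assumptions made for illustration; the actual prototypes are defined by the ASAP Client API headers.

#include <cspublic.h>   /* CS_RETCODE, CS_SUCCEED */

/* Assumed prototypes; the real ones come from the ASAP client headers. */

/* Called once by the libclient template after the client starts. */
CS_RETCODE appl_initialize(void)
{
    /* ... allocate application resources, open connections ... */
    return CS_SUCCEED;
}

/* Called once by the libclient template before the client exits. */
CS_RETCODE appl_cleanup(void)
{
    /* ... release the resources opened in appl_initialize() ... */
    return CS_SUCCEED;
}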

Server application API library – libcontrol

The Server Application API library, libcontrol, provides you with routines specific to server applications. Each ASAP application server links to this library, which provides a complete environment in which you can develop server applications.

Using this library, the Control Server can start each server application and monitor its behavior while it is running. It adds considerable functionality to the server application before any user-specified functionality is added to the server.

This library provides you with considerable functionality including:

  • Memory management

  • Thread spawning and thread administration

  • Performance monitoring and management

  • Non-blocking thread versions of UNIX system calls: sleep(), alarm(), read(), write(), poll(), and so forth

  • Control Agent routines that access the Control database for performance parameter updates

  • Database administration thread

  • Pools of SQL Server network connections to various databases for application use

  • Language requests management

  • Client connection and disconnection handlers

  • Many administrative and diagnostic RPCs

  • Library version registry

To use this library, you must define the following function:

  • appl_initialize() – The application server initialization routine which can add RPC/Registered Procedure/Language Request handlers, spawn application service threads, and so forth.

This server library provides considerable functionality to any application server that links to it. Figure 1-10 outlines a schematic of a server before any application-specific functionality is added.

The Server Application API library also contains an application server template to which you can add application-dependent functionality. This library also spawns a number of threads to manage some common functions.

The following subsections describe some of these service threads, as well as other functions that are not managed by service threads.

The server API discussed here is generally excluded from schematic descriptions of an application server in order to avoid confusion and to emphasize the functionality of the particular server. The functionality of the API is present in all ASAP application servers.

Sleep manager and sleep wakeup handler threads:

The Sleep Manager and Sleep Wakeup Handler threads run constantly in each application server.

The Sybase Open Server library provides routines that let a thread be put to "sleep" on a particular object (srv_sleep()) and be "woken up" (srv_wakeup()) by another thread within the Open Server. Unlike UNIX, Open Server has no function to put a thread to sleep for a specified time period. Therefore, to put a thread to sleep for a specified time period, you can use a separate ASAP API routine, ASC_sleep().

The UNIX alarm() call is a shared resource. When activated, it issues a signal interrupt to the UNIX process, affecting all threads in a multithreaded process. In many cases, a particular thread may wish to alarm only itself, and not impact the operation of other threads in the UNIX process.

For the above cases, the libcontrol library spawns two service threads when the application starts up (refer to the previous diagram). These threads are:

  • sleep manager thread – This thread receives sleep and alarm requests from other threads within the process.

  • sleep wakeup handler thread – This thread handles the UNIX alarm() call and passes the timeout notification to the sleep manager thread, which then forwards the alarm notification to the relevant application thread.
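
For example, a background service thread that wakes up once a minute would call ASC_sleep() rather than the blocking UNIX sleep(), so that only the calling thread is suspended. The prototype shown below (seconds as the single argument) is an assumption, and the housekeeping routine is hypothetical.

/* Assumed prototype, for illustration; see the ASAP API reference. */
extern void ASC_sleep(int seconds);

/* Hypothetical service thread body spawned at server start-up. */
void housekeeping_thread(void)
{
    for (;;) {
        /* Suspends only this thread; the UNIX process (and all other
         * Open Server threads within it) keeps running. */
        ASC_sleep(60);

        /* ... periodic housekeeping work ... */
    }
}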

Control agent thread:

When the ASAP application server starts up, the Control library spawns the Control Agent thread, which does the following:

  • Runs constantly in each application server.

  • Opens a connection to the Control database in the SQL Server. Whenever an application thread issues a system event or database audit log or error entry, the thread calls a function in the API which, in turn, interfaces with the Control Agent thread to update the relevant database tables.

  • Installs a UNIX signal handler for the SIGUSR1 signal. When the application server process receives this signal from the Control Server, the Control Agent thread invokes functions in the Control database and updates that server's process and performance parameter information. The Control Server sends the performance poll signal based on the performance poll-interval period. The poll interval is configured from the Control Server.

Server language driver thread:

In some cases, one application server needs to send a large amount of textual information (NE response files, for example) over the network to another application server, possibly on a remote machine.

To allow an application thread to send a text file or a text buffer as an Open Server language request to another server, use the API function ASC_send_text(). When this API function is called to send a text file to a particular application server for the first time, the API spawns a Server Language Driver thread. This thread continues running from that point onwards.

The Server Language Driver thread manages the network connection to the other application server and transmits the textual buffer to the destination application server, where its arrival triggers an Open Server language event.

To properly receive the language buffer, the destination server must have the appropriate language event handlers installed (using API functions). The server language driver thread maintains the network connection to that particular application server, so that any subsequent calls to this function to send textual information to this server are routed to this thread.

If the application calls the API function to send textual information to a different application server, the API spawns a new Server Language Driver thread to manage the new connection and data transmission. Therefore, the application server has one Server Language Driver thread for every connection to a separate application server.

Database administration thread:

The Database Administration thread is a background thread that is present in every application server. It is spawned when the application server starts up. At a daily time that you configure, this thread establishes a connection to the server's primary database and does the following:

  • Performs database administration tasks (for instance, data archiving, purges, and reports) by calling a function and passing it a parameter.

    You configure the function and the parameters, and specify the database administration tasks to perform.

  • Updates statistics for all tables within the database to ensure the correct execution plan is chosen whenever the functions are recompiled.

  • Recompiles all functions in the database.

When the above tasks are completed, the thread terminates the connection to the primary database.

Poller manager thread:

The Poller Manager thread manages API access to the Sybase srv_poll() routine to provide UNIX-like polling functionality to each application thread. It is invoked by means of the ASC_poll() API call.

Ping server thread:

The Ping Server thread is a constantly executing thread within each application server that periodically checks that both the application's SQL Server and the Control Server are running. If either is not running for any reason, this thread terminates the application server.

Interpreter API library – libinterpret

The Interpreter API library, libinterpret, provides routines for application servers that need to use State Tables. This library is generally used by SRPs and NEPs.

Follow these steps to use this library for an application server:

  1. Initialize the interpreter using the ASC_init_interpreter() API call.

  2. Once initialized, use the ASC_alloc_interpreter() API call to allocate the library.

  3. To deploy the library, use the ASC_interpreter() API call.

  4. When the previous step is complete, free the library using the ASC_free_interpreter() call.

  5. If you want to add or override State Table actions with customized action handler functions, use the CMD_user_actions() API call.
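
Put together, the sequence looks roughly like the sketch below. The argument lists are placeholders shown only to illustrate the call order (the real prototypes are in the ASAP API reference), run_state_table() is a hypothetical wrapper, and the optional CMD_user_actions() registration from step 5 is omitted.

/* Assumed prototypes with placeholder argument lists, for illustration
 * only; see the ASAP API reference for the real ones. */
extern int   ASC_init_interpreter(void);
extern void *ASC_alloc_interpreter(void);
extern int   ASC_interpreter(void *interp);
extern void  ASC_free_interpreter(void *interp);

/* Step 1: initialize the interpreter, typically once at server start-up. */
int init_interpreter_once(void)
{
    return ASC_init_interpreter();
}

/* Hypothetical wrapper showing the order of the remaining calls. */
int run_state_table(void)
{
    void *interp = ASC_alloc_interpreter();   /* step 2: allocate        */
    int   status = ASC_interpreter(interp);   /* step 3: run the table   */

    ASC_free_interpreter(interp);             /* step 4: release it      */
    return status;
}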

SRP API library – libsrp

The SRP API library, libsrp, shields each SRP from the details of SARM communication and notification responses. It provides a set of data structures and routines you can use to develop an SRP. The libsrp library has the following main components:

  • Data structures and routines that describe the ASAP work order into which the SRP translates incoming service requests.

  • Data structures and function pointers that process each notification returned to the SRP from the SARM.

  • API routines that retrieve query information related to the processing of a particular ASAP work order from the SARM.

An SRP application links to the SRP API, which comprises both the Interpreter API library and the SRP API library. When the SRP initializes the SRP library using the SRP_initialize() API call, the API does the following:

  • Initializes the Interpreter within the SRP so that the SRP can employ it (provided libinterpret has been compiled in the SRP application).

  • Spawns SARM driver threads to manage the transmission of ASAP Work Orders to the SARM. The number of driver threads to be spawned can be configured.

  • Spawns work order manager threads to receive notification events returned from the SARM and calls user-specified functions to process each notification event it receives. The number of work order manager threads to be spawned can be configured. The API also adds all registered procedures and RPCs that the SRP receives from the SARM.

    Note:

    ASAP tuning can increase ASAP performance, especially for work order management and handler threads.

Figure 1-11 shows a typical example of an SRP and SRP API components.

Figure 1-11 SRP and SRP API Components

NEP API library – libnep

The NEP API library, libnep, provides you with the tools to write an NEP. To use this library and generate a custom NEP server, you must provide the following API functions to the NEP "core" system:

  • ASC_loadCommParams() – Returns the list of communication parameters for the specified device type, host, and device.

    For more information, see "Interpreter library."

  • CMD_comm_init() – Initializes the communications interface library. When the NEP requires interface-specific state table actions, use this function to register the actions using CMD_user_actions (see State Table Interpreter).

  • CMD_connect_port() – Opens a connection to the device specified by the port information structure. This function also registers an association between the command processor that initiates the connect and the device.

  • CMD_disconnect_port() – Closes the connection to the device specified by the port information structure.

Figure 1-12 shows a typical example of an NEP and NEP API components.

Figure 1-12 NEP and NEP API Components

Multi-protocol communications API library – libasccomm

The communications library provides the Multi-Protocol Manager (MPM) with access to the various protocol drivers. Any ASAP application server that links in the interpreter library and the communications library can communicate with a remote host using one of the supported device interfaces. Where terminal-based communication is required, the MPM routines manage the writing to and reading from the virtual screen.

The communications library allows one instance of an NEP application to communicate with hosts using multiple device interfaces. One command processor thread is dedicated to each remote host connection. To communicate with the device interface, the command processor invokes the MPM, which is part of the communications API. The MPM determines the appropriate device interface handler based on the device interface type specified by the command processor.

For device interfaces that access UNIX devices and support non-blocking I/O, all the API functions are within the library. Hardwired or dialup (modem) serial interfaces, TCP sockets, telnet, and SUN X.25 are some device interfaces that fall into this category.

For device interfaces that need external device drivers, either because of a non-UNIX device interface API or because non-blocking I/O is not possible, the communications library provides a generic driver to interface with the external device driver. For example, communicating with the host through the IBM-X25 API needs this approach because the IBM-X25 API cannot co-exist with the Open Server. The generic driver communicates with an external device driver using the External Device Driver Interface API (libgedd) functions.

Figure 1-13 Logical NEP Command Processor Structure

Generic external device driver library – libgedd

The UNIX device API is managed from within the NEP using the Communications API. A non-UNIX device API requires an external device driver (EDD) to act as a gateway to transmit data between the NEP and the NE. The Generic External Device Driver API library, libgedd, handles the communication between the NEP and the EDD.

Network element configuration library – libnecfg

This library, libnecfg, provides commonly-used functionality that is NE-specific, MARCH-specific and blackout-related.

ASAP API application development

This section details the API structures used in various ASAP applications.

Client application structure

The Client application structure, as in Figure 1-14, is the API structure used within a typical ASAP client.

Figure 1-14 Client Application Structure

Server application structure

The Server application structure, as in Figure 1-15, is the API structure used within a typical ASAP server.

Figure 1-15 Server Application Structure


SRP server application structure

The SRP Server application structure, as in Figure 1-16, is the API structure used within a typical ASAP SRP server.

The SRP application is a server application, so it links in libcontrol and libasc, as well as libsrp and libinterpret.

Figure 1-16 SRP Server Application Structure


Generic NEP application structure

The Generic NEP application structure, as in Figure 1-17, is the API structure used within a typical Generic NEP server.

The NEP application is a server application, so it links in libcontrol and libasc, as well as in libnep and libinterpret.

Figure 1-17 Generic NEP Application Structure


Multi-protocol NEP structure

The Multi-Protocol NEP structure, as in Figure 1-18, is the API structure used within a typical Multi-Protocol NEP server.

The Multi-Protocol NEP application is a server application, so it links in libcontrol and libasc, as well as in libnep and libinterpret, as is the case with the Generic NEP. In addition, it links in the Communications library (libasccomm) and the Switch Configuration library (libnecfg).

Figure 1-18 Multi-protocol NEP Structure


Development of Cartridges supporting Asynchronous NEs

ASAP executes CSDLs and ASDLs in work orders synchronously. CSDLs and ASDLs are configured sequentially, and an ASDL must complete before the next ASDL can be started.

Some network elements respond to network actions asynchronously. After a request is sent, the receipt of the request may be acknowledged immediately but a response indicating completion of a request may be received from the network element some time later.

An ASDL in which the completion response arrives later is an asynchronous ASDL. A work order may have a mix of synchronous and asynchronous ASDLs mapped to the CSDLs.

Synchronous ASDLs require all previous ASDLs to be completed, so all asynchronous ASDLs that run before a synchronous ASDL must complete before the synchronous ASDL starts.

Note:

Asynchronous Dynamic NEs are not supported.

Asynchronous NE interfaces are supported through Java Enabled NEPs, and not through state table programming.

Asynchronous NE Response Handler

Instead of ASAP managing the processing of asynchronous ASDLs in the core, facilities are provided to the cartridge developer to handle them. These facilities include a response handler, implemented in the Java classes ResponseHandler and ResponseHandlerManager, to manage asynchronous ASDL responses. Refer to the ASAP Online Reference for details on these classes.

When configuring asynchronous ASDLs, a 'stop work order' ASDL must follow the asynchronous ASDL(s) to halt work order processing. The NE response handler handles asynchronous responses from the NE, and resumes the work order when all outstanding asynchronous ASDL completion responses have been received.

Asynchronous connections:

Asynchronous NE interfaces have an entry in table tbl_comm_param with a parameter label ASYNC_CONN and parameter value of either TRUE or FALSE. See "tbl_comm_param."

When the NEP starts, for NEs that have an entry in table tbl_comm_param with a parameter label ASYNC_CONN and parameter value of TRUE, a connection is automatically established. If the parameter value is FALSE, the NE connection is not automatically established.

Each distinct asynchronous NE connection may have a distinct response handler. In this case, when an NE connection is released, the associated response handler should also be stopped and removed from the system.

If multiple asynchronous NE connections use the same response handler, then the response handler may be spawned during the first NE connection and either left to keep running or stopped and removed from the JNEP by the cartridge developer when conditions as determined by the developer are met.

Response handler manager

Each Java-enabled NEP server (JNEP) uses a single instance of a response handler manager to manage all asynchronous response handlers within the JNEP. The response handler manager is accessed through static methods in its class. Typically, response handler creation is requested within the connect() method of the NE connection classes.

The response handler manager is implemented in the ResponseHandlerManager class. For details, refer to the ASAP Online Reference.

A sample implementation is provided with the ASAP product in $ASAP_BASE/samples/JeNEP/async_ne.