7 Managing the Database and File System
This chapter provides information about managing your Oracle Communications ASAP database and file system.
Overview of Managing the Database and File System
The Oracle database and file system management tasks include:
- Configuring the operating system kernel and Oracle database initialization parameters for optimal system performance.
- Monitoring database segment and file system size by defining maximum thresholds for both.
- Using Oracle database management tools to maintain and tune your database.
- Enabling automated ASAP database administration features.
- Using scripts to purge the Oracle database tables, clear system events, alarms, and process information, and, in a test environment, to periodically purge all data.
Configuring Kernel and Database Initialization Parameters
This section outlines some important issues for implementing ASAP. These issues are listed in point form for easy scanning. For a complete reference on these subjects, refer to the appropriate Oracle documentation.
- Before installing the Oracle instance, ensure that the default values for kernel and resource settings for the server are set to provide enough shared memory and semaphores. Specifically, check the project.max-shm-memory resource. See the Oracle Installation Guide for more details.
- You can change the Oracle Server configuration parameters from their default values to suit the requirements of ASAP. To view the Oracle Server parameters, select from the V$PARAMETER table or look in the INITsid.ora file for the server. For most parameter changes to take effect, you must shut down and restart the server.
- Oracle works best with ASAP when given sufficient resources. There are four parameters that are extremely important to configure correctly: DB_BLOCK_SIZE, DB_BLOCK_BUFFERS, SHARED_POOL_SIZE, and SESSIONS.
1. DB_BLOCK_SIZE
Set the DB_BLOCK_SIZE; 8K is the default. ASAP benefits from a larger block size. This parameter must be set during database creation and cannot be changed afterward.
2. DB_BLOCK_BUFFERS
DB_BLOCK_BUFFERS is the area of the SGA that is used for the storage and processing of data in memory. As users request information, the information is put into memory. If the DB_BLOCK_BUFFERS parameter is set too low, the least recently used data is flushed from memory. Set this value between 3000 and 5000, depending on the memory limitations of your machine. A value of 5000 with a block size of 8K uses approximately 40MB of memory.
3. SHARED_POOL_SIZE
SHARED_POOL_SIZE is used to process the procedures, packages, and triggers, as well as the library and data dictionary cache. If the SHARED_POOL_SIZE is set too low, you will not get the full advantage of your DB_BLOCK_BUFFERS. Set this value between 90MB and 150MB; for most installations this should be sufficient. Again, this value should be adjusted depending on the memory limitations of your machine. Setting the SHARED_POOL_SIZE too high can cause your system to swap excessively.
4. SESSIONS
SESSIONS determines the number of user connections that ASAP can establish. Configure 150-200 sessions for each ASAP environment, or more if you are running multiple Service Request Processors (SRPs) and Network Element Processors (NEPs). Be sure to allocate enough semaphores at the UNIX level to accommodate the number of sessions.
In all the cases above, the values are given as examples only. The appropriate value for your installation depends on the machine's characteristics and limitations.
- Oracle, like all databases, benefits significantly from spreading the I/O of its data files across multiple controllers and drives. Place your SYSTEM, TEMP, and ROLLBACK tablespaces on separate drives and controllers from your DATA and INDEX tablespaces. SARM_DATA and SARM_INDEX benefit the most from I/O optimization, since a lot of activity occurs on these files. In addition, employing optimal striping and placement of hot files greatly enhances database performance. Since minimizing I/O is the primary goal of most database tuning, this point cannot be emphasized enough.
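As an illustration of the four key parameters above, an INITsid.ora fragment might look like the following sketch. The values come from the ranges discussed in this section and are examples only; size them for your own machine.

```
# Illustrative INITsid.ora fragment -- values are examples only
db_block_size     = 8192         # set at database creation; cannot be changed
db_block_buffers  = 5000         # roughly 40MB of buffer cache at an 8K block size
shared_pool_size  = 104857600    # 100MB, within the 90MB-150MB range discussed above
sessions          = 200          # more if running multiple SRPs and NEPs
```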
About Monitoring Database Segment and File System Size
The Control server has a background thread that monitors database table and log segment sizes. This thread reads a static configuration database table to determine database size thresholds. Should a particular threshold be exceeded, the thread issues the system event specified in the static table tbl_db_threshold in the Control database. This feature provides a warning to the operations center should the threshold be exceeded.
In addition, the Control server has a background thread that monitors operating system file sizes. This thread reads a static configuration database table to determine file system size thresholds. Should a particular threshold be exceeded, the thread issues the system event specified in the static table tbl_fs_threshold in the Control database. This feature allows ASAP to warn the operations center whenever the free disk space drops below the minimum threshold set by you.
The monitoring thresholds function is used to specify thresholds for ASAP databases and the transactions affecting them. When these thresholds are defined, an ASAP Control server can monitor the specified database data and transaction log sizes at intervals specified in the ASAP environment variable DB_MONITOR_TIME. If any of the thresholds are exceeded, the Control server issues the system event defined by the user. The event can trigger an alarm or be logged.
See "Configuring System Events and Alarms Using Stored Procedures" for more information on configuring alarms.
An ASAP environment can be partitioned on various machines, and each machine can be monitored by its respective Control server. Typically, one ASAP installation has multiple databases, and the ASAP environment can be distributed on one machine or multiple machines. The database and transaction log thresholds need to be specified for each database to be monitored.
You can use the DB_MONITOR_TIME and FS_MONITOR_TIME parameters to configure the Control server database and file system monitoring capabilities. For more information about these parameters, see the chapter about the ASAP parameter configuration file (ASAP.cfg) in the ASAP Server Configuration Guide.
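As a sketch, the two monitoring intervals might be set in ASAP.cfg as follows, assuming the simple name = value format used for other ASAP configuration parameters. The 300-second intervals are illustrative; both values are in seconds.

```
DB_MONITOR_TIME = 300
FS_MONITOR_TIME = 300
```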
Configuring File System Thresholds
To configure file system thresholds:
1. Open a UNIX terminal to your ASAP environment.
2. Using a text editor, set the FS_MONITOR_TIME parameter in ASAP_home/config/ASAP.cfg to a time interval in seconds (for example, 120 seconds).
3. Using sqlplus, connect to the Control server:
   sqlplus CTRL$/password
   Where password is the password for your Control server.
4. Populate the Control server tbl_fs_threshold database table. The syntax is:
   INSERT INTO "TEST"."TBL_FS_THRESHOLD" (ASAP_SYS, FILE_SYSTEM, FULL_THRESHOLD, FULL_EVENT) VALUES ('envID', 'fs_path', 'fs_threshold', 'event_type'); COMMIT;
   Where the values for the tbl_fs_threshold columns are:
   - envID: The ASAP environment ID. The Control server only reads records from the table with values of asap_sys equal to the currently defined ASAP_SYS environment variable.
   - fs_path: The name of the file system.
   - fs_threshold: The threshold above which the system event is issued (a percentage, 0-100).
   - event_type: Specifies the system event you want to trigger. Enter an event listed in the Control server tbl_event_type table (see the ASAP Developer's Guide).
Populate the Control server tbl_component database table. The syntax is:
INSERT INTO "TEST"."TBL_COMPONENT" (TERRITORY, SYSTEM, COMPONENT) VALUES ('envID', 'territory', 'path'); COMMIT;
Where the values for the tbl_component columns are:
-
envid: The ASAP environment ID.
-
territory: Usually the same name as the ASAP environment ID.
-
fs_path: The path to the file system you want to monitor.
-
-
Restart ASAP for the changes to take effect.
Note:
When a file system size exceeds the threshold specified in tbl_fs_threshold, ASAP triggers an event and logs a diagnostic message in the Control server diagnostic file.
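Putting the steps above together, a worked example might look like the following sqlplus session. The environment ID (TEST), file system path, threshold, and event type are assumptions for illustration; use an event listed in your own tbl_event_type table.

```sql
-- Hypothetical example: warn when /asap/DATA is more than 90% full.
-- Environment ID, path, and event type are illustrative assumptions.
INSERT INTO "TEST"."TBL_FS_THRESHOLD"
    (ASAP_SYS, FILE_SYSTEM, FULL_THRESHOLD, FULL_EVENT)
VALUES
    ('TEST', '/asap/DATA', '90', 'SYS_FS_FULL');

INSERT INTO "TEST"."TBL_COMPONENT"
    (TERRITORY, SYSTEM, COMPONENT)
VALUES
    ('TEST', 'TEST', '/asap/DATA');

COMMIT;
```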
Configuring Database Thresholds
To configure database thresholds:
1. Open a UNIX terminal to your ASAP environment.
2. Using a text editor, set the DB_MONITOR_TIME parameter in ASAP_home/config/ASAP.cfg to a time interval in seconds (for example, 120 seconds).
3. Using sqlplus, connect to the Control server:
   sqlplus CTRL$/password
   Where password is the password for your Control server.
4. Populate the Control server tbl_db_threshold database table. The syntax is:
   INSERT INTO "TEST"."TBL_DB_THRESHOLD" (ASAP_SYS, DB_NAME, DATA_THRESHOLD, DATA_EVENT) VALUES ('envID', 'DB_server', 'data_threshold', 'data_event_type'); COMMIT;
   Where the values for the tbl_db_threshold columns are:
   - envID: The ASAP environment ID. The Control server only reads records from the table with values of asap_sys equal to the currently defined ASAP_SYS environment variable.
   - DB_server: The name of the database server tablespace.
   - data_threshold: The data threshold above which the system event is issued (a percentage, 0-100).
   - data_event_type: Specifies the system event you want to trigger as a result of crossing the data threshold. Enter an event listed in the Control server tbl_event_type table (see the ASAP Developer's Guide).
Populate the Control server tbl_component database table. The syntax is:
INSERT INTO "TEST"."TBL_COMPONENT" (TERRITORY, SYSTEM, COMPONENT) VALUES ('envID', 'territory', 'tablespace'); COMMIT;
Where the values for the tbl_component columns are:
-
envid: The ASAP environment ID.
-
territory: Usually the same name as the ASAP environment ID.
-
tablespace: The tablespace want to monitor.
-
-
Restart ASAP for the changes to take effect.
Note:
When a tablespace size exceeds the threshold specified in tbl_db_threshold, ASAP triggers an event and logs a diagnostic message in the Control server diagnostic file.
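A worked example of the steps above might look like the following sqlplus session. The environment ID, tablespace name, threshold, and event type are assumptions for illustration; use values from your own installation and an event listed in your tbl_event_type table.

```sql
-- Hypothetical example: raise an event when the SARM data tablespace
-- exceeds 80% full. All values are illustrative assumptions.
INSERT INTO "TEST"."TBL_DB_THRESHOLD"
    (ASAP_SYS, DB_NAME, DATA_THRESHOLD, DATA_EVENT)
VALUES
    ('TEST', 'SARM_DATA', '80', 'SYS_DB_FULL');

INSERT INTO "TEST"."TBL_COMPONENT"
    (TERRITORY, SYSTEM, COMPONENT)
VALUES
    ('TEST', 'TEST', 'SARM_DATA');

COMMIT;
```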
Database Management and Tuning Recommendations
- Oracle can run in one of two modes: ARCHIVELOG or NOARCHIVELOG. It is very important to run your production database in ARCHIVELOG mode. This ensures that you can recover up to the minute in case of failure. Oracle uses logs to record transaction information in order to recover from instance and data file failures. Be sure to multiplex these redo log files and place them on separate disks and controllers.
- Oracle offers both online and offline database backup choices. Which one you choose depends on how much downtime your ASAP installation can tolerate. Online backups are performed at the tablespace level and require more planning and consideration. An offline backup involves shutting down the database and backing up the data files, control file, and archive log files to tape. While this is occurring, ASAP is down.
- Monitor your tablespaces for index and block fragmentation on a regular basis. Index fragmentation occurs due to the deletion and updating of table rows. Rebuilding the index using ALTER INDEX name REBUILD (where name is the name of the index) can resolve this problem. Block fragmentation can occur when the PCTFREE and PCTUSED parameters are set inappropriately. This results in migrated and chained rows, which affect database I/O performance.
- Update your table and index statistics on a regular basis for those tables whose data distributions change dynamically. As mentioned in a previous section, the query optimizer is primarily influenced by these statistics. If a query or transaction that was running fine appears to suddenly slow down or stop responding, chances are the table and index statistics need to be updated.
- Tune any custom SQL statements you introduce to your ASAP installation, since they involve both your application and database. To analyze your SQL statements:
  1. Add the following line to your INITsid.ora file: timed_statistics=true
  2. Restart your database.
  3. Start SQL tracing for your session using ALTER SESSION SET SQL_TRACE=TRUE.
  4. Run your application.
  5. Run tkprof against the trace file created by your application: tkprof tracefile outputfile EXPLAIN=username/passwd
  6. Look at the formatted output of the trace command and make sure that your SQL statements are using indexes correctly. Run explain plans on the SQL statements to determine the access path they are using.
  Refer to the Oracle Tuning Guide for more detailed information on tuning SQL statements.
- Use the UTLBSTAT and UTLESTAT scripts provided by Oracle to monitor your Oracle instance and diagnose performance problems. In a UNIX environment, these scripts are located in the Oracle_Home/rdbms/admin directory. They create a report file, report.txt, which shows the differences in statistics between the time UTLBSTAT was run and the time UTLESTAT was run. To obtain time-based statistics from UTLBSTAT and UTLESTAT, the TIMED_STATISTICS parameter must be set to TRUE.
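The maintenance tasks described above (index rebuild, statistics refresh, and SQL tracing) can be sketched in a sqlplus session as follows. The index, schema, and table names are assumptions for illustration only.

```sql
-- Rebuild a fragmented index (the index name is hypothetical):
ALTER INDEX sarm_wo_idx REBUILD;

-- Refresh optimizer statistics on a frequently changing table
-- (the schema and table names are illustrative assumptions):
BEGIN
    DBMS_STATS.GATHER_TABLE_STATS(
        ownname => 'SARM',
        tabname => 'TBL_WRK_ORD',
        cascade => TRUE);   -- also gathers statistics on the table's indexes
END;
/

-- Trace this session's SQL for later analysis with tkprof
-- (requires timed_statistics=true in INITsid.ora):
ALTER SESSION SET SQL_TRACE = TRUE;
```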
Statspack
Statspack is an alternative to the UTLBSTAT/UTLESTAT tuning scripts. Statspack collects more information, storing performance statistics data permanently in the database for later use in analysis and reporting. The data collected can be analyzed using the report provided, which includes an instance health and load summary page, high resource SQL statements, as well as the traditional wait events and initialization parameters.
For more information, see the Oracle Database 11g Performance Tuning Guide.
Enabling Automated ASAP Database Administration Options
The Control application programming interface (API) provides a background thread that performs database administration tasks within ASAP. At a user-configured time of day, every application server process connects to its primary database and calls a user-defined function that performs the following tasks:
- Gathers customer-defined statistics related to ASAP processing.
- Archives data about to be purged from the database, if required.
- Purges data older than a date and time specified by the user.
- Performs an "orphan" purge of orphaned records to confirm the database's integrity.
The ASAP configuration variable DB_ADMIN_ON enables and disables the database administration procedures. This feature is of value to users who have many application servers defaulting to the same database and do not want them performing database administration procedures independently on it.
For more information on the Control API, refer to the ASAP Developer's Guide.
Purging the Database and File System
To keep disk usage at an acceptable level, the ASAP administrator should regularly purge the diagnostic and log files generated by ASAP, as well as the work orders (WOs) received from the service order system. Database purging is controlled by database administrator stored procedures that are automatically run daily.
The diagnostic files are located in the directory ASAP_home/DATA/logs (where ASAP_home is the ASAP installation directory). The flat files of service orders are usually located in the ASAP_home/DATA directory in a dated file.
Purging the Database
Database purging can be performed for the SRP, Service Activation Request Manager (SARM), and control databases, but is most commonly performed in the SARM as this is where the majority of the WO information is stored. The SRP and SARM databases maintain a history of all WOs received, while the ASAP control database maintains a history of alarms, events, performance, and process information.
The purging of the SRP and SARM is based on WO age. The purge age is usually determined by the amount of available disk space. Usually, only orders that have been completed for a certain amount of time are purged.
Database purging is controlled by database administration functions that automatically run according to the following parameters that you can configure in ASAP.cfg:
- DB_ADMIN_ON – Boolean flag. If set, it enables the database administration thread operation in the application server. This can be disabled in particular servers in situations where multiple servers share the same application database (for example, multiple NEPs) and only one server is required to perform this database administration. Default = 0.
  Note: If you are using the purgeOrders script for purging the SARM database, set this parameter to 0. For more information, see "Purging the SARM Database".
- DB_ADMIN_TIME – The number of minutes after midnight when the database administration tasks are to be performed. This is usually scheduled at a time of low system activity. Default = 300.
- DB_ADMIN_PROC – The function the database administration thread calls at the specified time of day. This function can be configured to perform multiple tasks, including archiving and purging dynamic data. Default = SSP_db_admin.
- DB_ADMIN_PROC_PARAM – The integer parameter passed to the database administration function. For example, this can specify a purge interval for a particular database. Default = 100.
- GATHER_STATS – Enables the gathering of statistics for tables and indexes. Default = 0.
- GATHER_STATS_PROC – Indicates the procedure to use to gather statistics on the SARM database. Default = SSP_gather_asap_stat.
- DB_PCT_ANALYZE – Percentage of each table to analyze when gathering statistics. Default = 20.
- DB_PCT_ANALYZE_IDX – Percentage of each index to analyze when gathering statistics. Default = 40.
- GATHER_DEGREE – Degree of parallelism to use when gathering statistics. Default = 1.
Note:
You can configure every ASAP server to run similar administration functions, but the SARM is the only server where a default function (SSP_db_admin) has been provided. To perform administration on databases other than the SARM database, you must copy the above configuration parameters to the appropriate section of the ASAP.cfg configuration file, and load the SSP_db_admin to those servers.
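Putting the parameters above together, a SARM section fragment of ASAP.cfg might look like the following sketch. It assumes the simple name = value format and uses the documented defaults, except that the administration thread and statistics gathering are switched on.

```
DB_ADMIN_ON = 1
DB_ADMIN_TIME = 300
DB_ADMIN_PROC = SSP_db_admin
DB_ADMIN_PROC_PARAM = 100
GATHER_STATS = 1
GATHER_STATS_PROC = SSP_gather_asap_stat
DB_PCT_ANALYZE = 20
DB_PCT_ANALYZE_IDX = 40
GATHER_DEGREE = 1
```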
Table 7-1 provides recommended data purge frequency and methods.
Table 7-1 Database Purge Frequency and Methods
Data Object Being Purged | Selection Criteria | Frequency of Purge | Method of Purge |
---|---|---|---|
ASAP log files | All log files older than X days | Daily | Cron script. A sample cron script is provided in "Sample Cron Script for Clearing Alarms, Events, and Process Information." |
ASAP control database | All dynamic tables older than X days | Daily | Function. Sample functions are described in "Sample Database Purge Script." |
SRP database | All WOs older than X days | Daily | Function. Sample functions are described in "Sample Database Purge Script." |
SARM database | All WOs older than X days | Daily | Function. Sample functions are described in "Sample Database Purge Script." |
Sample Database Purge Script
The following sample script can be used to delete successfully provisioned WOs from tbl_wrk_ord that are older than a specified number of days. In addition, this script calls the SSP_orphan_purge function, which deletes all orphaned records. When you delete WOs, information related to these WOs may remain in other tables. The SSP_orphan_purge function purges information that is related to WOs that have been removed from the database. The affected tables include:
- tbl_asap_stats
- tbl_info_parm
- tbl_srq
- tbl_srq_csdl
- tbl_srq_log
- tbl_asdl_log
- tbl_srq_parm
- tbl_srq_asdl_parm
- tbl_wo_event_queue
- tbl_wo_audit
- tbl_usr_wo_prop
- tbl_aux_wo_prop
Note:
The SSP_orphan_purge function is time-consuming and requires considerable system resources. Therefore, it should not run during peak hours.
create proc SSP_db_admin @days int
as
begin
    declare @cutoff_dts datetime
    if (@days is not null and @days > 0)
    begin
        select @cutoff_dts = dateadd(day, -@days, getdate())
        delete tbl_wrk_ord
        where comp_dts < @cutoff_dts
        and wo_stat = 104
    end
    exec SSP_orphan_purge
end
You can customize this script in a variety of ways (for example, you can delete WOs that have successfully completed (104) or orders that have completed and failed due to a timeout). When ASAP triggers the DB_admin procedure, ASAP also performs other optimizations, such as recompiling stored procedures for optimal database access.
In the following example, after the function has deleted each set of 1000 orders, the function performs a commit and the rollback segment is flushed. This prevents the Oracle rollback segment from being exceeded. This example also employs the SSP_orphan_purge function described for the previous example.
CREATE OR REPLACE FUNCTION SSP_db_admin( days INTEGER ) RETURN INTEGER AS
    StoO_selcnt INTEGER;
    StoO_error  INTEGER;
    StoO_rowcnt INTEGER;
    StoO_errmsg VARCHAR2(255);
    cutoff_dts  DATE;
    retval      integer;
BEGIN
    IF (SSP_db_admin.days IS NOT NULL AND SSP_db_admin.days > 0) THEN
    BEGIN
        SSP_db_admin.cutoff_dts := SYSDATE - SSP_db_admin.days;
        BEGIN
            StoO_error := 0;
            StoO_rowcnt := 0;
            -- Created a loop to split the deletion of orders in portions of 1000
            -- to remove the risk of reaching the rollback segment limit
            -- NOTE! The orphans are deleted by a trigger defined on the
            -- tbl_wrk_ord table and not by the SSP_orphan_purge function call
            -- below
            LOOP
                DELETE FROM tbl_wrk_ord
                WHERE comp_dts < SSP_db_admin.cutoff_dts
                AND wo_stat = 104
                AND rownum <= 1000;
                EXIT WHEN SQL%ROWCOUNT = 0;
                COMMIT;
            END LOOP;
            StoO_rowcnt := SQL%ROWCOUNT;
            COMMIT WORK;
            -- EXCEPTION
            --     WHEN OTHERS THEN
            --         StoO_error := SQLCODE;
        END;
    END;
    END IF;
    BEGIN
        retval := SSP_orphan_purge;
    EXCEPTION
        WHEN OTHERS THEN
            StoO_error  := SQLCODE;
            StoO_errmsg := SQLERRM;
    END;
    RETURN 0;
END;
/
Sample Cron Script for Clearing Alarms, Events, and Process Information
The following is a sample cron script that clears alarm entries, event logs, and process information.
###########################################################################
#
# ASAP database and log housekeeping script
#
# Call this script with one parameter, specifying the number of days.
# Alarm entries, event logs and process info
# logs are cleared. Log files and directories are also cleared.
#
# The script will exit if the number of days is less than 5
#
############################################################################
. $HOME/.profile > /dev/null

PROG_NAME=`basename "$0"`

if [ "$1" == "" ]
then
    echo "Usage $PROG_NAME <admin_days>"
    exit
fi

let ADMIN_DAYS=$1+0
if [ "$?" != "0" ]
then
    echo "Number of days must be numeric"
    exit
fi

if [ $ADMIN_DAYS -lt 5 ]
then
    echo "Cannot run with less than 5 days lead-time"
    exit
fi

############################################################################
# Clear CTRL entries
############################################################################
echo "Deleting alarm logs, event logs and process info from CTRL more than $ADMIN_DAYS days old"
sqlplus -s $CTRL_USER/$CTRL_USER << HERE
var admin_days number;
exec :admin_days := $ADMIN_DAYS;
delete from TBL_ALARM_LOG where start_dts < sysdate - :admin_days;
commit;
delete from TBL_EVENT_LOG where event_dts < sysdate - :admin_days;
commit;
delete from TBL_PROCESS_INFO where info_dts < sysdate - :admin_days;
commit;
HERE

############################################################################
# Truncate ASAP.Console and ControlProgramOutput so that old entries can be
# cleared with the diagnostic files
############################################################################
cd $LOGDIR
FILE1=ASAP.Console
FILE2=ControlProgramOutput
CUR_DTS=`date +'%Y_%b_%d-%T'`

if [ -f $FILE1 ]; then
    NEWFILE=$FILE1.diag.$CUR_DTS
    echo Copying current $FILE1 to $NEWFILE
    cp $FILE1 $NEWFILE
    cp /dev/null $FILE1
else
    echo $LOGDIR/$FILE1 not found
fi

if [ -f $FILE2 ]; then
    NEWFILE=$FILE2.diag.$CUR_DTS
    echo Copying current $FILE2 to $NEWFILE
    cp $FILE2 $NEWFILE
    cp /dev/null $FILE2
else
    echo $LOGDIR/$FILE2 not found
fi

############################################################################
# Clear log files
############################################################################
echo "Clearing old ASAP log files..."
find . -type f -name '*diag*' -atime +$ADMIN_DAYS -exec echo Removing file {} \; -exec rm -f {} \;

############################################################################
# Clear log directories not accessed within admin days
############################################################################
echo "Clearing old ASAP log directories..."
find . -type d -name '2*' -atime +$ADMIN_DAYS -exec echo Removing directory {} \; -exec rm -rf {} \;

cd -
echo "$PROG_NAME finished"
Purging Test Systems
The cleandata script is designed to clean data from the test system during system testing. It is intended for use in development environments.
Note:
You must stop the Oracle WebLogic Server before running the cleandata script. Otherwise, a JDBCStoreException is thrown, and a WebLogic Server error is logged.
Usage
cleandata [-d]
The -d option instructs the script to retrieve password information from the credential store factory (CSF) wallet, located in the ASAP_Home/install/cwallet.sso file, where ASAP_Home is the directory in which ASAP is installed. The cwallet.sso file is typically used only in development environments.
This script does the following:
- Deletes events, process information, and alarms from the Control (CTRL) database (specifically, truncates tbl_event_log, tbl_process_info, tbl_alarm_log, and tbl_unid)
- Deletes WOs from the ASAP database and runs an orphan purge (specifically, truncates tbl_wrk_ord, tbl_ne_monitor, tbl_ne_event, tbl_unid, tbl_label_value, tbl_srt_correlation, tbl_srt_ctx, and temp_wrk_ord)
  Note: The tbl_ne_monitor table has been deprecated from ASAP 4.6.x onwards.
- Deletes WOs from the SRP database (tbl_wrk_ord)
- Deletes performance data from the Admin database (truncates tbl_perf_order, tbl_perf_csdl, tbl_perf_asdl, tbl_perf_ne, tbl_perf_ne_asdl, tbl_aims_rpc, tbl_aims_rpc_param, tbl_aims_rpc_dest, and tbl_aims_audit_log)
- Deletes Java Message Service (JMS) messages from the Admin database
- Deletes diagnostic and log files
Note:
Restart the WebLogic Server for the domain after running the cleandata script.
Purging the SARM Database
To purge the SARM database, you can also use the purgeOrders script. This script calls a multithreaded Java application, which makes purging faster. This application purges the SARM database only.
Usage
You run the purge application by running the purgeOrders script. This script is located in the $ASAP_BASE/scripts directory. You must source the ASAP Environment_Profile before running this script.
The command-line syntax of the purgeOrders script is:
purgeOrders [-t <number of threads>] [-l <diag_level>] [-f] [-h]
where:
-t <number of threads> (optional) is the number of threads. If specified, it overrides the value of DB_PURGE_THREADS in the ASAP.cfg file.
-l <diag_level> (optional) specifies the diagnostic level (KERN, LOW, SANE, or PROG). The default is SANE.
-f forces the script to run in non-interactive mode. Without this option, the script displays an interactive prompt to confirm running the purge application.
-h prints the usage information.
Configuration Parameters
The purge application reads the following configuration parameters from the SARM section of the ASAP.cfg file.
- DB_PURGE_DAYS: Number of days to keep completed orders. Completed orders older than this number of days are purged. Orders are not purged if the value is 0.
  Valid Range: >=0
  Default: 100
- DB_PURGE_THREADS: Number of threads to use. The purge application spawns this number of threads and runs the purge stored procedure in each thread. This allows you to run the purge procedure at different times of the day using different levels of parallelism.
  Valid Range: 1 to 50
  Default: 10
  Note: The -t parameter in the command line overrides the value specified in ASAP.cfg.
- DB_PURGE_COMMIT_ROWS: Number of rows to be deleted before a COMMIT is issued.
  Valid Range: >=1
  Default: 1000
- DB_PURGE_ORPHANS_ENABLED: Enables or disables the SSP_orphan_purge procedure, which removes orphaned data definitions in the SARM database.
  Valid Range: 0 or 1
  Default: 0
- DB_PURGE_MAX_TIME: Maximum purge time. The maximum amount of time, in minutes, that the purge operation is allowed to run. If this parameter is set to 0, the purge operation runs without a time limit.
  Valid Range: >=0
  Default: 0
- DB_PURGE_GET_RANGE_PROC: The stored procedure that the purge application calls to get the SRQ_ID range (MIN, MAX) of the orders to be purged. The signature of the stored procedure is as follows:
  FUNCTION SSP_get_srqid_purge_range( days IN INTEGER, min_srqid OUT INTEGER, max_srqid OUT INTEGER) RETURN INTEGER
  Valid Range: N/A
  Default: SSP_get_srqid_purge_range
- DB_PURGE_RANGE_PROC: The stored procedure that each thread calls to purge the work orders in an SRQ_ID range. The signature of the procedure is as follows:
  FUNCTION SSP_purge_wo_in_srqid_range( days IN INTEGER, commit_rows IN INTEGER, start_id IN INTEGER, end_id IN INTEGER, max_time IN INTEGER, rows_deleted OUT INTEGER) RETURN INTEGER
  Valid Range: N/A
  Default: SSP_purge_wo_in_srqid_range
- DB_PURGE_LOG_INTERVAL: The interval, in minutes, at which to log messages about the progress of the purge operation.
  Valid Range: >=1
  Default: 5
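As a sketch, the purge parameters above might appear in the SARM section of ASAP.cfg as follows, assuming the simple name = value format. The values are the documented defaults, except DB_PURGE_MAX_TIME, shown here with a hypothetical two-hour limit.

```
DB_PURGE_DAYS = 100
DB_PURGE_THREADS = 10
DB_PURGE_COMMIT_ROWS = 1000
DB_PURGE_ORPHANS_ENABLED = 0
DB_PURGE_MAX_TIME = 120
DB_PURGE_GET_RANGE_PROC = SSP_get_srqid_purge_range
DB_PURGE_RANGE_PROC = SSP_purge_wo_in_srqid_range
DB_PURGE_LOG_INTERVAL = 5
```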
Logging and Diagnostics
The purge application logs diagnostic messages to the $ASAP_BASE/DATA/logs/<date>/SARM<ENV_ID>_PURGE.diag file.
The purge application logs progress messages to the diagnostic file while the purge threads are running, and after all threads are finished. The diagnostic messages show the elapsed time, number of orders deleted, and the purging rate.
Scheduling Purge Jobs
Since the purge application is a Java application that runs outside of the SARM server, you need to schedule it to run outside of SARM. This is done using the UNIX or Linux cron utility. When you run the purge application from cron, use the -f command-line option so that the purgeOrders script runs in non-interactive mode.
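For example, a crontab entry along the following lines would run the purge nightly; the profile and script paths, schedule, and log destination are assumptions for illustration.

```
# Hypothetical crontab entry: source the ASAP environment profile, then
# run purgeOrders non-interactively every day at 02:30.
30 2 * * * . /home/asap/Environment_Profile && /home/asap/ASAP/scripts/purgeOrders -f >> /tmp/purgeOrders.log 2>&1
```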
Purge Conflict Resolution
If there is another instance of the purge application running against the same SARM database, the purgeOrders script prints the following error message and stops:
Error: There is another AsapParallelPurge job running for SARM<ENV_ID>. Exiting ...
If the old purging mechanism is enabled (by setting DB_ADMIN_ON to 1 in the SARM section of ASAP.cfg), the new purge application logs the following warning message to the diagnostic file and continues to run.
WARNING: The SARM database administration thread is enabled (DB_ADMIN_ON=1 in ASAP.cfg). This may result in multiple purge jobs being run concurrently.
Customization
You can provide your own stored procedures for determining the SRQ ID range of the work orders and purging the work orders. You do this by specifying the names of your stored procedures in the DB_PURGE_GET_RANGE_PROC and DB_PURGE_RANGE_PROC parameters in the ASAP.cfg file.
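As a sketch of this customization hook, a replacement range function must match the signature documented above. The body below is a trivial illustration, not production logic: the function name, the tbl_srq column names, and the 0-for-success return convention are all assumptions.

```sql
-- Hypothetical custom range function matching the documented signature.
-- Column names (srq_id, comp_dts) and the return convention are assumptions;
-- a real implementation would apply site-specific selection criteria.
CREATE OR REPLACE FUNCTION CUSTOM_get_srqid_purge_range(
    days      IN  INTEGER,
    min_srqid OUT INTEGER,
    max_srqid OUT INTEGER) RETURN INTEGER
AS
BEGIN
    -- Example criterion: only orders completed before SYSDATE - days
    SELECT MIN(srq_id), MAX(srq_id)
      INTO min_srqid, max_srqid
      FROM tbl_srq
     WHERE comp_dts < SYSDATE - days;
    RETURN 0;
END;
/
```

It would then be wired in through ASAP.cfg, for example: DB_PURGE_GET_RANGE_PROC = CUSTOM_get_srqid_purge_range.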
Using the Purge Application
To use the purge application:
1. Disable the existing SARM database purge operation by setting DB_ADMIN_ON to 0 in the SARM section of the ASAP.cfg file.
2. In the SARM section of the ASAP.cfg file, review the default settings of the DB_PURGE* parameters and change them if necessary. For example:
   - Change DB_PURGE_DAYS to comply with your company's data-retention policy.
   - Change DB_PURGE_THREADS to increase or decrease the level of parallelism.
   - Change DB_PURGE_MAX_TIME if you want to specify a time limit for purging.
   - Change DB_PURGE_GET_RANGE_PROC and DB_PURGE_RANGE_PROC if you need to use your own stored procedures.
3. Schedule the purgeOrders script to be run regularly using an external scheduling tool.