4 Monitoring and Managing Unified Inventory Management
This chapter describes monitoring and management activities that you may need to perform after installing or upgrading the Oracle Communications Unified Inventory Management (UIM) software.
Monitoring and Managing Overview
The following topics describe tasks that you may need to perform in both a single-server environment and a clustered server environment.
Managing UIM Metrics
UIM provides a sample Grafana dashboard that can be used to visualize UIM metrics available from a Prometheus data source. UIM relies on Prometheus to scrape and expose these metrics.
See the following topics for further details:
Configuring Prometheus for UIM Metrics
Configure the scrape job in Prometheus for UIM as follows:
- job_name: 'job_name'
  # scheme defaults to 'http'
  metrics_path: '/Inventory/metrics'
  scrape_interval: 5s
  basic_auth:
    username: username
    password: password
  static_configs:
    # Repeat this pattern for each managed server
    - targets: ['MS1_hostname:MS1_port']
      labels:
        namespace: sr
        server_name: ms1
- job_name identifies the scrape job for a particular UIM instance, for example, UIM_Production, UIM_Pre-prod, UIM_QA, or UIM_UAT. Each UIM instance has its own job in the scrape configuration.
- MS1_hostname and MS1_port refer to managed server 1.
- The namespace label enables multiple related instances to be grouped.
- The server_name label must match the server name configured in the WebLogic server.
Viewing UIM Metrics Without Using Prometheus
UIM metrics can be viewed at:
http://hostname:port/Inventory/metrics
This URL provides metrics only for the managed server that serves the request; it does not provide consolidated metrics for the entire cluster. Only Prometheus queries and Grafana dashboards can provide consolidated metrics.
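For example, assuming the same basic authentication credentials configured in the scrape job above, you can retrieve the raw metrics for a single managed server with a command such as the following (host, port, and credentials are placeholders):
curl -u username:password http://MS1_hostname:MS1_port/Inventory/metrics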
Viewing UIM Metrics in Grafana
UIM service metrics scraped by Prometheus can be made available for further processing and visualization.
Exposed UIM Service Metrics
The following UIM metrics are exposed via Prometheus APIs.
Note:
- All metrics are per managed server. Prometheus Query Language can be used to combine or aggregate metrics across all managed servers.
- All metric values are short-lived and indicate the number of requests in a particular state since the managed server was last restarted.
- When a managed server restarts, all the metrics are reset to 0.
Interaction Metrics
Table 4-1 lists interaction metrics exposed via Prometheus APIs.
Table 4-1 Interaction Metrics Exposed via Prometheus APIs
Name | Type | Help Text | Notes |
---|---|---|---|
uim_sfws_capture_requests | Summary | Summary that tracks the duration of sfws capture requests. | This metric is observed for the CaptureInteraction request. The action can be CREATE or CHANGE. |
uim_sfws_process_requests | Summary | Summary that tracks the duration of sfws process requests. | This metric is observed for the ProcessInteraction request. The action can be PROCESS. |
uim_sfws_update_requests | Summary | Summary that tracks the duration of sfws update requests. | This metric is observed for the UpdateInteraction request. The action can be APPROVE, ISSUE, CANCEL, COMPLETE, or CHANGE. |
uim_sfws_requests | Summary | Summary that tracks the duration of sfws requests. | This metric is observed for the capture, process, and update interaction requests. |
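Because each managed server exposes only its own values, cluster-wide figures are typically computed with a Prometheus query. The following is a minimal sketch that assumes the standard _count time series that Prometheus clients publish for Summary metrics; it sums the per-second rate of interaction requests across all managed servers of the instance labeled with namespace sr, grouped by action:
sum by (action) (rate(uim_sfws_requests_count{namespace="sr"}[5m]))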
Labels For All Interaction Metrics
Table 4-2 lists labels for all interaction metrics.
Table 4-2 Labels for All Metrics
Label Name | Sample Value |
---|---|
action | The values can be CREATE, CHANGE, PROCESS, APPROVE, ISSUE, CANCEL, and COMPLETE. |
Service Metrics
Table 4-3 lists the metrics captured for completion of a business interaction.
Table 4-3 Service Metrics Captured for Completion of a Business Interaction
Name | Type | Help Text | Summary |
---|---|---|---|
uim_services_processed | Counter | Counter that tracks the number of services processed. | This metric is observed for suspend, resume, complete, and cancel of a service. |
Labels for all Service Metrics
A task metric has all the labels that a service metric has. Table 4-4 lists the labels for all service metrics.
Table 4-4 Labels for All Service Metrics
Label | Sample Value | Notes | Source of Label |
---|---|---|---|
spec | VoipServiceSpec | The service specification name. | UIM Metric Label Name/Value |
status | IN_SERVICE | The service status. The values can be IN_SERVICE, SUSPEND, DISCONNECT, and CANCELLED. | UIM Metric Label Name/Value |
Generic Labels for all Metrics
Table 4-5 lists the generic labels for all metrics.
Table 4-5 Generic Labels for all Metrics
Label Name | Sample Value | Source of the Label |
---|---|---|
server_name | ms1 | Prometheus Static Configs |
job | cmcn | Prometheus Static Configs |
namespace | sr | Prometheus Static Configs |
Managing WebLogic Monitoring Exporter Metrics
UIM provides a sample Grafana dashboard that you can use to visualize WebLogic server metrics available from a Prometheus data source. You use WebLogic Monitoring Exporter to expose the WebLogic server metrics. WebLogic Monitoring Exporter is part of the WebLogic Kubernetes toolkit. It is an open source project, based at: https://github.com/oracle/weblogic-monitoring-exporter. While the metrics are available via WME Restful Management API endpoints, UIM relies on Prometheus to scrape and expose these metrics. This version of UIM supports WebLogic Monitoring Exporter 1.3.0. See the WebLogic Monitoring Exporter documentation for details on configuration and the exposed metrics.
Deploying WebLogic Monitoring Exporter in UIM
To deploy WebLogic Monitoring Exporter:
- Generate the WebLogic Monitoring Exporter WAR file by running the following commands:
mkdir -p ~/wme
cd ~/wme
curl -x $http_proxy -L https://github.com/oracle/weblogic-monitoring-exporter/releases/download/v1.3.0/wls-exporter.war -o wls-exporter.war
curl -x $http_proxy https://raw.githubusercontent.com/oracle/weblogic-monitoring-exporter/v1.3.0/samples/kubernetes/end2end/dashboard/exporter-config.yaml -o config.yaml
jar -uvf wls-exporter.war config.yaml
These commands update the wls-exporter.war file with the exporter-config.yaml configuration file.
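Optionally, you can confirm that the configuration file was added to the archive; this verification step is not part of the documented procedure:
jar -tf ~/wme/wls-exporter.war | grep config.yaml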
- Deploy the WAR file by running the following command:
java -cp path_to_weblogic_server_lib weblogic.Deployer -adminurl t3://host_name:adminserver_port -user wls_admin_username -password wls_admin_password -deploy -name name_of_the_WME_WAR_file -source path_to_WME_WAR_file -targets wls_server_targets_list
For example:
java -cp /../../Oracle/WLS/12_2_1_4/wlserver/server/lib/weblogic.jar weblogic.Deployer -adminurl t3://localhost:7001 -user weblogic -password password -deploy -name wls-exporter -source /../../wme/wls-exporter.war -targets AdminServer,Cluster_Name
Configuring the Prometheus Scrape Job for WebLogic Monitoring Exporter Metrics
- job_name: 'wme_job_name'
  metrics_path: wls-exporter/metrics
  basic_auth:
    username: weblogic_username
    password: weblogic_password
  static_configs:
    - targets: [AdminServer_hostname:AdminServer_port]
      labels:
        # The namespace label enables multiple related instances to be grouped.
        # For a given WebLogic server, the namespace used in a WME Prometheus job must match
        # the namespace used for the corresponding UIM Prometheus job.
        # In a sample Grafana dashboard, the specified namespace is displayed under the Project
        # drop-down menu.
        namespace: namespace
        # The weblogic_domainUID label uniquely identifies a UIM instance within a given namespace.
        # For a given WebLogic server, the weblogic_domainUID used in a WME Prometheus job must match
        # the weblogic_domainUID used for the corresponding UIM Prometheus job.
        # In a sample Grafana dashboard, the specified weblogic_domainUID is displayed under the
        # Instance drop-down menu.
        weblogic_domainUID: weblogic_domainUID
        # The weblogic_serverName label must match the server name configured in WebLogic.
        weblogic_serverName: AdminServer
    # Repeat this pattern for each managed server
    - targets: [MSn_hostname:MSn_port]
      labels:
        namespace: namespace
        weblogic_domainUID: weblogic_domainUID
        weblogic_serverName: MSn
The namespace label enables multiple related instances to be grouped. For a given WebLogic server, the namespace used in a WebLogic Monitoring Exporter Prometheus job must match the namespace used for the corresponding UIM Prometheus job. In a sample Grafana dashboard, the specified namespace is displayed under the Project drop-down menu.
The weblogic_domainUID label uniquely identifies a UIM instance within a given namespace. For a given WebLogic server, the weblogic_domainUID label used in a WebLogic Monitoring Exporter Prometheus job must match the weblogic_domainUID used for the corresponding UIM Prometheus job. In a sample Grafana dashboard, the specified weblogic_domainUID is displayed under the Instance drop-down menu.
The weblogic_serverName label must match the server name configured in the WebLogic server.
The namespace and weblogic_domainUID labels have been added to the corresponding scrape job definition for UIM metrics. With these new labels, you can reuse the dashboard's JSON files between UIM traditional and UIM cloud native deployments.
- job_name: 'uim_job_name'
  # metrics_path defaults to '/metrics'
  # scheme defaults to 'http'
  metrics_path: Inventory/metrics
  static_configs:
    - targets: [MSn_hostname:MSn_port]
      labels:
        namespace: namespace
        weblogic_domainUID: weblogic_domainUID
Viewing WebLogic Monitoring Exporter Metrics Without Using Prometheus
WebLogic Monitoring Exporter metrics can be viewed for the administration server and each managed server at:
http://adminserver_host:adminserver_port/wls-exporter/metrics
http://managedServerN_host:managedServerN_port/wls-exporter/metrics
Viewing WebLogic Monitoring Exporter Metrics in Grafana
UIM provides sample Grafana dashboards to get you started with visualizations. The sample UIM and WebLogic by Server dashboard provides a combined view of UIM cloud native and WebLogic Monitoring Exporter metrics for one or more managed servers for a given instance in the selected project namespace.
Import the sample weblogic_dashboard.json dashboard file from GitHub into your Grafana environment, selecting Prometheus as the data source.
Sharing JAR Files
After you install UIM, you need to share specific JAR files with Oracle Communications Service Catalog and Design - Design Studio for use with cartridges. Each UIM system administrator must determine the best method for sharing these JAR files, based on their company's standard practices.
Note:
These JAR files change with each new patchset or maintenance release. The JAR files must be re-distributed each time UIM is upgraded with a patchset or maintenance release, and the Design Studio system administrator must be notified.
For more information on sharing JAR files with Design Studio, see the chapter on "Using Design Studio to Extend UIM" in UIM Developer's Guide.
Disabling the HTTP Port
After you install UIM, you can disable the HTTP (non-SSL) port if it was enabled during installation.
To disable the HTTP port:
-
Ensure you are logged into the WebLogic Administration Console.
-
Click Lock & Edit.
-
In the Domain Structure tree, expand Environment, and then click Servers.
The Summary of Servers page appears.
-
Select the AdminServer.
The Settings for AdminServer page appears.
-
Deselect the Listen Port Enabled setting.
Note:
If you disable this port, then you must enable the SSL port.
-
Click Save.
-
Click Activate Changes.
Setting the Database Row Prefetch Size
You can specify the number of result set rows to prefetch.
-
Ensure you are logged into the WebLogic Administration Console.
-
Click Lock & Edit.
-
In the Domain Structure tree, expand Services and then click Data Sources.
The Summary of JDBC Data Sources page appears.
-
Click the InventoryDataSource data source.
The Settings for InventoryDataSource page appears.
-
Under Configuration, click the Connection Pool tab.
-
In the Properties field, enter the following:
defaultRowPrefetch=50
-
Click Save.
-
Click Activate Changes.
-
Restart the WebLogic Application Server.
Modifying the Default File Encoding
The UIM installer automatically sets the default file encoding to UTF8 for both full installations and upgrades. Check the startup script to verify that the default file encoding is set to UTF8. If this setting is incorrect, you can manually change the default file encoding setting in the CUSTOM SECTION segment of the startup script.
The following example shows the correct command syntax:
JAVA_OPTIONS="${JAVA_OPTIONS}-Dfile.encoding=UTF-8"
Modifying the Time Zone
For full installations and upgrades, the UIM installer automatically sets the time zone for your locale. You should check your startup script to verify that the time zone setting for your locale is correct. If this setting is incorrect, add a line to the CUSTOM SECTION segment of your startup script. Enter the time zone ID in a format that is recognizable by the java.util.TimeZone object. The following example shows the command syntax:
JAVA_OPTIONS="${JAVA_OPTIONS} -Duser.timezone=Asia/Shanghai"
To view a list of valid time zone values, compile and run the following Java program:
import java.util.*;

public class TimeZoneList {
    public static void main(String[] args) {
        String[] sZoneIds = TimeZone.getAvailableIDs();
        List<String> lZoneIdList = Arrays.asList(sZoneIds);
        Collections.sort(lZoneIdList);
        System.out.println(lZoneIdList);
    }
}
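Assuming the program is saved in a file named TimeZoneList.java, compile and run it as follows:
javac TimeZoneList.java
java TimeZoneList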
Note:
-
If your application server and database server are located in different time zones, set the application server's user.timezone value to match the database server's time zone; the application server and database server time zones must match.
-
The application server time zone defaults to the underlying operating system time zone. To configure a different time zone for the application server, add the following value to the startup script at Domain_Home/bin/setUIMenv.sh. The valid time zone values are defined in java.util.TimeZone.
JAVA_OPTIONS="${JAVA_OPTIONS} -Duser.timezone=timezone"
where timezone is a valid string value defining the time zone ID, such as GMT or EST.
Configuring Your Server's Timers
You can create and configure timers for:
-
Monitoring whether the server that manages the cluster-aware timers is still running
-
Custom extensions
-
Cleaning up expired reservations
-
Cleaning up expired entity row locks
-
Recalling disconnected IP resources
-
Detecting telephone number jeopardy and publishing notification events
You configure the timers for your servers in the UIM_Home/config/timers.properties file. For more information, see the comments in the timers.properties file.
For a UIM cloud native instance, add the timer properties to the $UIM_CNTK/charts/uim/config/system-config/custom-config.properties file. See Monitoring and Managing a UIM Cloud Native Deployment for more information.
Restart the corresponding UIM traditional or UIM cloud native application after you update the timer property files.
Registering Entities to the LifeCycle Listener
You can register all or a subset of entities for create, retrieve, update, and delete (CRUD) events. For example, you can specify that create events are generated when any entity is created. Likewise, you can specify that update events are generated only when Equipment and TelephoneNumber entities are updated.
Configuring Exception-Type-to-Error-Code Mappings
You can map error codes to exception types to help the persistence framework manage validation exceptions. For example, you can map error codes to DuplicateEntityException or to AttributeRequiredException.
You map error codes to exception types by using the UIM_Home/config/resources/logging/exception.properties file. For more information, see the comments in the exception.properties file.
Localizing UIM Error Messages
You can localize UIM error messages and items by modifying properties files in the UIM_Home/config/resources/logging directory.
Table 4-6 lists each property's file name, error ID range, and the error messages or items it localizes.
Table 4-6 Properties Files for Localizing UIM Error Messages and Items.
Property File Name | Error ID Range | Error Message or Item It Localizes |
---|---|---|
addressrange.properties | N/A | Property names for the address range cartridge |
businessInteraction.properties | 270000-279999 | Error messages generated by the business interaction module |
capacity.properties | 320000-329999 | Error messages generated by the capacity module |
configaction.properties | 240000-249999 | Error messages generated by the configuration actions |
configuration.properties | 240000-249999 | Tree node label names |
connectivity.properties | 260000-269999 | Error messages generated by the connectivity module |
consumer.properties | 220000-229999 | Error messages generated by the consumer module |
countries.properties | N/A | Error messages generated by the countries module |
custom.properties | 280000-289999 | Error messages generated by the custom module |
enum.properties | N/A | Error messages generated by enumeration |
equipment.properties | 210000-219999 | Error messages generated by the equipment module |
exception.properties | N/A | Error messages generated by the framework module |
extensibility.properties | 180000-189999 | Error messages generated by the extensibility module |
flowidentifiers.properties | 620000-629999 | Error messages generated by the packet connectivity module |
importExport.properties | 160000-169999 | Error messages generated by the import/export module |
inventoryGroup.properties | 190000-199999 | Error messages generated by the inventory group module |
inventoryimport.properties | 34000100-34000999 | Error messages generated by the inventory group module |
ip.properties | 610000-619999 | Error messages generated by the IP address module |
location.properties | 420000-420999 | Error messages generated by the location module |
logicaldevice.properties | 290000-299999 | Error messages generated by the logical device module |
media.properties | 350000-359999 | Error messages generated by the media module |
mediaResource.properties | 360000-369999 | Error messages generated by the mediaResource module |
network.properties | 300000-309999 | Error messages generated by the network module |
networkaddress.properties | 620000-629999 | Error messages generated by the network address module |
number.properties | 120000-129999 | Error messages generated by the number module |
party.properties | 230000-239999 | Error messages generated by the party role module |
place.properties | 250000-259999 | Error messages generated by the place module |
product.properties | 390000-399999 | Error messages generated by the product module |
project.properties | 140000-149999 | Error messages generated by the project module |
resource.properties | 330000-339999 | Resource entity names and resource-related error messages |
role.properties | 90000-99999 | Error messages generated by the role module |
service.properties | 110000-119999 | Error messages generated by the service module |
signal.properties | 310000-319999 | Error messages generated by the connectivity signal module |
specification.properties | 130000-139999 | Error messages generated by the specification module |
status.properties | N/A | Error messages generated by the status module |
subscriber.properties | 150000-159999 | Error messages generated by the subscriber module |
system.properties | 100000-109999 | Error messages generated by the framework module |
topology.properties | 340000-349999 | Error messages generated by the topology module |
workflow.properties | N/A | Error messages generated by the workflow module |
wsservice.properties | 400000-409999 | Error messages generated by the wsservice module |
For more information on how to localize UIM, see "Overview" in UIM Developer's Guide.
Localizing the UIM Server and the Application Server
By default, the UIM and application server software display information in English. You can set the software to display information in another language by localizing text strings in the UIM properties files. For more information, see "Overview" in UIM Developer's Guide.
Shutting Down an Application Server
UIM provides a script to shut down an application server. Use the following command or the kill command on the machine running the server to be shut down:
stopWebLogic.sh AdminUserID AdminPassword ServerName AdminServerURL
where AdminServerURL is in the format: t3://ServerName:PortNumber
For example:
stopWebLogic.sh weblogic password server03 t3://wplsnroyall:7101
Deploying the Inventory Enterprise Application
UIM's core functionality runs as an Enterprise Application on the application server under the deployment name oracle.communications.inventory. The application file associated with the inventory enterprise application is the inventory.ear file. The following describes the steps for deployment:
Note:
You must ensure that the application is undeployed before deploying it. Optionally, ensure that the temporary files for the WebLogic Server are cleaned up when the server is shut down, so that they cannot be used as cached information.
-
Start the WebLogic administration server.
-
Start the WebLogic Server Administration Console using the following URL:
http://serverName:port/console
where
-
serverName is the host name for UIM
-
port is the port number of the machine on which UIM is installed
-
-
Enter the administration user name and password and click Login.
-
In the Change Center of the administration console, click Lock & Edit.
-
In the left Domain Structure pane of the console, select Deployments.
-
In the right pane under Deployments, click Install.
-
In the Install Application Assistant, navigate to or enter the directory path location of the inventory.ear file.
-
Click the radio button next to the inventory.ear file, and click Next.
The Choose targeting style window appears.
-
Select Install this deployment as an application and click Next.
-
Ensure the deployed name of the application is set to the following:
oracle.communications.inventory
and click Next.
-
Review the configuration settings you have chosen and click Finish.
If you choose to change the deployment configuration later, the console returns to the Deployments table.
-
To activate the changes, under the Change Center area of the console, click Activate Changes.
Configuring the SSL Policy/Certificate
This section describes the configuration of SSL with Oracle WebLogic server. You must configure the new self-signed certificate in the WebLogic Administration Console.
To pass custom certificates to the WebLogic AdminServer:
- Navigate to the WL_home/server/lib directory and run the following command to create a key and certificate:
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout key.pem -out cert.pem \
  -subj "/CN=<UIM_HOSTNAME> /ST=TL /L=HYD /O=ORACLE /OU=CAGBU" -extensions san \
  -config <(echo '[req]'; echo 'distinguished_name=req'; echo '[san]'; echo 'subjectAltName=@alt_names'; echo '[alt_names]'; \
  echo 'DNS.1=localhost'; \
  echo 'DNS.2=<UIM_HOSTNAME>'; \
  echo 'DNS.3=<UIM_HOSTNAME2>'; \
  echo 'DNS.4=<TOPOLOGY_HOSTNAME>'; \
  )
Note:
You can customize the certificate data entries as per your requirement.
- Create a keystore using the key and certificate created above, as follows:
# Create the keystore in PKCS12 format
openssl pkcs12 -export -in cert.pem -inkey key.pem -out keyStore.p12 -name "<ALIAS_NAME>"
# Convert the keystore format from PKCS12 to JKS
keytool -importkeystore -srckeystore keyStore.p12 -srcstoretype PKCS12 -destkeystore keystore.jks -deststoretype JKS
To configure the new self-signed certificate in the WebLogic Administration Console:
-
Log in to the WebLogic server Administration Console using the Administrator credentials.
The Home page appears.
-
Click Lock & Edit.
-
In the Domain Structure tree, expand Environment and then click Servers.
The Summary of Servers page appears.
-
In the Servers table, click AdminServer.
The Settings for AdminServer page appears.
The General tab is displayed by default.
-
Select SSL Listen Port Enabled.
-
In the SSL Listen Port field, update the value as appropriate.
-
Click Save.
-
Click the Keystores tab.
-
Click Change and then from the Keystores list, select Custom Identity and Java Standard Trust.
-
Do the following:
-
In the Custom Identity Keystore field, enter the full path to your JKS file as follows:
WL_Home/server/lib/keystore.jks
-
In the Custom Identity Keystore Type field, enter jks.
-
In the Custom Identity Keystore Passphrase field, enter the keystore password.
-
Leave the Java standard trust key as the default.
-
Click Save.
-
-
Click the SSL tab.
-
Do the following:
-
From the Identity and Trust Locations list, select Keystores.
-
In the Private Key Alias field, enter the alias name.
-
In the Private Key Passphrase field, enter the private key password.
-
Click Save.
-
Click Advanced.
-
From the Two Way Client Cert Behavior list, select Client Certs Requested But Not Enforced.
-
Click Save.
-
-
Click Activate Changes in the Change Center in the left pane.
For more information on SSL configuration, see the WebLogic Server Administration Console Help.
Note:
-
To replace a self-signed certificate with a production-quality certificate, you can use the production-quality certificate to create keyStore.p12 and then convert it to JKS format.
- If you have multiple servers, such as MS1, MS2, PROXY, and so on, you should add the certificate created above to the cacerts truststore of the JAVA_HOME used by the corresponding server:
keytool -import -alias <ALIAS_NAME> -file cert.pem -keystore $JAVA_HOME/jre/lib/security/cacerts
-
If you import a trusted CA certificate, ensure that no existing entry for the alias is present in the truststore. When you access the application, the browser prompts you to install the certificate. Install the certificate in Trusted Root Certification Authorities.
-
If you have multiple servers, perform the steps in "To configure the new self-signed certificate in the WebLogic Administration Console" for the other servers as well: copy the WL_home/server/lib/keystore.jks file from the AdminServer to the other servers, and then add its location to the corresponding servers.
-
Resetting/Changing the WebLogic Server's Database Connections
You may need to reset the WebLogic server's database connections when the following occurs:
-
The database goes down while UIM is active
-
UIM is started when the database is down
You reset the database connections by resetting the following JDBC data sources in the WebLogic server administration console: InventoryDataSource, InventoryTxDataSource, CMDSInventoryPersistentDS, InventoryMapDataSource, InvJMSPersistentDS, mds-commsRepository, opss-audit-DBDS, opss-audit-viewDS, opss-data-source, LocalSvcTblDataSource, and UIMAdapterDS.
To reset/change the database connections:
-
Log in to the WebLogic server administration console at:
http://ServerName:PortNumber/console
-
Click Lock & Edit.
-
In the Domain Structure tree, expand Services and then click Data Sources.
The Summary of JDBC Data Sources page appears.
-
Click InventoryDataSource.
The Settings for InventoryDataSource page appears.
-
Click the Control tab.
-
Select the check box next to the data source instance that you want to reset.
-
Click Reset.
-
Click Yes.
-
Click the Connection Pool tab.
-
Modify the following fields to match your environment:
-
URL
-
Properties
-
Password
-
Confirm Password
-
-
Repeat steps 4 through 10 for all the remaining data sources.
Setting the Default Telephone Number Edit Mask
The default telephone number edit mask defines the length format for telephone numbers entered into the UIM system. This value is used when a Telephone Number specification does not specify a ruleset extension point to customize the edit mask. See "Overview" in UIM Developer's Guide for more information on customizing the telephone number edit mask.
The initial default value of ########## (ten digits) is specified in the number.properties file, which you can modify.
When a custom ruleset or modified properties file does not specify a default edit mask, UIM uses the initial default edit mask from the number.properties file.
To modify the default telephone number edit mask:
-
Open UIM_Home/config/resources/logging/number.properties.
-
Find the following entry:
number.defaultEditMask=##########
-
Change ########## to the desired length.
For example, enter ############ to set the telephone number length to 12 digits. Each pound sign symbol (#) represents one digit.
Setting the Default Place Type
Place entities can be of several different types:
-
Location
-
Address
-
Address Range
-
Site
You can specify the default type by setting the value of the place.defaultPlaceType property in the place.properties file. This default value determines which type appears first in the Place Type list when you create a Place entity. By default, the value is set to Address.
To modify the default place type:
-
Open UIM_Home/config/resources/logging/place.properties.
-
Find the following entry:
place.defaultPlaceType
-
Change the value to the desired place type.
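For example, to make Location the default place type, the entry would look like the following (Location is one of the place types listed above):
place.defaultPlaceType=Location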
Load Balancing a Clustered Server
The two methods for load balancing a clustered server are a hardware-based load balancer and a software-based proxy server.
Note:
Oracle recommends using the hardware-based load balancer in production environments. Use either the hardware-based load balancer or the software-based proxy server in test or development environments.
Depending on the type of environment being deployed, do one of the following:
-
Configure the load balancer
-
Configure the proxy server
Configuring the Load Balancer
The requirement for the load balancer service is server affinity, also known as a sticky session. For example, a user starts a new session and it is load balanced to server #2. The subsequent HTTP requests in this session are always routed to server #2 until server #2 fails.
For information on load balancer requirements, refer to the WebLogic document: Using WebLogic Server Clusters (see Load Balancing in a Cluster).
Configuring Topology Updates
To configure topology updates, see the following topics:
Configuring Asynchronous Topology Updates
By default, the UIM topology is disabled. You must enable it to use the topology UI, maps, Service Topology, and Path Analysis. Messages related to the changes in UIM connectivity, devices, locations, and networks are delivered to the ATA microservice from UIM. See "About Unified Inventory Management" in UIM Concepts and "Overview" in UIM Developer's Guide for more information about topology.
You can configure UIM to update the topology synchronously or asynchronously.
In the synchronous model, topology updates are performed immediately after the UIM transaction is complete. This synchronous model uses REST APIs to process the business model updates.
The synchronous model:
- Is processed immediately using REST APIs.
- Requires the ATA microservice to be running.
- Does not persist the messages.
- Does not retry in case of a failed transaction.
In the asynchronous model, topology updates are processed as messages immediately after the UIM transaction is complete and utilize Kafka to process the business model updates.
The asynchronous model:
- Is processed immediately but sent to Kafka.
- Does not require the ATA microservice to be running.
- Retries in case of a failed transaction.
- Persists the messages.
- Provides improved scalability
Oracle recommends that you use the asynchronous model.
To configure UIM for asynchronous topology updates:
-
Stop the UIM application server.
-
Open the UIM_home/config/topologyProcess.properties file.
-
The processSynchronous value is false by default. If the value is true, change it to false.
-
Save the file.
-
Restart the UIM application server.
Turning Off Topology Updates
You can turn off topology updates entirely if you do not want to use topology.
To turn off topology updates:
-
Stop the UIM application server.
-
Open the UIM_home/config/topologyProcess.properties file.
-
Change the value of the disableTopology entry to true.
-
Save the file.
-
Restart the UIM application server.
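For reference, the two properties described in the procedures above appear in topologyProcess.properties as simple key-value entries. The following sketch shows a configuration with topology updates enabled and processed asynchronously; check the comments in the file for the authoritative defaults:
# Process topology updates asynchronously
processSynchronous=false
# Set to true only to turn off topology updates entirely
disableTopology=false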
Migrating Topology
If you have turned off topology updates, you must migrate the topology before you can use any topology-related features, such as path analysis or visualization. You should schedule this as a maintenance task during a time when no changes to the inventory will take place.
Caution:
When you perform the migration, the old topology is deleted and a new topology is created. You should back up your old topology to ensure that you can return to it if necessary.
You should schedule topology migrations during times when no changes to the inventory will take place.
See "" in Unified Inventory and Topology Deployment Guide for more information on migrating topology.
Configuring a Geocode Service
To configure a geocode service, see the following topics:
About Oracle eLocation
UIM uses Oracle eLocation as the default geocode service, but you may opt to use a different geocode service. This section describes Oracle eLocation, and provides information about configuring UIM to use a different geocode service.
UIM interfaces with Oracle eLocation through an XML API request that is sent when you click Validate Address from within UIM when creating a location. Oracle eLocation returns an XML API response to UIM, indicating whether or not the address sent in the request was a valid address. For valid addresses, the response includes a geocode, which is a specific latitude and longitude that represents the location.
Using a Geocode Service other than Oracle eLocation
Upon installation, UIM is configured to use the Oracle eLocation geocode service. However, you can configure UIM to use a geocode service other than the default Oracle eLocation. For example, you may opt to use a third-party geocode service, or create a custom geocode service to use.
UIM is tightly coupled with Oracle eLocation. As a result, when you click Validate Address from within UIM when creating a location, UIM creates an XML request based on what the Oracle eLocation geocode service is expecting. Similarly, UIM expects an XML response based on what the Oracle eLocation geocode service returns. You can find detailed information about the eLocation XML request and response structures at the following Web site:
Using a Third-Party Geocode Service
To use a third-party geocode service, you can host your own eLocation service that:
-
Handles the input XML request from UIM
-
Creates a new XML request based on what the third-party geocode service is expecting
-
Maps the data from the input XML request to the new XML request
-
Sends the new XML request to the third-party geocode service
-
Handles the response from the third-party geocode service
-
Creates a new XML response based on what UIM is expecting
-
Maps the data from the XML response to the new XML response
-
Sends the new XML response to UIM
In this scenario, the eLocation service is just a middle tier that performs XML mapping, allowing UIM and the third-party geocode service to communicate.
For information on how to host your own eLocation service, see Oracle Spatial eLocation Quick Start Guide:
http://download.oracle.com/otndocs/products/spatial/pdf/elocation_quickstart.pdf
Using a Custom Geocode Service
To use a custom geocode service, you can host your own eLocation service that:
-
Handles the input XML request from UIM
-
Performs custom address analysis based on input XML request data to determine the geocode
-
Creates an XML response based on what UIM is expecting
-
Sends the new XML response to UIM
In this scenario, the eLocation service hosts the custom geocode service.
For information on how to host your own eLocation service, including how to develop the custom geocode service that runs on your eLocation service, see Oracle Spatial eLocation Quick Start Guide:
http://download.oracle.com/otndocs/products/spatial/pdf/elocation_quickstart.pdf
Configuring UIM
After your eLocation service is up and running, you must configure the UIM_Home/config/system-config.properties file to point to your eLocation service. This file defines several properties related to the geocode service that UIM is using, such as host name, user ID, password, and so forth. See "Setting System Properties" for more information.
Purging UIM Entities
This section describes how to perform an entity purge in UIM.
The purge tool is available as part of the ora_uim_dbtools.jar file, located in the UIM_Home/util/ folder.
Note:
Oracle recommends that you stop the UIM application before starting the purge process. After the purge process is completed, start the UIM application.
WARNING:
Performing a purge deletes database records permanently. You must back up the database before performing any purge operation.
UIM Entity Purge Scripts
This section provides information about the UIM entities that you can purge, and the scripts you use to purge those entities. The entity purge process also purges entities that are referred as entity link characteristics; however, you can prevent the purging of such entities. See "Preventing the Purging of Entities Referred as Entity Link Characteristics" for more information.
The purge functionality enables you to purge the following entities of UIM using purge scripts specific to each entity:
-
Service: You can purge services that are in Disconnected or Cancelled status, using the following scripts:
-
servicePurge.sh (Linux)
-
servicePurge.cmd (Windows)
See "UIM Service Purge Scenarios" for more information.
-
-
Service Configuration Version: You can purge service configuration versions that are in Cancelled or Completed status, using the following scripts:
-
scvPurge.sh (Linux)
-
scvPurge.cmd (Windows)
-
-
Logical Device: You can purge logical devices (including their logical device interfaces) that are in Unassigned and Installed status, and that are not associated, linked, or referenced to any entities, using the following scripts:
-
ldPurge.sh (Linux)
-
ldPurge.cmd (Windows)
-
-
Logical Device Account: You can purge logical device accounts that are in Unassigned and Installed status, and that are not associated, linked, or referenced to any entities, using the following scripts:
-
ldaPurge.sh (Linux)
-
ldaPurge.cmd (Windows)
-
-
Party: You can purge parties that are not associated to any entities, using the following scripts:
-
partyPurge.sh (Linux)
-
partyPurge.cmd (Windows)
-
-
Place: You can purge places that are not associated to any entities, using the following scripts:
-
placePurge.sh (Linux)
-
placePurge.cmd (Windows)
-
-
Business Interaction/Engineering Work Order: You can purge business interactions and engineering work orders that are in Cancelled or Completed status, using the following scripts:
-
biPurge.sh (Linux)
-
biPurge.cmd (Windows)
-
-
Connectivity Design Version Purge: You can purge connectivity design versions that are Cancelled or Completed, using the following scripts:
-
connectivityDesignVersionPurge.sh (Linux)
-
connectivityDesignVersionPurge.cmd (Windows)
-
-
Connectivity Purge: You can purge connectivities that are not associated, using the following scripts:
-
connectivityPurge.sh (Linux)
-
connectivityPurge.cmd (Windows)
-
UIM Service Purge Scenarios
The purge tool purges services in the following scenarios:
-
Cancelled services without In Service child services.
-
Disconnected services without In Service child services.
-
Cancelled services with cancelled child services.
-
Disconnected services with disconnected child services.
-
Cancelled services with disconnected child services.
-
Disconnected services with cancelled child services.
-
Disconnected or Cancelled services without configuration items in the Transitional or Disconnected status for the following configuration item entities:
-
Telephone Number
-
IPv4Subnet
-
IPv6Subnet
-
IPv4Address
-
IPv6Address
-
Note:
The purge tool does not purge a child service in Disconnected status whose parent service is in Pending status. However, if the disconnected child service is unassigned from its parent service (in Pending status), the purge tool purges the child service that is in Disconnected status.
Prerequisites
Before you perform a UIM entity purge, do the following:
-
Gather the statistics of the schema before and after running the purge scripts. You use the following command to retrieve the statistics:
EXEC DBMS_STATS.gather_schema_stats('uim_db_schema_username');
-
Provide admin privileges to the database user. For UIM cloud native, providing admin privileges to the database user is automated.
-
Back up the database before running the scripts. The scripts delete the records matching specified criteria permanently.
-
Ensure you have the correct version of Java installed. See "UIM Software Compatibility" in UIM Compatibility Matrix for software version requirements.
Configuring the UIM Entity Purge Environment
You set up the entity purge tool environment by performing the following tasks:
Extract Entity Purge Files from ora_uim_dbtools.jar
Extract the ora_uim_dbtools.jar from the UIM Installer. Use the following command to extract contents of the JAR file:
jar -xvf ora_uim_dbtools.jar
The JAR file contains SQL scripts and command files for the purge tool. Save the path of the extracted files as the dbtools_extracted_dir value, which is referenced in the additional steps in this section.
Set Up the Entity Purge Tool Script
After the files are extracted, edit the entityPurge.sh or entityPurge.cmd file in the root directory (where entity is the name of the entity, such as service, SCV, or party), and set the following variables:
-
Set JAVA_HOME to the directory of your JDK.
-
Modify these parameters to point to the database:
-
DB_HOSTNAME - host name of the database
-
DB_PORT - database port
-
DB_SERVICE_NAME - database service name
-
(Optional) DB_USER_NAME – database username
-
(Optional) DB_PASSWD – database password
-
-
Set the reportFilePath variable to the location where you want the purge report files to be generated.
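For example, after editing servicePurge.sh, the relevant variables might look like the following; all values shown are placeholders for your environment:
JAVA_HOME=/usr/java/jdk
DB_HOSTNAME=dbhost.example.com
DB_PORT=1521
DB_SERVICE_NAME=uimdb
DB_USER_NAME=uim_schema_user
DB_PASSWD=password
reportFilePath=/tmp/purge_reports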
Set Up Entity Purge Tables and Procedures
Before you use the purge tool, you must run a SQL script to set up the required new database tables and procedures. Run PurgeScripts.sql on the database. This SQL script is located in the ora_uim_dbtools.jar/sqlscripts directory. To run this SQL script, use SQL*Plus and perform the following steps:
-
Log in to SQL*Plus.
-
Run the following command:
@dbtools_extracted_dir/sqlscripts/PurgeScripts.sql
where dbtools_extracted_dir is the directory for the extracted contents of the ora_uim_dbtools.jar file.
-
Run the corresponding purge scripts for the entities you want to purge. Table 4-7 shows the purge type and the corresponding purge script names.
Table 4-7 Purge Type and Purge Script Names
Purge Type | Purge Script Name |
---|---|
Service or ServiceConfigurationVersion | ora_uim_dbtools\sqlscripts\servicePurgeScripts.sql |
LogicalDevice | ora_uim_dbtools\sqlscripts\ldPurgeScript.sql |
LogicalDeviceAccount | ora_uim_dbtools\sqlscripts\ldaPurgeScripts.sql |
Party | ora_uim_dbtools\sqlscripts\partyPurgeScript.sql |
Place | ora_uim_dbtools\sqlscripts\placePurgeScript.sql |
BI or EWO | ora_uim_dbtools\sqlscripts\biPurgeScripts.sql |
Connectivity Design Version | ora_uim_dbtools\sqlscripts\connectivityDesignVersionScript.sql |
Connectivity | ora_uim_dbtools\sqlscripts\connectivityPurgeScript.sql |
You can run this SQL script more than once if you want to drop and recreate all the purge audit and error log tables.
Database Tables
The PurgeScripts.sql script creates the following tables to capture the purge audit and error details:
-
Purge_Error_Log
-
Purge_Audit
-
Purge_Helper (Internal only)
-
Purge_Log (Internal only)
Purge_Error_Log
This table stores error or failure information from the purge. Errors are created if any invalid data is detected during the purge, and these issues are recorded in this table. Table 4-8 shows the columns in the Purge_Error_Log table:
Table 4-8 Purge_Error_Log Columns
Column Name | Description |
---|---|
ID | ID for the table entry and primary key. |
ERROR_CODE | Error code for the entry, which can be a SQL error code. |
ERROR_MESSAGE | Error message text, which can be a SQL error message. |
REPORTED_DATE | Time when the error is recorded or persisted in the table. |
Purge_Audit
This table records the purge reporting information. Table 4-9 shows the columns in the Purge_Audit table:
Table 4-9 Purge_Audit Columns
Column Name | Description |
---|---|
JOBID | For every purge, a new record is created in this table. This is the primary key for the table. |
PURGETYPE | Valid values: SERVICE, SCV, LD, LDA, PARTY, PLACE, and BI/EWO. |
STARTDATE | The date and time when the purge is initiated. In the case of a scheduled purge, the value is initially set to the scheduled time; when the process starts, it updates this value with the time at which the process was initiated. |
ENDDATE | The date and time when the purge process is completed or cancelled. |
CRITERIA | The criteria string that is generated by the API using criteria specified by the caller. You can specify information about parallel processes and the batch size. For example: (ADMINSTATE LIKE 'CANCELLED') AND LASTMODIFIEDDATE <= to_date('07/30/2014:23:59:59','mm/dd/yyyy:hh24:mi:ss'):10:1000 In this example, the first portion is the search criteria, followed by the number of parallel processes (10) and the batch size (1000). |
STATUS | The status of the purge. The values include SUSPENDED and COMPLETED. |
PARENTJOB | The parent JOBID record for each new child purge record. A parent and child job exist when a purge is suspended and later resumed. For example, if a purge is started and later suspended, there is a record for this job with a status of SUSPENDED. When the purge is resumed, the original record is updated with a status of COMPLETED. A new record is created which refers to the completed parent job record in the JOBID column. This provides you with a history of the purge requests. |
USERNAME | The database schema user name that performs the purge. |
REPORTNAME | The report name generated for the purge. |
Operations
The entity purge functionality can be requested with the following operations:
Report
You use the report operation to run a trial version of the purge; this operation provides information but does not actually purge or delete any records. You specify criteria, and the tool determines the number of records that would be affected. These records can later be deleted with the Execute operation, and you can use the reported number to estimate the amount of disk space that a purge would free.
See the following sections for information about the arguments that you can use with the report operation for an entity purge:
Mandatory/Optional Arguments for Entity Purge Types with Report Operation
This table lists the mandatory and optional arguments for entity purge types when using the report operation.
Table 4-10 Mandatory and Optional Arguments for Entity Purge Types
Purge Type | Mandatory Arguments | Optional Arguments |
---|---|---|
SERVICE | -ed | -sd -status -spec |
SCV | -sspec -scvspec -retain -status | -ed -sd |
LOGICALDEVICE | -ed -spec or -ldid or -ldname | -sd -ldpcr |
LOGICALDEVICEACCOUNT | -ed -ldaspec or -ldaid | -sd |
PARTY | -ed -spec | -sd |
PLACE | -ed -spec | -sd |
BI/EWO | -bispec or -ewoworkflow -ed -status | -sd |
CONNECTIVITYDESIGNVERSION | -spec -retain -status | -ed -sd |
CONNECTIVITY | -ed | -sd -connectivityIdentifier -spec |
Specifying Entity Specifications and Entity Names Containing Spaces
If the entity specification or the entity name contains a space (for example, "Service Order"), then you must specify the arguments as follows:
On Linux:
./biPurge.sh report -bispec \'Service Order\' -status completed -ed 02/01/2016
On Windows:
biPurge.cmd report -bispec 'Service Order' -status completed -ed 02/01/2016
Common Arguments for Entity Purge Types with Report Operation
This section lists and describes the arguments that are common to all the entity purge types for the report operation.
Note:
In the examples listed in this section, entity in entityPurge.sh refers to the entity type, such as servicePurge.sh (for service entities), scvPurge.sh (for service configuration versions), ldPurge.sh (for logical devices), and so on.
The following arguments can be used during the report operation:
-
-status: Use this argument to specify the status of entities. The purge tool considers only the entities in the specified status for purging. For example:
./entityPurge.sh report -status disconnected -ed 02/21/2012
where entity is the entity type; for example, servicePurge.sh, scvPurge.sh, or biPurge.sh.
The following list shows the only entities to which the status argument applies, including the statuses that you can specify for each entity:
-
Services in Disconnected or Cancelled status
-
Service Configuration Versions in Cancelled or Completed status
-
Business interactions and engineering work orders in Cancelled or Completed status
-
-
-ed: Use this argument to specify an end date. The purge tool considers only the entities with a “last modified date" on or before this end date for purging. You must specify the date with the following format: MM/DD/YYYY. For example:
./entityPurge.sh report -ed 02/21/2012
-
-sd: Use this argument to specify the start date. The purge tool considers only the entities with a “last modified date" on or after this start date for purging. You must specify the date with the following format: MM/DD/YYYY. For example:
./entityPurge.sh report -ed 02/21/2012 -sd 02/21/2010
Entity Purge Reports
You use the report operation to generate the following reports for different purge types:
Service Purge Report
For information about the arguments that you can use with the report operation for purge type SERVICE, see "Common Arguments for Entity Purge Types with Report Operation".
See "Mandatory/Optional Arguments for Entity Purge Types with Report Operation" for the list of mandatory and optional arguments for entity purge types when using the report operation.
SCV Purge Report
The following arguments are specific to purge type SCV:
-
-sspec: Use this argument to specify the Service specification on which the service configuration versions that you want to purge are based. The purge tool considers all the service configuration versions that are based on the specified Service specification for purging.
-
-scvspec: Use this argument to specify the Service Configuration specification on which the service configuration versions that you want to purge are based. The purge tool considers all the service configuration versions that are based on the specified Service Configuration specification for purging.
-
-retain: Use this argument to specify the number of completed service configuration versions that you want to retain for each service after the purge process is completed. This argument is not applicable for service configuration versions having a status of Cancelled.
The following is an example of using the sspec, scvspec, and retain arguments:
./scvPurge.sh report -sspec BATServiceSpec -scvspec BATServiceConfigSpec -status completed -retain 3
For information about the other arguments that you can use with the report operation for purge type SCV, see "Common Arguments for Entity Purge Types with Report Operation".
See "Mandatory/Optional Arguments for Entity Purge Types with Report Operation" for the list of mandatory and optional arguments for entity purge types when using the report operation.
Logical Device Purge Report
By default, the purge process includes logical devices in Unassigned and Installed status. The following arguments are specific to purge type LOGICALDEVICE:
-
-spec: Use this argument to specify the Logical Device specification on which the logical devices that you want to purge are based. The purge tool considers all the logical devices that are based on the specified Logical Device specification for purging.
If you specify the spec argument before any other argument, then specifying the ed argument is mandatory. For example:
./ldPurge.sh report -spec LDSpec -ed 01/01/2018
-
-ldid: This argument is mandatory when you do not specify the Logical Device specification (-spec). Use this argument to specify the IDs of the logical devices that you want purged.
If you specify the ldid argument before any other argument, then specifying the spec and ed arguments is optional. For example:
./ldPurge.sh report -ldid 575001,525004
In addition, if you specify the ldid argument before the spec argument, then specifying the ed argument is optional. For example:
./ldPurge.sh report -ldid 575001,525004 -spec LDSpec
-
-ldname: This argument is mandatory when you do not specify either the Logical Device specification (-spec) or the logical device ID (-ldid). Use this argument to specify the names of the logical devices that you want purged.
If you specify the ldname argument before any other argument, then specifying the spec and ed arguments is optional. For example:
./ldPurge.sh report -ldname logicaldevice1
In addition, if you specify the ldname argument before the spec argument, then specifying the ed argument is optional. For example:
./ldPurge.sh report -ldname logicaldevice1 -spec LDSpec
If the logical device name contains a space, then you must specify the ldname argument as follows:
./ldPurge.sh report -ldname \'logical device1\'
-
-ldpcr: This flag indicates whether the logical device parent-child relationship should be considered for the purge operation or not. If you set this flag to true, the logical device parent-child hierarchies are also considered for purge. If this flag is set to false or if you exclude this flag from the report operation, the logical device parent-child hierarchies are not considered for purge. By default, this flag is set to false. For example:
./ldPurge.sh report -ldname logicaldevice1 -ldpcr true
For information about the other arguments that you can use with the report operation for purge type LOGICALDEVICE, see "Common Arguments for Entity Purge Types with Report Operation".
See "Mandatory/Optional Arguments for Entity Purge Types with Report Operation" for the list of mandatory and optional arguments for entity purge types when using the report operation.
Logical Device Account Purge Report
By default, the purge process includes logical device accounts in Unassigned and Installed status. The following arguments are specific to purge type LOGICALDEVICEACCOUNT:
-
-ldaspec: Use this argument to specify the Logical Device Account specification on which the logical device accounts that you want to purge are based. The purge tool considers all the logical device accounts that are based on the specified Logical Device Account specification for purging. For example:
./ldaPurge.sh report -ldaspec BATLDASpec -ed 01/01/2018
-
-ldaid: This argument is mandatory when you do not provide the Logical Device Account specification (-ldaspec). Use this argument to specify the IDs of the logical device accounts that you want purged. For example:
./ldaPurge.sh report -ldaid 575001,525004 -ed 01/01/2018
For information about the other arguments that you can use with the report operation for purge type LOGICALDEVICEACCOUNT, see "Common Arguments for Entity Purge Types with Report Operation".
See "Mandatory/Optional Arguments for Entity Purge Types with Report Operation" for the list of mandatory and optional arguments for entity purge types when using the report operation.
Party Purge Report
The following argument is specific to purge type PARTY:
-
-spec: Use this argument to specify the Party specification on which the parties that you want to purge are based. The purge tool considers all the party entities that are based on the specified Party specification for purging. For example:
./partyPurge.sh report -spec BATPartySpec -ed 01/01/2018
For information about the other arguments that you can use with the report operation for purge type PARTY, see "Common Arguments for Entity Purge Types with Report Operation".
See "Mandatory/Optional Arguments for Entity Purge Types with Report Operation" for the list of mandatory and optional arguments for entity purge types when using the report operation.
Place Purge Report
The following argument is specific to purge type PLACE:
-
-spec: Use this argument to specify the Place specification on which the place entities that you want to purge are based. The purge tool considers all the place entities that are based on the specified Place specification for purging. For example:
./placePurge.sh report -spec BATPlaceSpec -ed 01/01/2018
For information about the other arguments that you can use with the report operation for purge type PLACE, see "Common Arguments for Entity Purge Types with Report Operation".
See "Mandatory/Optional Arguments for Entity Purge Types with Report Operation" for the list of mandatory and optional arguments for entity purge types when using the report operation.
BI/EWO Purge Report
The following arguments are specific to purge type BI or EWO:
- -bispec: This argument is optional if you specify the ewoworkflow argument. Use the bispec argument to specify the Business Interaction specification on which the business interaction entities that you want to purge are based. The purge tool considers all the business interaction entities that are based on the specified Business Interaction specification for purging. For example:
./biPurge.sh report -bispec BATBISpec -status completed -ed 01/01/2018
- -ewoworkflow: This argument is optional if you specify the bispec argument. Use the ewoworkflow argument to specify the engineering work order (EWO) workflows for purging. For example:
./biPurge.sh report -ewoworkflow BATWorkFlow -status completed -ed 01/01/2018
For information about the other arguments that you can use with the report operation for purge type BI/EWO, see "Common Arguments for Entity Purge Types with Report Operation".
See "Mandatory/Optional Arguments for Entity Purge Types with Report Operation" for the list of mandatory and optional arguments for entity purge types when using the report operation.
Connectivity Design Version Purge Report
The following arguments are specific to purge type CONNECTIVITYDESIGNVERSION:
- -sspec: This argument is mandatory as the Purge tool considers only the PCVs of the Connectivity with the given Pipe Specification for purging.
- -status: This argument is mandatory. Use this argument to specify the Pipe Configuration Version admin status. Only PCVs with COMPLETED or CANCELLED status are considered for purging.
- -retain: This argument is mandatory if status is COMPLETED. It provides the number of COMPLETED PCVs to be retained in the final purge for each pipe. This argument is not applicable if the status is CANCELLED.
For information about the other arguments that you can use with the report operation for purge type CONNECTIVITYDESIGNVERSION, see "Common Arguments for Entity Purge Types with Report Operation".
See "Mandatory/Optional Arguments for Entity Purge Types with Report Operation" for the list of mandatory and optional arguments for entity purge types when using the report operation.
Connectivity Purge Report
The following argument is specific to purge type CONNECTIVITY:
- -sspec: This argument is mandatory as the Purge tool considers only the PCVs of the Connectivity with the given Pipe Specification for purging.
- -connectivityIdentifier: This argument is mandatory as the Purge tool considers only the PCVs of the Connectivity with the given Connectivity Identifier for purging.
For information about the other arguments that you can use with the report operation for purge type CONNECTIVITY, see "Common Arguments for Entity Purge Types with Report Operation".
See "Mandatory/Optional Arguments for Entity Purge Types with Report Operation" for the list of mandatory and optional arguments for entity purge types when using the report operation.
Execute
WARNING:
A purge operation deletes database records permanently. You must back up the database before performing any purge operation.
The Execute operation enables you to purge entities using the specified criteria. The purge deletes rows from several tables using the specified criteria. The Execute operation always creates a report. You are prompted for a confirmation if the purge end date specified is within one year from the current date.
You cannot run more than one Execute operation at a time. If you need to start a new Execute operation, then the old Execute operation must be cancelled or completed. In the case of a suspended purge operation, no new Execute operations can be initiated until the suspended operation is also cancelled or completed.
When an Execute purge operation is performed, a new record with a status of INPROGRESS is created in the Purge_Audit table. When the Execute operation completes successfully, the status is updated to COMPLETED.
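If you want to check progress directly in the database, you can query the Purge_Audit table that the purge tool maintains. The following is a minimal sketch using SQL*Plus; the connection details are placeholders, and the exact table and column names (such as STATUS) should be verified against your UIM schema:
sqlplus uim_db_user/password@UIMDB
SELECT STATUS, COUNT(*) FROM PURGE_AUDIT GROUP BY STATUS;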
See the following sections for information about the arguments that you can use with the Execute operation for an entity purge:
Mandatory/Optional Arguments for Entity Purge Types with Execute Operation
This table lists the mandatory and optional arguments for entity purge types when using the Execute operation.
Table 4-11 Mandatory and Optional Arguments for Entity Purge Types
Purge Type | Mandatory Arguments | Optional Arguments |
---|---|---|
SERVICE | -ed | -status, -sd, -s, -c, -t, -force |
SCV | -sspec, -scvspec, -retain, -status | -s, -c, -t, -force |
LOGICALDEVICE | -ed, and -spec or -ldid or -ldname | -ldpcr, -sd, -s, -c, -t, -force |
LOGICALDEVICEACCOUNT | -ed, and -ldaspec or -ldaid | -sd, -s, -c, -t, -force |
PARTY | -ed, -spec | -sd, -s, -c, -t, -force |
PLACE | -ed, -spec | -sd, -s, -c, -t, -force |
BI/EWO | -bispec or -ewoworkflow, -ed, -status | -sd, -s, -c, -t, -force |
CONNECTIVITYDESIGNVERSION | -spec, -retain, -status | -ed, -sd, -s, -c, -t, -force |
CONNECTIVITY | -spec, -connectivityIdentifier, -ed | -sd, -s, -c, -t, -force |
Specifying Entity Specifications and Entity Names Containing Spaces
If the entity specification or the entity name contains a space (for example, "Service Order"), then you must specify the arguments as follows:
On Linux:
./biPurge.sh execute -bispec \'Service Order\' -status completed -ed 02/01/2016
On Windows:
biPurge.cmd execute -bispec 'Service Order' -status completed -ed 02/01/2016
Common Arguments for Entity Purge Types with Execute Operation
This section lists and describes the arguments that are common to all the entity purge types for the Execute operation.
Note:
In the examples listed in this section, entity in entityPurge.sh refers to the entity type, such as servicePurge.sh (for service entities), scvPurge.sh (for service configuration versions), ldPurge.sh (for logical devices), and so on.
The following arguments can be used during the Execute operation:
- -status: Use this argument to specify the status of entities. The purge tool considers only the entities in the specified status for purging. For example:
./entityPurge.sh execute -status disconnected -ed 02/21/2012
where entity is the entity type; for example, servicePurge.sh, scvPurge.sh, or biPurge.sh.
The status argument is applicable only to the following entities, in the statuses shown for each:
- Services in Disconnected or Cancelled status
- Service Configuration Versions in Cancelled or Completed status
- Business interactions and engineering work orders in Cancelled or Completed status
- -ed: Use this argument to specify an end date. The purge tool considers only the entities with a "last modified date" on or before this end date for purging. You must specify the date in the format MM/DD/YYYY. For example:
./entityPurge.sh execute -ed 02/21/2012
- -sd: Use this argument to specify a start date. The purge tool considers only the entities with a "last modified date" on or after this start date for purging. You must specify the date in the format MM/DD/YYYY. For example:
./entityPurge.sh execute -ed 02/21/2012 -sd 02/21/2010
- -force: Use this argument to prevent the purge operation from prompting you for confirmations. For example:
./entityPurge.sh execute -ed 02/21/2012 -force
- -s: Use this argument to specify a start date and time for the purge to run. You must specify the date in the format MM/DD/YYYY:hh:mm:ss. For example:
./entityPurge.sh execute -ed 02/21/2012 -s 06/26/2012:19:30:00
- -c: Use this argument to set the commit size for the purge. By default, the commit size is set to 1000. The maximum value is 10000. If you specify a value greater than 10000, the purge ignores the argument value and uses the maximum value of 10000. For example:
./entityPurge.sh execute -ed 02/21/2012 -c 200
- -t: Use this argument to set the number of parallel processes allowed. By default, the number of parallel processes is set to 10. The maximum value you can specify is 100. If you specify a value greater than 100, the purge ignores the argument value and uses the maximum value of 100. For example:
./entityPurge.sh execute -ed 02/21/2012 -t 15
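These arguments can be combined in a single command. For example, the following illustrative command purges disconnected entities that were last modified between the start and end dates, suppresses confirmation prompts, and uses a commit size of 500 with 20 parallel processes (all values are examples only):
./entityPurge.sh execute -status disconnected -sd 01/01/2011 -ed 02/21/2012 -force -c 500 -t 20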
Entity Purge Executions
You use the Execute operation to purge entities by running the following entity purge executions:
Service Purge Execution
The following is the list of tables that are affected by the service purge execution:
- Service
- Service_Char
- Party_ServiceRel
- Place_ServiceRel
- ServiceAssignment
- ServiceConsumer
- ServiceReservation
- ServiceCondition
- ServiceConfigurationVersion
- BusinessInteraction
- ConfigurationInput
- TopologyProfile
- TopologyProfileEdge
- TopologyProfileNode
- ServiceConfigurationItem
- ServiceConfigurationItem_Char
- BusinessInteractionItem
- EntityConsumer
- EntityAssignment
- EntityConfigRef
where Entity is the entity type of the related resource. In the list of affected tables, the EntityConsumer, EntityAssignment, and EntityConfigRef tables are applicable to the following entity resources, which can be consumed by a Service:
- Custom Network Address
- Custom Object
- Device Interface
- Equipment
- Equipment Holder
- Geographic Location
- Geographic Site
- Logical Device Account
- Logical Device
- Network
- Physical Connector
- Physical Device
- Physical Port
- Pipe
- Service
- Telephone Number
Service Purge Execution Arguments
For information about the arguments that you can use with the Execute operation for purge type SERVICE, see "Common Arguments for Entity Purge Types with Execute Operation".
See "Mandatory/Optional Arguments for Entity Purge Types with Execute Operation" for the list of mandatory and optional arguments for entity purge types when using the Execute operation.
SCV Purge Execution
The following is the list of tables that are affected by the SCV purge execution:
- ServiceConfigurationVersion
- BusinessInteraction
- ConfigurationInput
- TopologyProfile
- TopologyProfileEdge
- TopologyProfileNode
- ServiceConfigurationItem
- ServiceConfigurationItem_Char
- BusinessInteractionItem
- EntityConsumer
- EntityAssignment
- EntityConfigRef
where Entity is the entity type of the related resource. In the list of affected tables, the EntityConsumer, EntityAssignment, and EntityConfigRef tables are applicable to the following entity resources, which can be consumed by a Service:
- Custom Network Address
- Custom Object
- Device Interface
- Equipment
- Equipment Holder
- Geographic Location
- Geographic Site
- Logical Device Account
- Logical Device
- Network
- Physical Connector
- Physical Device
- Physical Port
- Pipe
- Service
- Telephone Number
SCV Purge Execution Arguments
The following arguments are specific to purge type SCV:
-
-sspec: Use this argument to specify the Service specification on which the service configuration versions that you want to purge are based on. The purge tool considers all the service configuration versions that are based on the specified Service specification for purging.
-
-scvspec: Use this argument to specify the Service Configuration specification on which the service configuration versions that you want to purge are based on. The purge tool considers all the service configuration versions that are based on the specified Service Configuration specification for purging.
-
-retain: Use this argument to specify the number of completed service configuration versions that you want to retain for each service after the purge process is completed. This argument is not applicable for service configuration versions having a status of Cancelled.
The following is an example of using the sspec, scvspec, and retain arguments:
./scvPurge.sh execute –sspec BATServiceSpec –scvspec BATServiceConfigSpec -status completed -retain 3
For information about the other arguments that you can use with the Execute operation for purge type SCV, see "Common Arguments for Entity Purge Types with Execute Operation".
See "Mandatory/Optional Arguments for Entity Purge Types with Execute Operation" for the list of mandatory and optional arguments for entity purge types when using the Execute operation.
Logical Device Purge Execution
By default, the purge process includes logical devices in Unassigned and Installed status. The following is the list of tables that are affected by the logical device purge execution:
- DEVICEINTERFACECONFIGREF
- DEVICEINTERFACE_CHAR
- DEVICEINTERFACE_CHAR_EXT
- DEVICEINTERFACE
- LOGICALDEVICECONFIGREF
- LOGICALDEVICE_CHAR
- LOGICALDEVICE_CHAR_EXT
- LOGICALDEVICE_LOGICALDEVICEREL (if the ldpcr flag is set to true)
- LOGICALDEVICE
Logical Device Purge Execution Arguments
The following arguments are specific to purge type LOGICALDEVICE:
- -spec: Use this argument to specify the Logical Device specification on which the logical devices that you want to purge are based. The purge tool considers all the logical devices that are based on the specified Logical Device specification for purging.
If you specify the spec argument before any other argument, then specifying the ed argument is mandatory. For example:
./ldPurge.sh execute -spec LDSpec -ed 01/01/2018
- -ldid: This argument is mandatory when you do not provide the Logical Device specification (-spec). Use the ldid argument to specify the IDs of the logical devices that you want purged.
If you specify the ldid argument before any other argument, then specifying the spec and ed arguments is optional. For example:
./ldPurge.sh execute -ldid 575001,525004
In addition, if you specify the ldid argument before the spec argument, then specifying the ed argument is optional. For example:
./ldPurge.sh execute -ldid 575001,525004 -spec LDSpec
- -ldname: This argument is mandatory when you do not specify either the Logical Device specification (-spec) or the logical device ID (-ldid). Use this argument to specify the names of the logical devices that you want purged.
If you specify the ldname argument before any other argument, then specifying the spec and ed arguments is optional. For example:
./ldPurge.sh execute -ldname logicaldevice1
In addition, if you specify the ldname argument before the spec argument, then specifying the ed argument is optional. For example:
./ldPurge.sh execute -ldname logicaldevice1 -spec LDSpec
If the logical device name contains a space, then you must specify the ldname argument as follows:
./ldPurge.sh execute -ldname \'logical device1\'
- -ldpcr: This flag indicates whether logical device parent-child relationships are considered for the purge operation. If you set this flag to true, the logical device parent-child hierarchies are also considered for purge. If you set this flag to false or omit it, the logical device parent-child hierarchies are not considered for purge. By default, this flag is set to false. For example:
./ldPurge.sh execute -ldname logicaldevice1 -ldpcr true -ed 01/01/2018
For information about the other arguments that you can use with the Execute operation for purge type LOGICALDEVICE, see "Common Arguments for Entity Purge Types with Execute Operation".
See "Mandatory/Optional Arguments for Entity Purge Types with Execute Operation" for the list of mandatory and optional arguments for entity purge types when using the Execute operation.
Logical Device Account Purge Execution
By default, the purge process includes logical device accounts in Unassigned and Installed status. The following is the list of tables that are affected by the logical device account purge execution:
- LOGICALDEVICEACCOUNTCONFIGREF
- LDACCOUNTASSIGNMENT
- LDACCOUNTCONSUMER
- LDACCOUNT_CHAR
- LDACCOUNT_CHAR_EXT
- LOGICALDEVICEACCOUNT
Logical Device Account Purge Execution Arguments
The following arguments are specific to purge type LOGICALDEVICEACCOUNT:
- -ldaspec: Use this argument to specify the Logical Device Account specification on which the logical device accounts that you want to purge are based. The purge tool considers all the logical device accounts that are based on the specified Logical Device Account specification for purging. For example:
./ldaPurge.sh execute -ldaspec BATLDASpec -ed 01/01/2018
- -ldaid: This argument is mandatory when you do not provide the Logical Device Account specification (-ldaspec). Use the ldaid argument to specify the IDs of the logical device accounts that you want purged. For example:
./ldaPurge.sh execute -ldaid 575001,525004 -ed 01/01/2018
For information about the other arguments that you can use with the Execute operation for purge type LOGICALDEVICEACCOUNT, see "Common Arguments for Entity Purge Types with Execute Operation".
See "Mandatory/Optional Arguments for Entity Purge Types with Execute Operation" for the list of mandatory and optional arguments for entity purge types when using the Execute operation.
Party Purge Execution
The following is the list of tables that are affected by the party purge execution:
- PARTYCONFIGREF
- PARTY_CHAR
- PLACE_CHAR_EXT
- PARTY
Party Purge Execution Arguments
The following argument is specific to purge type PARTY:
- -spec: Use this argument to specify the Party specification on which the parties that you want to purge are based. The purge tool considers all the party entities that are based on the specified Party specification for purging. For example:
./partyPurge.sh execute -spec BATPartySpec -ed 01/01/2018
For information about the other arguments that you can use with the Execute operation for purge type PARTY, see "Common Arguments for Entity Purge Types with Execute Operation".
See "Mandatory/Optional Arguments for Entity Purge Types with Execute Operation" for the list of mandatory and optional arguments for entity purge types when using the Execute operation.
Place Purge Execution
The following is the list of tables that are affected by the place purge execution:
- GEOGRAPHICSITECONFIGREF
- GEOGRAPHICLOCATIONCONFIGREF
- GEOGRAPHICADDRESSCONFIGREF
- GEOADDRESSRANGECONFIGREF
- PLACE_CHAR
- PLACE_CHAR_EXT
- GEOGRAPHICPLACE
Place Purge Execution Arguments
The following argument is specific to purge type PLACE:
- -spec: Use this argument to specify the Place specification on which the place entities that you want to purge are based. The purge tool considers all the place entities that are based on the specified Place specification for purging. For example:
./placePurge.sh execute -spec BATPlaceSpec -ed 01/01/2018
For information about the other arguments that you can use with the Execute operation for purge type PLACE, see "Common Arguments for Entity Purge Types with Execute Operation".
See "Mandatory/Optional Arguments for Entity Purge Types with Execute Operation" for the list of mandatory and optional arguments for entity purge types when using the Execute operation.
BI/EWO Purge Execution
The following is the list of tables that are affected by the BI/EWO purge execution:
- BUSINESSINTERACTIONATTACHMENT
- BUSINESSINTERACTIONITEM*
- BIITEM_BIITEM
- BUSINESSINTERACTION_CHAR
- BUSINESSINTERACTION_CHAR_EXT
- ACTIVITY
- ACTIVITYITEM
- ACTIVITY_CHAR
- ACTIVITY_CHAR_EXT
- BUSINESSINTERACTION
*During the BI purge Execute operation, the tables that will be affected depend on the entities/relationships that are created or associated under a BI context. When you create a new entity or add an existing entity under a BI context, UIM creates a version record in the entity tables or relationship tables for that entity.
For example, if you add a custom object entity under a BI context, UIM does the following:
- Creates a record for the custom object in the BUSINESSINTERACTIONITEM table.
- Creates a version record for the custom object in the CUSTOMOBJECT table.
If you add a logical device parent-child hierarchy under a BI context, UIM does the following:
- Creates a record for the logical device parent-child hierarchy in the BUSINESSINTERACTIONITEM table.
- Creates a version record for the logical device parent-child hierarchy in the LOGICALDEVICE_LOGICALDEVICEREL table.
In this case, when you run the BI purge Execute operation, the following occurs:
- For the custom object entity, UIM purges the record from the BUSINESSINTERACTIONITEM table and the version record from the CUSTOMOBJECT table.
- For the logical device parent-child hierarchy, UIM purges the record from the BUSINESSINTERACTIONITEM table and the version record from the LOGICALDEVICE_LOGICALDEVICEREL table.
BI/EWO Purge Execution Arguments
The following arguments are specific to purge type BI/EWO:
- -bispec: This argument is optional if you specify the ewoworkflow argument. Use the bispec argument to specify the Business Interaction specification on which the business interaction entities that you want to purge are based. The purge tool considers all the business interaction entities that are based on the specified Business Interaction specification for purging. For example:
./biPurge.sh execute -bispec BATBISpec -status completed -ed 01/01/2018
- -ewoworkflow: This argument is optional if you specify the bispec argument. Use the ewoworkflow argument to specify the engineering work order (EWO) workflows for purging. For example:
./biPurge.sh execute -ewoworkflow BATWorkFlow -status completed -ed 01/01/2018
For information about the other arguments that you can use with the Execute operation for purge type BI/EWO, see "Common Arguments for Entity Purge Types with Execute Operation".
See "Mandatory/Optional Arguments for Entity Purge Types with Execute Operation" for the list of mandatory and optional arguments for entity purge types when using the Execute operation.
Connectivity Design Version Purge Execution
The following is the list of tables that are affected by the CONNECTIVITYDESIGNVERSION purge execution:
- PIPECONFIGREF
- PIPECONFIGURATIONITEM
- BUSINESSINTERACTIONITEM
- PIPE
- PIPECONFIGITEM_CHAR
- PIPE_CHAR
- PIPETERMINATIONPOINT_CHAR
- PIPECONFIGURATIONVERSION
- CONFIGURATIONINPUT
- TOPOLOGYPROFILEEDGE
- TOPOLOGYPROFILENODE
- TOPOLOGYPROFILE
- Pipe Assignment Tables
- PipeTerminationPoint Assignment Tables
Connectivity Design Version Purge Execution Arguments
The following arguments are specific to purge type CONNECTIVITYDESIGNVERSION:
- -spec: This argument is mandatory as the Purge tool considers only the PCVs of the Connectivity with the given Pipe Specification for purging.
- -status: This argument is mandatory. Use this argument to specify the Pipe Configuration Version admin status. Only PCVs with COMPLETED or CANCELLED status are considered for purging.
- -retain: This argument is mandatory if status is COMPLETED. It provides the number of COMPLETED PCVs to be retained in the final purge for each pipe. This argument is not applicable if the status is CANCELLED.
For information about the other arguments that you can use with the Execute operation for purge type CONNECTIVITYDESIGNVERSION, see "Common Arguments for Entity Purge Types with Execute Operation".
See "Mandatory/Optional Arguments for Entity Purge Types with Execute Operation" for the list of mandatory and optional arguments for entity purge types when using the Execute operation.
Connectivity Purge Execution
The following is the list of tables that are affected by the CONNECTIVITY purge execution:
- PIPE
- PLACE_PIPEREL
- PIPE_CHAR
- PIPEROLE
- PIPEREL
- PIPETPPIPETPREL
- Pipe Assignment Tables
- PipeTerminationPoint Assignment Tables
- DEVICEINTERFACE
- PIPEPIPETPREL
- PLACE_PIPETERMINATIONPOINTREL
- PARTY_PIPETPREL
- PIPETERMINATIONPOINT_CHAR
- PIPEDIRECTIONALITY
- PIPETERMINATIONPOINT
- PIPECAPACITYCONSUMPTION
- PIPECAPACITYREQUIRED
- PIPECAPACITYPROVIDED
- ATTACHMENT
- PIPEREL
- PIPETPPIPETPREL
- TRAILPIPERELPIPEREL
- TRAILPIPERELTRAILPATHREL
- TRAILPATH
- TRAILPIPEREL
- PIPECONFIGURATIONITEM
- BUSINESSINTERACTIONITEM
- PIPECONFIGURATIONVERSION
- BUSINESSINTERACTION
- CONNECTIONTERMINATIONPOINT
- PROCESSINGSIGNAL
- TRAILTERMINATIONPOINT
- SIGNALTERMINATIONPOINT
- INTERFACE_INTERCONNECTION
Connectivity Purge Execution Arguments
The following arguments are specific to purge type CONNECTIVITY:
- -spec or -connectivityIdentifier: The Purge tool considers the Connectivity with a given Pipe Specification (-spec) for purging. Alternatively, you can specify a comma-separated list of Connectivity IDs to be purged. When you provide both -spec and -connectivityIdentifier, -connectivityIdentifier is used for filtering Connectivity entities.
Note: The given Connectivity must be in UNAVAILABLE state. Otherwise, the purge tool does not consider it for purging.
- -ed: Use this argument to specify an end date. The purge tool considers only the entities with the last modified date on or before this end date for purging. You must specify the date in the format MM/DD/YYYY.
For information about the other arguments that you can use with the Execute operation for purge type CONNECTIVITY, see "Common Arguments for Entity Purge Types with Execute Operation".
See "Mandatory/Optional Arguments for Entity Purge Types with Execute Operation" for the list of mandatory and optional arguments for entity purge types when using the Execute operation.
Status
The status option shows information for in-progress and suspended purge processes. It also provides the following information related to the purge:
- Active purge information.
- Number of entities purged.
- All the jobs related to the entity purge.
- The name of the report file generated while entities are purged.
If no active purge processes are present, the Status operation displays the status of the last completed purge.
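For illustration, assuming the status option is invoked through the same entity purge scripts as the other operations (verify the exact syntax for your installation), a status check for a service purge might look like:
./servicePurge.sh status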
Suspend
The suspend operation suspends the purge process and allows active parallel processes to continue to run and complete. No new processes can be created, however. Before suspending an active purge process, the suspend operation provides the following information:
- Active purge information.
- Number of entities purged.
- All the jobs related to the entity purge.
- The name of the report file generated while entities are purged.
Note:
One purge operation can create multiple software processes to perform the requested purge.
A suspended operation can be cancelled or resumed, but once the purge is suspended, no new purge operations can be initiated. After an Execute operation is suspended, the Purge_Audit STATUS record value is updated to COMPLETED and a new record is created with a status of SUSPENDED. The suspend option is not applicable to the purge process that is in POPULATE_INPROGRESS status.
Note that some processes may still be in RUNNING status when a purge operation is suspended. After these processes complete execution, their status changes to DISABLED. When all the processes have changed to DISABLED status, no new processes are created.
Resume
The resume option restarts the purge operation using the specified arguments. In this case, the Purge_Audit STATUS value is updated to INPROGRESS for the record that was suspended. The resume option is not applicable to the purge process that is in POPULATE_INPROGRESS status. The following arguments can be specified when resuming a purge operation:
- -s: This argument is optional. Use the s argument to specify a start date and time for the purge to run. You must specify the date with a format of MM/DD/YYYY:hh:mm:ss. For example:
./entityPurge.sh resume -s 06/26/2014:19:30:00
where entity is the name of the entity, such as service, SCV, party, and so on.
- -c: This argument is optional. Use the c argument to set the commit size for the purge. By default, the commit size is set to 1000. The maximum value is 10000. If you specify a value greater than 10000, the purge ignores the argument value and uses the maximum value of 10000. For example:
./entityPurge.sh resume -c 200
- -t: This argument is optional. Use the t argument to set the number of parallel processes allowed. By default, the number of parallel processes is set to 10. The maximum value you can specify is 100. If you specify a value greater than 100, the purge ignores the argument value and uses the maximum value of 100. For example:
./entityPurge.sh resume -t 15
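These arguments can be combined. For example, the following illustrative command resumes a suspended service purge at the given date and time, with a commit size of 500 and 20 parallel processes (all values are examples only):
./servicePurge.sh resume -s 06/26/2014:19:30:00 -c 500 -t 20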
Cancel
The cancel option terminates all purge processes with a status of INPROGRESS or SUSPENDED. It also provides the following information related to the purge process before requesting confirmation:
- Active purge operation.
- Number of entities purged.
- All the jobs related to the entity purge.
- The name of the report file generated while entities are purged.
After this information is provided, you must confirm the cancellation of in-progress or suspended operations. When the purge process is cancelled, the Purge_Audit STATUS value is updated to CANCELLED for records with an INPROGRESS or SUSPENDED status.
Preventing the Purging of Entities Referred as Entity Link Characteristics
The entities that are considered for purging might be referenced as entity link characteristics in other entities. The entity link characteristics purge applies to the following entities:
- Logical device
- Logical device account
- Party
- Place
When you run an entity purge, UIM displays the following warning messages, which inform you that the purge process will permanently delete the entities that are referenced as entity link characteristics in other entities:
Warning Message During Entity Purge Report
Warning!! Please backup data before executing the purge. All records matching specified criteria will be permanently deleted. Warning!! Specification provided is referenced as an Entity Link Characteristic. Purge will delete all the instance data which are referred on other entity instances.Refer System Administrator guide, section ‘Purging UIM Entities' to find out how to avoid the purge of this instance data.
Warning Message During Entity Purge Execution
Warning 1: Please backup data before executing the purge. All records matching specified criteria will be permanently deleted. Warning 2: <entity name, id, spec etc> provided are referenced as an Entity Link Characteristic. Purge will delete all the instance data which are referred on other entity instances. System Administrator guide, section ‘Purging UIM Entities' to find out how to avoid the purge of this instance data. Are you sure you want to continue with entity purge? (Y|N)
Answering Y to the warning prompt deletes all the entities, including the entities that are referenced as entity link characteristics. Answering N terminates the entity purge process.
Script to Prevent the Purging of Entity Link Characteristics
UIM provides the elcharScripts.sql script, which prevents the deletion of entities that are referenced as entity link characteristics.
The elcharScripts.sql script is available as part of the ora_uim_dbtools.jar file, located in the UIM_Home/util/ folder.
Ensure that you run the elcharScripts.sql script before running an entity purge to prevent the deletion of entities that are referenced as entity link characteristics.
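The following is a minimal sketch of extracting the script and running it with SQL*Plus before a purge. The connection details are placeholders, and you should confirm the location of elcharScripts.sql within the extracted ora_uim_dbtools.jar for your release:
cd UIM_Home/util
jar xf ora_uim_dbtools.jar
sqlplus uim_db_user/password@UIMDB @elcharScripts.sql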
Configuring Email Addresses and User Data
To support the message notification functionality, you maintain users and user groups along with their contact information. You manage this information through the embedded Lightweight Directory Access Protocol (LDAP) server within Oracle WebLogic Server or optionally through another LDAP-compliant product. For information about managing the embedded LDAP server within Oracle WebLogic Server, see the following web site:
https://docs.oracle.com/en/middleware/fusion-middleware/weblogic-server/12.2.1.4/secmg/ldap.html
Alternatively, you can use other products, such as Oracle Internet Directory from Oracle Identity Management, depending on the required scale of the installation. For more information about message notification functionality, see "Overview" in UIM Developer's Guide.
Configuring UIM to Evaluate System Configuration Compliance
UIM includes the UIM compliance tool, which captures a snapshot of your UIM configuration and evaluates it against established rules. These rules are based on best practices and guidelines; the compliance tool uses them to analyze the UIM configuration and generate an evaluation result that helps you configure your UIM environment optimally.
The UIM compliance tool captures snapshots of the following:
- WebLogic domain
- UIM configuration parameters
- Database configuration parameters
The compliance tool uses a set of compliance rules to determine if a configuration value is properly set or, if it allows a range of valid values, whether the configured value falls within that range. The tool also verifies that required or recommended patches have been applied.
For every compliance rule, reports include a description of the rule, an indication of whether the rule passed or failed, and the rationale for the compliance rule. For non-compliant results, a severity level and the reason for the failure are also included.
The UIM compliance tool is packaged with the UIM software, which you can download from the My Oracle Support website at:
Setting Up the UIM Compliance Tool
To set up the UIM compliance tool:
- Download the compliance-1.2.1.zip file.
- Create a local directory; for example, Compliance_Home.
- Extract the contents of the compliance-1.2.1.zip file into the Compliance_Home directory.
- Download the required third-party software for the compliance tool by doing the following:
  - Navigate to the Compliance_Home/config directory and update the proxy.settings file to include any proxy settings that are required to access the Internet.
  - Add the Ant binary directory to the PATH environment variable by doing one of the following:
    - On Windows, run the following command:
    set PATH=%ANT_HOME%\bin;%PATH%
    - On Linux, run the following command:
    export PATH=$ANT_HOME/bin:$PATH
  - Navigate to the Compliance_Home/bin directory and run the ant command, which downloads the required third-party software for the compliance tool.
- Generate a wlfullclient.jar file by doing the following:
  - Navigate to the WL_Home/server/lib directory.
  - Run the following command to generate the wlfullclient.jar file in the WL_Home/server/lib directory (where WL_Home is the directory in which the WebLogic Server is installed):
    java -jar wljarbuilder.jar
  - Copy the wlfullclient.jar file into the Compliance_Home/lib directory.
Running the UIM Compliance Tool
To run the UIM compliance tool:
- Navigate to the Compliance_Home/config directory.
- Create a new compliance.properties file.
- Copy the contents of the compliance-sample.properties file into the compliance.properties file.
- In the compliance.properties file, update the following properties with the UIM Administration Server details:
  weblogic.hostname=WL_HostName
  weblogic.port=WL_Port
  weblogic.username=WL_UserName
  where:
  - WL_HostName is the host name of the WebLogic Administration Server.
  - WL_Port is the port number of the WebLogic Administration Server.
  - WL_UserName is the user name used to log in to the WebLogic Administration Server.
- Navigate to the Compliance_Home/bin directory and do the following:
  On Windows, run the following script:
  compliance.bat
  On Linux, run the following script:
  compliance.sh
The compliance tool generates the evaluation results in the Compliance_Home/result directory.
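For reference, a minimal compliance.properties based on the procedure above might look like the following; the host name, port, and user name are placeholder values, and your installation may require additional properties (see the compliance tool documentation):
weblogic.hostname=uimadmin.example.com
weblogic.port=7001
weblogic.username=weblogic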
Note:
For more information about the compliance tool, see the compliance tool documentation, which becomes available at the following location after you install the compliance tool:
Compliance_Home/doc/index.html
Preventing a ZIP Bomb When Uploading Ruleset Files
In some scenarios, you may be required to upload ruleset files in a ZIP file. You use the properties in the UIM_Home/config/importExport.properties file to prevent a ZIP bomb when uploading ruleset files in a ZIP file.
Table 4-12 lists and describes the properties in the importExport.properties file.
Table 4-12 Properties in the importExport.properties File
Property | Description |
---|---|
import.fileUploadWhiteListMimeTypes | This property validates the MIME type of the ZIP file that you are uploading. For example: import.fileUploadWhiteListMimeTypes=text/plain, text/csv,application/zip, application/x-zip-compressed |
import.fileUploadMaxSizeAfterUnzip | This property controls the maximum size of the ZIP file that you can upload (in bytes). The default value is 100 MB. For example: import.fileUploadMaxSizeAfterUnzip=104857600 |
import.fileUploadNestedFileLimit | This property controls the number of nested levels allowed within a ZIP file. The default value is 1. A value of 1 indicates that a ZIP file cannot contain another ZIP file. A value of 2 indicates that a ZIP file can contain only one ZIP file within it. For example: import.fileUploadNestedFileLimit=1 |
import.fileUploadZipsLimitPerLevel | This property controls the number of ZIP files allowed at each level within the ZIP file. The default value is 0. A value of 0 indicates that every nested level within the ZIP file can contain only one ZIP file. For example: import.fileUploadZipsLimitPerLevel=0 |
outage.fileTempLocation | This property provides the location for the outage file. |
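Taken together, an importExport.properties configuration that uses the values described in this table might look like the following; the MIME type list is only an example:
import.fileUploadWhiteListMimeTypes=text/plain,text/csv,application/zip,application/x-zip-compressed
import.fileUploadMaxSizeAfterUnzip=104857600
import.fileUploadNestedFileLimit=1
import.fileUploadZipsLimitPerLevel=0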
Importing Inventory Entities in Bulk
Entity bulk operations enable you to perform operations on multiple entities at a time, based on the data that you provide as input in a spreadsheet file. You use a different tab in the spreadsheet for each entity bulk operation. You provide the required entity information in the spreadsheet and upload it into UIM; UIM then processes the spreadsheet file and completes the bulk operation.
You can create the following entities in bulk by specifying the required information in the spreadsheet file:
- Property locations
- Network entity codes
- Logical devices
- Physical devices
- Equipment
- Connectivities
- Networks
- Places
- Pipes
- Complete device assemblies, which include logical device, physical device, and equipment at the same time
You can perform the following entity-specific actions in bulk by specifying the required information in the spreadsheet file:
- Map physical devices to equipment and map physical devices to logical devices at the same time
- Add shelves in racks
- Add cards in shelves
- Map physical devices to logical devices
- Map device interfaces to ports
- Add connectivities to networks
- Add network nodes
- Add network edges
For entity bulk operations, UIM provides a sample spreadsheet file that contains the following worksheet tabs:
- Locations: Use this tab to create property locations. See Table 4-13 for more information.
- NetworkEntityCodes: Use this tab to create network entity codes. See Table 4-14 for more information.
- LogicalDevices: Use this tab to create logical devices. See Table 4-15 for more information.
- PhysicalDevices: Use this tab to create physical devices. See Table 4-16 for more information.
- Equipments: Use this tab to create equipment. See Table 4-17 for more information.
- DeviceMappings: Use this tab to map a physical device to equipment, and map a physical device to a logical device. See Table 4-18 for more information.
- Devices: Use this tab to build the entire device assembly at one time, which includes physical devices, logical devices, and equipment. The name, network location, and NEC are shared between the physical device, logical device, and equipment. See Table 4-19 for more information.
- InsertShelfs: Use this tab to add shelves in a rack. See Table 4-20 for more information.
- InsertCards: Use this tab to add cards in a shelf. See Table 4-21 for more information.
- PortMappings: Use this tab to create ports and interfaces, and to map the ports to the interfaces. See Table 4-22 for more information.
- Connectivities: Use this tab to create connectivities. See Table 4-23 for more information.
- Networks: Use this tab to create networks. See Table 4-24 for more information.
- AddConnectivityEdges: Use this tab to create connectivity edges. See Table 4-25 for more information.
- NetworkNodes: Use this tab to create network nodes. See Table 4-26 for more information.
- NetworkEdges: Use this tab to create network edges. See Table 4-27 for more information.
- AssociatePlace: Use this tab to associate places. See Table 4-28 for more information.
- Pipes: Use this tab to create pipes. See Table 4-29 for more information.
- Places: Use this tab to create places. See Table 4-30 for more information.
- ConnectivityPipeEnablement: Use this tab to enable pipes and connectivities. See Table 4-31 for more information.
Note:
You have the option of including only the mandatory columns within various tabs in the spreadsheet. If a column is not required and you do not want to enter a value, then you can remove that column from the spreadsheet. This allows you to build your own custom spreadsheet templates based on your business requirements.
Table 4-13 describes the columns defined for the Locations worksheet tab.
Table 4-13 Locations Worksheet Tab Column Headers
Column Header | Description |
---|---|
Action | This column must have the value of CREATE. |
PropertyName | Name of the property location. |
Street | The street address of the property location. |
City | The city of the property location. |
State | The state of the property location. |
PostalCode | The postal code of the property location. |
Country | The country of the property location. |
NetworkLocationCode | An alphanumeric string that uniquely identifies a property location in a network. |
GeoCodeAddress | Valid values: true or false. |
IsServiceLocation | Valid values: true or false. |
IsNetworkLocation | Valid values: true or false. |
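For illustration, a row in the Locations tab that uses these columns might contain values such as the following (shown here as comma-separated values; all data values are examples only):
Action,PropertyName,Street,City,State,PostalCode,Country,NetworkLocationCode,GeoCodeAddress,IsServiceLocation,IsNetworkLocation
CREATE,Example Property,123 Main Street,Plano,TX,75024,US,PLNOTX01,true,true,true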
Table 4-14 describes the columns defined for the NetworkEntityCodes worksheet tab.
Table 4-14 NetworkEntityCodes Worksheet Tab Column Headers
Column Header | Description |
---|---|
Action | This column must have the value of CREATE. |
NetworkLocationCode | An alphanumeric string that uniquely identifies a property location in a network. |
NetworkEntityCode | A string that uniquely identifies a network entity (such as a logical device) within a network location. |
Table 4-15 describes the columns defined for the LogicalDevices worksheet tab.
Table 4-15 LogicalDevices Worksheet Tab Column Headers
Column Header | Description |
---|---|
Action |
This column must have the value of CREATE. |
Name |
The name of the logical device. |
Specification |
The specification used to create the logical device. |
NetworkLocation |
The network location associated to the Logical Device entity. |
NetworkEntityCode |
A string that uniquely identifies a logical device within a network location. |
DeviceIdentifier |
Displays the device identifier of the logical device associated with the network entity code. |
LDcharacteristic |
The characteristic name/value pair per cell for the Logical Device entities you are creating. You can specify any number of characteristics. You can add any number of column headers for specifying multiple characteristics for an entity. For example, LDcharacteristic1, LDcharacteristic2,.....LDcharacteristicN. Valid format: LD_Char1=value, LD_Char2=value,....LD_CharN=value. |
Table 4-16 describes the columns defined for the PhysicalDevices worksheet tab.
Table 4-16 PhysicalDevices Worksheet Tab Column Headers
Column Header | Description |
---|---|
Action |
This column must have the value of CREATE. |
Name |
The name of the physical device. |
Specification |
The specification used to create the physical device. |
NetworkLocation |
The network location associated to the Physical Device entity. |
SerialNumber |
The serial number of the physical device. |
PDcharacteristic |
The characteristic name/value pair per cell for the Physical Device entities you are creating. You can specify any number of characteristics. You can add any number of column headers for specifying multiple characteristics for an entity. For example, PDcharacteristic1, PDcharacteristic2,.....PDcharacteristicN. Valid format: PD_Char1=value, PD_Char2=value,....PD_CharN=value. |
Table 4-17 describes the columns defined for the Equipments worksheet tab.
Table 4-17 Equipments Worksheet Tab Column Headers
Column Header | Description |
---|---|
Action |
This column must have the value of CREATE. |
Name |
The name of the equipment. |
Specification |
The specification used to create equipment. |
NetworkLocation |
The network location associated to the Equipment entity. |
SerialNumber |
The serial number of the equipment. |
EQcharacteristic |
The characteristic name/value pair per cell for the Equipment entities you are creating. You can specify any number of characteristics. You can add any number of column headers for specifying multiple characteristics for an entity. For example, EQcharacteristic1, EQcharacteristic2,......EQcharacteristicN. Valid format: EQ_Char1=value, EQ_Char2=value,....EQ_CharN=value. |
Table 4-18 describes the columns defined for the DeviceMappings worksheet tab.
Table 4-18 DeviceMappings Worksheet Tab Column Headers
Column Header | Description |
---|---|
PhysicalDevice | The name of the physical device. |
LogicalDevice | The name of the logical device. |
Equipment | The name of the equipment. |
Table 4-19 describes the columns defined for the Devices worksheet tab.
Table 4-19 Devices Worksheet Tab Column Headers
Column Header | Description |
---|---|
Action |
This column must have the value of CREATE. |
Name |
The generic name for logical device, physical device, and equipment. |
DeviceIdentifier |
Specify the device identifier in the format: NetworkLocation.NetworkEntityCode. |
LogicalDeviceSpecification |
The specification used to create the logical devices. |
PhysicalDeviceSpecification |
The specification used to create the physical devices. |
EquipmentSpecification |
The specification use to create the equipment. |
NetworkLocation |
A property location that has been assigned a network location code. |
NetworkEntityCode |
A string that uniquely identifies a network entity (such as a logical device) within a network location. |
SerialNumber |
The serial number for the physical device, logical device, and equipment. |
MACAddress |
The MAC address that uniquely identifies a device. |
PDcharacteristic |
The characteristic name/value pair per cell for the Physical Device entities you are creating. You can specify any number of characteristics. You can add any number of column headers for specifying multiple characteristics for an entity. For example, PDcharacteristic1, PDcharacteristic2,......PDcharacteristicN. |
LDcharacteristic |
The characteristic name/value pair per cell for the Logical Device entities you are creating. You can specify any number of characteristics. You can add any number of column headers for specifying multiple characteristics for an entity. For example, LDcharacteristic1, LDcharacteristic2,......LDcharacteristicN. |
EQcharacteristic |
The characteristic name/value pair per cell for the Equipment entities you are creating. You can specify any number of characteristics. You can add any number of column headers for specifying multiple characteristics for an entity. For example, EQcharacteristic1, EQcharacteristic2,......EQcharacteristicN. |
Table 4-20 describes the columns defined for the InsertShelfs worksheet tab.
Table 4-20 InsertShelfs Worksheet Tab Column Headers
Column Header | Description |
---|---|
Action |
This column must have the value of CREATE. |
RackName |
The name of the rack in which you want to create a shelf. |
ShelfSpecification |
The specification used to create the shelf. |
ShelfName |
The name of the shelf. |
SerialNumber |
The serial number of the Equipment entity (that represents the shelf). |
ShelfCharacteristic |
The characteristic name/value pair per cell for the Equipment entities (that represent shelves) you are creating. You can specify any number of characteristics. You can add any number of column headers for specifying multiple characteristics for an entity. For example, ShelfCharacteristic1, ShelfCharacteristic2,......ShelfCharacteristicN. Valid format: Shelf_Char1=value, Shelf_Char2=value,....Shelf_CharN=value. |
Table 4-21 describes the columns defined for the InsertCards worksheet tab.
Table 4-21 InsertCards Worksheet Tab Column Headers
Column Header | Description |
---|---|
Action |
This column must have the value of CREATE. |
ShelfName |
The name of the shelf in which you want to create a card. |
CardSpecification |
The specification used to create the card. |
Slot |
The slot number on the shelf to install the card, based on the equipment holder position number. |
Name |
The name of the card. |
Abbreviation |
The abbreviation for the card (for example, PWR) used to derive the names of the ports. |
CardCharacteristic |
The characteristic name/value pair per cell for the Equipment entities (that represent cards) you are creating. You can specify any number of characteristics. You can add any number of column headers for specifying multiple characteristics for an entity. For example, CardCharacteristic1, CardCharacteristic2,......CardCharacteristicN. Valid format: Card_Char1=value, Card_Char2=value,....Card_CharN=value. |
Table 4-22 describes the columns defined for the PortMappings worksheet tab.
Table 4-22 PortMappings Worksheet Tab Column Headers
Column Header | Description |
---|---|
Action |
This column must have the value of CREATE. |
Type |
Valid values:
|
Name |
Name of the physical device or equipment. |
PortSpecification |
The specification used to create the port. |
PortName |
The name of the port. |
InterfaceSpecification |
The specification used to create the device interface. |
InterfaceName |
The name of the device interface. |
LogicalDeviceName |
The logical device to which the device interface is associated. |
Table 4-23 describes the columns defined for the Connectivities worksheet tab.
Note:
The valid values for Rate Code, Technology, Function, and so on, are determined based on the Connectivity Summary or Connectivity Search pages in the UIM application.
Table 4-23 Connectivities Worksheet Tab Column Headers
Column Header | Description |
---|---|
Action |
This column must have the value of CREATE. |
Technology |
The technology that applies to this connectivity. |
Specification |
The specification used to create the connectivity. |
Format |
The identification format for this connectivity. Specify only the identification formats that are valid for the connectivity specification. Valid values:
|
ALocation |
The network location code or network entity code for the A side of the connectivity. |
ZLocation |
The network location code or network entity code for the Z side of the connectivity. |
RateCode |
The rate code that applies to the connectivity. |
OverSubscription |
The oversubscription value for the connectivity. |
Function |
The function that applies to the connectivity. |
Identifier |
You can leave this column blank. |
Concharacteristic |
The characteristic name/value pair per cell for the Connectivity entities you are creating. You can specify any number of characteristics. You can add any number of column headers for specifying multiple characteristics for an entity. For example, Concharacteristic1, Concharacteristic2,.....ConcharacteristicN. Valid format: Con_Char1=value, Con_Char2=value,....Con_CharN=value. |
AutoTermination |
Specify a value of true or false. If you specify true, UIM searches for the logical device with network entity code specified in the ALocation and ZLocation columns and terminates with the device interface if an interface with the same rate code exists. |
Adevice |
Name of the logical device for the A side of the connectivity. |
Zdevice |
Name of the logical device for the Z side of the connectivity. |
GapMessage |
Specify a message for accepted connectivity gaps. |
Table 4-24 describes the columns defined for the Networks worksheet tab.
Table 4-24 Networks Worksheet Tab Column Headers
Column Header | Description |
---|---|
Action | This column must have the value of CREATE. |
NetworkName | The name of the network you want to create. |
NetworkSpecification | The specification used to create the network. |
Topology | The network topology. |
NWCharacteristic | The characteristic name/value pair per cell for the Network entities you are creating. You can specify any number of characteristics. You can add any number of column headers for specifying multiple characteristics for an entity. For example, NWCharacteristic1, NWCharacteristic2,......NWCharacteristicN. Valid format: NW_Char1=value, NW_Char2=value,....NW_CharN=value. |
Table 4-25 describes the columns defined for the AddConnectivityEdges worksheet tab.
Table 4-25 AddConnectivityEdges Worksheet Column Headers
Column Header | Description |
---|---|
Action | This column must have the value of CREATE. |
NetworkName | The name of the network in which you want to create the connectivity. |
EntityType | Valid value: CONNECTIVITY |
EntityName | The name of the connectivity. |
Table 4-26 describes the columns defined for the NetworkNodes worksheet tab.
Table 4-26 NetworkNodes Worksheet Tab Column Headers
Column Header | Description |
---|---|
Action | This column must have the value of CREATE. |
NetworkName | The name of the network in which you want to create the network node. |
EntityType | Valid value: LOGICAL_DEVICE |
EntityName | The name of the logical device. |
Table 4-27 describes the columns defined for the NetworkEdges worksheet tab.
Table 4-27 NetworkEdges Worksheet Tab Column Headers
Column Header | Description |
---|---|
Action | This column must have the value of CREATE. |
NetworkName | The name of the network in which you want to create the network edge. |
EntityType | Valid value: CONNECTIVITY |
EntityName | The name of the connectivity. |
FromNode | The originating logical device name for the connectivity that the network edge represents. |
ToNode | The terminating logical device name for the connectivity that the network edge represents. |
Table 4-28 describes the columns defined for the AssociatePlace worksheet tab.
Table 4-28 Column Headers in the AssociatePlace Worksheet
Column Header | Description |
---|---|
Action |
This column must have the value of CREATE. |
PlaceType |
Valid values:
|
PlaceName |
A generic name for the place. |
EntityType |
The type of entity that you want to associate with the place. Valid values:
|
EntityName |
A generic name of the entity that you want to associate with the place. |
Table 4-29 describes the columns defined for the Pipes worksheet tab.
Table 4-29 Column Headers in the Pipes Worksheet
Column Header | Description |
---|---|
Action |
This column must have the value of CREATE. |
Specification |
The corresponding specification used to create the pipe. |
Name |
A generic name for the pipe. |
Medium |
The connection medium. Note: The default value is Fiber for CWDM and DWDM pipes. |
TransmissionSignalType |
The type of signal to use. For example, specify Optical to use optical transmission signal type. Note: The default value is Optical for CWDM and DWDM pipes. |
ParentPipe |
The name of the parent pipe. Note: This is required for creating child pipes only. |
StartingWavelength |
The starting wavelength for the CWDM pipe. |
StartingFrequency |
The starting frequency for the DWDM pipe. |
NumOfChannels |
Number of channels (child pipes) to be created. |
AutoTermination |
The option to terminate a pipe automatically, after it is created. Valid values:
Note: Auto-termination of a parent pipe does not terminate the associated child pipes. |
AEntityType |
The entity type for the A side of the pipe. Valid values:
Note: Ignore this if AutoTermination is FALSE |
ZEntityType |
The entity type for the Z side of the pipe. Valid values:
Note: Ignore this if AutoTermination is FALSE |
AEntityId |
The entity ID for the A side of the pipe. Note: Ignore this if AutoTermination is FALSE |
ZEntityId |
The entity ID for the Z side of the pipe. Note: Ignore this if AutoTermination is FALSE |
AEntityName |
The entity name for the A side of the pipe. Note: Ignore this if AutoTermination is FALSE |
ZEntityName |
The entity name for the Z side of the pipe. Note: Ignore this if AutoTermination is FALSE |
Pipecharacteristic |
The characteristic name/value pair for the pipe. You can specify any number of characteristics. You can add any number of columns for specifying multiple characteristics for a pipe with the column name Pipecharacteristic. |
Table 4-30 describes the columns defined for the Places worksheet tab.
Table 4-30 Column Header in the Places Worksheet
Column Header | Description |
---|---|
Action |
This column must have the value of CREATE. |
PlaceType |
Valid values:
|
Specification |
The specification to be used for creating places. |
Name |
A generic name for the place. |
Latitude |
The corresponding latitude of the place, which is between -90.0000 and 90.0000 decimal degrees. |
Longitude |
The corresponding longitude of the place, which is between -180.0000 and 180.0000 decimal degrees. |
Vertical |
The North American V & H system vertical coordinate with a positive or negative numeric value. |
Horizontal |
The North American V & H system horizontal coordinate with a positive or negative numeric value. |
PlaceCharacteristic |
The characteristic name/value pair for the place. You can specify any number of characteristics. You can add any number of column headers for specifying multiple characteristics for a place with the column name PlaceCharacteristic. |
GridType |
The type of the grid. Valid values are:
|
FlexGridChannelSize |
The Flex Grid channel size. |
Table 4-31 describes the columns defined for the ConnectivityPipeEnablement worksheet tab.
Table 4-31 Column Headers in the ConnectivityPipeEnablement Worksheet
Column Header | Description |
---|---|
Action |
This column must have the value of CREATE. |
TrailType |
The type of the trail you want to associate. Valid values:
|
TrailName |
The name or identifier of the pipe or connectivity that you want it to be enabled by the pipe or connectivity mentioned in the TDMFacility column. |
TDMFacility |
The name or identifier of the pipe or connectivity with which you want to enable the pipe or connectivity mentioned in the TrailName column. You should provide the values in this column as follows:
|