16 Key Use Cases of OARM for Enhanced Security and Risk Mitigation
OARM offers a streamlined and robust interface for Administrators to proactively determine the risk of an access request and to configure the appropriate outcomes to prevent fraud or misuse.
You can use the OARM out-of-the-box User Authentication activity and the rules referenced in Out-of-the-Box User Authentication Rules Supported to perform a wide range of tasks. This section describes the following example use cases:
See Configuring a Custom Activity Use Case in Oracle Adaptive Risk Management tutorial for detailed instructions on configuring a custom activity in OARM.
Details on how OARM processes custom user activities and provides values for API operations from the client application can be found in Understanding the Sequence of User Activity Runtime API Calls.
16.1 Configuring a Risky IP Use Case
IP address is one of the most significant data points that Administrators analyze to act promptly and prevent fraudulent user activity.
To configure this use case, perform the following steps:
Note:
To learn how to configure factors, see Managing Factors in the Self-Service Portal.
Note:
To learn how to create a rule effectively, see Add New Rule.
16.2 Configuring a Geo-Velocity Based Use Case
OARM allows you to configure geo-velocity as a rule, which adds a layer of security and consequently a higher level of protection for an organization.
Geo-velocity is usually expressed as a maximum speed in miles per hour. It determines how fast a user could plausibly travel from one place to another and still sign in successfully within a specific time duration.
A prerequisite for implementing the geo-velocity use case is to have geo-location data available. The geo-location feature allows you to identify the physical location of the user, usually determined from the IP address of the device used to attempt a login. This data is then used to calculate the distance between two consecutive login attempts.
It is possible for a user to log in to an application from a device, take a flight to another country, and log in again to the same application using the same device. However, if the calculated velocity is greater than the configured velocity, an appropriate action and an alert are triggered. Consider a scenario where a user logs in from India at 9 am (IST), and then two hours later tries to log in from Australia at 11 am (IST). Even with the fastest mode of transportation, the user cannot travel this distance in two hours. This is a clear indication that two different people are trying to log in, which signals fraudulent user activity and requires an appropriate action.
The Administrator can use the Challenge based on Device Maximum Velocity out-of-the-box rule, in conjunction with the geo-location data, to detect this type of fraudulent user activity, trigger an alert, and challenge the user before they can sign in. The Administrator can monitor and view these alerts, actions, rules, and other user-related information through the Monitor User Sessions dashboard.
How the Rule Works
The Device Maximum Velocity rule has two values that the Administrator can configure to calculate the geo-velocity before the rule is triggered. Those value fields are called Last login within (Seconds) and Miles Per Hour is more than. Using these two field values you can customize the geo-velocity that a physical device can travel before an alert is triggered.
When setting the Device Maximum Velocity rule, bear in mind that you cannot change one of these values without considering whether the other needs to be updated as well. In other words, you cannot set only the Last login within (Seconds) value without properly adjusting the Miles Per Hour is more than value. The two values work in conjunction to calculate the device velocity; the relationship between them is an AND.
Let us see how the rule works.
- The rule first checks whether the last successful login occurred within the configured Last login within (Seconds) window.
- The rule then obtains the last login city and the current login city to calculate the distance between them.
- The calculated distance between the two cities divided by the time difference in the login times is used to calculate the velocity.
- If the calculated velocity is greater than the configured velocity, the rule triggers.
Note:
Assumptions to implement this rule are as follows:
- The geo-location data must have been loaded in the OARM server. See Loading Geo-Location Data.
- The user must log in from the same device.
- The user's previous login (N seconds ago) was successful.
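The check the rule performs can be sketched in Python. This is a minimal illustration only, assuming a haversine great-circle distance over city coordinates; the function names, data shapes, and thresholds are hypothetical, not OARM internals.

```python
# Illustrative sketch of the Device Maximum Velocity check; names are hypothetical.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3959.0

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two (lat, lon) points."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

def rule_triggers(last_login, current_login, last_login_within_seconds, max_mph):
    """Both conditions (recency AND velocity) must hold for the rule to trigger."""
    elapsed = current_login["time"] - last_login["time"]  # seconds between logins
    if elapsed <= 0 or elapsed > last_login_within_seconds:
        return False  # last successful login is outside the configured window
    miles = haversine_miles(last_login["lat"], last_login["lon"],
                            current_login["lat"], current_login["lon"])
    mph = miles / (elapsed / 3600.0)
    return mph > max_mph

# Example: login from Bengaluru, then from Sydney two hours later.
bengaluru = {"lat": 12.97, "lon": 77.59, "time": 0}
sydney = {"lat": -33.87, "lon": 151.21, "time": 2 * 3600}
print(rule_triggers(bengaluru, sydney, last_login_within_seconds=72 * 3600, max_mph=600))
```

Because both field values participate in the decision, widening the time window without raising the speed threshold (or vice versa) changes which logins trigger the rule, which is why the two settings must be adjusted together.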
Note:
The steps in this use case are also shown in the tutorial Configuring a Geo-Velocity Based Use Case in Oracle Adaptive Risk Management.
Note:
To learn how to configure factors, see Managing Factors in the Self-Service Portal.
16.3 Loading Geo-Location Data
OARM leverages geo-location data for detecting fraudulent user activity and reporting.
OARM supports IP geo-location data from the following providers:
- Neustar Version 7
- Neustar (formerly Quova)
- Maxmind
The OAA management container is typically used to load IP geo-location data into the OARM database schema. However, you can also load geo-location data from outside the OAA management container.
Prerequisite: Ensure that OARM is installed and running before you perform the steps to load geo-location data.
Note:
- Loading geo-location data can be time consuming, but it happens in the background while the service continues to function.
- If you have enabled archival logs, make sure you back them up periodically (at least every half hour) and purge the backed-up logs.
Loading Geo-location Data from Within the OAA Management Container
Perform the following steps:
- Download the geo-location data to a working directory of your choice, for instance, $WORKDIR/geoData.
- Copy the geo-location data to the NFS volume <NFS_VAULT_PATH>, so that it can be accessed by the OAA management container:
$ cd <NFS_VAULT_PATH>
$ mkdir -p geoData
$ sudo cp $WORKDIR/geoData/*.* <NFS_VAULT_PATH>/geoData
Note:
You can copy the data files to any location inside <NFS_VAULT_PATH>. It is not mandatory to place them under the geoData folder.
- Set the file permissions as follows:
$ cd <NFS_VAULT_PATH>/geoData
$ chmod 444 *.*
- Enter a bash shell for the OAA management pod if not already inside one:
kubectl exec -n <namespace> -ti <oaamgmt-pod> -- /bin/bash
- Ensure that the geo-location files are visible inside the management container:
ls -l /u01/oracle/service/store/oaa/geoData
For example, for Neustar version 7 the files will look similar to the following:
--r--r--r-- 1 oracle staff 3673477337 Jan 26 15:22 oracletest_cgp_v1133.csv.gz
- Navigate to the /u01/oracle/oaa_cli/bharosa_properties directory inside the container:
cd /u01/oracle/oaa_cli/bharosa_properties
- Edit the bharosa_location.properties file to reflect the location data provider and the location of the data files.
For Neustar, the IP location loader properties are defined here:
### IP location loader specific properties go here
### Specify the data provider: neustarV7 or maxmind or quova (for quova legacy format)
location.data.provider=neustarV7
### Specify the data file, for neustarV7 or maxmind or quova (for quova legacy format)
location.data.file=/u01/oracle/service/store/oaa/geoData/test_cgp_v1114.csv.gz
### Specify the reference file for quova (for data provided by quova/neustar in legacy format). For neustarV7, this property can be commented out (optional).
location.data.ref.file=/u01/oracle/service/store/oaa/geoData/test_08132006.ref.gz
### Specify the anonymizer data file for quova (for data provided by quova/neustar in legacy format). For neustarV7, this property can be commented out (optional).
location.data.anonymizer.file=/u01/oracle/service/store/oaa/geoData/test_anonymizer.dat.gz
For Maxmind, IP location loader related properties are defined here:
### Specify the data provider: maxmind or quova (for data provided by neustar)
location.data.provider=maxmind
### Specify the location data file, for maxmind
location.data.location.file=/u01/oracle/service/store/oaa/geoData/GeoIP2-Enterprise-Locations-en.CSV
### Specify the blocks data file, for maxmind
location.data.blocks.file=/u01/oracle/service/store/oaa/geoData/GeoIP2-Enterprise-Blocks-IPv4.CSV
### Specify the country code data file, for maxmind
location.data.country.code.file=/u01/oracle/service/store/oaa/geoData/ISO_3166_CountryCode.csv
### Specify the sub country code data file, for maxmind
location.data.sub.country.code.file=/u01/oracle/service/store/oaa/geoData/FIPS_10_4_SubCountryCode.csv
Note:
Regardless of whether you are using Neustar or Maxmind, leave all the values uncommented. For example, if you are using Neustar and set the Neustar properties accordingly, you must still leave the Maxmind properties uncommented even though they are not being used.
Oracle recommends using the default values for the remaining parameters:
### Specify the number of database threads
location.loader.database.pool.size=16
### Specify the maximum number of location records to batch before issuing a database commit
location.loader.database.commit.batch.size=100
### Specify the maximum time to hold an uncommitted batch
location.loader.database.commit.batch.seconds=30
### Specify the maximum number of location records to be kept in queue for database threads
location.loader.dbqueue.maxsize=5000
### Specify the maximum number of location records to be kept in cache
location.loader.cache.location.maxcount=5000
### Specify the maximum number of location split records to be kept in cache
location.loader.cache.split.maxcount=5000
### Specify the maximum number of anonymizer records to be kept in cache
location.loader.cache.anonymizer.maxcount=5000
### Specify the maximum number of ISP records to be kept in cache
location.loader.cache.isp.maxcount=5000
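Before running the loader, it can help to confirm that every data file referenced in the properties actually exists. The following is a small illustrative pre-flight check, not part of OARM; the helper name is hypothetical, and only absolute `location.data.*` file paths are checked.

```python
# Illustrative pre-flight check (not part of OARM): report every file path
# referenced by a location.data.* property that does not exist on disk.
import os

def missing_data_files(props_text):
    missing = []
    for line in props_text.splitlines():
        line = line.strip()
        if line.startswith("#") or "=" not in line:
            continue  # skip ### comments and non-property lines
        key, _, value = line.partition("=")
        key, value = key.strip(), value.strip()
        # Only check properties that point at absolute file paths.
        if key.startswith("location.data") and value.startswith("/"):
            if not os.path.isfile(value):
                missing.append((key, value))
    return missing

sample = """
location.data.provider=neustarV7
location.data.file=/u01/oracle/service/store/oaa/geoData/test_cgp_v1114.csv.gz
"""
print(missing_data_files(sample))
```

Running such a check inside the management container (where the geoData files are mounted) catches typos in the property values before a multi-hour load is started.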
- Load the data by running the loadIPLocationData.sh script:
cd /u01/oracle/oaa_cli
./loadIPLocationData.sh
Note:
When running this script, the properties file can be passed as an optional parameter as follows:
cd /u01/oracle/oaa_cli
./loadIPLocationData.sh -f ./../scripts/settings/installOAA.properties
The file parameter (-f) is optional. If this value is not provided, the property file is read from the default location /u01/oracle/scripts/settings/installOAA.properties. You can override this default by setting the environment variable INSTALL_PROP_FILE in setCliEnv.sh. The file must contain the following information:
database.host=<Database Host Name>
database.port=<Database Port Number>
database.schema=<Database User Name>
database.schemapassword=<Database Schema Password> (this property is optional)
database.svc=<Database Service Name>
database.name=<Database Name>
Keep in mind that one of the two options, database.svc or database.name, must be present. If both are present in the file, the database.svc value takes precedence.
Note:
Enter the password to the schema if prompted.
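The precedence between database.svc and database.name can be illustrated with a short sketch. The parsing helper and function names below are hypothetical, for illustration only; the precedence logic follows the rule stated above.

```python
# Illustrative sketch: how a loader might resolve its database target from
# key=value properties. database.svc wins when both options are present.

def parse_properties(text):
    """Parse simple key=value lines, ignoring blanks and # comments."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

def resolve_db_target(props):
    """Return a (kind, value) pair; database.svc takes precedence over database.name."""
    if "database.svc" in props:
        return ("service", props["database.svc"])
    if "database.name" in props:
        return ("name", props["database.name"])
    raise ValueError("one of database.svc or database.name must be present")

sample = """
database.host=dbhost.example.com
database.port=1521
database.schema=OAA_SCHEMA
database.svc=oaapdb.example.com
database.name=OAADB
"""
print(resolve_db_target(parse_properties(sample)))  # database.svc takes precedence
```

Here both keys are present, so the service name is used and database.name is ignored, matching the documented precedence.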
The data can take several hours to load.
Loading Geo-location Data from Outside the OAA Management Container
Perform the following steps:
- Recursively copy all files and folders under the OAA management container's <BHCLI_HOME> folder to the new location on the target compute node from where you intend to run the geo-location loading script.
Note:
The environment variable <BHCLI_HOME> indicates the loader home folder. The value is /u01/oracle/oaa_cli by default.
- Check that the data file to be loaded is located at the path specified in <BHCLI_HOME>/bharosa_properties/bharosa_location.properties. Update the properties file as necessary.
- Set the following environment variable to the desired location to create a log file.
LOGS_DIR=/public/geoDatLogs/logs
- Install Java on the external compute node and set the JAVA_HOME environment variable to match the management container's Java version.
Note:
You must ensure that the installOAA.properties file from the management container is either visible or copied over to the external compute node.
- Load the data by running the loadIPLocationData.sh script, providing parameters as needed.
16.4 Understanding the Sequence of User Activity Runtime API Calls
This section demonstrates how OARM processes user activities and provides values for API operations from the client application.
The following is a general sequence of API calls.
- Create an OARM session using createSession (session-POST API). This creates a requestId, which is required for the Create Custom User Activity API.
- The client application then provides information about the custom user activity by invoking the Create Custom User Activity API (which uses the requestId created in Step 1).
- Verify that the status of the Create Custom User Activity call is successful before obtaining the transaction ID from the response.
- The client can then call the processRules API to trigger the fraud policies and rules associated with the Transaction checkpoint. This invokes the rules engine, which executes the policies and rules associated with this checkpoint and creates alerts if the associated rules trigger. The output of this API is a set of actions and a risk score as returned by the policies and rules.
- Based on the outcome of the processRules API call, the client application can choose to call the Update Custom User Activity API to set the transaction status or to update data in the existing transaction.
Note:
Ensure that the custom user activity status is updated, because some rules may use the status of the previous transaction (user activity) as a data point.
- In some cases, client applications can choose to run processRules with a Pre Transaction checkpoint first, and then with a Post Transaction checkpoint whose policies and rules must be executed after the transaction is created. This helps the application determine whether the transaction is safe to execute, and then run any additional rules after execution.
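The general sequence above can be sketched as follows. This is a minimal illustration with a stand-in client object; the endpoint paths, payload fields, and canned responses are simplified assumptions for illustration, not the exact OARM REST contract.

```python
# Illustrative sketch of the runtime call sequence. FakeClient stands in for
# real HTTP calls to the OARM endpoints; paths and field names are hypothetical.

def run_user_activity(client, user_id, activity_data):
    # 1. Create an OARM session; the response carries the requestId.
    session = client.post("/session", {"userId": user_id})
    request_id = session["requestId"]

    # 2. Create the custom user activity using the requestId from step 1.
    activity = client.post("/userActivity", {"requestId": request_id, **activity_data})
    if activity["status"] != "success":   # 3. Check status before reading the ID.
        raise RuntimeError("user activity creation failed")
    txn_id = activity["transactionId"]

    # 4. Run the rules engine for the Transaction checkpoint; the outcome
    #    carries the actions and risk score returned by the policies/rules.
    outcome = client.post("/processRules", {"requestId": request_id,
                                            "transactionId": txn_id,
                                            "checkpoint": "Transaction"})

    # 5. Update the activity status so later rules can use it as a data point.
    client.post("/userActivity/update", {"transactionId": txn_id,
                                         "status": outcome["action"]})
    return outcome

class FakeClient:
    """Stand-in for an HTTP client; returns canned OARM-style responses."""
    def post(self, path, payload):
        responses = {
            "/session": {"requestId": "req-123"},
            "/userActivity": {"status": "success", "transactionId": "txn-456"},
            "/processRules": {"action": "ALLOW", "riskScore": 12},
            "/userActivity/update": {"status": "success"},
        }
        return responses[path]

print(run_user_activity(FakeClient(), "jdoe", {"amount": 100}))
```

Injecting the client keeps the sequencing logic testable without a live OARM deployment; in a real integration the same function could be given a thin wrapper over an HTTP library instead of FakeClient.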