Activate and Run the Recipe

After you've configured the connections and other resources, you can activate and run the recipe.

  1. Activate the recipe. See Activate a Recipe or Accelerator.
  2. Run the recipe.
    Ensure that a sample input file with employee bank details in the correct format is uploaded to the input folder (in this case, HCM_To_Payroll_Input).
    1. Run the SR_BulkDownload_RequestPersister_ATP integration flow.
      1. In the Integrations section of the project workspace, click the Actions icon on the integration flow, then select Run.
      2. On the Configure and run page, click Run.

      You've now successfully submitted the integration for execution. The integration writes the input file to a local input folder (/Input) and persists the file's metadata to the parking lot table (PAYLOAD_PARKING_LOT_TAB) in Oracle ATP.

      Note:

      You can also schedule this integration to run at a date, time, and frequency of your choosing. See Define the Integration Schedule.
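The persistence step above follows the parking-lot integration pattern. The sketch below illustrates it with a local SQLite table standing in for Oracle ATP; only the table name PAYLOAD_PARKING_LOT_TAB comes from the recipe, and the column names are illustrative assumptions:

```python
import sqlite3

# SQLite stands in for Oracle ATP; column names are illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE PAYLOAD_PARKING_LOT_TAB (
        ID        INTEGER PRIMARY KEY AUTOINCREMENT,
        FILE_NAME TEXT,
        FILE_DIR  TEXT,
        STATUS    TEXT DEFAULT 'NEW'
    )
""")

def persist_file_metadata(conn, file_name, file_dir="/Input"):
    """Persist a file's metadata to the parking lot table with status NEW,
    mimicking what SR_BulkDownload_RequestPersister_ATP does after writing
    the input file to the local /Input folder."""
    conn.execute(
        "INSERT INTO PAYLOAD_PARKING_LOT_TAB (FILE_NAME, FILE_DIR) VALUES (?, ?)",
        (file_name, file_dir),
    )
    conn.commit()

persist_file_metadata(conn, "employee_bank_details.csv")
row = conn.execute(
    "SELECT FILE_NAME, STATUS FROM PAYLOAD_PARKING_LOT_TAB"
).fetchone()
print(row)  # ('employee_bank_details.csv', 'NEW')
```

Persisting only metadata (rather than the full payload) keeps the table small; the payload itself stays in the /Input folder until a dispatcher picks it up.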
    2. Run the SR_ScheduledDispatcher_CSVBatch integration flow.
      1. In the Integrations section of the project workspace, click the Actions icon on the integration flow, then select Run.
      2. On the Configure and run page, enter a value for the throttling parameter MaxRecords_fromDB to specify the maximum number of records fetched per run from the parking lot table (PAYLOAD_PARKING_LOT_TAB) in Oracle ATP.
      3. On the Configure and run page, click Run.

      You've now successfully submitted the integration for execution. The integration reads the data from the parking lot table (PAYLOAD_PARKING_LOT_TAB) and dispatches it to the asynchronous integration flow SR_OneWay_Processor_HCM_To_Payroll, which processes the fetched batch records. In the parking lot table, the status of the records submitted for processing changes from NEW to PROCESSED.

      Note:

      You can also schedule this integration to run at a date, time, and frequency of your choosing. See Define the Integration Schedule.

      The SR_OneWay_Processor_HCM_To_Payroll integration flow processes the batch files and calls the downstream application through a REST API. It updates the batch statistics table (BATCH_STATISTICS) in ATP with the status of each processed record.

      The records that the downstream application could not process are written to the payload error table (PAYLOAD_ERRORS_TAB) in ATP with the status ERRORED. After the errors are rectified, the status of these records changes to READY.

      The SR_ScheduledDispatcher_PayrollErrors integration flow can be run to fetch the READY records from the payload error table (PAYLOAD_ERRORS_TAB), process them, and send them to the downstream application.
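The dispatch step described above, throttled by MaxRecords_fromDB, can be sketched as follows. SQLite again stands in for Oracle ATP, and the schema is an assumption for illustration:

```python
import sqlite3

# SQLite stands in for Oracle ATP; schema and column names are assumptions.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE PAYLOAD_PARKING_LOT_TAB "
    "(ID INTEGER PRIMARY KEY, FILE_NAME TEXT, STATUS TEXT)"
)
conn.executemany(
    "INSERT INTO PAYLOAD_PARKING_LOT_TAB (FILE_NAME, STATUS) VALUES (?, 'NEW')",
    [("batch_%d.csv" % i,) for i in range(5)],
)

def dispatch_batches(conn, max_records_from_db):
    """Fetch up to max_records_from_db NEW records (the MaxRecords_fromDB
    throttling parameter), hand each one off for processing, and mark it
    PROCESSED, mimicking SR_ScheduledDispatcher_CSVBatch."""
    rows = conn.execute(
        "SELECT ID, FILE_NAME FROM PAYLOAD_PARKING_LOT_TAB "
        "WHERE STATUS = 'NEW' ORDER BY ID LIMIT ?",
        (max_records_from_db,),
    ).fetchall()
    for record_id, file_name in rows:
        # In the recipe, this is an asynchronous invocation of
        # SR_OneWay_Processor_HCM_To_Payroll.
        conn.execute(
            "UPDATE PAYLOAD_PARKING_LOT_TAB SET STATUS = 'PROCESSED' WHERE ID = ?",
            (record_id,),
        )
    conn.commit()
    return len(rows)

dispatched = dispatch_batches(conn, max_records_from_db=3)
remaining = conn.execute(
    "SELECT COUNT(*) FROM PAYLOAD_PARKING_LOT_TAB WHERE STATUS = 'NEW'"
).fetchone()[0]
print(dispatched, remaining)  # 3 2
```

Because only NEW records are selected, rerunning the dispatcher picks up where the previous run left off, which is what makes the scheduled, throttled runs safe to repeat.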

  3. Monitor the running of the integration flows in Oracle Integration. See Monitor Integrations.
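The error-handling path described in the steps above (ERRORED records parked in PAYLOAD_ERRORS_TAB, flipped to READY once rectified, then re-dispatched) can be sketched the same way. SQLite stands in for ATP, and the column names are assumptions:

```python
import sqlite3

# SQLite stands in for Oracle ATP; column names are assumptions.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE PAYLOAD_ERRORS_TAB "
    "(ID INTEGER PRIMARY KEY, RECORD TEXT, STATUS TEXT)"
)
# Records the downstream application could not process land here as ERRORED.
conn.executemany(
    "INSERT INTO PAYLOAD_ERRORS_TAB (RECORD, STATUS) VALUES (?, 'ERRORED')",
    [("rec-101",), ("rec-102",)],
)

def mark_ready(conn, record_id):
    """After the underlying error is rectified, flip the record to READY."""
    conn.execute(
        "UPDATE PAYLOAD_ERRORS_TAB SET STATUS = 'READY' WHERE ID = ?",
        (record_id,),
    )
    conn.commit()

def retry_ready_records(conn):
    """Fetch READY records for resubmission to the downstream application,
    mimicking SR_ScheduledDispatcher_PayrollErrors."""
    rows = conn.execute(
        "SELECT ID, RECORD FROM PAYLOAD_ERRORS_TAB WHERE STATUS = 'READY'"
    ).fetchall()
    # A real implementation would POST each record to the downstream REST API here.
    return [record for _, record in rows]

mark_ready(conn, 1)        # first errored record has been fixed
resent = retry_ready_records(conn)
print(resent)  # ['rec-101']
```

Only records explicitly marked READY are retried, so unresolved ERRORED records stay parked and are never resent blindly.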