Map/Reduce Script Deployment Record
You need to create a deployment record before running a map/reduce script.
Map/reduce script deployment records are similar to those for other script types. However, map/reduce deployments have some extra fields. Some of these fields are specific to SuiteCloud Processors, which are used to run map/reduce scripts. Others are specific to map/reduce features. This topic covers all the available fields.
You can access a map/reduce deployment record in these ways:
- To edit an existing deployment record, go to Customization > Scripting > Script Deployments, find the record, and click Edit.
- To create a new deployment record, open the script record and click the Deploy Script button. For help creating a script record, see Script Record Creation.
Body Fields
The following table summarizes the fields available on the map/reduce script deployment record. Note that some fields are available only when you edit or view an existing deployment record.
| Field | Description |
|---|---|
| Script | A link to the script record associated with the deployment. This value cannot be changed, even on a new deployment record. If you begin the process of creating a deployment and realize that you selected the wrong script record, you must start the process over. |
| Title | The user-defined name for the deployment. |
| ID | A unique identifier for the deployment. You can customize the ID by entering a value in the ID field on a new record. Customize the ID if you plan to bundle the deployment, to avoid naming conflicts. IDs must be lowercase and space-free. Use underscores to separate words. If you don't enter an ID, the system generates one. In both cases, the system automatically adds the prefix customdeploy. You can change the ID on an existing deployment by clicking Change ID, but this is not recommended. |
| Deployed | A setting that shows whether the deployment is active. Check this box to allow the script to run. If the box is cleared, the deployment is not submitted for processing, whether on a schedule or on demand. |
| Status | A value that determines how and when a script deployment can be submitted for processing. The primary options are Scheduled, Not Scheduled, and Testing. The system submits the deployment only if the Deployed box is checked, regardless of the Status. For more details on this choice, see Status. |
| See Instances | A link to the Map/Reduce Script Status Page, filtered for all instances of this deployment record for the current day. You can adjust the filters as needed. For details on working with the Map/Reduce Script Status page, see Map/Reduce Script Status Page. |
| Log Level | A value that determines what type of log messages are displayed on the Execution Log of both the deployment record and the associated script record. The available levels are Debug, Audit, Error, and Emergency. For more details on each level, see Log Level. |
| Execute As Role | The role used to run the script. For map/reduce scripts, this is always set to Administrator and can't be changed. |
| Priority | A measure of how urgently this script should be processed relative to other map/reduce and scheduled scripts that have been submitted. This value applies to each job associated with the deployment. The priority affects when SuiteCloud Processors sends jobs to the processor pool. For more details, see Priority. Important: Understand SuiteCloud Processors before making changes. For details, see SuiteCloud Processors Priority Levels. |
| Concurrency Limit | Determines the number of SuiteCloud Processors that can be used to process the jobs associated with the script deployment. For more details on this field, see Concurrency Limit. |
| Submit All Stages At Once | Controls whether jobs are created for all map/reduce stages simultaneously. Clear this field only for low-priority script deployments. For more details on this field, see Submit All Stages At Once. |
| Yield After Minutes | A soft time limit on how long the script deployment's map and reduce jobs may run before yielding. Enter a value between 3 and 60. The system checks this time limit after each function invocation. If the limit is exceeded, the job yields to let other jobs run, and a new job takes over the work from the yielded job. For more details on this field, see Yield After Minutes. |
| Buffer Size | A value that indicates how many key-value pairs a map or reduce job can process before information about the job's progress is saved to the database. A low Buffer Size reduces the risk of duplicate processing. Leave this value at 1 unless you have a specific reason to change it. For more details on this field, see Buffer Size. |
Status
The Status field controls when a script deployment can be submitted. The default value is Not Scheduled.
Regardless of how the script deployment is submitted, it does not necessarily execute at the exact time it is scheduled or manually invoked. There may be a short system delay, even if no other scripts are ahead of it. If other scripts are already waiting to be executed, the script may need to wait until they have completed. For details on this behavior, see SuiteCloud Processors.
Scheduled
When a deployment’s Status is set to Scheduled, the script runs on a one-time or recurring schedule. You set the schedule using the deployment record’s Schedule Subtab. After you save the record, the deployment is submitted automatically according to that schedule.
Note also:
- If you schedule a recurring submission with an end date, or a one-time submission, the status stays Scheduled even after the script finishes running.
- You can't run the script on demand while its Status is set to Scheduled.
See also Scheduling a Map/Reduce Script Submission.
Not Scheduled
When a deployment’s Status is set to Not Scheduled, the deployment can be submitted on-demand. If you want the deployment to be submitted for processing, you must manually submit it, either through the NetSuite UI or programmatically. You can use:
- The Save and Execute option on the deployment record. See also Submitting an On-Demand Map/Reduce Script Deployment from the UI.
- The task.MapReduceScriptTask API (see the sketch below). See also Submitting an On-Demand Map/Reduce Script Deployment from a Script.
You can only submit the script if there's no other instance already running. To run multiple instances at once, create multiple deployment records. For details, see Submitting Multiple Deployments of the Same Script.
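For the programmatic path, a minimal sketch might look like the following. It assumes SuiteScript 2.1, and the script ID and deployment ID are hypothetical placeholders for your own map/reduce script record and its Not Scheduled deployment.

```javascript
/**
 * @NApiVersion 2.1
 * @NScriptType ScheduledScript
 */
define(['N/task'], (task) => {
    // Submits a map/reduce deployment on demand. The script ID and
    // deployment ID below are hypothetical placeholders.
    const execute = () => {
        const mrTask = task.create({
            taskType: task.TaskType.MAP_REDUCE,
            scriptId: 'customscript_my_mapreduce',
            deploymentId: 'customdeploy_my_mapreduce'
        });

        // submit() returns a task ID you can pass to task.checkStatus()
        // to monitor the submission's progress.
        const taskId = mrTask.submit();
        log.audit('Map/reduce deployment submitted', taskId);
    };

    return { execute };
});
```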
Testing
When a deployment’s Status is set to Testing, only the script owner can test and debug without submitting for processing. You have several ways to test and debug your Map/Reduce script. For more information about Map/Reduce Script testing, see Map/Reduce Script Testing and Troubleshooting.
Log Level
The Log Level field determines what type of log messages are displayed in the Execution Log.
For testing, use the Debug log level. This option includes more messages than the other log levels, including messages created by log.debug(options), log.audit(options), log.error(options), and log.emergency(options).
For production scripts, use one of these levels:
- Audit — This level includes a record of events that have occurred during the processing of the script (for example, “A request was made to an external site”). This level includes log messages created by log.audit(options), log.error(options), and log.emergency(options).
- Error — This level shows only unexpected script errors, including log messages created by log.error(options) and log.emergency(options).
- Emergency — This level includes only the most critical messages, including log messages created by log.emergency(options).
The default value is Debug.
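As a rough illustration (not part of the deployment record itself), the map function below writes a message at each level; which of these appear in the Execution Log depends on the deployment's Log Level setting. The input keys and message text are invented.

```javascript
/**
 * @NApiVersion 2.1
 * @NScriptType MapReduceScript
 */
define([], () => {
    // Placeholder input: two invented keys.
    const getInputData = () => ['A-100', 'A-200'];

    const map = (context) => {
        // Appears only when Log Level is Debug.
        log.debug('Processing key', context.key);

        // Appears at Debug and Audit.
        log.audit('External request', 'A request was made to an external site');

        // Appears at Debug, Audit, and Error.
        log.error('Unexpected value', context.value);

        // Appears at every level, including Emergency.
        log.emergency('Critical condition', 'Stopping work on ' + context.key);
    };

    return { getInputData, map };
});
```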
Priority
When multiple scripts are submitted at the same time, some might need to wait. Use the Priority field to manage script processing. The Priority field determines how quickly scripts are processed relative to others submitted at the same time. The deployment's priority applies to all its jobs.
You have the following options:
- High — For critical deployments that need immediate processing. The scheduler sends high-priority jobs to the processor pool first.
- Standard — The default setting and a medium priority level. Medium-priority jobs are sent to the processor pool if there are no high-priority jobs waiting.
- Low — For deployments that can wait longer. Low-priority jobs are sent to the processor pool if there are no high- or medium-priority jobs waiting.
You must understand SuiteCloud Processors before you change this setting. See SuiteCloud Processors Priority Levels.
Concurrency Limit
The map/reduce script type allows parallel processing. With parallel processing, multiple SuiteCloud Processors can work together to run a single script instance. You can control the number of processors used for each script instance using the Concurrency Limit field on the script deployment record.
This setting only applies to the map and reduce stages. Only these stages allow parallel processing.
For example, if you specify a concurrency limit of 5, the system creates five map jobs and five reduce jobs. If you don't set a limit, the system uses the maximum number of processors available to your account. The default value is 2. For more information, see SuiteCloud Processors Processor Allotment Per Account.
When using a SuiteCloud project to modify the concurrency limit, SDF automatically adjusts it if the value exceeds the target account's limit. For example, if you set a concurrency limit of 10 and deploy to an account with a limit of 5, the Concurrency Limit field is set to 5. The XML representation of the map/reduce script remains at the original value of 10. For more information, see Setting a Concurrency Limit on Your Map/Reduce Script Deployment in SDF.
The Concurrency Limit field was introduced in 2017.2 as part of the SuiteCloud Processors feature. If you are editing a deployment record created before 2017.2, be aware that when your account was upgraded, the Concurrency Limit field was initially set to a value corresponding to the number of queues that had been saved in the Queues field.
Submit All Stages At Once
Each map/reduce script deployment instance is processed by multiple jobs. At least one job is created for each stage. Every map/reduce script uses either four or five stages: getInputData, shuffle, summarize, and either map or reduce (or both). However, the jobs for each stage aren't necessarily submitted at the same time. This behavior is controlled by the Submit All Stages at Once option.
Map/reduce stages must happen in a specific order. When Submit All Stages at Once is disabled, the system waits for each stage's prerequisite job to complete before submitting the next job.
In contrast, when Submit All Stages at Once is enabled, the system submits all stage jobs at the same time. This behavior increases the likelihood that all jobs associated with the script deployment instance finish, without gaps, before another script begins executing. However, be aware that this option does not guarantee that no gaps occur. For example, because a map/reduce job can yield, a long-running job may be forced to end, and a job associated with another script may begin executing in its place. Don't rely on this option if you need a strict script execution order. To enforce a strict sequence, have one script schedule another during the summarize stage. You can schedule a script programmatically using the task.create(options) method, as sketched below. For more details, see Submitting an On-Demand Map/Reduce Script Deployment from a Script.
The Submit All Stages at Once option is enabled by default. In general, you should leave this option enabled.
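A minimal sketch of that summarize-stage handoff, assuming SuiteScript 2.1; the input data and the follow-up script and deployment IDs are hypothetical placeholders.

```javascript
/**
 * @NApiVersion 2.1
 * @NScriptType MapReduceScript
 */
define(['N/task'], (task) => {
    // Placeholder input for the first script in the sequence.
    const getInputData = () => ['record-1', 'record-2'];

    const map = (context) => {
        // ...do the per-key work here...
        context.write({ key: context.key, value: context.value });
    };

    const summarize = (summary) => {
        // All map work for this instance is finished by the time summarize
        // runs, so submitting the next script here enforces a strict order.
        const nextTask = task.create({
            taskType: task.TaskType.MAP_REDUCE,
            scriptId: 'customscript_next_step',      // hypothetical script ID
            deploymentId: 'customdeploy_next_step'   // hypothetical deployment ID
        });
        log.audit('Follow-up script submitted', nextTask.submit());
    };

    return { getInputData, map, summarize };
});
```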
Yield After Minutes
The Yield After Minutes field prevents long-running map or reduce jobs from taking over a processor.
Here's how it works: During map and reduce stages, the system checks the job's runtime after each function call. If the amount of elapsed time has surpassed the number of minutes identified in the Yield After Minutes field, the job gracefully ends its execution, and a new job is created to take its place. The new job has the same priority, but a later timestamp. This is called yielding.
Yield After Minutes defaults to 60, but you can set it between 3 and 60.
The system never interrupts a function invocation for this limit. The system only ends a job after the limit has been exceeded. For that reason, the degree to which the limit is surpassed varies depending on the duration of your function invocation. For example, if the Yield After Minutes limit is 3 minutes, but your function takes 15 minutes to complete, then in practice the job yields after 15 minutes, not 3 minutes.
Yielding is also affected by a governance limit. This limit is 10,000 usage units for each map and reduce job. This limit works similarly to the Yield After Minutes limit: The system waits until after each function invocation ends to determine whether the usage-unit limit has been surpassed. If it has, the job yields, even if the Yield After Minutes limit has not been exceeded.
See also Map/Reduce Yielding and Soft Limits on Long-Running Map and Reduce Jobs.
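If you want visibility into how often yielding occurs and how much usage a run consumes, the summarize stage's context exposes this information. The sketch below simply logs it; the input data is a placeholder.

```javascript
/**
 * @NApiVersion 2.1
 * @NScriptType MapReduceScript
 */
define([], () => {
    // Placeholder input.
    const getInputData = () => ['1', '2', '3'];

    const map = (context) => {
        // ...per-key work here...
        context.write({ key: context.key, value: context.value });
    };

    const summarize = (summary) => {
        // yields: how many times the map/reduce jobs yielded during this run.
        // usage: total governance units consumed across all stages.
        // concurrency: peak number of processors used.
        log.audit('Yield count', summary.yields);
        log.audit('Usage units consumed', summary.usage);
        log.audit('Peak concurrency', summary.concurrency);
    };

    return { getInputData, map, summarize };
});
```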
Buffer Size
The Buffer Size controls how often a map or reduce job saves progress data. Generally, leave this field set to 1.
To understand this, let's review how map and reduce jobs work: The job first flags key-value pairs that need processing. Then it processes the flagged pairs. Then it saves data about the work done. This data includes details like processed pairs, usage points consumed, and more. This process repeats until the job yields or all pairs are processed.
The Buffer Size field determines how many pairs are flagged at one time. So if you leave this field set to its default of 1, the job flags one pair, processes it, and saves the data. Then it repeats the cycle.
You can set Buffer Size to any of the following values: 1, 2, 4, 8, 16, 32, or 64. However, a higher number increases the risk of processing key-value pairs twice if the job is interrupted by an application server restart. On the other hand, a higher number can be more efficient in certain situations.
Use the following guidance:
- Generally, leave it set to 1, especially when processing records.
- Choose a higher buffer size if the script performs fast algorithmic operations or if special circumstances require it.
Schedule Subtab
The following table summarizes the fields on the Schedule subtab. These settings are honored only if the deployment record’s Status is set to Scheduled and the Deployed box is checked.
| Field | Description |
|---|---|
| Single Event | The map/reduce script deployment is submitted only one time. Use this option if you want to schedule a future one-time submission. |
| Daily Event | The map/reduce script deployment is submitted every x number of days. If you schedule the submission to recur every x minutes or hours, the schedule starts over on the next scheduled day. For example, your deployment is set to submit daily, starting at 3:00 am and recurring every five hours. A scheduled script instance is submitted at 3:00 am, 8:00 am, 1:00 pm, 6:00 pm, and 11:00 pm. The schedule resets at midnight, with the next submission at 3:00 am. |
| Weekly Event | The map/reduce script deployment is submitted at least one time per scheduled week. If you schedule the submission to recur every x minutes or hours, the schedule starts over on the next scheduled day. For example, a deployment is set to submit on Tuesday and Wednesday, starting at 3:00 am and recurring every five hours. The deployment is submitted on Tuesday at 3:00 am, 8:00 am, 1:00 pm, 6:00 pm, and 11:00 pm. On Wednesday, the schedule starts over and the next submission is at 3:00 am. |
| Monthly Event | The map/reduce script deployment is submitted at least one time per month. |
| Yearly Event | The map/reduce script deployment is submitted at least one time per year. |
| Start Date | The first submission occurs on this date. This field is required if a one-time or recurring schedule is set. |
| Start Time | If a value is selected, the first submission happens at that time. The time displayed when viewing the deployment is in the time zone of the last user who edited the script deployment, based on that user's setting in Home > Set Preferences. When editing, the time updates to the current user's time zone. For example, user A is in (GMT -8:00) Pacific Time (US & Canada) and creates the script deployment with a start time of 8:00 am. User B, whose time zone is set to (GMT -5:00) Eastern Time (US & Canada), edits the script deployment. Even if User B does not change the start time, the time now displays in User B's time zone for all users, as 11:00 am. |
| Repeat | If a value is selected, a new instance is created and submitted every x minutes or hours, starting at the scheduled start time, until the end of the start date. If applicable, the schedule resets on the next scheduled day. For example, your deployment is set to submit on Tuesday and Wednesday, starting at 3:00 pm and recurring every five hours. Submissions occur on Tuesday at 3:00 pm and 8:00 pm. On Wednesday, the schedule resets, and the next submission is at 3:00 pm. |
| End By | If a value is entered, the last submission occurs by this date. If you schedule the submission to recur every x minutes or hours, a new script deployment instance is created and submitted every x minutes or hours until the end date. |
| No End Date | The schedule has no set end date. |