llm.generateText(options)
The content in this help topic pertains to SuiteScript 2.1.
| Method Description | Returns the response from the LLM for a given prompt. |
|---|---|
| Returns | llm.Response |
| Supported Script Types | Server scripts. For more information, see SuiteScript 2.x Script Types. |
| Governance | 100 |
| Module | N/llm |
| Since | 2024.1 |
Parameters
| Parameter | Type | Required / Optional | Description | Since |
|---|---|---|---|---|
| options.prompt | string | required | Prompt for the LLM. | 2024.1 |
| options.chatHistory | llm.ChatMessage[] | optional | Chat history to be taken into consideration. | 2024.1 |
| options.modelFamily | enum | optional | Specifies the LLM to use. Use llm.ModelFamily to set the value. If not specified, the Cohere Command R LLM is used. Note: JavaScript does not include an enumeration type. The SuiteScript 2.x documentation uses the term enumeration (or enum) to describe a plain JavaScript object with a flat, map-like structure. In this object, each key points to a read-only string value. | 2024.2 |
| options.modelParameters | Object | optional | Parameters of the model. For more information about the model parameters, refer to the Chat Model Parameters topic in the Oracle Cloud Infrastructure Documentation. | 2024.1 |
| options.modelParameters.frequencyPenalty | number | optional | A penalty that is assigned to a token when that token appears frequently. The higher the value, the stronger the penalty applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation. See Model Parameter Values by LLM for valid values. | 2024.1 |
| options.modelParameters.maxTokens | number | optional | The maximum number of tokens the LLM is allowed to generate. The average number of tokens per word is 3. See Model Parameter Values by LLM for valid values. | 2024.1 |
| options.modelParameters.presencePenalty | number | optional | A penalty that is assigned to each token when it appears in the output to encourage generating outputs with tokens that haven't been used. Similar to frequencyPenalty, except the penalty applies equally to all tokens that have already appeared, regardless of how often. See Model Parameter Values by LLM for valid values. | 2024.1 |
| options.modelParameters.temperature | number | optional | Defines a range of randomness for the response. A lower temperature leans toward the highest-probability tokens and expected answers, while a higher temperature deviates toward random and unconventional responses. Use lower values for responses that must be factual or accurate, and higher values for more creative responses. See Model Parameter Values by LLM for valid values. | 2024.1 |
| options.modelParameters.topK | number | optional | Determines how many tokens are considered for generation at each step. See Model Parameter Values by LLM for valid values. | 2024.1 |
| options.modelParameters.topP | number | optional | Sets the probability, which ensures that only the most likely tokens with a total probability mass of topP are considered for generation at each step. See Model Parameter Values by LLM for valid values. | 2024.1 |
| options.ociConfig | Object | optional | Configuration needed for unlimited usage through the OCI Generative AI Service. Required only when accessing the LLM through an Oracle Cloud Account and the OCI Generative AI Service. SuiteApps installed to target accounts are prevented from using the free usage pool for N/llm and must use the OCI configuration. | 2024.1 |
| options.ociConfig.compartmentId | string | optional | Compartment OCID. For more information, refer to Managing Compartments in the Oracle Cloud Infrastructure Documentation. | 2024.1 |
| options.ociConfig.endpointId | string | optional | Endpoint ID. This value is needed only when a custom OCI DAC (dedicated AI cluster) is to be used. For more information, refer to Managing an Endpoint in Generative AI in the Oracle Cloud Infrastructure Documentation. | 2024.1 |
| options.ociConfig.fingerprint | string | optional | Fingerprint of the public key (only a NetSuite secret is accepted; see Creating Secrets). For more information, refer to Required Keys and OCIDs in the Oracle Cloud Infrastructure Documentation. | 2024.1 |
| options.ociConfig.privateKey | string | optional | Private key of the OCI user (only a NetSuite secret is accepted; see Creating Secrets). For more information, refer to Required Keys and OCIDs in the Oracle Cloud Infrastructure Documentation. | 2024.1 |
| options.ociConfig.tenancyId | string | optional | Tenancy OCID. For more information, refer to Managing the Tenancy in the Oracle Cloud Infrastructure Documentation. | 2024.1 |
| options.ociConfig.userId | string | optional | User OCID. For more information, refer to Managing Users in the Oracle Cloud Infrastructure Documentation. | 2024.1 |
| options.preamble | string | optional | Preamble override for the LLM. A preamble is the initial context or guiding message for an LLM. For more details about using a preamble, refer to About the Chat Models in Generative AI (Chat Model Parameters section) in the Oracle Cloud Infrastructure Documentation. Note: Only valid for the Cohere Command R model. | 2024.1 |
| options.timeout | number | optional | Timeout in milliseconds. Defaults to 30,000. | 2024.1 |
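The maxTokens description above notes an average of 3 tokens per word, which can be used as a rough budgeting rule before a call. A minimal plain-JavaScript sketch (the helper names are illustrative, not part of the N/llm API, and the 3-tokens-per-word ratio is an average, not a guarantee):

```javascript
// Rough output-size budgeting using the ~3 tokens/word average noted above.
// These helpers are illustrative only and not part of the N/llm module.
const TOKENS_PER_WORD = 3; // average cited in the maxTokens description

// Estimate a maxTokens value for a response of a given word count.
function estimateMaxTokens(wordCount) {
    return wordCount * TOKENS_PER_WORD;
}

// Estimate how many words fit in a given token budget.
function estimateWordCapacity(maxTokens) {
    return Math.floor(maxTokens / TOKENS_PER_WORD);
}

console.log(estimateMaxTokens(250));     // 750 tokens for a ~250-word answer
console.log(estimateWordCapacity(1000)); // ~333 words fit in maxTokens: 1000
```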
Errors
| Error Code | Thrown If |
|---|---|
| MISSING_REQD_ARGUMENT | The options.prompt parameter is missing. |
| MUTUALLY_EXCLUSIVE_ARGUMENTS | Both the options.ociConfig.endpointId and options.modelFamily parameters are specified. |
| UNRECOGNIZED_MODEL_PARAMETERS | One or more unrecognized model parameters have been used. |
| UNRECOGNIZED_OCI_CONFIG_PARAMETERS | One or more unrecognized parameters for OCI configuration have been used. |
| INVALID_MODEL_FAMILY_VALUE | The options.modelFamily parameter value is invalid. |
| ONLY_API_SECRET_IS_ACCEPTED | The options.ociConfig.fingerprint or options.ociConfig.privateKey value is not a NetSuite API secret. |
| | A parameter was used with a model that does not support the parameter. (For example, this error would be returned if the preamble parameter was used with the Meta Llama model.) |
| INVALID_FREQUENCY_PENALTY_VALUE | The options.modelParameters.frequencyPenalty parameter value is invalid. |
| INVALID_MAX_TOKENS_VALUE | The options.modelParameters.maxTokens parameter value is invalid. |
| INVALID_PRESENCE_PENALTY_VALUE | The options.modelParameters.presencePenalty parameter value is invalid. |
| INVALID_TEMPERATURE_VALUE | The options.modelParameters.temperature parameter value is invalid. |
| INVALID_TOP_K_VALUE | The options.modelParameters.topK parameter value is invalid. |
| INVALID_TOP_P_VALUE | The options.modelParameters.topP parameter value is invalid. |
| MAXIMUM_PARALLEL_REQUESTS_LIMIT_EXCEEDED | The number of parallel requests to the LLM is greater than 5. |
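Because more than 5 parallel requests to the LLM raise an error, callers fanning out many prompts can chunk the work so at most 5 calls are in flight at once. A minimal plain-JavaScript sketch under that assumption, with a hypothetical stand-in async function in place of the actual LLM call:

```javascript
// Run async tasks in batches of at most `limit` so no more than `limit`
// requests are ever in flight at once (the N/llm parallel-request cap is 5).
async function runInBatches(inputs, worker, limit = 5) {
    const results = [];
    for (let i = 0; i < inputs.length; i += limit) {
        const batch = inputs.slice(i, i + limit).map(worker);
        results.push(...await Promise.all(batch)); // wait for this batch to finish
    }
    return results;
}

// Hypothetical stand-in for an asynchronous LLM call; illustrative only.
const fakeGenerate = async (prompt) => `echo: ${prompt}`;

runInBatches(['a', 'b', 'c'], fakeGenerate).then(console.log);
// [ 'echo: a', 'echo: b', 'echo: c' ]
```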
Syntax
The following code sample shows the syntax for this member. It is not a functional example. For a complete script example, see N/llm Module Script Samples.
```javascript
// Add additional code
...
const response = llm.generateText({
    // preamble is optional for Cohere and must not be used for Meta Llama
    preamble: "You are a successful salesperson. Answer in an enthusiastic, professional tone.",
    prompt: "Hello World!",
    modelFamily: llm.ModelFamily.COHERE_COMMAND_R, // uses COHERE_COMMAND_R when modelFamily is omitted
    modelParameters: {
        maxTokens: 1000,
        temperature: 0.2,
        topK: 3,
        topP: 0.7,
        frequencyPenalty: 0.4,
        presencePenalty: 0
    },
    ociConfig: {
        // Replace ociConfig values with your Oracle Cloud Account values
        userId: 'ocid1.user.oc1..aaaaaaaanld….exampleuserid',
        tenancyId: 'ocid1.tenancy.oc1..aaaaaaaabt….exampletenancyid',
        compartmentId: 'ocid1.compartment.oc1..aaaaaaaaph….examplecompartmentid',
        // Replace fingerprint and privateKey with your NetSuite API secret ID values
        fingerprint: 'custsecret_oci_fingerprint',
        privateKey: 'custsecret_oci_private_key'
    }
});
...
// Add additional code
```