ORACLE CONFIDENTIAL. For authorized use only. Do not distribute to third parties.

Pre-General Availability: 2025-12-16

Components in an Agent Builder

Components, also referred to as nodes, are the building blocks in an Agent Builder flow.

This section explains when to use each node, their inputs and outputs, and how to configure them to reliably build your flows.

Common data types across nodes include:

Core Orchestration

LLM

The LLM node executes prompts using a Large Language Model configured in LLM Management. This node serves as the primary reasoning and completion engine for many flows.

When to use?

How to add and configure?

  1. Drag the LLM node onto the canvas.
  2. Choose a configured model from LLM Management.
  3. Supply prompts or instructions via an upstream Prompt or Message node.
  4. Set the temperature to a low value for more predictable and deterministic responses from the LLM, or to a high value for more varied and creative output.
  5. Connect the output Message as input to Agent, Output, or Processing components per the flow logic.
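The effect of the temperature setting in step 4 can be sketched as follows. This is a minimal, hypothetical illustration of how a temperature value might be carried in an LLM request payload; the model name, endpoint fields, and helper function are assumptions, not the Agent Builder API.

```python
# Hypothetical sketch: how a temperature setting shapes an LLM request
# payload. Field names are illustrative, not the actual product API.

def build_llm_request(prompt: str, model: str, temperature: float = 0.2) -> dict:
    """Assemble a chat-completion style payload; clamp temperature to [0, 2]."""
    temperature = max(0.0, min(2.0, temperature))
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

# Low temperature for deterministic answers, high for creative variation.
deterministic = build_llm_request("Summarize the incident report.", "my-llm", 0.1)
creative = build_llm_request("Brainstorm five taglines.", "my-llm", 1.2)
```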

Inputs:

Outputs:

Agent

An Agent node uses an LLM configured in LLM Management to carry out instructions, answer questions, and orchestrate complex multi-step tasks using hierarchical manager/worker orchestration and sub-agents.

When to use?

How to add and configure?

  1. Drag the Agent node onto the canvas.
  2. Select a base LLM from LLM Management.
  3. Optionally, attach relevant Tools.
  4. Optionally, connect other Agent nodes to the Sub-agents connector to set up manager/worker orchestration.
  5. Provide instructions through a Prompt or Message input.
  6. Optionally, set the temperature to a low value for more predictable and deterministic responses from the LLM, or to a high value for more varied and creative output.
  7. Connect the output Message as input to Agent, Output, or Processing components per the flow logic.
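The manager/worker orchestration from step 4 can be sketched conceptually. This is an illustrative sketch only, not the Agent Builder runtime: the worker names and the keyword-based routing are assumptions standing in for the LLM's own reasoning about which sub-agent fits each subtask.

```python
# Illustrative sketch of hierarchical manager/worker orchestration.
# A manager agent splits a task, delegates each piece to a sub-agent
# (worker), and combines the results.

from typing import Callable, Dict, List

def manager(task_parts: List[str], workers: Dict[str, Callable[[str], str]]) -> str:
    results = []
    for part in task_parts:
        # Simple keyword routing; a real agent would reason about
        # which sub-agent fits each subtask.
        name = "research" if "find" in part else "writer"
        results.append(workers[name](part))
    return "\n".join(results)

workers = {
    "research": lambda t: f"[research] {t}: 3 sources found",
    "writer": lambda t: f"[writer] {t}: draft ready",
}
report = manager(["find recent outages", "summarize root causes"], workers)
```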

Inputs:

Outputs:

Tools

Bug Tools

The Bug Tools node is a specialized component for interacting with your organization’s bug or issue trackers.

When to use?

How to add and configure?

  1. Drag the Agent node (or another compatible node that supports tool calls) onto the canvas.
  2. Attach Bug Tools in the Agent’s Tools configuration.
  3. Provide the required credentials or configuration for your bug tracking system.
  4. Use prompts or instructions within your Agent to invoke Bug Tools.

Inputs: Tool parameters supplied by Agent prompts or upstream data

Outputs: Message summaries or structured JSON results returned by the tool

MCP Server

The MCP Server node integrates an MCP server to expose external capabilities as tools to Agents. It is a specialized component for interfacing with AI models using the Model Context Protocol (MCP) with Server-Sent Events (SSE) transport.

When to use?

How to add and configure?

  1. Configure your MCP Server and build the server URL. See MCP Server.

  2. Drag and drop an MCP Server node from the Tools category.

  3. Set the MCP server URL to https://<host>:<port>/mcp or https://<host>:<port>/sse. The URL fully defines the endpoint path; there are no extra path fields.

  4. Choose one of the authentication types for accessing the MCP server:
    • Basic auth: Uses HTTP Basic authentication with a username and password. Leave the Bearer token field empty.
    • Bearer token: Uses your OAuth 2.0 access token. Leave the Basic user and Basic password fields empty.
    • None: No authentication. Leave all authentication fields empty.

  5. Connect the MCP Server node’s Tools output to downstream Agent nodes that consume the discovered MCP tools. Trigger tool usage through Agent prompts or tool-calling logic.

  6. Test the MCP Server node using Playground. If the server requires authentication and if the configured credentials are missing or invalid, a friendly error appears in the chat: “The MCP server requires Authorisation, please check the Auth details on the MCP Server Node in the created workflow.” If the authentication is successful, the node discovers the tool list and the downstream node can call the selected tool.
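The three authentication types in step 4 map onto standard HTTP Authorization headers, which can be sketched as below. The helper function is hypothetical and only illustrates the header each auth type produces; the node builds these internally.

```python
# Sketch of how the three MCP Server auth types map to HTTP headers.
# The helper itself is illustrative, not part of the product.

import base64

def mcp_auth_headers(auth_type: str, user: str = "", password: str = "",
                     bearer_token: str = "") -> dict:
    if auth_type == "basic":
        # HTTP Basic: base64("user:password")
        creds = base64.b64encode(f"{user}:{password}".encode()).decode()
        return {"Authorization": f"Basic {creds}"}
    if auth_type == "bearer":
        # OAuth 2.0 bearer token
        return {"Authorization": f"Bearer {bearer_token}"}
    if auth_type == "none":
        return {}
    raise ValueError(f"unknown auth type: {auth_type}")
```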

Inputs: Agent instructions and any necessary tool arguments

Outputs: JSON responses from MCP tools and message summaries generated by the Agent

See FAQs and Troubleshooting for MCP server related questions.

REST API Tools

The REST API node allows you to call REST APIs directly from your flows, often through an Agent.

When to use?

How to add and configure?

  1. Configure REST API connections in Data Sources.
  2. Drag the REST API node onto the canvas and connect it to an Agent or any other compatible tool-enabled node.
  3. Specify the endpoints, headers, authentication, and HTTP method (GET, POST, PUT, or DELETE).
  4. Use Agent prompts to determine which endpoint and parameters to use.
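The request the node assembles from these settings can be sketched as follows. This is a hypothetical illustration of combining endpoint, method, headers, and parameters into one call specification; the helper, field names, and example URL are assumptions, not the node's actual internals.

```python
# Illustrative sketch: assembling a REST call specification from the
# endpoint, HTTP method, auth token, and prompt-derived parameters.

def build_rest_call(base_url: str, path: str, method: str = "GET",
                    params=None, token: str = "") -> dict:
    method = method.upper()
    if method not in {"GET", "POST", "PUT", "DELETE"}:
        raise ValueError(f"unsupported method: {method}")
    headers = {"Accept": "application/json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"
    return {
        "url": base_url.rstrip("/") + "/" + path.lstrip("/"),
        "method": method,
        "headers": headers,
        "params": params or {},
    }

call = build_rest_call("https://api.example.com/", "v1/tickets",
                       "get", {"status": "open"}, token="tok")
```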

Inputs: URL/endpoint and parameters or body data, often passed from previous node outputs

Outputs: JSON responses from the API and message summaries created by the Agent

Inputs

Chat Input

The Chat Input node captures conversational user input in a chat interface.

Note: A custom flow can contain a maximum of one Chat Input component.

When to use?

How to add and configure?

  1. Drag the Chat Input node onto the canvas.
  2. Connect its Message output to downstream nodes such as Prompt, Agent, or LLM.

Inputs: None, as the user provides input at runtime

Outputs: Message containing the user’s text

Text Input

The Text Input node captures plain text input to pass to the next node.

When to use?

How to add and configure?

  1. Drag the Text Input node onto the canvas.
  2. Optionally, set a label or placeholder for the input field.
  3. Connect its Message output to downstream nodes.

Inputs: None, as input is provided by the user

Outputs: Message containing the entered text

Prompt

The Prompt node creates and formats textual instructions for an LLM or Agent, often combining fixed instructions with dynamic values from upstream nodes.

When to use?

How to add and configure?

  1. Drag the Prompt node onto the canvas.
  2. Write your instructions, optionally using placeholders to reference outputs from upstream nodes.
  3. Connect the Prompt node to an LLM or Agent node.

Inputs: Message or JSON data for variable interpolation

Outputs: Message containing the final prompt text

Outputs

Chat Output

The Chat Output node renders or returns model/agent responses in a chat interface.

When to use?

How to add and configure?

  1. Drag the Chat Output node onto the canvas.
  2. Connect Message output from the LLM or Agent (or a formatted message from a Parser node) to the Chat Output node.

Inputs: Message

Outputs: None, as the response is rendered directly in the UI

Email Output

The Email Output node sends generated content via email. It accepts a comma-separated list of email addresses, so one node can deliver to multiple recipients.

When to use?

How to add and configure?

  1. Make sure SMTP is configured. See SMTP Configuration.
  2. Drag the Email Output node onto the canvas.
  3. Provide a subject, enter recipients as a comma-separated list of email addresses, and connect the Message output from an upstream node to serve as the email body.
  4. Optionally, attach any supported files using upstream file or data nodes.
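Normalizing the comma-separated recipient string from step 3 can be sketched like this. This is a minimal illustration, not the node's actual validation logic; the "@" check is a deliberately simple placeholder for real address validation.

```python
# Sketch: normalize a comma-separated recipient string into a clean list.
# The "@" check is a minimal placeholder for real address validation.

def parse_recipients(raw: str) -> list:
    recipients = []
    for part in raw.split(","):
        addr = part.strip()          # drop surrounding whitespace
        if addr and "@" in addr:     # skip empties and obvious non-addresses
            recipients.append(addr)
    return recipients

to_list = parse_recipients("ops@example.com, lead@example.com ,")
```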

Inputs: Message (email body); optional attachments or JSON for subject templating

Outputs: None (the node sends the email)

Data

Read CSV File

The Read CSV node imports and parses CSV files for batch or tabular data processing.

When to use?

How to add and configure?

  1. Drag the Read CSV node into your workflow.
  2. Select or upload a CSV file.
  3. Adjust delimiter and encoding settings if necessary.
  4. Connect the node’s outputs to subsequent steps in your workflow.
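The parsing step with explicit delimiter and encoding settings (step 3) can be sketched with Python's standard csv module. This is illustrative only; the node's internals may differ, and the sample data is invented.

```python
# Sketch: parse CSV bytes into a list of row dictionaries, honoring an
# explicit delimiter and encoding, mirroring the node settings above.

import csv
import io

def read_csv_rows(data: bytes, delimiter: str = ",", encoding: str = "utf-8"):
    text = data.decode(encoding)
    reader = csv.DictReader(io.StringIO(text), delimiter=delimiter)
    return list(reader)  # each row becomes {column: value}

rows = read_csv_rows(b"id;name\n1;Ada\n2;Grace\n", delimiter=";")
```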

Inputs: File path or uploaded CSV file

Outputs:

File Upload

The File Upload node allows users to upload files for processing or analysis.

When to use?

How to add and configure?

  1. Drag the File Upload component into your flow.
  2. Select the file to upload.
  3. Connect the node to parsers, LLMs, or data converters.

Inputs: None

Outputs: File handles/paths and/or Message/JSON, depending on integration

SQL Query

The SQL Query node executes SQL statements against configured databases and returns the results for use in workflows.

Note: Only SELECT-like queries are supported for security and governance.

When to use?

How to add and configure?

  1. Ensure the database connection is set up. See Data Sources.
  2. Drag the SQL Query node into your workflow.
  3. Enter your SQL query text. This can be static or parameterized using data from upstream nodes.
  4. Connect the output to nodes such as Parser, Type Convert, or Combine JSON.
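The SELECT-only restriction noted above amounts to rejecting statements that are not read-only before execution. The check below is a simple prefix test for illustration; real governance would use proper SQL parsing.

```python
# Sketch: accept only SELECT-like (read-only) statements, per the
# node's security and governance restriction. Prefix check only;
# a production guard would parse the SQL properly.

def is_select_like(query: str) -> bool:
    head = query.strip().rstrip(";").lstrip("(").split(None, 1)
    return bool(head) and head[0].upper() in {"SELECT", "WITH"}

is_select_like("SELECT * FROM orders")           # read-only: allowed
is_select_like("DELETE FROM orders")             # mutating: rejected
```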

Inputs: Message or JSON containing the query and its parameters

Outputs:

Processing

Condition

The Condition node enables simple if-else decision logic in agent flows. Use the Condition node to compare a Text Input against a Match Text using an operator. If the condition evaluates to true, the node outputs a configurable true message and follows the “true” branch; if false, it outputs a false message and follows the “false” branch. Use it for basic branching without complex orchestration.

When to use?

How to add and configure?

  1. Drag the Condition node onto the canvas.

  2. Configure the following settings:
    • Text Input: The value you want to check.
    • Match Text: The value to compare against.
    • Operator: Choose from equals, not equals, contains, not contains, greater than, less than, or regex match.
    • True Message (optional): Message to return if the condition is met.
    • False Message (optional): Message to return if the condition is not met.
  3. Connect the True and False outputs to their respective next steps.
  4. Test the Condition node using sample inputs.
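The evaluation the node performs can be sketched as below. The operator names follow the list in step 2; the helper function itself is illustrative, not the node's implementation.

```python
# Sketch: evaluate a Text Input against a Match Text with one of the
# Condition node's operators, then pick the branch to follow.

import re

def evaluate(text: str, match: str, op: str) -> bool:
    ops = {
        "equals": lambda a, b: a == b,
        "not equals": lambda a, b: a != b,
        "contains": lambda a, b: b in a,
        "not contains": lambda a, b: b not in a,
        "greater than": lambda a, b: float(a) > float(b),
        "less than": lambda a, b: float(a) < float(b),
        "regex match": lambda a, b: re.search(b, a) is not None,
    }
    return ops[op](text, match)

# The true/false result selects which output branch the flow follows.
branch = "true" if evaluate("sev-1 outage", "sev-1", "contains") else "false"
```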

Type Convert

The Type Convert node converts a Message to a string, JSON to a dictionary or list, and a DataFrame to a list of dictionaries for use by downstream nodes that require a specific structural type. It outputs all three types simultaneously, so downstream nodes can consume whichever type they require.

Type Convert node supports robust JSON parsing from text:

When to use?

How to add and configure?

  1. Drag and drop a Type Convert node from the Processing category.
  2. Connect any of Message, JSON, or DataFrame from the upstream node as input.
  3. Test the Type Convert node by connecting the outputs (string, dictionary, or list of dictionaries) to the appropriate downstream nodes. Execute the flow by selecting Playground.
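The three simultaneous conversions can be sketched with the standard library. This is illustrative only; the real node's internals may differ, and the DataFrame input is modeled here as plain columns and rows.

```python
# Sketch of the Type Convert node's three outputs: Message -> string,
# JSON text -> dict or list, DataFrame-like rows -> list of dicts.

import json

def type_convert(message=None, json_text=None, columns=None, rows=None):
    outputs = {}
    if message is not None:
        outputs["string"] = str(message)
    if json_text is not None:
        outputs["data"] = json.loads(json_text)  # dict or list
    if columns is not None and rows is not None:
        # Pair each row's values with the column names.
        outputs["records"] = [dict(zip(columns, r)) for r in rows]
    return outputs

out = type_convert(json_text='{"id": 1}',
                   columns=["id", "name"], rows=[[1, "Ada"]])
```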

Conversion Examples:

Inputs: Message, JSON, or DataFrame

Outputs (provided simultaneously):

For example flows that use the Type Convert node together with the Parser and Combine JSON Data nodes, see Example Flows Using Combine JSON Data and Example Flow Using Parser.

Combine JSON Data

The Combine JSON Data node merges multiple JSON payloads into a single object. It allows you to filter top-level keys and define how to resolve conflicts between keys. The node outputs both a JSON object and a Message containing the stringified JSON.

When to use?

How to add and configure?

  1. Drag and drop a Combine JSON Data node from the Processing category.

  2. Connect one or more upstream JSON sources to the JSON inputs inlet.

  3. Optionally, connect the output of a previous Combine JSON node to the Initial accumulator field for multi-stage merging. Leave it empty for a filter-only operation.

  4. Input the top-level keys to include from the input sources in the Keys to include field. Supported formats are a JSON array (["a","b"]), a comma-separated list (a, b), or a space-separated list (a b). Leave it empty to include all top-level keys.

  5. Select the Merge mode: Deep or Replace. Deep merge recursively combines keys from all JSONs, extending lists and replacing scalars. Replace merge overwrites each key with the value from the last node connected to the JSON inputs inlet.

  6. Test the Combine JSON Data node by connecting two or more JSON inputs to the JSON inlet and routing the stringified JSON or the merged JSON dictionary to a chat output node. Execute the flow by selecting Playground, typing a space in the Chat input, and pressing Enter.
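The two merge modes from step 5 can be sketched as follows. This is an illustrative approximation, not the node's exact semantics: Deep merge extends lists, replaces scalars, and combines keys recursively, while Replace merge lets the last input win for each key.

```python
# Sketch of the Combine JSON Data merge modes.

def deep_merge(acc: dict, new: dict) -> dict:
    """Deep mode: recurse into dicts, extend lists, replace scalars."""
    for key, value in new.items():
        if isinstance(acc.get(key), dict) and isinstance(value, dict):
            deep_merge(acc[key], value)
        elif isinstance(acc.get(key), list) and isinstance(value, list):
            acc[key].extend(value)
        else:
            acc[key] = value
    return acc

def replace_merge(acc: dict, new: dict) -> dict:
    """Replace mode: the last input's value wins for each key."""
    acc.update(new)
    return acc

a = {"report": {"topics": ["Physics"], "Date": "23/11/25"}}
b = {"report": {"topics": ["Biology"], "Date": "24/11/25"}}
deep_merge(a, b)        # topics lists combine; Date scalar is replaced
c = replace_merge({"report": {"topics": ["Physics"]}},
                  {"report": {"topics": ["Biology"]}})
```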

Inputs: Multiple JSON or Message-with-JSON

Outputs:

Example Flows

Example 1: Using the Combine JSON Data Component in Deep Merge Mode:

Example: Combine JSON Data in Deep Merge Mode

The output JSON {"Title": "Final Report", "report": {"topics": ["Physics", "Maths", "Chemistry", "Biology"], "Date": "24/11/25", "other_details2": "Rainy weather"}, "other_key1": "Other Report"} includes only the keys specified in the ‘Keys to include’ field (report and other_key1) of the Combine JSON Data node. In Deep Merge mode, list values are combined for matching keys (key=topics), whereas scalar values (key=Date) are replaced.

The Initial Accumulator can be any JSON, with or without keys that overlap with the incoming JSONs, and will appear in the final JSON output. Any matching key in the Initial Accumulator will be replaced by the incoming value in the final JSON output. In this example, the Initial Accumulator is {"Title": "Final Report"}.

Example 2: Using the Combine JSON Data Component in Replace Merge Mode:

Example: Combine JSON Data in Replace Merge Mode


The output JSON {"Title": "Final Report", "report": {"topics": ["Chemistry", "Biology"], "Date": "24/11/25", "other_details2": "Rainy weather"}, "other_key1": "Other Report"} includes the keys entered in the ‘Keys to include’ field (report and other_key1) of the Combine JSON Data node. In Replace Merge mode, the value for a matching key (key=topics) is fully replaced: the value from the last input wins, so nodes connected later to Combine JSON Data overwrite those connected earlier.

As in the Deep Merge example, any matching key in the Initial Accumulator is replaced by the incoming value in the final JSON output. Here, the Initial Accumulator is {"Title": "Final Report"}.

Parser

The Parser node extracts, formats, and combines text from structured inputs, such as JSON or DataFrame. It transforms these inputs into a clean, readable message output that can be displayed or passed to downstream steps.

When to use?

How to add and configure?

  1. Drag and drop a Parser node from the Processing category.

  2. Connect JSON or Message outputs from upstream nodes to the JSON or DataFrame inlet. Alternatively, input a JSON in the text box.

  3. Input an optional dot-path in the Root path field to navigate into the input before applying the template. Examples: records, records.0, outer.inner.items.

  4. Select the Mode: Parser or Stringify. Parser mode applies the template to each selected item or object. Stringify mode ignores the template and converts the selection to a raw string.

  5. For Parser mode, input a Template as a format string with placeholders referencing keys or columns from objects, or {text} for entire string inputs.

    Examples:

    • Name: {name}, Age: {age}
    • {PRODUCT_ID}: {PRODUCT_NAME} — stock {STOCK_QUANTITY}
    • Text: {text}
  6. Enter a Separator, the string used to join multiple rendered items when the input is a list. The default is \n.

  7. Test the Parser node by connecting a JSON input to its JSON inlet and routing the output message to a chat output node. Execute the flow by selecting Playground.
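Parser-mode rendering (steps 3 through 6) can be sketched as below: navigate the optional dot-path root, apply the template to each item, and join the results with the separator. Illustrative only; the sample payload and helper are invented.

```python
# Sketch of Parser-mode rendering: dot-path navigation, per-item
# templating, and joining with a separator.

def parse(data, template: str, root_path: str = "", separator: str = "\n") -> str:
    # Navigate the optional dot-path, e.g. "records" or "records.0".
    for part in [p for p in root_path.split(".") if p]:
        data = data[int(part)] if part.isdigit() else data[part]
    items = data if isinstance(data, list) else [data]
    rendered = []
    for item in items:
        if isinstance(item, dict):
            rendered.append(template.format(**item))   # keys as placeholders
        else:
            rendered.append(template.format(text=item))  # {text} for strings
    return separator.join(rendered)

payload = {"records": [{"name": "Ada", "age": 36}, {"name": "Grace", "age": 45}]}
result = parse(payload, "Name: {name}, Age: {age}", root_path="records")
```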

Inputs: JSON or DataFrame

Outputs: Message containing the final rendered text

Example Flow

The flow below demonstrates grouping Jira issues into a ‘Primary Group’ and storing the remaining Jira issues as ‘Remaining Jiras Primary’.

Parser Example


Using the following prompt, the LLM generates a JSON with two keys: primary_groups and remaining_jiras_primary.

You are a JIRA analysis expert focusing ONLY on cluster-based grouping.

INPUT DATA: Problem Set:
<problem-set>
{{problem_set_data}}
</problem-set>

TASK: Group JIRAs based ONLY on cluster IDs.
STRICT RULES:
ONLY group by cluster ID matches
MANDATORY : EXACT cluster ID should match
MANDATORY : Each group MUST have minimum 2 JIRAs in the jira_list
If a group has fewer than 2 JIRAs, dissolve that group and ALL JIRAs from that group MUST be moved to remaining_jiras_primary
Order JIRAs chronologically within groups
DO NOT group by any other attributes
NO partial matches allowed
NO inferred relationships
VERIFY all groups meet the minimum 2 JIRAs requirement BEFORE outputting
DO NOT explain your reasoning or corrections
DO NOT output any intermediate results
MANDATORY: Use double quotes for ALL strings in JSON (not single quotes)
MANDATORY: Do NOT add “JIRA” prefix to any ID - use the exact ID format from the input data

OUTPUT FORMAT:
{{
    "primary_groups": [
        {{
            "title": "Cluster - <Full Cluster ID>",
            "summary": "Clear description of cluster impact",
            "jira_list": ["EXACSOPS-1", "EXACSOPS2-2"]
        }}
    ],
    "remaining_jiras_primary": ["EXACSOPS-3", "EXACSOPS-4"]
}}

CRITICAL INSTRUCTION: Return ONLY the final VALID JSON with proper double quotes for all strings. Do not include ANY explanations, reasoning, comments, corrections, or text of any kind outside the JSON structure. The response must begin with {{ and end with }} without any other characters.

Now, if you want to send the data in primary_groups directly for review and route remaining_jiras_primary through another round of analysis, you can use Parser nodes to parse the JSON generated by the LLM and extract the data under each key. This requires two Parser nodes. Select Playground to execute the flow.

Parser Example Output


You can use the two outputs from the two Parser nodes for different purposes in a complex workflow.

Utilities

Sticky Notes

The Sticky Notes node lets you keep important information or instructions visible within your flow.

When to use?

How to add and configure?

  1. Drag the Sticky Notes node onto the canvas.
  2. Click the note to edit its text.
  3. Resize or move the note as needed.
  4. Choose your preferred color.

Inputs: Raw text

Outputs: Not applicable