AI & Intelligence Nodes
These nodes leverage LLMs to generate responses, classify text, extract structured data, and summarize conversations. They form the core intelligence layer of your agent's journey.
All AI nodes support error handling with Stop, Continue, Error Branch, and Fallback Message strategies, plus optional Retry Once.
AI Response
Type: ai_response
Generates a contextual reply using the LLM, optionally grounded in your knowledge base via retrieval-augmented generation (RAG). This is the primary node for open-ended AI conversation.
Properties
| Property | Type | Description |
|---|---|---|
| systemPrompt | string | Instructions that guide the AI's tone, scope, and behavior. |
| useKnowledgeBase | boolean | When enabled, the AI retrieves relevant content from linked knowledge bases before responding. |
| temperature | number | Controls randomness. Lower values (0.1–0.3) produce focused answers; higher values (0.7–1.0) produce more creative responses. |
| maxTokens | number | Maximum length of the generated response, in tokens. |
How it works
When the conversation reaches this node, the engine sends the conversation history and system prompt to the LLM. If useKnowledgeBase is enabled, it first performs a semantic search across the agent's linked knowledge bases and injects the most relevant chunks into the prompt context.
If AI Quick Replies are enabled in agent settings and no Quick Reply node follows, the AI will also generate contextual suggestion buttons.
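A rough sketch of how the node might assemble its prompt. The `search_knowledge_base` callable, the message shape, and `top_k=3` are illustrative assumptions, not the engine's actual API:

```python
def build_prompt(system_prompt, history, use_knowledge_base, search_knowledge_base):
    """Assemble the message list sent to the LLM for an AI Response node."""
    messages = [{"role": "system", "content": system_prompt}]
    if use_knowledge_base:
        # Semantic search over linked knowledge bases, using the latest
        # user message as the query; top chunks are injected as context.
        query = history[-1]["content"]
        chunks = search_knowledge_base(query, top_k=3)
        messages[0]["content"] += "\n\nRelevant knowledge:\n" + "\n\n".join(chunks)
    messages.extend(history)
    return messages
```

The key point is ordering: retrieval happens before the LLM call, so the model sees the knowledge chunks as part of its instructions rather than as user input.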
Example use case
A product advisor agent uses an AI Response node with a system prompt like "You are a helpful skincare consultant. Recommend products based on the user's skin type and concerns." with useKnowledgeBase enabled so the AI draws from the brand's product knowledge base.
AI Categorizer
Type: ai_categorizer
Classifies the user's most recent message into one of several predefined categories, then routes the conversation down the matching branch. Each category becomes a separate output handle on the node.
Properties
| Property | Type | Description |
|---|---|---|
| categories | array | List of categories. Each has name, label, description, examples (array of sample phrases), and nextNodeId. |
| confidenceThreshold | number | Minimum confidence score (0–1) required to commit to a category. |
| fallbackNodeId | string | Node to route to when no category meets the confidence threshold. |
How it works
The engine sends the user's message along with all category definitions to the LLM and asks it to classify the message. Each category's description and examples are included in the classification prompt, so more detail produces better accuracy.
If the LLM's confidence for the best-matching category is below confidenceThreshold, the flow follows the fallback path instead.
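The routing decision can be sketched like this; the shape of the LLM's classification result (`category` plus `confidence`) is an assumption for illustration:

```python
def route_category(result, categories, confidence_threshold, fallback_node_id):
    """Map an LLM classification result to the next node in the flow.

    `result` is assumed to look like {"category": "Billing", "confidence": 0.82}.
    """
    if result["confidence"] >= confidence_threshold:
        for cat in categories:
            if cat["name"] == result["category"]:
                return cat["nextNodeId"]
    # Low confidence, or an unknown category name: take the fallback path.
    return fallback_node_id
```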
Example use case
A support agent uses AI Categorizer with categories like "Billing", "Technical Issue", "Account Management", and "General Inquiry". Each category routes to a specialized sub-flow. The fallback path asks the user to clarify their request.
Intent
Type: intent
Detects the user's intent using a hybrid approach: keyword matching, AI-based classification, or both. Supports branching to different paths per detected intent.
Properties
| Property | Type | Description |
|---|---|---|
| intents | array | Keyword-based intents. Each has name, keywords (comma-separated), and nextNodeId. |
| categories | array | AI-based categories (same structure as AI Categorizer). Each has name, label, description, examples, and nextNodeId. |
| confidenceThreshold | number | Minimum confidence for AI classification. |
| fallbackNodeId | string | Node to route to when no intent or category matches. |
| question | string | Optional prompt to display before waiting for user input. |
| variable | string | Variable name to store the detected intent value. |
How it works
The engine first checks keyword-based intents. If a user's message contains any of the keywords for an intent, that intent is matched immediately (fast and deterministic). If no keyword match is found and AI categories are defined, it falls back to LLM-based classification using the same mechanism as AI Categorizer.
This hybrid approach lets you handle common, predictable inputs with keywords (low latency, no LLM cost) while catching everything else with AI classification.
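The deterministic first pass might look like this minimal sketch (function name and return convention are illustrative):

```python
def match_keyword_intent(message, intents):
    """First pass of the Intent node: case-insensitive keyword matching.

    Returns the matched intent's nextNodeId, or None to fall through
    to AI classification.
    """
    text = message.lower()
    for intent in intents:
        for keyword in intent["keywords"].split(","):
            if keyword.strip().lower() in text:
                return intent["nextNodeId"]
    return None
```

Because this pass is plain substring matching, it costs no LLM tokens and adds no latency; only messages it cannot match pay for an AI classification call.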
Example use case
An e-commerce agent uses Intent with keyword intents for "track order" (keywords: "track", "where is my order", "shipping status") and "return" (keywords: "return", "refund", "exchange"), plus AI categories for less predictable intents like "product question" and "complaint".
Classify
Type: classify
A general-purpose AI classifier. Similar to AI Categorizer but accepts a custom prompt, making it suitable for classifying any text -- not just the user's last message.
Properties
| Property | Type | Description |
|---|---|---|
| categories | array | Classification categories. Each has name, label, description, examples, and nextNodeId. |
| confidenceThreshold | number | Minimum confidence required. |
| fallbackNodeId | string | Fallback node when confidence is too low. |
| prompt | string | Custom prompt for the classification task. Can reference variables with double-brace syntax. |
How it works
Unlike AI Categorizer (which always classifies the latest user message), Classify uses a custom prompt that you control. This prompt can include variable interpolation, so you can classify data stored earlier in the flow -- API responses, extracted fields, aggregated text, or anything stored in a variable.
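Variable interpolation with double-brace syntax can be sketched as a simple regex substitution (the exact resolution rules of the real engine may differ, e.g. for unknown names):

```python
import re

def interpolate(template, variables):
    """Resolve {{ var }} placeholders against stored flow variables.

    Unknown variables are left untouched rather than erased, so a typo
    stays visible in the rendered prompt.
    """
    def replace(match):
        name = match.group(1)
        return str(variables[name]) if name in variables else match.group(0)

    return re.sub(r"\{\{\s*(\w+)\s*\}\}", replace, template)
```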
Example use case
An agent collects a product review via a Question node, stores it in {{ reviewText }}, then uses Classify with the prompt "Classify the following product review by sentiment: {{ reviewText }}". Categories are "Positive", "Neutral", and "Negative", each routing to different follow-up flows.
Data Collector
Type: data_collector
Extracts structured information from the conversation. Can pull data from the existing conversation history or prompt the user with follow-up questions for missing fields.
Properties
| Property | Type | Description |
|---|---|---|
| collectorFields | array | Fields to collect. Each has variableName, description (tells the AI what to extract), and type (text, email, phone, date, number). |
| extractFromHistory | boolean | When enabled, the AI scans the conversation history first and extracts any fields that were already mentioned. |
| continueAutomatically | boolean | When enabled, the flow advances automatically once all fields are collected without waiting for an explicit "continue" action. |
How it works
The engine reviews the conversation and attempts to extract each field. If extractFromHistory is enabled, it looks through past messages first. For any fields that are still missing, the AI generates natural follow-up questions and waits for the user's response. Each collected value is stored in the variable specified by variableName.
The type property provides validation -- an email field rejects values that are not valid email addresses, a phone field expects a phone number, and so on.
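The per-type validation could be sketched as below; the patterns are deliberate simplifications and the engine's real rules may be stricter:

```python
import re

def validate_field(value, field_type):
    """Illustrative type checks for Data Collector field values."""
    if field_type == "email":
        return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", value) is not None
    if field_type == "phone":
        # Digits plus common separators, at least 7 characters long.
        return re.fullmatch(r"\+?[\d\s().-]{7,}", value) is not None
    if field_type == "number":
        try:
            float(value)
            return True
        except ValueError:
            return False
    # text / date: accept any non-empty value (date parsing omitted here).
    return bool(value.strip())
```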
Example use case
A lead qualification agent uses Data Collector with fields for name (text), email (email), company (text), and teamSize (number). With extractFromHistory enabled, if the user already mentioned "I'm Sarah from Acme" earlier in the conversation, the AI extracts the name and company automatically and only asks for the remaining fields.
Summarize
Type: summarize
Generates a summary of the conversation so far and stores it in a variable for use in later nodes.
Properties
| Property | Type | Description |
|---|---|---|
| summaryVariable | string | Variable name where the generated summary is stored. |
| summaryInstruction | string | Custom instructions for the summary (e.g., "Focus on the customer's main issue and any resolution steps discussed"). |
| summaryFormat | string | Output format: paragraph, bullets, or json. |
How it works
The engine sends the full conversation history to the LLM along with your custom instructions and requested format. The result is stored in summaryVariable and can be referenced in subsequent nodes using double-brace syntax ({{ var }}).
The json format is useful when you need to pass structured summary data to an API Call or Webhook node downstream.
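One plausible way the node composes its request, with the format hint wording being an assumption for illustration:

```python
def build_summary_request(history, instruction, summary_format):
    """Compose the prompt a Summarize node might send to the LLM."""
    format_hints = {
        "paragraph": "Respond with one concise paragraph.",
        "bullets": "Respond with a bullet-point list.",
        "json": "Respond with a single JSON object and nothing else.",
    }
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in history)
    return f"{instruction}\n{format_hints[summary_format]}\n\nConversation:\n{transcript}"
```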
Example use case
Before a Handoff node transfers the conversation to a human agent, a Summarize node generates a bullet-point summary of the customer's issue, attempted solutions, and current status. The summary is stored in {{ conversationSummary }} and included in the handoff message, giving the human agent full context.
AI Batch Classify
Type: ai_batch_classify
Classifies multiple items in a single AI call. Designed for Action agents that process arrays of data -- campaign lists, product catalogs, lead batches, and similar datasets.
Properties
| Property | Type | Description |
|---|---|---|
| inputVariable | string | Variable containing the array of items to classify. |
| outputVariable | string | Variable where the classified results are stored. |
| categories | array | Classification categories. Each has name, label, and description. |
| instructions | string | Custom instructions for the classification task. |
| contextFields | array of strings | Which fields from each item to include as context for classification. |
| batchSize | number | Number of items to classify per LLM call. Larger batches are more efficient but may reduce accuracy on complex classifications. |
| includeReasons | boolean | When enabled, the AI provides a reasoning explanation for each classification decision. |
How it works
The engine reads the array from inputVariable, splits it into batches of batchSize, and sends each batch to the LLM for classification. For each item, only the fields listed in contextFields are included in the prompt. The output is an array of objects with the original item data plus a category field (and optionally a reason field).
This is significantly more efficient than classifying items one at a time in a Loop, since it reduces the number of LLM calls.
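The batching mechanics can be sketched as follows, with the LLM call stubbed out as a `classify_batch` callable (an assumption for illustration):

```python
def classify_in_batches(items, batch_size, context_fields, classify_batch):
    """Split items into batches; one classify_batch call (one LLM call) per batch."""
    results = []
    for start in range(0, len(items), batch_size):
        batch = items[start:start + batch_size]
        # Only the configured context fields are shown to the model.
        context = [{field: item.get(field) for field in context_fields}
                   for item in batch]
        labels = classify_batch(context)
        # Merge each original item with its assigned category.
        results.extend({**item, "category": label}
                       for item, label in zip(batch, labels))
    return results
```

For 50 campaigns and a batch size of 10, this means 5 LLM calls instead of 50, which is where the efficiency gain over a per-item Loop comes from.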
Example use case
An ad optimization Action agent fetches 50 campaign performance records via an API Call, stores them in {{ campaigns }}, then uses AI Batch Classify with categories "Performing Well", "Needs Attention", and "Critical" based on context fields like spend, conversions, and costPerConversion. The classified output drives downstream logic: campaigns marked "Critical" are paused, while "Needs Attention" campaigns get budget adjustments.
AI Image Lint (v3)
Type: ai_image_lint
Validates source product images against destination platform requirements before sync. Catches resolution, format, file size, and aspect ratio issues before they cause rejections. Optionally suggests better images from the product's Shopify gallery.
Properties
| Property | Type | Description |
|---|---|---|
| destinations[] | string[] | Which destinations to validate images for (Meta, Google, TikTok, Amazon, ACP). |
| systemPrompt | string | Override the default validation prompt. |
| model | string | LLM model for smart fallback analysis. Default: Claude Haiku 4.5. |
| enableSmartFallback | boolean | When enabled, ranks all available Shopify images and suggests the best one per destination when the primary image fails. |
| confidenceThreshold | number | Minimum confidence score (0–1) for a suggestion to appear in the approval queue. Default: 0.7. |
| executionMode | string | bulk_delegate (default — processes all products server-side in Syncer kit). Per-product mode is planned for v4. |
How it works
Pre-sync only. Runs before the write step in the Catalog Feed Syncer flow.
For each product, the node checks the primary image against each selected destination's requirements:
- Resolution: minimum width/height, recommended resolution
- Format: allowed image formats (JPEG, PNG, WebP, etc.)
- File size: maximum allowed by the platform
- Aspect ratio: preferred ratios per destination
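The checks above amount to comparing image metadata against a per-destination requirement table. A minimal sketch — the requirement numbers below are placeholders, not the platforms' real limits:

```python
# Placeholder requirement table; consult each platform's current image
# specs for the actual limits.
REQUIREMENTS = {
    "amazon": {"min_width": 1000, "min_height": 1000,
               "formats": {"jpeg", "png"}, "max_bytes": 10_000_000},
    "tiktok": {"min_width": 500, "min_height": 500,
               "formats": {"jpeg", "png", "webp"}, "max_bytes": 5_000_000},
}

def lint_image(image, destination):
    """Return the list of violations for one image against one destination."""
    req = REQUIREMENTS[destination]
    violations = []
    if image["width"] < req["min_width"] or image["height"] < req["min_height"]:
        violations.append("resolution_too_low")
    if image["format"] not in req["formats"]:
        violations.append("unsupported_format")
    if image["bytes"] > req["max_bytes"]:
        violations.append("file_too_large")
    return violations
```

An image can fail for one destination while passing for another, which is why the node reports violations per destination rather than as a single pass/fail.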
When Smart Image Selection is enabled and the primary image fails, the node scores all other Shopify images using:
- Destination requirement compliance (required)
- Resolution quality (higher = better)
- Aspect ratio match for the destination
- Cached Claude vision analysis (professional quality, product visibility, background cleanliness)
Results appear as suggestion cards in the approval queue, each showing the violation and (if applicable) the recommended alternative image.
Pre-sync vs Post-sync
This node runs before the write step because it validates source images against platform requirements (documented rules). The separate AI Image Audit node runs after writing because it compares source images against what the destination actually served back — which doesn't exist until after the write + status poll.
Example use cases
Syncer kit: Add this node between fetch and write. Enable all destinations. Turn on smart fallback. The node catches "image too small for Amazon" or "wrong format for TikTok" before they hit the platform APIs.
Note: In v3, these image AI nodes are available in the Catalog Feed Syncer kit only (bulk_delegate execution mode). Per-product execution for use in other kits is planned for v4.
AI Image Audit (v3)
Type: ai_image_audit
Compares source images against destination-served images after sync to detect visual changes introduced by the platform (cropping, compression, color shifts, watermarks). Uses perceptual hashing (pHash) for fast comparison, escalating to Claude vision analysis for detailed diffs when changes exceed the threshold.
Properties
| Property | Type | Description |
|---|---|---|
| destinations[] | string[] | Which destinations to audit images for. |
| systemPrompt | string | Override the default vision analysis prompt. |
| model | string | LLM model for vision escalation. Default: Claude Haiku 4.5 (supports vision). |
| pHashThreshold | number | Hamming distance threshold (0–64). 0 = identical, 64 = completely different. Default: 12, which catches meaningful changes while ignoring minor compression artifacts. |
| confidenceThreshold | number | Minimum confidence to surface in approval queue. Default: 0.7. |
| executionMode | string | bulk_delegate (default). Per-product mode planned for v4. |
How it works
Post-sync only. Runs after the status poll step — must come after the write step so destination-served images exist.
For each successfully synced product, the node:
- Computes the pHash of the source image
- Fetches the image as served by the destination platform
- Computes the pHash of the destination-served image
- Compares via Hamming distance
If the distance exceeds pHashThreshold:
- Low severity (distance 13–20): logged but may be suppressed
- Medium severity (distance 21–32): escalated to Claude vision for detailed diff description
- High severity (distance 33+): always escalated and flagged
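The comparison and severity tiers above can be sketched directly; for 64-bit perceptual hashes the Hamming distance is just the popcount of the XOR:

```python
def hamming_distance(hash_a, hash_b):
    """Bitwise Hamming distance between two 64-bit perceptual hashes."""
    return bin(hash_a ^ hash_b).count("1")

def audit_severity(distance, threshold=12):
    """Severity tiers from the docs; None means below threshold (no finding)."""
    if distance <= threshold:
        return None
    if distance <= 20:
        return "low"
    if distance <= 32:
        return "medium"
    return "high"
```

Note that computing the pHashes themselves requires an image-hashing implementation; the sketch assumes the two 64-bit hashes are already available.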
Claude vision analysis returns structured output: {same, differences[], severity, confidence} — describing what changed (cropping, color shift, quality degradation, etc.).
Example use case
A Syncer agent syncs 500 products to Google Merchant. After the write + status poll, the audit node discovers that 3 products have images that Google cropped significantly (pHash distance 28). Claude vision identifies "Product partially cut off on the right side due to automatic square cropping." The suggestion card shows a side-by-side comparison with the description, and the user can choose to re-upload a pre-cropped version.