Agent builder & runtime
Build a chat-triggered graph in the dashboard, preview the draft, then publish a snapshot that powers the public chat.
Agents are created and edited in /dashboard/agents. Publish to make the chat page available at /agents/[slug].
Draft preview is available inside the dashboard and uses the draft graph (auth-gated).
For a high-level orientation, see Agents overview.
The agent runtime executes a DAG. Each node can read upstream outputs using workflow-style templates like {{@nodeId:Label.field}}.
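As a rough illustration of how such a reference might be resolved against upstream outputs, here is a minimal sketch. The function name `resolveTemplates` and the exact grammar (for example, whether node labels may contain dots, or how missing values render) are assumptions, not the runtime's actual API:

```typescript
// Resolves {{@nodeId:Label.path.to.field}} against a map of upstream node
// outputs. The Label portion is treated as display-only and ignored.
type Outputs = Record<string, unknown>;

function resolveTemplates(template: string, outputs: Record<string, Outputs>): string {
  return template.replace(
    /\{\{@([\w-]+):[^.}]+\.([\w.]+)\}\}/g,
    (_match, nodeId: string, path: string) => {
      // Walk the dotted path into the referenced node's output object.
      let value: unknown = outputs[nodeId];
      for (const key of path.split(".")) {
        value = (value as Record<string, unknown> | undefined)?.[key];
      }
      // Assumption: unresolved references render as an empty string.
      return value === undefined ? "" : String(value);
    },
  );
}
```

For example, with a Chat Trigger node whose id is `trigger`, the template `{{@trigger:Chat Trigger.input_text}}` would pull `input_text` from that node's output.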
- Chat Trigger: starts a run on each user message and exposes input_text, current_date, conversation_id, and state.
- If/Else: evaluates a condition and routes via true/false branches.
- Transform (JSON Template): builds a JSON object from a template (values can include templates).
- Set State: deep-merges a patch object into conversation state for later turns.
- HTTP Request: GET-only, with an allowlist set at the agent level (publish-safe subset).
- Reply: the terminal LLM node. Configures model + instructions, and optional tools (file search, web search, MCP servers).
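The Set State node's deep merge can be sketched as follows. This is an illustrative implementation, not the runtime's code; in particular, the choice that arrays and scalars are replaced wholesale (rather than merged) is an assumption:

```typescript
// Deep-merges a patch into conversation state: plain objects merge
// recursively; arrays, scalars, and null are replaced by the patch value.
type JsonObject = { [key: string]: unknown };

function isPlainObject(v: unknown): v is JsonObject {
  return typeof v === "object" && v !== null && !Array.isArray(v);
}

function deepMerge(state: JsonObject, patch: JsonObject): JsonObject {
  const out: JsonObject = { ...state };
  for (const [key, value] of Object.entries(patch)) {
    out[key] =
      isPlainObject(out[key]) && isPlainObject(value)
        ? deepMerge(out[key] as JsonObject, value) // merge nested objects
        : value; // replace everything else
  }
  return out;
}
```

So a patch of `{ user: { plan: "pro" } }` adds `plan` under `user` without discarding keys already stored there from earlier turns.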
Reply nodes run with the AI SDK in a tool loop. When enabled, the assistant can call:
- File search (vector store): choose one store (MVP) to enable file_search.
- Web search: optionally restrict to an allowed domain list, and optionally force web search on step 0.
- MCP servers: select MCP servers by ID (gated by plan tier).
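A domain restriction check of the kind the web search option implies could look like the sketch below. The function name and the matching rule (hostname equality or subdomain suffix) are assumptions about behavior the doc does not specify:

```typescript
// Returns true if the URL's hostname is on the allowed list, either exactly
// or as a subdomain (e.g. "docs.example.com" passes for "example.com").
function isAllowedDomain(url: string, allowed: string[]): boolean {
  const host = new URL(url).hostname;
  return allowed.some((domain) => host === domain || host.endsWith("." + domain));
}
```

Suffix matching on "." + domain avoids the classic pitfall where "badexample.com" would slip past a naive `endsWith("example.com")` check.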
Sources are accumulated from tool steps and stored on the final assistant message as metadata. The chat UI can display citations for web results and file matches based on agent settings.
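Accumulating sources across tool steps might look like the following sketch. The `Source` shape and de-duplication by reference are assumptions; the actual message metadata format is not specified here:

```typescript
// Collects citation sources emitted by each tool step into one de-duplicated
// list, preserving first-seen order, for storage on the assistant message.
type Source = { kind: "web" | "file"; title: string; ref: string };

function collectSources(steps: { sources?: Source[] }[]): Source[] {
  const seen = new Set<string>();
  const out: Source[] = [];
  for (const step of steps) {
    for (const source of step.sources ?? []) {
      if (!seen.has(source.ref)) {
        seen.add(source.ref); // skip repeat citations of the same ref
        out.push(source);
      }
    }
  }
  return out;
}
```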