Node Overview

Every element on the Voiceblox canvas is a node. Nodes are grouped into five categories.

Agent

The Agent node is the central orchestrator. There is exactly one per flow.
| Node | Description |
| --- | --- |
| Agent | Configures the agent’s name, persona (system prompt), and language; connects to all component and tool nodes |

Component

Component nodes define the AI capabilities of your agent.
| Node | Handle | Description |
| --- | --- | --- |
| LLM | `llm_in` | Language model for generating responses |
| TTS | `tts_in` | Text-to-speech for voice output |
| STT | `stt_in` | Speech-to-text for voice input |
| Avatar | `avatar_in` | Visual avatar rendering (optional) |

Conversation

Conversation nodes form the step chain — the sequence of actions during a call.
| Node | Description |
| --- | --- |
| Start | Entry point. The agent speaks its opening line. |
| Burst | A fixed number of back-and-forth exchanges. |
| Open Talk | Unlimited free-form conversation on a topic. |
| Timer | Conversation capped at a time duration. |
| If/Else | Binary branch based on an LLM-evaluated condition. |
| Categorize | Multi-way branch by semantic classification. |
| Transfer | Hand off to another agent or phone number. |
| End | Terminates the conversation. |
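The step chain above is a linked sequence: each node points to the next, and branch nodes (If/Else, Categorize) point to several labeled alternatives. A minimal Python sketch of how such a chain could be represented and walked follows; the dictionary schema and field names are illustrative assumptions, not Voiceblox's actual export format.

```python
# Illustrative model of a conversation step chain (hypothetical schema,
# not the actual Voiceblox export format).

def walk_chain(nodes, start_id, branch_choices):
    """Follow the chain from `start_id`, resolving branch nodes via
    `branch_choices` (a mapping of branch node id -> chosen edge label)."""
    path = []
    node_id = start_id
    while node_id is not None:
        node = nodes[node_id]
        path.append(node["type"])
        nxt = node.get("next")
        if isinstance(nxt, dict):              # branch node: pick one labeled edge
            node_id = nxt[branch_choices[node_id]]
        else:                                  # linear node, or End (next is None)
            node_id = nxt
    return path

nodes = {
    "start": {"type": "Start", "next": "check"},
    "check": {"type": "If/Else", "next": {"yes": "talk", "no": "end"}},
    "talk":  {"type": "Open Talk", "next": "end"},
    "end":   {"type": "End", "next": None},
}

print(walk_chain(nodes, "start", {"check": "yes"}))
# → ['Start', 'If/Else', 'Open Talk', 'End']
```

Choosing the `"no"` branch instead would skip Open Talk and go straight to End, which is exactly how an If/Else node shortcuts the chain.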

Tools

Tool nodes connect to the Agent’s `tools_in` handle and give the LLM access to external capabilities during a conversation.
| Node | Description |
| --- | --- |
| MCP | Exposes tools from an MCP server to the LLM |
| Exa | Gives the LLM real-time web search via Exa.ai |
| SIP Transfer | Lets the LLM transfer a call to a human phone number |

Post-Processing

Post-processing nodes run after the conversation ends. Chain them after an End node.
| Node | Description |
| --- | --- |
| Structured Output | Extracts structured data from the conversation using an LLM |
| Webhook | Fires an HTTP POST with conversation data to an external endpoint |
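To make the Webhook node concrete, the sketch below assembles the kind of JSON body such a POST might carry. The payload field names (`conversation_id`, `transcript`, `structured_output`) are assumptions for illustration, not the documented Voiceblox payload schema.

```python
import json

def build_webhook_payload(conversation_id, transcript, structured_output=None):
    """Assemble a JSON body a Webhook node might POST after a call ends.
    Field names are illustrative, not the actual Voiceblox payload."""
    payload = {
        "conversation_id": conversation_id,
        "transcript": transcript,
    }
    if structured_output is not None:
        # Output of an upstream Structured Output node, if one is chained.
        payload["structured_output"] = structured_output
    return json.dumps(payload)

body = build_webhook_payload(
    "conv_123",
    [{"role": "agent", "text": "Hello!"}],
    structured_output={"intent": "greeting"},
)
```

Because Structured Output runs before Webhook in the post-processing chain, its extracted data can be folded into the same POST, as shown.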

Connection diagram

[LLM] ──llm──▶ [Agent] ◀──tools── [MCP / Exa / SIP Transfer]
[TTS] ──tts──▶    │
[STT] ──stt──▶    ▼
              [Start] ──▶ [Burst] ──▶ [End]
                                        │
                                        ▼
                          [Structured Output] ──▶ [Webhook]
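The diagram implies a few structural rules: exactly one Agent per flow, a Start node to enter the step chain, and component nodes wired to the Agent via their matching handles. These can be checked mechanically; the sketch below assumes a simple graph representation (node ids mapped to types, plus edge tuples) that is hypothetical, not Voiceblox's internal format.

```python
def validate_flow(nodes, edges):
    """Check structural rules implied by the connection diagram.
    `nodes` maps id -> node type; `edges` is a list of
    (source_id, target_id, target_handle) tuples. Hypothetical schema."""
    errors = []
    agents = [nid for nid, t in nodes.items() if t == "Agent"]
    if len(agents) != 1:
        errors.append("a flow needs exactly one Agent node")
    if "Start" not in nodes.values():
        errors.append("the step chain needs a Start node")
    # Component nodes must connect to the Agent via their matching handle.
    handle_for = {"LLM": "llm_in", "TTS": "tts_in", "STT": "stt_in", "Avatar": "avatar_in"}
    for src, dst, handle in edges:
        expected = handle_for.get(nodes[src])
        if expected and (nodes[dst] != "Agent" or handle != expected):
            errors.append(f"{nodes[src]} must connect to Agent via {expected}")
    return errors

nodes = {"a": "Agent", "l": "LLM", "s": "Start", "e": "End"}
edges = [("l", "a", "llm_in"), ("s", "e", None)]
print(validate_flow(nodes, edges))  # → []
```

A flow with two Agent nodes, or an LLM wired to anything other than the Agent's `llm_in` handle, would come back with a non-empty error list.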