In-Browser Playground

The Test Panel lets you run a live voice session with your agent directly in the browser. No deployment needed.

Requirements

  • LIVEKIT_URL, LIVEKIT_API_KEY, LIVEKIT_API_SECRET in .env.local
  • At least one LLM, TTS, and STT provider configured in your flow
  • Microphone access in the browser
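A minimal .env.local might look like the following. The values here are placeholders — use the URL, API key, and secret from your own LiveKit project settings:

```
LIVEKIT_URL=wss://your-project.livekit.cloud
LIVEKIT_API_KEY=your-api-key
LIVEKIT_API_SECRET=your-api-secret
```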

Starting a session

  1. Click Test in the top toolbar
  2. The Test Panel opens on the right side of the canvas
  3. Click Start Session
  4. Allow microphone access when prompted
  5. The agent speaks its opening line — start talking

Agent runtime

The playground dispatches agent jobs to the Python worker process, so pnpm agent-python:dev must be running in a separate terminal. See Agent Runtimes.

What happens under the hood

When you click Start Session:
  1. The frontend POSTs to /api/agent-session with your current flow’s nodes and edges
  2. The server converts the graph to an AgentConfig via graphToConfig()
  3. A LiveKit room is created with the config stored in room metadata
  4. The job is dispatched to the Python agent worker via AgentDispatchClient
  5. The server returns a JWT access token for joining the room
  6. The frontend joins the room using the LiveKit SDK
  7. The agent publishes voiceblox.agent.events data messages that the frontend listens to
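The event handling in step 7 can be sketched roughly as below. This is illustrative rather than the actual frontend code: the decodeAgentEvent helper and the event field names are assumptions, while the voiceblox.agent.events topic name comes from the flow above. In a real client you would wire this to the LiveKit SDK's RoomEvent.DataReceived handler.

```typescript
// Hypothetical shape of a data message on the voiceblox.agent.events
// topic. The field names here are assumptions for illustration.
interface AgentEvent {
  type: string; // e.g. "transcript" or "step_changed" (assumed values)
  [key: string]: unknown;
}

// LiveKit data messages arrive as raw bytes; decode and parse as JSON.
function decodeAgentEvent(payload: Uint8Array): AgentEvent {
  return JSON.parse(new TextDecoder().decode(payload)) as AgentEvent;
}

// In a real client this would be registered on RoomEvent.DataReceived,
// ignoring messages published on other topics.
function handleData(payload: Uint8Array, topic?: string): AgentEvent | null {
  if (topic !== "voiceblox.agent.events") return null;
  return decodeAgentEvent(payload);
}
```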

During a session

  • Speak into your microphone to interact with the agent
  • Watch the active conversation step highlight on the canvas in real time
  • View the conversation transcript in the Test Panel
  • The session ends when the agent reaches an End node or you click End Session

Ending a session

Click End Session in the Test Panel to disconnect from the LiveKit room.

Session persistence

Test sessions and their conversation transcripts are automatically saved to the local SQLite database. After a session ends, you can review the full conversation history from the History tab in the Test Panel — including message timestamps, session duration, and the conversation step each message belonged to. Previous sessions are listed per agent, so you can compare runs across different versions of your flow.
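The transcript rows described above can be modeled roughly as follows. The interface and field names are assumptions for illustration — the actual SQLite schema may differ:

```typescript
// Assumed shape of a saved transcript message; the real schema may differ.
interface TranscriptMessage {
  role: "user" | "agent";
  text: string;
  stepId: string;    // the conversation step this message belonged to
  timestamp: number; // Unix milliseconds
}

// Session duration is the span between the first and last message.
function sessionDurationMs(messages: TranscriptMessage[]): number {
  if (messages.length === 0) return 0;
  const times = messages.map((m) => m.timestamp);
  return Math.max(...times) - Math.min(...times);
}

// Group messages by the step they belonged to, preserving message order.
function groupByStep(
  messages: TranscriptMessage[]
): Map<string, TranscriptMessage[]> {
  const groups = new Map<string, TranscriptMessage[]>();
  for (const m of messages) {
    const bucket = groups.get(m.stepId) ?? [];
    bucket.push(m);
    groups.set(m.stepId, bucket);
  }
  return groups;
}
```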

Troubleshooting

See Troubleshooting for common issues.