Google ADK (Agent Development Kit)
This guide shows how to integrate the Telcoflow SDK with the Google Agent Development Kit (ADK) for complex AI agent behavior, including multi-agent orchestration, long-term memory, and structured tools.
Overview
The integration bridges Telcoflow’s bidirectional audio stream with ADK’s Runner and LiveRequestQueue:
- Caller audio -> ADK: Incoming phone audio is sent to the LiveRequestQueue as real-time input
- ADK audio -> Caller: ADK events containing model audio are forwarded to the caller via send_audio()
Full Example
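A sketch of the complete integration is below. The Telcoflow call surface (`call.audio_stream()`, `call.send_audio()`, `call.clear_send_audio_buffer()`, `call.call_id`) follows the descriptions in this guide, and the model name is a placeholder for any Live-API-capable Gemini model; exact keyword names for `run_live()` vary between ADK versions, so treat this as a starting point rather than copy-paste-ready code:

```python
import asyncio

from google.genai import types
from google.adk.agents import Agent, LiveRequestQueue
from google.adk.agents.run_config import RunConfig, StreamingMode
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService

APP_NAME = "telcoflow-adk"  # illustrative app name

agent = Agent(
    name="phone_agent",
    model="gemini-2.0-flash-live-001",  # placeholder: any Live-capable model
    instruction="You are a helpful phone assistant. Keep answers short.",
)
session_service = InMemorySessionService()
runner = Runner(app_name=APP_NAME, agent=agent, session_service=session_service)


async def stream_to_adk(call, live_request_queue):
    # Caller audio -> ADK: wrap each chunk as a Blob and send it in real time.
    async for chunk in call.audio_stream():
        live_request_queue.send_realtime(
            types.Blob(data=chunk, mime_type="audio/pcm")
        )


async def receive_from_adk(call, session, live_request_queue):
    # ADK audio -> caller: forward model audio; clear the buffer on interruption.
    run_config = RunConfig(
        streaming_mode=StreamingMode.BIDI,
        response_modalities=["AUDIO"],
    )
    async for event in runner.run_live(
        session=session,  # some ADK versions take user_id=/session_id= instead
        live_request_queue=live_request_queue,
        run_config=run_config,
    ):
        if event.interrupted:
            await call.clear_send_audio_buffer()
        if event.content and event.content.parts:
            for part in event.content.parts:
                if part.inline_data and part.inline_data.data:
                    await call.send_audio(part.inline_data.data)


async def handle_call(call):
    # One ADK session per call, keyed by call_id.
    session = await session_service.create_session(
        app_name=APP_NAME, user_id=call.call_id, session_id=call.call_id
    )
    live_request_queue = LiveRequestQueue()
    await asyncio.gather(
        stream_to_adk(call, live_request_queue),
        receive_from_adk(call, session, live_request_queue),
    )
```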
How It Works
ADK Components
- Agent - Defines the AI model and system instruction
- Runner - Orchestrates agent execution and session management
- InMemorySessionService - Stores session state (swap for a persistent store in production)
- LiveRequestQueue - Accepts real-time audio input and feeds it to the runner
Stream to ADK
The stream_to_adk() coroutine reads audio chunks from call.audio_stream(), wraps them as types.Blob objects, and sends them to the LiveRequestQueue using send_realtime().
Receive from ADK
The receive_from_adk() coroutine runs runner.run_live() with StreamingMode.BIDI and response_modalities=["AUDIO"]. For each event:
- Interruption: When event.interrupted is True, clear_send_audio_buffer() is called to stop queued audio
- Model audio: When event.content contains inline_data, the raw audio is forwarded to the caller
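The per-event logic can be sketched as a small helper. The `handle_adk_event` name and the `call` methods are illustrative; the event fields (`interrupted`, `content.parts`, `inline_data`) follow ADK's Event model:

```python
async def handle_adk_event(call, event):
    # Interruption: the caller started speaking over the model, so drop
    # any model audio still queued for playback.
    if event.interrupted:
        await call.clear_send_audio_buffer()
    # Model audio: forward raw inline audio bytes to the caller.
    if event.content and event.content.parts:
        for part in event.content.parts:
            if part.inline_data and part.inline_data.data:
                await call.send_audio(part.inline_data.data)
```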
Session Management
Each call gets its own session, keyed by call.call_id. This allows ADK to maintain conversation context within a single call. For cross-call memory, use a persistent session service.
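A sketch of per-call session creation, assuming the `call.call_id` attribute described above; the app name and SQLite URL are illustrative:

```python
from google.adk.sessions import InMemorySessionService

session_service = InMemorySessionService()


async def session_for_call(call, app_name="telcoflow-adk"):
    # Key the session by call_id so context lasts exactly one call.
    return await session_service.create_session(
        app_name=app_name, user_id=call.call_id, session_id=call.call_id
    )


# For cross-call memory, swap in ADK's persistent session service:
# from google.adk.sessions import DatabaseSessionService
# session_service = DatabaseSessionService(db_url="sqlite:///./sessions.db")
```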
When to Use ADK vs. GenAI SDK
Use the GenAI SDK integration for straightforward voice AI. Use ADK when you need structured agents, tools, or multi-agent coordination.
Environment Variables
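ADK reads Gemini credentials from standard environment variables (Telcoflow-specific variables are covered elsewhere in these docs). Typical settings, with placeholder values:

```shell
# Option 1: Google AI Studio (API key)
export GOOGLE_GENAI_USE_VERTEXAI=FALSE
export GOOGLE_API_KEY="your-api-key"

# Option 2: Vertex AI
export GOOGLE_GENAI_USE_VERTEXAI=TRUE
export GOOGLE_CLOUD_PROJECT="your-project-id"
export GOOGLE_CLOUD_LOCATION="us-central1"
```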
Next Steps
- Google GenAI Integration - Simpler integration for basic voice AI
- Audio Streaming - Buffer management and interruption handling
- Use Cases - Combine ADK with escalation patterns
