## Official Resources
## Key Features
- Multi-agent Orchestration: Hierarchical, sequential, parallel, or loop workflows for complex agent coordination.
- Rich Tool Ecosystem: Pre-built tools (Search, Code Exec), custom Python functions, OpenAPI endpoints, MCP servers, or other agents as tools.
- Streaming Support: Bidirectional SSE, WebSocket, audio, and video streaming for interactive agents.
- Built-in Evaluation: End-to-end response and step-level evaluation tooling for agent performance.
- Deploy Anywhere: Container-ready with native integration for Vertex AI Agent Engine and Cloud Run.
- Developer UI (adk-web): Angular-based UI for real-time debugging, tracing, and workflow visualization.
- Open Protocols: Supports Agent2Agent (A2A) and Model Context Protocol (MCP) for interoperability.
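The workflow styles listed above (sequential, parallel, loop) can be sketched conceptually in plain Python. This is an illustration of the pattern only, not the ADK API (ADK ships dedicated workflow agents such as `SequentialAgent` for this):

```python
import asyncio

# Plain-Python sketch of a sequential workflow: each "agent" is an async
# step that reads and extends a shared state dict. Illustrates the
# orchestration pattern only; it is not the ADK API.

async def research(state: dict) -> dict:
    state["notes"] = f"notes on {state['topic']}"
    return state

async def summarize(state: dict) -> dict:
    state["summary"] = f"summary of {state['notes']}"
    return state

async def run_sequential(steps, state: dict) -> dict:
    # Sequential orchestration: each step sees its predecessor's output.
    for step in steps:
        state = await step(state)
    return state

result = asyncio.run(run_sequential([research, summarize], {"topic": "ADK"}))
print(result["summary"])
```

A parallel workflow would instead fan the steps out with `asyncio.gather` and merge their results; a loop workflow would repeat steps until an exit condition is met.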
## Code Examples

### Installation

```bash
pip install google-adk
```
### Single Agent Setup

```python
from google.adk.agents import Agent
from google.adk.tools import google_search

root_agent = Agent(
    name="search_assistant",
    model="gemini-2.0-flash",
    instruction="You are a helpful assistant. Answer questions using Google Search when needed.",
    tools=[google_search],
)
```
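Beyond pre-built tools like `google_search`, ADK can wrap a plain Python function as a tool, using its name, type hints, and docstring to describe it to the model. A minimal sketch, with hard-coded weather data standing in for a real API call:

```python
def get_weather(city: str) -> dict:
    """Look up the current weather report for a city.

    Args:
        city: Name of the city to query.

    Returns:
        A dict with a status and either a report or an error message.
    """
    # Hard-coded data for illustration; a real tool would call a weather API.
    reports = {"london": "cloudy, 15°C", "tokyo": "sunny, 22°C"}
    report = reports.get(city.lower())
    if report is None:
        return {"status": "error", "message": f"No data for {city}"}
    return {"status": "success", "report": report}

# Pass the function directly, e.g. Agent(..., tools=[get_weather]).
```

Returning a structured dict (rather than a bare string) gives the model an explicit success/error signal to reason about.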
### Multi-Agent Coordination

```python
from google.adk.agents import LlmAgent


def get_weather(city: str) -> dict:
    """Return the current weather for a city (stubbed for this example)."""
    return {"status": "success", "report": f"It is sunny in {city}."}


greeting_agent = LlmAgent(
    name="greeter",
    model="gemini-2.0-flash",
    instruction="Provide a friendly greeting only.",
    description="Handles greetings",
)

weather_agent = LlmAgent(
    name="weather",
    model="gemini-2.0-flash",
    instruction="Use the get_weather tool to answer weather questions.",
    description="Returns weather data",
    tools=[get_weather],
)

root_agent = LlmAgent(
    name="coordinator",
    model="gemini-2.0-flash",
    instruction="Delegate to sub-agents based on user intent.",
    sub_agents=[greeting_agent, weather_agent],
)
```
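Under the hood, the coordinator's LLM decides where to delegate by matching the user's intent against each sub-agent's `description`. A plain-Python caricature of that routing decision (keyword matching stands in for the model's judgment; this is not the ADK API):

```python
# Each candidate sub-agent, keyed by name, with the description the
# coordinator would weigh when deciding where to delegate.
SUB_AGENT_DESCRIPTIONS = {
    "greeter": "Handles greetings",
    "weather": "Returns weather data",
}

def route(user_message: str) -> str:
    """Pick a sub-agent name via crude keyword intent matching."""
    words = set(user_message.lower().replace("?", "").split())
    if words & {"hello", "hi", "hey"}:
        return "greeter"
    if words & {"weather", "forecast", "temperature"}:
        return "weather"
    return "greeter"  # fallback when no intent matches

print(route("What's the weather in Paris?"))  # prints "weather"
```

This is why concise, distinctive `description` strings matter: they are the signal the coordinator uses to route.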
### Development UI

```bash
# Run the built-in development UI
adk web

# Or start the API server only
adk api_server

# Visit http://localhost:8000 to chat, trace, and debug
```
## Use Cases
- Conversational assistants with search and code execution
- Data pipelines orchestrating multi-step agents
- B2B enterprise tools integrated with internal APIs
- Interactive streaming UIs like voice or video assistants
- Multi-agent mashups combining GitHub, chat, and data agents
## Pros & Cons

### Advantages
- Code-first orchestration - Full developer control
- Multi-language support - Python (mature) and Java (early v0.1.0)
- Rich debugging and evaluation built in
- Model-agnostic - Swap Gemini, OpenAI, Anthropic, etc.
- Scalable deployment on Vertex AI or any container runtime
- Open protocols (A2A, MCP) for cross-framework compatibility
### Disadvantages
- Early development - Expect occasional rough edges
- Cloud familiarity required - Most value unlocked with Vertex AI, Cloud Run, IAM
- Java ecosystem lags behind Python in maturity
- Developer UI adds Angular/Node toolchain complexity
## Future Outlook & Integrations
- TypeScript & Go SDKs [Q4 2025]: First public releases with parity to Python 1.0 API
- C# & Rust SDKs [H1 2026]: Road-mapped after TypeScript/Go stabilize
- Agent Engine Autoscaling 2.0 [Aug 2025]: GPU-aware scale-to-zero, global edge endpoints
- MCP Marketplace [Sep 2025]: Curated registry of vetted MCP servers with one-line installation
- Vertex AI Fine-tune API [Oct 2025]: In-console fine-tuning of Gemini models directly from ADK traces
- A2A v1.0 Protocol [Oct 2025]: Final spec with multi-org federated agent discovery & billing
- Snowflake & Databricks MCP [Nov 2025]: Native connectors exposing SQL, warehouse, and feature-store tools
- Slack / Teams Bot Templates [Dec 2025]: Ready-to-deploy agents with OAuth, mention handling, file threads
- LangGraph → ADK Bridge [Jan 2026]: Drop-in wrapper allowing LangGraph graphs to run as ADK sub-agents