For the past year, MCP gave agents a way to reach tools. Good. But agents couldn't find or talk to each other without duct-taping custom HTTP calls between services and hoping for the best. A2A v1.0, which hit stable release this month, fills that gap — and the design choices it made are worth understanding even if you never adopt the protocol.
Agent Cards
The core idea is almost disappointingly simple. Every A2A agent publishes a JSON file at /.well-known/agent-card.json — a machine-readable "business card" that declares what the agent does, what skills it has, how to authenticate, and where to send requests.
```json
{
  "name": "invoice-processor",
  "description": "Extracts line items from PDF invoices",
  "capabilities": { "streaming": true },
  "skills": [
    { "id": "extract-lines", "description": "Parse invoice PDF into structured data" }
  ],
  "securitySchemes": { "oauth2": { "flows": { "clientCredentials": {} } } },
  "serviceEndpoint": "https://invoices.internal/a2a"
}
```
A client agent hits that well-known URL, reads the card, and knows what to ask for. No central registry required. It's DNS-like discovery applied to agents, following the RFC 8615 well-known URI conventions the web already understands.
The trick isn't the JSON schema — it's that this gives agents a uniform discovery mechanism that doesn't depend on the framework that built them. A LangChain agent and a Google ADK agent and a hand-rolled Python service all look the same from the outside.
What v1.0 Actually Ships
Earlier drafts were interesting but rough. The stable release tightened things in ways that matter for production.
gRPC binding. JSON-over-HTTP is fine for moderate traffic. But if you have agents exchanging thousands of messages per second — say, a monitoring swarm reporting anomalies — the gRPC binding gives you binary serialization and bidirectional streaming without bolting on WebSocket hacks.
Signed Agent Cards. This is the real upgrade for cross-org deployments. Agent Cards can now carry JWS signatures (RFC 7515), letting a client verify the card hasn't been tampered with and actually comes from the claimed provider. Before this, any man-in-the-middle could swap a legitimate agent card for a malicious one. Signed cards move identity verification from "I hope this endpoint is who it says it is" to cryptographic proof.
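To make the JWS mechanics concrete, here's a sketch that decodes the protected header of a compact-serialized signature (RFC 7515). This only inspects the header; actual verification — checking the signature bytes against the provider's published key — needs a proper crypto library:

```python
# Inspecting a JWS in compact serialization: header.payload.signature,
# each segment base64url-encoded (RFC 7515). Header inspection only;
# signature verification is deliberately out of scope here.
import base64
import json


def b64url_decode(segment: str) -> bytes:
    """Base64url-decode a JWS segment, restoring stripped padding."""
    padding = "=" * (-len(segment) % 4)
    return base64.urlsafe_b64decode(segment + padding)


def jws_header(compact_jws: str) -> dict:
    """Return the protected header (e.g. {"alg": "RS256"}) of a compact JWS."""
    header_b64, _payload_b64, _signature_b64 = compact_jws.split(".")
    return json.loads(b64url_decode(header_b64))
```

A client would read the header to learn the signing algorithm and key ID, fetch the provider's key, and reject the card if verification fails.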
Multi-tenancy. A single A2A endpoint can host multiple agents, each with their own card. Useful for platforms that expose dozens of specialized agents behind one domain.
Task lifecycle. Tasks flow through well-defined states — working, input-required, auth-required, completed, failed, canceled, rejected. The input-required state is particularly smart: it lets an agent pause mid-task and ask the calling agent for more information, enabling genuinely collaborative workflows rather than fire-and-forget delegation.
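A client-side handler for those states might look like the sketch below. The state names come from the spec; the callback-driven loop structure is my own assumption about how a client would consume updates:

```python
# Hypothetical client-side reaction to A2A task state updates. State names
# are from the v1.0 spec; the handler/callback shape is illustrative.
TERMINAL_STATES = {"completed", "failed", "canceled", "rejected"}


def handle_task_update(state: str, provide_input, refresh_auth) -> bool:
    """React to one state update; return True once the task is finished.

    provide_input and refresh_auth are caller-supplied callbacks for the
    two pause states, which is what makes delegation collaborative
    rather than fire-and-forget.
    """
    if state == "input-required":
        provide_input()   # agent paused mid-task and asked for more data
        return False
    if state == "auth-required":
        refresh_auth()    # agent needs (re)authorization before continuing
        return False
    # "working" keeps the loop alive; terminal states end it.
    return state in TERMINAL_STATES
```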
Auth negotiation. The card's securitySchemes field declares what authentication the agent accepts — API keys, OAuth2, OpenID Connect, mutual TLS. Clients read the card and know exactly how to authenticate before sending a single task. No more guessing.
The release notes describe it as "maturity rather than reinvention." The core ideas survived the draft phase; rough edges got filed down. AgentCard itself evolved backward-compatibly, so agents can advertise support for both v0.3 and v1.0 during migration — a smart move that avoids the flag-day problem.
Different Problems, Same Stack
This comes up constantly: "Do I use A2A or MCP?" Wrong framing.
| | MCP | A2A |
|---|---|---|
| Relationship | Agent → Tool | Agent → Agent |
| Discovery | Client configures server list | /.well-known/agent-card.json |
| State | Stateless tool calls | Stateful task lifecycle |
| Trust model | Trust your configured servers | Signed cards + auth negotiation |
| Analogy | USB for AI | HTTP for AI |
MCP is how your agent calls a database, reads a file, or hits an API. A2A is how your agent finds another agent, negotiates auth, delegates a task, and gets structured results back. In a production multi-agent system, you typically need both: each agent uses MCP internally to access its tools, and A2A externally to communicate with peer agents.
Both protocols now live under the same roof — the Agentic AI Foundation at the Linux Foundation, co-founded by Anthropic, Google, OpenAI, Microsoft, and AWS. That organizational alignment matters more than any technical feature. It means the two specs will evolve in coordination rather than accidentally stepping on each other.
When to Skip A2A
If all your agents live in one process — a CrewAI or AutoGen setup where agents are Python objects calling each other's methods — A2A adds nothing. The protocol exists for agents that are separate services, possibly maintained by different teams or running in different organizations.
Litmus test: if you'd need an HTTP call or a message queue to reach the other agent anyway, A2A standardizes that call. If you're passing data between functions in the same runtime, it's ceremony for ceremony's sake.
The Trust Gap Nobody Wants to Own
Signed Agent Cards prove identity — this card genuinely came from invoices.acme.com. They don't prove competence or safety.
A signed card can truthfully declare "I process invoices" while the agent behind it hallucinates line items, mishandles PII, or quietly exfiltrates data. The spec punts on behavioral trust deliberately. Verifying what an agent does after you connect to it is a different problem — one that'll need runtime monitoring, output validation, and reputation systems that don't exist in any standard yet.
Honestly, this is the right call. Baking behavioral guarantees into a communication protocol would've made v1.0 either impossibly complex or uselessly restrictive. Ship the plumbing first. Let the trust layer emerge from the teams building actual multi-agent deployments and discovering which failure modes hurt the most.
The infra half is done. The hard half starts now.