The MCP Dev Summit kicked off in New York today with 95 sessions about protocols, security, and scaling agent infrastructure. But the most consequential thing happening to the agent ecosystem right now might be a plain text file sitting in your repo root.

AGENTS.md — now stewarded by the Agentic AI Foundation alongside MCP — has quietly hit 60,000+ repo adoptions since its August 2025 debut. Cursor, Codex, Copilot, Devin, Gemini CLI, Jules, VS Code — they all parse it. No SDK required. No server to run. No protocol negotiation. Just Markdown that an LLM reads before it touches your code.

Why READMEs Weren't Enough

README.md is for humans. It tells contributors how to set up the project, what the architecture looks like, where to file issues. An agent doesn't need most of that. What it actually needs:

  • The exact command that runs tests — not "we use Jest" but npm run test -- --coverage --bail

  • Which directories contain generated code it should never edit

  • What PR title format the CI bot enforces

  • Security boundaries like "never write credentials outside the vault module"

  • Style conventions that go beyond what a linter catches

You could cram all of this into your README, but then it becomes an unreadable wall of instructions mixing human onboarding with machine directives. The separation exists because agent-relevant context is a fundamentally different kind of documentation. It's operational, not explanatory.
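To make that concrete, here is a minimal sketch of what such a file might contain. Every command, path, and rule below is invented purely for illustration, not taken from any real project:

```markdown
# AGENTS.md

## Build and test
- Run the full suite with: `npm run test -- --coverage --bail`
- Type-check before committing: `npm run typecheck`

## Do not edit
- `src/generated/` is produced by codegen. Never modify it directly.

## Pull requests
- Title format: `[scope] imperative summary` (enforced by CI).

## Security
- Never write credentials outside the vault module.

## Style
- Prefer the Result pattern over try-catch for recoverable errors.
```

Note how little overlap this has with a README: no project pitch, no architecture tour, just operational directives.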

The Spec That Isn't a Spec

Here's what's interesting: there is no formal specification. No required fields. No schema. No validation step. The whole thing is standard Markdown with whatever headings you feel like using.

The agent finds the nearest AGENTS.md in the directory tree, reads the text, and follows instructions. Closer files override parent files. User prompts override everything. That's the entire contract.

This bothers people who want formal interoperability guarantees. "Where's the JSON schema? How do we ensure conformance?" But the radical simplicity is exactly why adoption exploded. Every agent framework that wants to support it can start today — there's nothing to implement beyond "read a file." The LLM handles interpretation.

Compare that with the years of transport layer specs, OAuth 2.1 flows, and session management work behind MCP. That protocol solves a genuinely hard problem: standardized tool access across arbitrary services. AGENTS.md solves a completely different problem — project-specific context — and does it by being almost aggressively simple.

What the Good Ones Actually Contain

After browsing dozens of early-adopter repos, I found that the useful AGENTS.md files share clear patterns. The bad ones share different patterns. Both are instructive.

Build and test commands are the single highest-value section. Not a vague "we use pytest" but the exact invocation with flags: pytest -x --tb=short -q tests/. Agents that know the precise command skip the costly trial-and-error loop of guessing test runners, discovering missing flags, and re-running.

Code generation boundaries prevent a specific class of disaster. "Files in /generated/ are produced from protobuf definitions. Never modify them directly." Without this line, agents will cheerfully edit generated code, create merge conflicts with the next codegen run, and waste twenty minutes of your review time.

Intent-level style rules fill the gap that linters can't. "We use barrel exports." "Error handling follows the Result pattern, not try-catch." "All database access goes through the repository layer — no direct ORM calls in route handlers." A linter catches semicolons. These rules catch architectural drift.

Security guardrails are the highest-stakes section. "All user input passes through the sanitize module before touching a query." "The /admin routes require the adminAuth middleware — never bypass it." One repo I saw had a single line that probably prevented a dozen vulnerabilities: "Never use dangerouslySetInnerHTML outside the RichText component."

The bad AGENTS.md files? They're copy-pasted READMEs with a header change, or exhaustive API references documenting every function signature. Agents don't need that — they can read source code. They need the unwritten rules that live in your team's collective memory and code review comments.

Monorepo Nesting Is Where It Gets Practical

Nested file support is where the convention earns its keep in larger codebases. The root AGENTS.md covers global rules — commit message format, CI requirements, shared tooling. Then each package gets its own file with local specifics:

monorepo/
├── AGENTS.md            # commit format, CI, shared lint rules
├── packages/
│   ├── api/
│   │   └── AGENTS.md    # test DB setup, migration commands, API patterns
│   └── web/
│       └── AGENTS.md    # component conventions, Storybook, CSS modules

Nearest file wins. An agent working on the API service gets API-specific instructions layered on the global ones. This mirrors how developers actually navigate large codebases — the rules aren't uniform, and now the agent knows that too.

The Feedback Problem

Here's my real concern with the current state of things: there's no enforcement loop.

When an agent ignores your AGENTS.md — writes to a generated file, skips the test command, uses the wrong PR format — nothing happens automatically. The file doesn't flag the violation. The agent doesn't learn from the mistake. You discover the problem in code review, fix it manually, and hope next time the model pays closer attention to paragraph four.

MCP has structured error responses. A2A has task state management. AGENTS.md has hope.

Some teams have found a workaround: they add verifiable commands to the file. "After making changes, run make lint && make test and fix all failures before submitting." Because agents will actually execute referenced commands and attempt to fix failures, this creates a crude enforcement layer. But not every convention reduces to a shell command. "Use the Result pattern for error handling" isn't something make check can verify.
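In practice the workaround is just a short, checkable section in the file itself. A hypothetical example, with the make targets invented for illustration:

```markdown
## Verification

After making changes, run:

    make lint && make test

Fix all failures before submitting. Do not disable or skip failing tests.
```

The command is the enforcement: the agent can execute it, see concrete failures, and iterate, which is far more reliable than hoping it absorbed a prose rule.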

I suspect the next evolution sits somewhere between pure prose instructions and formal tool schemas — maybe a structured machine-parseable section alongside the freeform Markdown. But for now, 60,000 repos have voted with their commits: imperfect context beats no context every time.

Three Layers, One Missing from the Conversation

The agent infrastructure stack is crystallizing into three distinct layers:

  1. Tool access — MCP handles how agents invoke external capabilities

  2. Agent communication — A2A defines how agents coordinate with each other

  3. Project context — AGENTS.md tells agents how your specific codebase works

Layers one and two dominate conference stages and funding announcements. Layer three determines whether agents actually produce useful output in your repo or just generate plausible-looking code that violates every convention your team spent years building.

The MCP Dev Summit has 95 sessions this week in New York. I'd trade ten of them for one honest talk titled "How We Got Agents to Actually Follow Our Coding Standards."