Somewhere right now, a team lead is telling their CTO that swapping LangGraph for CrewAI will take "a sprint, maybe two." That team lead is about to have a very educational quarter.

The Lock-in Nobody Budgeted For

Traditional vendor lock-in was annoying but well-understood. You picked Postgres or MySQL, built your queries, and the switching cost scaled with your schema complexity. Agent framework lock-in doesn't work that way. It compounds across layers.

Kai Waehner's recent enterprise landscape analysis nails it: agentic AI lock-in is more durable than API lock-in because it accumulates at the model layer, the orchestration framework, the tool definitions, the memory schema, and the observability stack simultaneously. These layers don't just add friction — they multiply it. An enterprise that built on Azure OpenAI, adopted the Microsoft Agent Framework, stored context in Cosmos DB, and instrumented with Azure Monitor hasn't picked a tool. They've entered a gravity well.

Pick LangGraph, and your business logic lives inside graph nodes with typed state flowing along edges. Pick CrewAI, and that same logic is encoded as role-goal-backstory triplets attached to task definitions. Pick the OpenAI Agents SDK, and you're writing imperative handoff chains. These aren't different syntaxes for the same idea. They're different mental models, and porting between them means rethinking your architecture — not find-and-replacing import statements.
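To make the gap concrete, here is a schematic in plain Python. None of this is real framework API; the function and dictionary names are invented stand-ins showing where the same two-step workflow "lives" under each mental model:

```python
# Schematic only: plain Python standing in for the real APIs.

# LangGraph-shaped: logic in nodes, state flowing along explicit edges.
def research(state: dict) -> dict:
    return {**state, "notes": f"notes on {state['query']}"}

def summarize(state: dict) -> dict:
    return {**state, "summary": state["notes"].upper()}

edges = [research, summarize]       # the graph itself is the architecture
state = {"query": "q3 revenue"}
for node in edges:
    state = node(state)

# CrewAI-shaped: the same intent encoded as persona metadata on a task.
analyst = {"role": "Analyst", "goal": "summarize revenue notes",
           "backstory": "A meticulous financial analyst."}

# OpenAI-SDK-shaped: the same flow as an imperative handoff chain.
def triage(query: str) -> str:
    return summarize(research({"query": query}))["summary"]
```

The work is identical in all three; what differs is whether the architecture lives in the edge list, the persona metadata, or the call chain.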

One framework comparison estimated that migrating tool definitions, memory schemas, and observability plumbing between frameworks routinely consumes an engineer for a quarter. That's not a refactor. That's a rewrite wearing a refactor's clothes.

Three Frameworks, Three Universes

The gap gets concrete fast:

| Framework | Unit of work | Control flow | Strength | Trap |
|---|---|---|---|---|
| OpenAI Agents SDK | Agent | Imperative handoff chains | Ships fast, tracing built-in | Debugging deep chains |
| LangGraph | Graph node | Explicit state machine | Durable checkpoints, branching | You're maintaining a state machine |
| CrewAI | Role | Persona-driven delegation | Maps to business workflows | Anything outside the role metaphor |

CrewAI tasks become LangGraph nodes mechanically, but your role abstractions evaporate in the translation. OpenAI SDK sessions have to be re-modeled into typed state objects for LangGraph. Going the other direction — LangGraph to the OpenAI SDK — means sacrificing durable checkpointing entirely. Every migration path has a conceptual gap that no automated tooling bridges.
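A toy sketch makes the loss visible. The dictionaries and helper below are invented for illustration, not real framework objects:

```python
# Schematic translation: what survives a CrewAI-to-LangGraph port,
# and what doesn't.

crew_task = {
    "description": "Summarize the quarterly report",
    "agent": {                       # persona metadata: none of this
        "role": "Analyst",           # has a natural home in a graph node
        "goal": "produce crisp summaries",
        "backstory": "Ten years in equity research.",
    },
}

def to_graph_node(task: dict):
    """Port the task body mechanically; the persona gets dropped."""
    def node(state: dict) -> dict:
        # only the executable work survives the translation
        return {"output": f"did: {task['description']}"}
    return node

result = to_graph_node(crew_task)({})
# crew_task["agent"] never made it across.
```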

MCP Covers the Periphery

The optimists point to MCP: if your tools sit behind MCP servers, swapping orchestration layers should be painless because the tools are framework-agnostic. They're half right.

MCP genuinely slashes the tool integration tax. Before it became an industry standard under the Linux Foundation, every framework maintained its own connector ecosystem — LangChain tools, CrewAI tools, AutoGen tools — all doing roughly the same thing with incompatible interfaces. A well-built MCP server now works with any compliant client, and that's real progress. The 2026 roadmap is pushing further into enterprise territory: stateless transports, horizontal scaling, audit trails, SSO-integrated auth.

But tools are the easy part. The hard part is everything MCP doesn't cover: how agents coordinate, how state flows between steps, how failures propagate, how human-in-the-loop checkpoints fire. A2A handles inter-agent communication but stays silent on intra-framework orchestration. You can standardize the periphery while the core remains completely proprietary. That's exactly the situation today — the edges are portable, the center isn't.

The Abstraction Layer Bet

Mozilla AI took a direct swing at this problem with Any-Agent:

```python
from any_agent import AgentConfig, AgentFramework, AnyAgent

agent = AnyAgent.create(
    AgentFramework("langchain"),  # swap to "crewai", "openai"
    AgentConfig(model_id="gpt-4o-mini")
)
```

Build once, swap frameworks via config. It works for simple single-agent tool-calling scenarios. The moment you need LangGraph's checkpointing, CrewAI's role delegation, or the OpenAI SDK's native handoffs, you're reaching through the abstraction into framework-specific territory. Each framework also carries hardcoded system prompts and opinionated routing logic that no wrapper can normalize away. This is the classic leaky abstraction problem — these frameworks aren't different implementations of the same spec, they're different opinions about what agent orchestration should be. You can't abstract over a philosophical disagreement.

What Actually Holds Up

After watching teams cycle through this, the surviving pattern looks like this:

Accept coupling at the orchestration layer. Pick a framework and own the decision. Wrapping everything in abstraction layers gives you the worst of both worlds — the overhead of indirection without real portability. The teams that hedge end up maintaining framework-agnostic wrappers that nobody else understands and that break on every upstream update.

Decouple everything below. Tools behind MCP. Models behind a provider-agnostic client like LiteLLM or OpenRouter. Prompts and eval datasets in version-controlled files, not embedded in framework config objects. Memory in a database you control, not the framework's built-in store. If you get these right, the orchestration layer becomes the only coupled component — still painful to swap, but no longer catastrophic.
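The model side of that decoupling is just a routing layer keyed on a provider-prefixed model id. Everything below (the registry, complete, the stand-in backends) is a hypothetical sketch of the pattern; in practice a library like LiteLLM or OpenRouter plays this role:

```python
from typing import Callable, Dict

# provider name -> function taking (model, prompt) -> completion text
_PROVIDERS: Dict[str, Callable[[str, str], str]] = {}

def register(name: str, fn: Callable[[str, str], str]) -> None:
    _PROVIDERS[name] = fn

def complete(model_id: str, prompt: str) -> str:
    # "openai/gpt-4o-mini" routes on the prefix: swapping providers
    # becomes a config change, not a code change
    provider, model = model_id.split("/", 1)
    return _PROVIDERS[provider](model, prompt)

# Stand-in backends; real ones would call the vendor SDKs.
register("openai", lambda model, prompt: f"[{model}] {prompt}")
register("anthropic", lambda model, prompt: f"[{model}] {prompt}")
```

Orchestration code that only ever sees complete() and a model string never learns which vendor is underneath.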

Keep agents small. The teams that survive framework migrations built focused, single-capability agents with clear boundaries. When each agent does one thing, rewriting it in a new framework is a day of work. God-agents orchestrating 40 tools across 12 steps are why migrations eat entire quarters.
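One way to keep those boundaries honest is a deliberately tiny agent contract. SmallAgent and the pipeline below are invented names illustrating the pattern, not any framework's API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SmallAgent:
    name: str
    run: Callable[[str], str]  # one input, one output, no shared state

# Each capability is its own agent; composition happens outside them.
extract = SmallAgent("extract", lambda text: text.split(":")[-1].strip())
shout = SmallAgent("shout", lambda text: text.upper())

def pipeline(text: str) -> str:
    # The pipeline is the only framework-coupled piece; each agent
    # ports to a new framework as a single function.
    return shout.run(extract.run(text))
```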

Watch the protocol layer. MCP and A2A are maturing fast. Whether orchestration-level interop emerges or the three-universes model calcifies for good will probably become clear over the next 12 months. If you're picking a framework today, weight protocol adoption and community trajectory heavily in the decision. The framework that plays well with open standards will be the easiest to leave — which, paradoxically, is why you'll stay.

The honest reality: there is no framework-portable way to build complex multi-agent systems in April 2026. Protocols handle the edges. The core is still a bet. Make it a small, informed one.