Why OpenAPI Is the Most Underrated Piece of API Infrastructure

Tristan Cartledge · 15 January 2026 · 12 min read

The Underutilized API Asset

Most teams I encounter treat OpenAPI as documentation. They generate a spec from their code, point Swagger UI at it, and call it done. When I suggest they're leaving value on the table, I usually get some variation of:

  • "We tried OpenAPI, it went stale"
  • "It's just for Swagger UI"
  • "Our API is too complex for it"
  • "The spec can't express what we need"

Here's the thing: these aren't arguments against OpenAPI. They're symptoms of treating it as an afterthought.

At Speakeasy, where I work, we generate SDKs, MCP servers, and Terraform providers from OpenAPI documents. I've seen firsthand what becomes possible when teams treat OpenAPI as infrastructure rather than documentation. This post explains the "why" and "how" I've come to understand through that work.

The Real Problem: Fragmented API Knowledge

Every API already has a contract. The question is whether it's explicit or scattered across your codebase.

Most teams have:

  • One mental model in backend code
  • Another in docs
  • Another in SDKs
  • Another in tests
  • Another in Terraform or infrastructure definitions
  • Another in client examples

The result? Drift. Inconsistent SDKs. Docs that lie. Tests that don't reflect reality. And breaking changes that only surface when customers complain.

Without a shared contract, every consumer of the API rebuilds that understanding by hand.

OpenAPI doesn't create a contract. It centralizes one that already exists. The difference is whether that contract is machine-readable and automatable, or locked in people's heads.

OpenAPI as Infrastructure, Not Documentation

When I say "infrastructure," I mean something with specific qualities:

  • Shared across teams and tools
  • Stable enough to build on
  • Machine-readable for automation
  • Evolvable as your API grows

When you treat OpenAPI this way, it becomes the single source of truth that powers everything downstream. Let me group what that actually means:

Build artifacts (things you ship):

  • SDK and client generation
  • Server stub generation (in design-first workflows)
  • Documentation generation

Assurance (things that keep you honest):

  • Validation and mocking
  • Contract testing
  • Breaking change detection

Enablement (things that scale your platform):

  • Workflow modeling
  • Platform governance and linting
  • AI agent interfaces

The spec isn't just describing the API. It's defining what the API is in a way that machines can act on.

The Code-First vs Design-First Debate

This is where conversations about OpenAPI often go sideways. People frame it as a methodology war:

  • Design-first = slow, academic, impractical
  • Code-first = pragmatic, real, how things actually work

I think this framing misses the point.

The real question isn't how your OpenAPI is created. It's whether it's trusted, maintained, and used.

Let me break down both approaches honestly.

Code-First: What It Gets Right

Code-first means your OpenAPI spec is generated from your server code, usually via annotations or framework introspection. It's popular because:

  • It's easy to adopt incrementally
  • The spec naturally stays close to implementation
  • Lower upfront cost
  • Fits existing workflows

If you already have an API and want an OpenAPI spec without rewriting anything, code-first is the path of least resistance.

Code-First: Where It Breaks Down

The problem is that code-first specs tend to describe how the API is implemented, not how it should be used.

Common issues:

  • Framework internals leak into the spec (weird parameter names, internal types exposed)
  • Poor naming and ergonomics for SDK consumers
  • Hard to model intent or workflows
  • Docs and SDKs optimized for the server, not the client

There's also massive variance in the quality of code-first tooling. Depending on your language and framework, you might get a clean, accurate spec or something that needs significant cleanup before it's useful. The good news is that these issues are solvable with better tools, overlays, and post-processing, but you need to know they exist.

Here's a concrete example. A code-first generator might produce this operation from your backend code:

```yaml
# Code-first output (leaky)
paths:
  /api/v1/internal/user_records/{user_record_id}:
    get:
      operationId: get_api_v1_internal_user_records_user_record_id
      parameters:
        - name: user_record_id
          in: path
          required: true
          schema:
            type: string
```

After cleanup (or with overlays), it becomes something SDK consumers can actually use:

```yaml
# Cleaned up for consumers
paths:
  /users/{userId}:
    get:
      operationId: getUser
      summary: Retrieve a user by ID
      parameters:
        - name: userId
          in: path
          required: true
          schema:
            type: string
            format: uuid
```

The difference in generated SDK method names: get_api_v1_internal_user_records_user_record_id() vs getUser(). This matters for developer experience.

Design-First: What It Gets Right

Design-first means you write the OpenAPI spec before (or at least independently of) the implementation. Benefits:

  • Consumer-oriented API design
  • Clear naming and consistency
  • Better long-term evolution
  • Enables parallel work (backend, SDKs, and docs can all start from the same spec)

When teams commit to design-first, they often end up with APIs that are genuinely pleasant to use.

Design-First: Where It Struggles

Design-first isn't free:

  • Higher upfront investment
  • Can drift from implementation without strong tooling
  • Feels slow for early-stage or rapidly evolving APIs
  • Requires cultural buy-in

If your team isn't bought in, or you don't have the tooling to keep spec and implementation in sync, design-first can become a maintenance burden.

The Pragmatic Middle Ground

Here's what I've seen work in practice: contract-first over time.

Most teams don't switch to design-first overnight. They evolve:

  1. Start code-first to move fast
  2. Generate an initial OpenAPI spec
  3. Clean it up (fix naming, remove framework leakage)
  4. Layer meaning on top (better descriptions, examples, groupings)
  5. Gradually make the spec authoritative

Most teams don't switch to design-first. They grow into it.

The key is treating the spec as something you own and improve, not just something you generate and forget.

Making One OpenAPI Work for Many Consumers

Here's a problem I ran into repeatedly before I understood the ecosystem better: one OpenAPI spec can't serve everyone equally well.

SDKs want different shapes than server implementations. Docs need different descriptions. Internal APIs diverge from public ones. If you try to cram all of this into a single spec, you end up with a mess of vendor extensions and conditional logic.

There are several approaches to solving this.

Overlays (an OpenAPI Initiative specification) let you make non-destructive modifications to a base spec:

  • SDK-specific naming or ergonomics
  • Doc-specific descriptions
  • Internal vs public views
  • Language-specific hints
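
To make this concrete, here's a minimal sketch of an overlay that adapts a core spec for SDK consumers. The file name, paths, and descriptions are hypothetical; the structure follows the OpenAPI Overlay Specification (actions with a JSONPath target plus an update or remove):

```yaml
# overlay-sdk.yaml -- hypothetical SDK-facing overlay applied to the core spec
overlay: 1.0.0
info:
  title: SDK view of the core API contract
  version: 1.0.0
actions:
  # Rename the operation so generated SDK methods read naturally
  - target: $.paths['/users/{userId}'].get
    update:
      operationId: getUser
      description: Fetch a single user by their unique identifier.
  # Hide an internal-only endpoint from the public SDK view
  - target: $.paths['/internal/metrics']
    remove: true
```

The core contract stays untouched; each consumer gets a thin, reviewable layer of adjustments on top of it.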

Document merging and inlining help manage large, multi-file specs by combining them into cohesive outputs or resolving $ref references for tools that need a single document.
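
As a rough illustration (file names and the schema are made up), a multi-file spec might reference a shared schema by relative path, and a bundling step rewrites that reference into a local one so single-document tools can consume it:

```yaml
# users.yaml (multi-file source): the User schema lives in a separate file
paths:
  /users/{userId}:
    get:
      operationId: getUser
      responses:
        '200':
          description: The requested user
          content:
            application/json:
              schema:
                $ref: './schemas/user.yaml#/User'
---
# Bundled output: the external schema is pulled into components and the
# $ref above is rewritten to '#/components/schemas/User', giving tools
# that expect a single document one self-contained file to work with
components:
  schemas:
    User:
      type: object
      properties:
        id:
          type: string
          format: uuid
```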

You don't need one "perfect" OpenAPI. You need one core contract and multiple views.

This was a game-changer for me. Instead of fighting with a spec that tries to be everything to everyone, you maintain one authoritative spec and adapt it for different contexts.

APIs Are Used as Flows, Not Endpoints

There's another gap that OpenAPI alone doesn't address: real users don't call endpoints in isolation.

Think about how you actually use an API. You authenticate, then list resources, then fetch details, then perform actions. It's a flow. But OpenAPI describes endpoints as independent operations.

Docs often describe these workflows in prose. Tests re-encode them manually. Neither is machine-readable.

This is where Arazzo comes in. Arazzo (another OpenAPI Initiative specification) formalizes API workflows and scenarios:

  • Multi-step sequences
  • Dependencies between calls
  • Expected responses at each step
  • Common use cases and integration patterns
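
Here's a minimal sketch of what that looks like, assuming a hypothetical core spec with createUser and getUser operations (the file name, operation IDs, and values are illustrative, but the structure follows the Arazzo specification):

```yaml
# onboarding.arazzo.yaml -- hypothetical workflow built on the core OpenAPI spec
arazzo: 1.0.0
info:
  title: User onboarding
  version: 1.0.0
sourceDescriptions:
  - name: coreApi
    url: ./openapi.yaml
    type: openapi
workflows:
  - workflowId: onboardUser
    summary: Create a user, then fetch their profile
    steps:
      - stepId: createUser
        operationId: createUser
        successCriteria:
          - condition: $statusCode == 201
        outputs:
          userId: $response.body#/id
      - stepId: fetchUser
        operationId: getUser
        parameters:
          - name: userId
            in: path
            value: $steps.createUser.outputs.userId
        successCriteria:
          - condition: $statusCode == 200
```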

Arazzo isn't just for testing. It's for documentation and developer education. When you can formally specify "here's how you onboard a new user" or "here's the checkout flow," that specification can drive:

  • Interactive documentation that walks developers through real scenarios
  • Contract tests that validate the workflow still works
  • SDK examples that show idiomatic usage patterns

Once workflows are part of the contract, documentation, testing, and SDK examples can all stay in sync.

OpenAPI and AI Agents

Here's something I didn't expect when I started working with OpenAPI: it's become an important format for AI.

AI agents and large language models can parse OpenAPI specs to understand what an API does, generate accurate API calls, and identify inconsistencies. This works because OpenAPI is structured, well-documented, and widely adopted.

The Model Context Protocol (MCP) is emerging as a standard for how AI assistants interact with external tools. MCP servers can be generated directly from OpenAPI specs, meaning your existing API contract becomes the interface for AI agents without additional work.

If you want AI to interact with your API reliably, give it a good contract to work with.

This is more "leverage from the same contract" than a new paradigm. Teams that have invested in high-quality OpenAPI specs are now positioned to integrate with AI tooling much faster than those starting from scratch.

Why Tooling Quality Matters

Here's something I didn't fully appreciate until I started working on it professionally: parsing OpenAPI is genuinely hard.

Specs come in different versions (OpenAPI 3.0, 3.1, and legacy Swagger 2.0). Overlays and Arazzo add more complexity. But the real challenges that break naive parsers are:

  • $ref resolution across multiple files with relative paths, URLs, and circular references
  • Circular $refs that cause infinite loops in tree-walking code
  • Polymorphism (oneOf, anyOf, allOf) with inconsistent discriminator handling
  • Vendor extensions (x-* fields) that need to be preserved, not dropped
  • Multi-file specs with complex dependency graphs
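
Circular references are a good example of why this is harder than it looks. The following schema is perfectly valid, but a naive recursive walker will never terminate on it (the schema name is made up for illustration):

```yaml
components:
  schemas:
    Category:
      type: object
      properties:
        name:
          type: string
        children:
          type: array
          items:
            $ref: '#/components/schemas/Category'   # points back at itself
```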

Most existing libraries handle the happy path but break on real-world specs. They're spec-version fragile, lossy (drop information during parsing), hard to extend, or opinionated in unhelpful ways.

You can't build serious tooling on top of brittle parsers.

This is why Speakeasy invested in building an open source OpenAPI library: github.com/speakeasy-api/openapi. It's designed for real tooling: supports OpenAPI, Swagger, Overlays, and Arazzo; aims to preserve as much source information as possible (including comments and formatting where feasible); and is built for transformations and tooling authors.

If you're building tools that work with OpenAPI specs, I'd encourage you to check it out. I wrote a detailed post about the design decisions on the Speakeasy blog.

What "Good" Looks Like in Practice

Let me paint a picture of what mature OpenAPI usage looks like:

  • OpenAPI checked in like code. Reviewed in PRs, versioned, owned by a team (not generated and forgotten).
  • Overlays for each consumer. SDKs get one view, docs get another, internal tools get a third.
  • Arazzo for workflows. Key user journeys are formally specified.
  • SDKs and tests generated automatically. No hand-written clients.
  • Breaking changes visible early. Spec diffs show what changed before code ships.
  • Tooling built on solid foundations. No more "it works on my machine" parsing issues.

The teams winning with APIs aren't writing more code. They're writing better contracts.

Getting Started (If You're Not Already)

If you're not using OpenAPI at all:

  1. Pick a library for your language/framework that generates specs from code
  2. Generate your first spec
  3. Point Swagger UI at it
  4. Start using it in CI (linting, validation); see the workflow sketch after this list
  5. Assign an owner. Someone needs to care about spec quality, or it will drift.
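
For step 4, here's a minimal sketch of what a CI check can look like, assuming GitHub Actions, a spec at openapi.yaml, and Spectral as the linter. All three are assumptions; swap in whatever your stack uses:

```yaml
# .github/workflows/openapi-lint.yml -- illustrative only; file paths and
# the choice of linter (Spectral) are assumptions, not requirements.
# Assumes a .spectral.yaml ruleset exists at the repo root.
name: openapi-lint
on:
  pull_request:
    paths:
      - "openapi.yaml"
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Lint the OpenAPI document
        run: npx @stoplight/spectral-cli lint openapi.yaml
```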

If you're using OpenAPI but it's stale or ignored:

  1. Regenerate from current code
  2. Spend an hour cleaning it up (names, descriptions, groupings)
  3. Add it to version control
  4. Start reviewing spec changes in PRs
  5. Assign an owner. Stale specs are orphaned specs.

If you're already committed to OpenAPI:

  1. Explore overlays for different consumers
  2. Look at Arazzo for workflow documentation and testing
  3. Evaluate whether your tooling is holding you back
  4. Consider what you could generate from your spec

Closing Thoughts

OpenAPI is optional until it isn't.

You can build APIs without OpenAPI. You just can't scale them without a contract. And the informal contracts locked in code, docs, and people's heads don't survive team growth, consumer diversity, or time.

If OpenAPI burned you before, I'd encourage you to revisit it with modern tooling. If you're already using it, consider whether you're treating it as documentation or infrastructure.

The teams I've seen get the most value don't just generate a spec. They own it, evolve it, and build their entire API platform on top of it.


If you're building API tooling and wrestling with OpenAPI parsing, check out speakeasy-api/openapi. And if you want to see what's possible when OpenAPI is treated as infrastructure, take a look at what Speakeasy can generate from a good spec.

Have thoughts on this? I'd love to hear how other teams are using (or not using) OpenAPI. Join the conversation on LinkedIn.

Tristan Cartledge

Principal Software Engineer & Consultant specializing in backend systems, cloud architecture, and applied AI. Based in Cairns, Australia.