Nobody Coordinated. Everybody Converged.
Six independent builders arrived at adjacent versions of the same structural pressure — from six different altitudes. None of them talked to each other.
Over the past three weeks, I’ve been reading everything I can find on Substack about how people actually work with AI. Not the tutorials. Not the prompt libraries. The builders — the ones running real projects and writing about what they’re learning.
I found something I wasn’t looking for. Six writers, working in six different domains, at six different altitudes of abstraction, arriving at adjacent versions of the same structural pressure: the model is not the constraint. Everything around it is.
These are not the same claim. Some are economic, some cognitive, some operational. The convergence is not in their conclusions — it’s in the direction they all point. None of them coordinated. None of them cite each other. Most don’t know the others exist.
The Pattern
Start at the top.
Eric Porres, writing in Beyond Reason, reweighted Anthropic’s labor exposure data by wage bill and found that the US economy sits inside AI’s capability zone — but the global economy doesn’t. His “$23 Trillion Blind Spot” is rigorous macroeconomics, not AI commentary. But buried in the analysis is a distinction that matters here: the difference between “dumb friction” and “meaningful friction.” Dumb friction is the kind automation should eliminate — rote process, unnecessary handoffs, redundant approvals. Meaningful friction is the kind that produces judgment: the constraint that forces you to decide before you build, the review that catches a bad assumption before it ships.
That distinction — friction worth keeping — is the macro-economic version of a conclusion the rest of these builders reached from completely different starting points.
Jean-Paul Paoli, writing in The Intelligence Fabric, made the case that paperwork is productive again. Not bureaucracy — structured documentation that becomes executable context. His “Specificity Paradox” argues that code’s real product was never software; it was specificity — the discipline of making intent unambiguous. AI removes the coding labor but not the specificity requirement. His paperwork maps closely to what governance files do in practice. His specificity is what constraint documents enforce. His work stands on its own terms — but it intersects here.
Yuyan Sun, writing in Amazing Work!, identified the organizational version. Her concept of “clarity debt” — accumulated imprecision in goals and scoping that worked fine between humans but fails catastrophically with AI — names the exact problem that governance files solve. When she writes that “the prompt is the thinking,” she’s describing what happens when the environment forces you to articulate decisions before delegating execution.
Tyler Folkman, writing in The AI Architect, built the developer-side version. His five-stage factory maturity model tracks how AI coding workflows evolve: copy-paste, then assistant-with-review, then compound systems, then autonomous pipelines, then multi-agent. The gap between stage two and stage three — the place where most teams stall — is where governance enters. His “50/50 rule” (spend half your time improving the system, not producing output) is the builder’s version of the same insight: the infrastructure around the AI matters more than the AI itself.
Aaron Kennedy, writing in A Feature a Day, synthesized a concept he calls “compounding engineering”: observe, translate, automate, measure. “If you do it twice, make a tool for it.” His compounding is about encoding process into tooling — prompt libraries, linter rules, test scaffolds. It’s compounding at the automation layer, and it works. But it focuses on process, not on persisting decisions across projects. It doesn’t carry judgment forward.
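To make that distinction concrete, here is a minimal sketch of process-level compounding in the spirit of Kennedy’s prompt libraries. Everything in it is a hypothetical illustration, not his actual tooling: the `PromptLibrary` class and the `review_diff` template are stand-in names.

```python
# A sketch of compounding at the automation layer: a prompt used twice
# gets promoted into a parameterized template. All names here
# (PromptLibrary, review_diff) are hypothetical, not Kennedy's tooling.
from string import Template


class PromptLibrary:
    """A small registry of reusable prompt templates."""

    def __init__(self) -> None:
        self._templates: dict[str, Template] = {}

    def register(self, name: str, template: str) -> None:
        self._templates[name] = Template(template)

    def render(self, name: str, **params: str) -> str:
        return self._templates[name].substitute(**params)


lib = PromptLibrary()

# The second time you type this prompt by hand, it becomes a tool.
lib.register(
    "review_diff",
    "Review this diff for $focus. Flag anything that violates $standard.\n$diff",
)

print(lib.render(
    "review_diff",
    focus="error handling",
    standard="our retry policy",
    diff="<diff goes here>",
))
```

Notice what the template does not record: why “our retry policy” is the standard in the first place. The process compounds; the judgment behind it doesn’t.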
Scott Werner, writing in Works on My Machine, arrived at the cognitive version. His “Collective Superstitions” essay uses Borges’ Pierre Menard to argue that prompting techniques work for a trivially simple reason — any structure is better than none — but the technique itself is just visible residue of a cognitive path. The value isn’t in the ritual. It’s in the forcing function that makes you think before you prompt. His key line: “Your prompting technique isn’t special because of what it does to the model. It’s special because of what it does to you.”
Six builders. Macroeconomics, institutional theory, organizational strategy, developer infrastructure, engineering automation, cognitive science. All pointing the same direction: the leverage sits in how the environment is structured and maintained. The model is a commodity.
What They All Miss
The convergence is real. The gap is also real.
Porres identifies meaningful friction but doesn’t attempt to operationalize how it should be preserved. Paoli describes governance documents but doesn’t attempt to show what happens when those documents compound across projects over months. Sun names clarity debt but her work stops at organizational strategy — it doesn’t extend into operational infrastructure. Folkman builds the compound system but scopes it to a single engineering workflow, not a cross-domain practice. Kennedy’s compounding engineering encodes process but not decisions — and decisions are the part that transfers. Werner identifies the cognitive forcing function but doesn’t attempt to persist it; the path dies when the session ends.
Each of them has a piece. None of them is trying to build the full stack — that’s not their project. But the stack is largely unbuilt.
The missing layer is the one that sits between the model and the operator — the governance infrastructure that persists decisions across sessions, enforces constraints across projects, and compounds judgment instead of just compounding process. Constraint files. Decision logs. Cross-workspace handoffs. The architecture that makes each session start from the accumulated intelligence of every previous one.
In practice, this looks like: constraint files that gate what gets built before code starts. Decision logs that carry reasoning forward across sessions so the same mistake doesn’t get made twice. Cross-workspace handoffs that route an insight from one project to the domain where it can compound. A status file that eliminates the re-explanation cost of every new session. Without this layer, every session resets: decisions are re-litigated, constraints drift, and the same errors repeat under different prompts.
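A rough sketch of that layer’s shape, in Python for concreteness. Every file name and field here (constraints.json, decisions.jsonl, STATUS.md) is an assumption standing in for whatever format a real practice settles on; what matters is the flow: read accumulated context first, append decisions last.

```python
# A minimal sketch of the governance layer, under loud assumptions: the
# file names (constraints.json, decisions.jsonl, STATUS.md) and fields
# are illustrative stand-ins, not a canonical format.
import json
from pathlib import Path

WORKSPACE = Path(".")


def load_constraints() -> list[dict]:
    """Constraints gate what gets built; read before any code starts."""
    return json.loads((WORKSPACE / "constraints.json").read_text())


def log_decision(decision: str, reasoning: str, scope: str) -> None:
    """Append-only log, so reasoning persists across sessions and the
    same mistake doesn't get re-litigated under a different prompt."""
    entry = {"decision": decision, "reasoning": reasoning, "scope": scope}
    with (WORKSPACE / "decisions.jsonl").open("a") as f:
        f.write(json.dumps(entry) + "\n")


def session_context() -> str:
    """What a new session reads before work begins: current status,
    active constraints, and the accumulated decision history."""
    status = (WORKSPACE / "STATUS.md").read_text()
    constraints = json.dumps(load_constraints(), indent=2)
    decisions = (WORKSPACE / "decisions.jsonl").read_text()
    return f"{status}\n\nConstraints:\n{constraints}\n\nDecisions:\n{decisions}"
```

The design choice that matters is the append-only decision log: sessions add to the history, they don’t rewrite it, which is what lets judgment compound instead of reset.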
That’s the layer I’ve been building for four months and writing about in this publication. Not because I predicted the convergence — because I hit the same wall everyone else hit and decided to build through it instead of writing about it from a distance.
What the Convergence Means
When six independent builders arrive at the same conclusion from six different starting points, one of two things is happening: either they’re all wrong in the same way, or they’ve found a structural feature of the problem.
The structural feature is this: Model capability has outrun the infrastructure required to persist context, constraints, and decisions. The models can do the work. The environment around them — the context, the constraints, the decision history, the cross-project memory — doesn’t exist. Every builder I found is dealing with the consequences of that gap, whether they frame it as friction, specificity, clarity debt, factory maturity, compounding engineering, or cognitive superstition.
Everyone agrees on the diagnosis. No one agrees on what to build. The operational infrastructure that turns the diagnosis into daily practice remains largely undocumented in public.
The Honest Part
Convergence can be confirmation bias. I went looking for builders writing about AI practice, and I found builders writing about AI practice. The search terms I used, the publications I clicked into, the entries I promoted on my watchlist — all of those carry selection effects. I may be pattern-matching where the pattern is partly an artifact of how I looked.
I also can’t verify influence chains. It’s possible some of these writers have read each other. Kennedy references “compound engineering” from Every’s Dan Shipper. Folkman may have read Kennedy. The independence I’m claiming could be less independent than it appears.
And the convergence is at the thesis level, not the solution level. Everyone agrees the environment matters more than the model. Almost no one agrees on what the environment should look like, how it should be maintained, or what governance infrastructure actually means in daily practice.
There’s also a failure mode on my side of the stack. Bad governance recreates the friction it’s meant to remove. Constraint files that grow unchecked become bureaucracy. Decision logs that nobody reads become documentation theater. The meaningful friction Porres describes can easily become dumb friction if the system isn’t maintained — and maintenance is the part that doesn’t scale.
The gap between “everyone sees the problem” and “someone builds the solution” is where most convergent insights die — acknowledged by many, operationalized by few.
What This Is Actually About
There is no shared vocabulary for this layer. No agreed-upon infrastructure. No canonical method. Six builders naming six versions of the same pressure, in six different registers, without a common frame.
But the diagnoses are converging faster than the solutions. What hasn’t converged is the operational layer — the thing you actually run every day that turns these insights into compounding practice. Describing the problem from six altitudes is progress. Building the solution at one of them is the next step.
The build is the work.
Robert Ford builds products, writes stories and essays, and publishes The Intelligence Engine — a Substack about building AI practices that compound. His other writing lives at Brittle Views.