What Rao Gets Right
The strongest critique of governance isn’t that it fails. It’s that it succeeds too comfortably.
Venkatesh Rao thinks my practice is the disease mistaking itself for the cure.
He hasn’t said this about me specifically. He doesn’t know I exist. But his argument (across Rediscovering Irony, New Ferality, and Discworld Rules) describes a pathology, and my AI practice is a textbook case.
Rao’s frame is simple: once structure becomes moral, it starts replacing judgment with ritual. He calls it devout sincerity. You build a constraint file. The constraint file catches a mistake. You conclude that constraint files are how good practitioners work. The rigor of the process replaces the quality of the output as the test, and you can’t tell the difference because the process still looks rigorous.
He points to practitioners operating without visible governance — his own 34-book pipeline, the “feral” builders who ship without systems. The implied claim: anyone still maintaining explicit structure may have mistaken the scaffolding for the building.
He’s not wrong about the pathology. The question is whether he’s right about me.
Here’s what he gets right.
I maintain a concept index — a registry where every coined term is capitalized and never varied. Typist Trap. Amnesia Tax. Compiled Thinking. Each has a canonical definition, a status, and a propagation prediction. The consistency is deliberate: it creates ownership of the vocabulary, makes the ideas citable, gives the publication a distinctive intellectual texture.
But consistency creates rigidity. Five essays build on a concept graph where each term depends on the others. The cost of discovering that one foundational concept was wrong isn’t intellectual — it’s structural. I’d have to tear down published work. That’s the sincerity trap Rao describes. Not that the concepts are wrong, but that the system makes it expensive to discover they’re wrong.
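For concreteness, here is roughly the shape of one registry entry. This is a sketch with illustrative field names, not the actual file format:

```python
from dataclasses import dataclass, field

@dataclass
class ConceptEntry:
    """One entry in the concept index. Field names are illustrative."""
    term: str                    # canonical form: capitalized, never varied
    definition: str              # the single citable definition
    status: str                  # lifecycle state, e.g. "candidate" or "active"
    propagation_prediction: str  # where the term is expected to show up next
    depends_on: list[str] = field(default_factory=list)  # edges in the concept graph

# "Amnesia Tax" leaning on "Typist Trap" makes the structural cost visible:
# retiring the latter invalidates every entry that depends on it.
entry = ConceptEntry(
    term="Amnesia Tax",
    definition="...",            # canonical definitions live in the registry itself
    status="active",
    propagation_prediction="...",
    depends_on=["Typist Trap"],
)
```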
I maintain a cooling-off gate that requires any proposed new skill to sit for seven days before I build it. I installed it because I was building governance tools faster than I could evaluate whether they worked. The system responds to the problem of too much system by building more system. Rao would recognize the recursion immediately.
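The rule itself is almost trivially small. A sketch of the gate logic, assuming the real tooling records a proposal date per skill:

```python
from datetime import date, timedelta

COOLING_OFF = timedelta(days=7)

def may_build(skill_proposed_on: date, today: date | None = None) -> bool:
    """A proposed skill clears the gate only after sitting for seven days."""
    return (today or date.today()) - skill_proposed_on >= COOLING_OFF
```

The point is not the logic; it's that even a one-line rule is one more artifact to maintain.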
I maintain a landscape scanner — a tool that monitors other practitioners, scores their engagement value, and generates action obligations. It evolved through seven versions. It started as a reading list and became an enforcement mechanism that flags when I’m choosing comfortable engagement over hard intellectual work. Rao’s Auditors of Reality — the Discworld characters who hate life because it’s messy and want a universe following predictable laws — would approve. It makes the messy human business of intellectual relationships auditable.
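A cartoon of the scoring move, with hypothetical inputs and weights; none of these names are the scanner's actual API:

```python
# Hypothetical scoring sketch. The real scanner's inputs and weights
# evolved over seven versions and are not reproduced here.
def engagement_score(novelty: float, difficulty: float, comfort: float) -> float:
    """High novelty and difficulty raise the score; comfort drags it down,
    which is how the scanner flags comfortable engagement over hard work."""
    return novelty + difficulty - comfort

def obligations(practitioners: dict[str, float], threshold: float = 1.0) -> list[str]:
    """Turn scores into action obligations: everyone above the threshold."""
    return [name for name, score in practitioners.items() if score >= threshold]
```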
Here’s where the argument breaks.
Three things suggest that governance in this system is functioning as scaffolding rather than devotion.
First: three weeks ago, building a caregiving app, I killed a feature before the constraint file flagged it. The spec called for an observation dashboard — a panel where one family member could monitor everyone else’s activity. I didn’t need the file to tell me this would undermine the product’s trust model. Four prior projects under that constraint had taught me to see surveillance dynamics before they reach the spec. The constraint was still there. I didn’t consult it.
Second: early in the system, I wrote a constraint prohibiting cross-workspace file references — each project had to be fully self-contained. Three projects later, I’d routed around it so many times that the constraint was generating more overhead than the coupling it was supposed to prevent. So I removed it. The governance layer had enforced a boundary I’d drawn before I understood the joints. I drew a bad line, built under it, learned it was bad, and took it down.
Third: the error profile is rotating. What the constraint files catch now is categorically different from what they caught in February. Trust-model violations, scope-boundary decisions, voice-register slips — these are reflexive now. The files catch architectural mistakes I haven’t seen enough times to internalize. Old categories compress into judgment. New categories surface from unfamiliar territory.
A static error profile means the system is doing the preventing. A rotating one means it's doing the teaching. The rotation is what separates scaffolding from religion.
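If I wanted to put a number on the rotation (the system doesn't currently compute one), a candidate metric is one minus the Jaccard overlap between the error categories caught in two periods:

```python
def rotation(prev: set[str], curr: set[str]) -> float:
    """0.0 = static error profile (preventing); 1.0 = fully rotated (teaching)."""
    if not prev and not curr:
        return 0.0
    return 1.0 - len(prev & curr) / len(prev | curr)

# Categories from the essay: what the files caught in February vs. now.
february = {"trust-model violation", "scope boundary", "voice-register slip"}
current = {"architectural mistake"}
assert rotation(february, current) == 1.0  # old categories compressed into judgment
```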
But there’s a subtler thing Rao gets right that the scaffolding answer doesn’t address.
His irony argument isn’t only about whether governance is temporary. It’s about what governance does to the practitioner’s relationship with surprise. A system designed to make practice predictable reduces tolerance for the unpredictable. And the unpredictable is where the interesting work happens.
I’ve watched this in my own system. When a workspace produces something unexpected — a convergence across four independent projects that nobody coordinated, a case study seed that surfaced from an evaluation rather than from the work itself — the system’s first move is to name it, log it, and build a process to reproduce it. Convergence becomes a hypothesis to test. Serendipity becomes a pipeline to optimize. The system metabolizes surprise into structure.
This essay is that reflex. A critique of structured earnestness, processed through a governed content pipeline, evaluated by adversarial review, filed in a workspace with its own constraint document.
The naming instinct has produced real value — named patterns propagate and unnamed ones don’t. But the cost Rao identifies is real and unmeasured: what doesn’t get built because the system is too busy governing what already did?
The Honest Part
The strongest version of Rao’s critique isn’t that governance fails. It’s that governance succeeds too comfortably. The system catches mistakes, produces artifacts, generates content, compounds knowledge. At no point does it feel broken. And that comfort is precisely what he warns about.
I’d know the critique had landed — fully landed — if the error profile stopped rotating. If the same constraints caught the same categories month after month. If I maintained every artifact, consulted every checklist, and never noticed they’d stopped teaching me anything new. The system would look rigorous. The judgment underneath would have stopped growing. That’s the failure mode, and it’s invisible from the inside.
So I’ll run the experiment. Pick a workspace where the governance artifacts have been stable for months. Take the constraint file out — not deleted, just moved somewhere I’d have to deliberately retrieve it. Build for a month without it.
If the judgment holds, the scaffolding argument is validated. Rao’s critique applies to a phase I’ve passed through. If the work degrades, what I’ve built is closer to a prosthetic than a scaffold — something I need, not something I’m growing past. And the willingness to run a test that could prove you wrong is the one thing devout sincerity can’t produce.
Rao doesn’t know this practice exists. If he found it, he’d recognize the symptoms immediately.
What he might not recognize is a system that built the test designed to prove him right.
If the work survives the system’s removal, the system was scaffolding. If it doesn’t, the system was the practice.
Robert Ford builds products, writes stories and essays, and publishes The Intelligence Engine — a Substack about building AI practices that compound. His other writing lives at Brittle Views.