Three AIs Built One Product. Here’s Why It Didn’t Fall Apart.
When a governed system spans multiple AI tools with no shared memory, the methodology either holds or it doesn’t. This is the test.
One product. Three AI tools. No shared memory between any of them. By every measure of the Amnesia Tax, this should have produced incoherent architecture — conflicting schemas, duplicated logic, incompatible assumptions about how the product works.
It didn’t.
Claude designed the architecture. ChatGPT built the execution engine. Lovable scaffolded the frontend. Each tool worked in its own session. None could see what the others had built. The product shipped with a converged schema, consistent security boundaries, and a unified data flow.
Not because the tools coordinated. Because the system around them did.
The Friction
The first three case studies tested the methodology within a single tool — Claude, operating inside a governed workspace with persistent files. This one tests whether it survives contact with tools that can’t read each other’s context.
The problem showed up immediately. Claude designed a database schema with specific column names and enum values. ChatGPT needed to build edge functions that write to that same schema. But ChatGPT had never seen the schema. It was designing in a vacuum — inferring table structures from the task description, making guesses about column names and data types that were plausible but wrong.
The same friction appeared in reverse. When Lovable rebuilt the frontend, it needed to know the API contract — which endpoints existed, what parameters they expected, what the response shapes looked like. Twenty-plus REST endpoints, each with specific behaviors around partial updates, COALESCE patterns, and error handling that Claude had established across multiple sessions.
Three tools. Zero shared memory. Every handoff was a potential drift point.
The Build
The fix was not a new tool. It was two files that already existed.
**constraints.md** held the rules. Not the code — the rules about the code. Security boundaries that no tool was allowed to weaken. Naming conventions that every table had to follow. Architectural decisions that were settled and not open for re-litigation. By the time the file had accumulated entries from all three tools, it contained 49 constraints — each one a decision that no future session with any tool needed to revisit.
**architecture.md** held the blueprint. The database schema. The API contract. The component structure. The data flow diagram showing how a thought becomes a brainstorm becomes an idea becomes a project. When ChatGPT needed to build edge functions, it read the architecture file. When Lovable needed to wire up the frontend, it read the same file. Neither tool knew the other existed. Both built to the same spec.
The workflow was not elegant. When a tool produced something — a schema, an edge function, a component structure — I copied it back into the constraint and architecture files. The files grew as the build progressed. When the next tool started a session, it read the current files and inherited every decision the previous tools had made.
The bridge between tools was the files themselves. Share the output. Update the docs. Start the next session with the docs loaded. The tool figures out the consequences — what applies, what constrains, what’s already been decided.
Not automated. Not orchestrated. But durable.
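That manual bridge can be sketched in a few lines. The file names (constraints.md, architecture.md) come from the article; the function name and the header format are illustrative, not part of the actual workflow tooling — which, as described, was just copy and paste.

```python
from pathlib import Path

def build_session_preamble(doc_paths):
    """Concatenate the current governance files into a preamble for a
    fresh AI session, so the new tool inherits every prior decision.

    Files that don't exist yet are simply skipped; early sessions may
    start before both files have content.
    """
    sections = []
    for path in doc_paths:
        p = Path(path)
        if p.exists():
            # Label each section with its filename so the tool knows
            # which document a rule or diagram came from.
            sections.append(f"## {p.name}\n{p.read_text()}")
    return "\n\n".join(sections)
```

The point of the sketch is how little machinery the bridge needs: the files are the integration layer, and loading them is the whole handoff.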
The key is what the files actually contained. Not descriptions of what to build — records of what had been decided and why. When ChatGPT read that the edges table uses no foreign keys because Postgres can’t have polymorphic FKs, it didn’t propose a FK-based alternative. When Lovable read that progressive disclosure is data-driven — features appear when the user has enough data, not based on time or tutorials — it didn’t build an onboarding wizard.
Here’s where the system actually caught something. Lovable’s first pass at the brainstorm edge functions used its own built-in AI to handle responses — the default behavior when scaffolding an LLM-powered feature. But constraint #1 in the file said the product must be LLM-agnostic. No dependency on any specific model’s capabilities. The constraint forced a rewrite: provider-agnostic functions that load the user’s own API keys and route to whatever model they’ve configured. Without the file, Lovable’s default would have shipped — technically functional, architecturally wrong. The constraint caught the violation before it became infrastructure.
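The shape of the rewrite that constraint #1 forced can be sketched as a dispatch table keyed by the user's configured provider. This is a hypothetical illustration, not the product's code: the provider names, the registry, and the stub handlers are all placeholders for real API clients loaded with the user's own keys.

```python
# Illustrative stubs standing in for real provider clients.
# The LLM-agnostic constraint means no code path assumes a specific model.
PROVIDERS = {
    "openai": lambda prompt, key: f"[openai] {prompt}",
    "anthropic": lambda prompt, key: f"[anthropic] {prompt}",
}

def complete(prompt, user_config):
    """Route a completion request to whichever provider the user
    configured, using the user's own API key. Unknown providers fail
    loudly rather than silently falling back to a built-in default."""
    provider = user_config["provider"]
    if provider not in PROVIDERS:
        raise ValueError(f"unconfigured provider: {provider}")
    return PROVIDERS[provider](prompt, user_config["api_key"])
```

The design choice the constraint enforces is the absence of a default: Lovable's built-in AI would have been exactly the kind of silent fallback that ships as infrastructure.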
Each tool started its session at the decision boundary, not before it.
The Insight
The Amnesia Tax isn’t just the cost of re-explaining context between your sessions with one AI. It’s the cost between your sessions with different AIs. And the fix is the same: persistent files that any tool can read.
What made this work was not the tools' relative capabilities. Those differences exist, and they matter. But they're not why the product converged instead of fragmenting.
It converged because the constraint file made decisions portable. A security boundary established in Claude’s session was enforced in ChatGPT’s session — not because ChatGPT understood the security reasoning, but because the constraint existed as a rule it could follow. An architectural pattern established across Claude’s first five sessions was inherited by Lovable in session one — not through training or tool integration, but through a text file the tool read before generating anything.
This is what the methodology actually proves at scale. The governance layer — the SOP, the constraints, the architecture doc, the decision log — isn’t a Claude feature. It’s a discipline. The system holds the memory. The AI provides the capability. Those two things are separate, and keeping them separate is the point.
If the methodology only worked with one tool, it would be a workflow. Because it works across tools, it’s a practice.
The Honest Part
Sharing outputs between tools and maintaining the files takes real effort. Not the mechanical kind — the judgment kind. Deciding what belongs in constraints versus architecture, what’s a standing rule versus a session-specific choice, when a file needs tightening versus expansion. A direct integration — where tools could read shared files automatically — would reduce friction. That integration doesn’t exist today. The maintenance overhead is the cost of tool-agnosticism.
The constraint file works because one person maintains it. When I update architecture.md after a Claude session, I know what changed and why. In a multi-operator system — two developers working with different AI tools on the same product — the constraint file becomes a merge conflict waiting to happen. The single-operator assumption runs deep in this methodology, and this case study doesn’t test what happens when it breaks.
There’s a quality gap between tools that the governance layer doesn’t fully close. Claude’s architectural reasoning produced cleaner abstractions than ChatGPT’s implementation patterns in several cases. The constraint file prevented drift, but it couldn’t elevate the weaker tool’s output to match the stronger tool’s. Governance ensures consistency. It doesn’t ensure uniform quality.
And the product’s complexity creates a new kind of maintenance cost. Architecture.md is now over 600 lines. Constraints.md has 49 entries. The governance layer that enables multi-tool development also demands ongoing curation — archiving outdated constraints, updating architecture after major changes, keeping the files honest about what the system actually does versus what was planned. The files compound, but they also accumulate. The difference between those two things requires judgment that no constraint file can automate.
What This Is Actually About
The first case study proved speed. The second proved compounding. The third proved operational self-management. This one proves portability — the methodology is not bound to any specific AI tool.
That matters because the tool landscape is shifting faster than any practice built on a single tool can survive. A workflow that depends on Claude’s specific capabilities breaks when Claude changes or when a better tool emerges for a specific task. A practice that lives in persistent files — constraints, architecture, decisions — survives any tool transition. The AI changes. The governance layer doesn’t.
Three AIs built one product because the system that held the decisions was more durable than any session with any tool. The intelligence wasn’t in the model. It was in the files the models read before generating anything. But every case study so far has tested that claim on my own work, my own tools, my own stakes. The harder question is what happens when the methodology meets someone else’s problem on someone else’s timeline.
Case Study Insight: The methodology works across AI tools because governance lives in files, not in any tool’s memory. The system holds the decisions. The AI provides the capability. Keeping those two things separate is what makes the practice portable.
Robert Ford builds products, writes stories and essays, and publishes The Intelligence Engine — a Substack about building AI practices that compound. His other writing lives at Brittle Views.