I Built a Product in 5 Hours. I Spent 4 of Them Not Building.
A governed workspace made this build possible — because the decisions came first.
The product didn’t start as a product. It started as a sentence in a review session for a different project.
I was evaluating my care coordination app — a clinical tool for a therapist’s practice — when the therapist said something I hadn’t planned for: the architecture we’d built for her clients would work for families managing aging parents. Not her clients. Regular families. The ones calling each other in a panic after Dad falls, texting updates into a group chat that nobody reads, and burning out one sibling at a time because nobody else can see the full picture.
That was 7:38 in the morning. By 9pm the same day, the product had a name (Togetherly), a domain, a 70-line product constitution, a live app with 15 working features, a pitch deck, a one-pager, and a brand identity. Five hours of working time across the day. One hour building. Four hours thinking.
The ratio is the story.
The Friction
Building software with AI is fast. Everyone knows this. The friction isn’t the building — it’s the deciding. What should the product do? What should it refuse to do? Who is the user, really? What happens when the user’s needs conflict with the obvious feature?
These questions don’t have code answers. They have judgment answers. And judgment takes time — time most AI workflows skip because building is cheap enough to ship and iterate.
I’ve watched this produce a specific failure mode. The product works. The features function. And nobody uses it — because nobody decided what the product was actually for.
The workspace system I’d built over the previous three weeks had a different opinion about how products should start. Not with a prompt. With a constraints document.
The Build
The constraints document came first. Not a feature list — a product constitution. Seventy lines of decisions about what this product would and wouldn’t be, established before any code existed:
Family coordination tool, not a health monitoring platform. No clinical language. That one constraint eliminated an entire feature category that would have taken weeks to build and made the product feel like a hospital intake form.
The coordinator role rotates. This wasn’t a feature request. It was a structural answer to caregiver burnout — the single biggest reason families abandon coordination tools. The product must treat primary caregiving as a shift, not a sentence.
The shared timeline is the core product. Not a dashboard. Not analytics. Not a form. This killed the most obvious product direction — the observation-logging app that every caregiving startup builds and every family stops using after a week.
Design for the exhausted caregiver, not the ideal caregiver. Every interaction must pass: “Could an exhausted person do this in 30 seconds?”
I didn’t write these constraints from scratch. The clinical app’s constraint file became the structural starting point — its 49 entries showed which architectural choices held under real use and which needed rework. The decision log entry where I’d reversed the A-Team’s observation-first design (users wouldn’t fill out structured forms) saved me from building the same wrong thing twice. The brainstorm skill refined across multiple projects ran the diverge-converge-decide cycle.
Without the constraints, I know exactly what I would have built — because it’s what every caregiving startup builds first. An observation-logging dashboard where family members fill out structured forms about Dad’s mobility, cognition, and medication. It’s the obvious product. It’s also the product families stop using after a week, because exhausted caregivers don’t fill out forms. The constraint that killed this — “the shared timeline is the core product, not a dashboard, not analytics, not a form” — redirected the entire architecture toward natural-language updates with optional tags. That one line in the constraints file is the difference between a product that looks right in a demo and a product that might survive contact with a real family.
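To make that redirection concrete, here is a minimal sketch of what “natural-language updates with optional tags” could look like in code. The function name, hashtag syntax, and data shape are assumptions for illustration; the article does not describe Togetherly’s actual implementation.

```python
import re
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TimelineEntry:
    author: str
    text: str  # the natural-language update, stored as written
    tags: list = field(default_factory=list)  # structure extracted, never required
    posted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def parse_update(author: str, raw: str) -> TimelineEntry:
    """Turn a free-form update into a timeline entry.

    Tags are optional hashtags (#meds, #fall, #appointment); the update
    itself is preserved verbatim, so an exhausted caregiver never has
    to fill out a structured form.
    """
    tags = re.findall(r"#(\w+)", raw)
    text = re.sub(r"#\w+", "", raw).strip()
    text = re.sub(r"\s{2,}", " ", text)  # collapse gaps left by removed tags
    return TimelineEntry(author=author, text=text, tags=tags)

entry = parse_update("Maria", "Dad was dizzy after lunch #meds #followup")
print(entry.text)  # Dad was dizzy after lunch
print(entry.tags)  # ['meds', 'followup']
```

The point of the sketch is the asymmetry: the free-text path always works, and structure is a bonus extracted when it happens to be there, never a gate the caregiver must pass through.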
Then I ran adversarial review against the constraints — a different AI model, four rounds. Product strategist lens. Elder care domain expert lens. The adult child in crisis lens.
The reviews were brutal in exactly the right way. “People will not reliably log observations as structured data.” That killed my original interaction model and replaced it with a timeline-first design where families share natural updates and the system extracts structure from tags. “The person portal is a false dependency.” That reversed a decision I’d already committed to — an entire interface for the elderly parent, promoted from the clinical app’s architecture. The reviewer argued the product must work fully without the supported person ever touching it. I’d spent an hour designing that portal. The reversal took five minutes and removed a feature that would have blocked launch.
The external evaluation flagged confirmation bias in my own simulation, surfaced objections I hadn’t tested, and reordered feature priorities based on trust signals I’d underweighted. That came after four adversarial rounds and a twelve-persona simulated focus group — each layer catching things the previous one missed.
Not everything changed. The “30-second rule” for exhausted caregivers survived every review round unchanged — which meant every interaction design decision had a fixed constraint it couldn’t violate. Adversarial review isn’t only destructive: constraints that survive it become load-bearing.
Four hours of thinking. Forty structured decisions. A product definition stress-tested across six distinct lenses.
Then the building started.
Thirteen consecutive builds in roughly one hour. Each build executed a decision that was already made. No ambiguity about what to build. No mid-build pivots. No “actually, let me rethink the data model.” The constraint file had settled every architectural question before the first prompt.
Baton passing — the coordinator rotation feature — shipped as an atomic acceptance flow with handoff summaries, because the constraints said rotation must respect agency. The care snapshot shipped as a shareable summary generated from real timeline data, because the constraints said it was the primary adoption mechanism. Visibility controls shipped with three levels, because the constraints said the product must not become ammunition in family disputes.
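The three visibility levels could be modeled as simply as a single check keyed to the viewer’s relationship to the entry. The level names and roles below are illustrative assumptions; the article does not name Togetherly’s actual labels.

```python
from enum import Enum

class Visibility(Enum):
    # Three levels, as the constraints require; the names here are
    # illustrative assumptions, not Togetherly's actual labels.
    FAMILY = "family"              # every member of the care circle
    COORDINATORS = "coordinators"  # current coordinator(s) only
    PRIVATE = "private"            # the author alone

def can_view(level: Visibility, viewer: str, author: str,
             coordinators: set) -> bool:
    """Decide whether a viewer may see a timeline entry at this level."""
    if viewer == author:
        return True  # authors always see their own entries
    if level is Visibility.PRIVATE:
        return False
    if level is Visibility.COORDINATORS:
        return viewer in coordinators
    return True  # FAMILY: visible to the whole circle

coordinators = {"maria"}
print(can_view(Visibility.COORDINATORS, "maria", "tom", coordinators))  # True
print(can_view(Visibility.COORDINATORS, "lena", "tom", coordinators))   # False
```

Routing every read through one function like this is what would make the “not ammunition in family disputes” constraint enforceable rather than aspirational: there is exactly one place where the rule can be checked, tested, and audited.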
Every feature traced back to a line in the constraints file. The builds were straightforward because the decisions were already made.
The Insight
The standard AI product story goes: “I built something in two hours that used to take two months.” Speed becomes the story.
This is a different story. The product took five hours — and the interesting part is that four of those hours involved no building at all. Every hour spent deciding eliminated hours of building, rebuilding, and discovering mid-build that the product was solving the wrong problem.
The deeper insight is about what made those four hours of thinking *productive* rather than just slow.
I didn’t start from zero. The constraints template came from the clinical app — a file I could fork and rewrite in fifteen minutes instead of drafting from scratch. The decision log entry that killed the A-Team’s observation-logging model told me not to build one here. The brainstorm skill’s diverge-converge-decide structure, refined across four previous uses, ran the ideation phase. The adversarial review pattern emerged from the quality assurance workflow I’d established for publishing.
Each of those was a specific artifact from a previous project, reused in this one. A product constitution written in isolation is hard. A product constitution written by forking a proven constraints file, reading a decision log that flags which ideas already failed, and running a tested brainstorm structure — that’s fast.
This is what compounding looks like in practice. Not faster prompts. Not better models. Prior decisions — recorded, stress-tested, reusable — making the next build structurally better before a single line of code exists.
The Honest Part
The product was built in five hours. It is not done.
What shipped is a beta-ready app — feature-complete for testing, live on a custom domain, with working authentication, timeline, care snapshots, coordinator rotation, task claims, and a shared calendar. But “beta-ready” means “ready to discover whether anyone will actually use it.” The existential question — will a second person contribute to the same care timeline? — hasn’t been answered. If they don’t, the product collapses into a personal journal.
The adversarial reviews and simulated focus group were genuinely useful for product definition. They are not substitutes for real users. The external evaluation said so explicitly: “Stop simulating. Start real testing.” The four hours of thinking produced a battle-tested spec. It did not produce a validated product.
The constraints document works because one person maintains it. The same single-operator assumption that runs through every case study in this series applies here. The product I built is for families — multiple people with different relationships, different technology comfort levels, different emotional stakes. Building a multi-user product as a single operator using a single-operator methodology is a structural tension I haven’t resolved.
And the speed of the build created its own risk. When building is cheap, the temptation is to keep building. In the days after the initial sprint, the product accumulated condition-specific templates, needs briefs, pitch deck variants, and roadmap features. Some of it was needed. Some of it was scope creep masked by how cheap building had become.
The governance layer prevented building the wrong thing *within the spec*. It does not prevent building too much *beyond the spec*. That’s a different discipline — one the constraints file doesn’t automate.
There’s a deeper question this case study doesn’t answer: whether the governance layer is permanent infrastructure or transitional scaffolding. The constraints file, the decision log, the adversarial review — I needed all of them for this build. But I needed them because I was building the muscle, not because the muscle can’t eventually work without them. A practitioner who has internalized what these artifacts teach — who instinctively kills the observation-dashboard idea without needing a decision log entry to remind them — may not need the explicit governance at all. The system’s goal, if it’s honest, is to become unnecessary. This case study documents a phase of practice, not a permanent way of working.
What This Is Actually About
Each prior case study tested one property of this methodology — speed, then compounding, then operations, then portability across tools. Each one also deposited specific artifacts: a constraints template, a decision log pattern, an adversarial review workflow, a proven multi-tool handoff protocol. This case study is what happens when those artifacts combine. Remove the constraints template and the product constitution takes days instead of minutes. Remove the decision log and the observation-dashboard mistake gets repeated. Remove the adversarial review pattern and the person portal ships as a required feature that blocks launch. The five-hour timeline depends on all four layers existing before the morning started.
Emergence, operationally: a product that no one planned, built from artifacts that were created for other purposes, in a timeline that’s only possible because those artifacts already existed. This is the difference between a tool that makes you faster and a system that reduces the cost of deciding enough that unplanned products become viable. A faster tool would have built Togetherly’s features more quickly. The workspace system built Togetherly’s *judgment* more quickly — and judgment is the part that determines whether the features matter.
The workspace layer changes what can be built in a single session — because most of the decisions are already made. But this breaks the moment constraint ownership becomes shared. Multi-operator governance — multiple people maintaining the same constraints file, the same decision log, the same review standards — is a different problem, and one this system doesn’t yet solve.
Case Study Insight: The product took five hours because four of them were spent deciding, not building. The decisions were fast because every prior project had deposited reusable artifacts — constraints templates, decision log entries, tested review workflows. Compounding doesn’t just make you faster — it makes you capable of things that weren’t in the plan.
Robert Ford builds products, writes stories and essays, and publishes The Intelligence Engine — a Substack about building AI practices that compound. His other writing lives at Brittle Views.

