The Amnesia Tax: What It Costs You to Start From Zero Every Session
Why your AI workflow resets to zero every morning
Every AI session begins with amnesia.
You explain who you are. What you’re working on. What you’ve already decided. What the constraints are. What happened last time. What you need now.
Then you do the work. It goes well, or well enough. You close the window. And tomorrow the AI has forgotten all of it.
This is the Amnesia Tax. Not the cost of bad AI — the cost of AI that has no memory between sessions.
Most people don’t notice it because they assume this is normal.
What You Lose
The obvious loss is time. Re-explaining context takes ten, fifteen, twenty minutes per session depending on the complexity. Multiply that across sessions, across projects, across weeks. It adds up to hours per month spent saying things you’ve already said.
But time isn’t the real cost.
The real cost is depth. When you re-explain your project from scratch, you don’t reproduce the full picture. You reproduce a summary. And summaries lose nuance — the constraint you added after that one failure, the decision you made three weeks ago about tone, the reason you stopped using a particular approach. Those details don’t make it into the recap because you’ve forgotten they’re important enough to mention.
So the AI starts each session slightly less informed than the last one ended. Not dramatically — just enough that it offers a suggestion you already rejected. Proposes an approach you already tried. Misses a constraint that took you three sessions to identify.
You correct it. The session proceeds. But you’ve lost the compounding.
The Compounding Problem
Here’s what most people miss about working with AI: the value doesn’t come from any single session. It comes from accumulation.
A single session produces output. A series of connected sessions — where each one builds on the last, where decisions persist, where constraints evolve, where the AI’s understanding of your work deepens — produces something qualitatively different. It produces a system that knows how you think.
I run five concurrent AI-assisted projects. A fiction series with fifty published stories. A care coordination app. A product architecture practice. A knowledge engineering system. And now, a course about the methodology that holds all of them together.
Every one of these projects has a memory. Not in the AI’s head — the AI has no persistent memory worth trusting. The memory lives in files. A status document that tells the AI where things stand. A decision log that records what was chosen and why. A constraints file that encodes what must never happen. An SOP that defines how this particular project works.
When I open a session, the AI reads these files first. It doesn’t need me to explain the project. It already knows. Not because it remembers — because the system remembers for it.
That’s the difference between a session and a practice.
What a Session Without Amnesia Looks Like
Tuesday morning. I open my fiction workspace. The AI reads the SOP — voice constraints, editorial doctrine, the Do-Not-Write lists for each character. It reads the status file — current story in draft, where I left off, what’s unresolved. It reads the decision log — why I changed a character’s arc two weeks ago, why a particular motif is restricted to certain registers.
I don’t explain any of this. I just say: “Pick up where we left off.”
And it does. Not from a vague memory. From documented state.
The draft continues from the exact point it stopped. The constraints are already loaded. The decisions are already applied. The AI doesn’t suggest the approach I rejected last Thursday because the rejection is recorded.
Twenty minutes later, I close the fiction workspace and open the product workspace. Different project, different SOP, different constraints, different voice. The AI pivots instantly because the context isn’t in its head — it’s in the file structure. Each workspace carries its own intelligence.
This is what compounding looks like. Not “the AI gets smarter.” The system gets denser. Each session adds to the record. Decisions accumulate. Constraints refine. The AI’s starting point for Tuesday’s session is better than Monday’s, which was better than Friday’s.
The Amnesia Tax is what you pay when none of this happens.
The Hidden Costs
The obvious cost is repetition. But here are the ones that don’t surface until you’ve been working with AI long enough to notice:
**Decision re-litigation.** Without a decision log, you revisit the same choices. Should this character speak in first person or third? You decided three weeks ago — but neither you nor the AI remembers, so you decide again. Sometimes differently. Now your project has an inconsistency you won’t catch until it’s published.
**Constraint erosion.** You established a rule: never use the word “compliance” in patient-facing copy. Six sessions later, neither you nor the AI remembers the rule. The word appears. Nobody catches it. The constraint existed, worked for a while, and then dissolved because nothing was holding it in place.
**Depth ceiling.** Without persistent context, every session starts at roughly the same depth. You can’t build on last week’s insight because last week’s insight isn’t in the room. The AI gives you competent, surface-level responses every time instead of progressively deeper ones. You’re running in place.
**Cross-project blindness.** An insight in one project that’s relevant to another never transfers. Your fiction work informs your product copy in ways you can feel but the AI can’t see — because each project exists in isolation, with no mechanism for one workspace to learn from another.
These costs are invisible in any single session. They only become visible in aggregate, when you realize you’ve been working with AI for months and it still doesn’t know your preferences, your constraints, or your decisions.
What This Actually Requires
I won’t oversell this. Building a system that eliminates the Amnesia Tax takes effort. Not massive effort — but more than a prompt template.
At minimum, you need three files per project:
A **status file** that captures where things stand. Not a to-do list — a snapshot of current state that the AI reads at the start of every session. What’s in progress. What’s blocked. What was decided last time.
A **decision log** that records choices and their reasoning. Not every decision — the ones that shape the project. When you choose approach A over approach B, write down why. When you add a constraint, record what failure prompted it. This is the memory that prevents re-litigation.
A **constraints file** that encodes what must never happen. The Do-Not-Write lists. The banned words. The quality thresholds. The rules that earned their way in through real failures and need to persist across every future session.
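To make the three files concrete, here is what a fiction workspace's set might look like. The contents below are invented for illustration; only the file roles come from the system described above:

```markdown
<!-- STATUS.md -->
In progress: story 51, draft at scene 4 of 6.
Blocked: ending depends on an unresolved motif question.
Last session: cut the flashback, tightened the opening.

<!-- DECISIONS.md -->
2024-05-02: Narrator stays in third person. First person
flattened the secondary characters in earlier stories.

<!-- CONSTRAINTS.md -->
Do-Not-Write: inner monologue for the antagonist.
Motif "the lighthouse" restricted to the closing register.
```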
That’s the minimum. My system is more elaborate — it includes SOPs, cross-project transfer records, editorial passes, artifact pipelines — but those three files eliminate the worst of the Amnesia Tax. You can build the rest as you need it.
The overhead is small. Updating these files takes two to three minutes at the end of a session. The return is disproportionate: every future session starts where the last one ended instead of starting from zero.
The Honest Part
This approach doesn’t eliminate all friction. The AI still makes mistakes. It still needs correction. It still occasionally ignores a constraint you’ve written in bold and underlined twice.
What it eliminates is the *same* friction, session after session. The AI stops re-proposing rejected ideas. It stops violating constraints you’ve already identified. It stops asking questions you’ve already answered. The novel problems remain — the solved ones stay solved.
That’s the trade. A small investment in documentation — status, decisions, constraints — in exchange for a practice that gets better instead of resetting to zero.
What This Is Actually About
The first essay in this series was about governance — how to use AI without losing your voice. This one is about continuity — how to use AI without losing your context.
Together, they’re two halves of the same problem: AI is powerful, but it has no memory and no standards. If you want output that compounds — that gets more useful, more aligned, more yours over time — you have to build the infrastructure that AI lacks.
That’s what I did. Over the past year, across five projects, through hundreds of sessions. And I’m turning the full system into a course called *Stop Starting Over With AI*.
The governance essay told you to write your Do-Not-Write list. This one tells you to write your status file. Three lines, updated at the end of every session: what happened, what was decided, what’s next.
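An end-of-session update in that shape might read (example invented, not from the author's files):

```markdown
Happened: drafted the onboarding email, two revision passes.
Decided: plain-text emails only; the HTML template was rejected.
Next: adapt the email for the reminder sequence.
```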
Tomorrow, don’t explain yourself again. Hand the AI a record instead.
The Amnesia Tax is optional. You just have to stop paying it.


