As a system evolves, the distance between documentation and code naturally increases. In most teams, documentation is the first thing to degrade. In an AI-assisted workflow, stale context isn’t just annoying — it’s a failure mode. If the AI believes the system works one way, but the implementation has drifted, it will produce code for a system that no longer exists.
The final piece of this research looks at Contextual Continuity: the process of ensuring the AI’s understanding of the system evolves at the same rate as the code itself.
From Save Points to Persistent Knowledge
In our previous post, we introduced IMPLEMENTATION_STATE.md as a contextual heartbeat. While it acts as a save point for a single task — bridging separate chat windows and preventing the need for re-explaining — its true power lies in how it feeds back into the memory centre.
Engineering rarely happens in a single, uninterrupted flow. Complex features span days and dozens of context switches. By persisting state in the repository rather than a chat history, we can reduce context decay.
The goal here isn't just to remember where we left off; it's to keep the source of truth current as part of the workflow.
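As a concrete illustration, a minimal IMPLEMENTATION_STATE.md might look like the sketch below. The headings and task details are illustrative assumptions, not a prescribed schema:

```markdown
# IMPLEMENTATION_STATE.md

## Current task
Migrate payment webhooks to the new retry queue.

## Done
- Added the retry-queue consumer and its unit tests.

## Next
- Wire up dead-letter handling for poison messages.

## Constraints discovered
- Webhook handlers must stay idempotent; retries can deliver duplicates.
```

Because this lives in the repository, any new session (or new engineer) can pick up exactly where the last one stopped.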
Context Patches: Synchronising Design and Reality
The core mechanism of a closed-loop repository is the context patch. This shifts documentation from an afterthought to a structured requirement for finishing a ticket.
A context patch is just a small, reviewable diff that keeps ai/context/ aligned with what the code now actually does.
The point isn’t to write perfect documentation — it’s to capture the constraints that would matter in a review so the next AI session starts from the truth.
Example context patch
This is the self-updating bit: every change leaves behind a breadcrumb the next session can rely on.
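For instance, a context patch for a hypothetical rate-limiting change might look like this. The file contents are illustrative; only the location under ai/context/ matters:

```diff
--- a/ai/context/SYSTEM_FACTS.md
+++ b/ai/context/SYSTEM_FACTS.md
@@
 ## Rate limiting
-- The public API allows 100 requests/minute per key.
+- The public API allows 60 requests/minute per key.
+- Limits are enforced at the gateway, not in individual services.
```

Note how small it is: a context patch records the delta in system facts, not a rewrite of the documentation.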
This works through a coordinated hand-off between the two roles:
- **The Architect's Proposal:** During the planning phase, the architect doesn't just plan the code; it plans the knowledge update. It identifies exactly which files in `ai/context/` need to evolve and proposes the specific Markdown changes (the context patch) before a single line of code is written.
- **The Developer's Implementation:** The developer then implements the feature. Once the code is complete and tested, the developer is responsible for applying the patch. This ensures that the documentation update is verified against the final implementation.
Why the 'Architect Proposes, Developer Applies' Model Works
- The Architect captures intent: how the system should evolve and what context must change.
- The Developer verifies reality: the code matches the blueprint, then applies the patch as part of the change.
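One lightweight way to enforce this hand-off is a CI check that fails when source code changes without a matching context patch. This is a minimal sketch under assumptions about your layout (the `src/` and `ai/context/` prefixes, and a `main` base branch), not a prescribed tool:

```python
import subprocess
import sys

def needs_context_patch(changed_files: list[str]) -> bool:
    """True if code changed but no file under ai/context/ was touched."""
    code_changed = any(f.startswith("src/") for f in changed_files)
    context_patched = any(f.startswith("ai/context/") for f in changed_files)
    return code_changed and not context_patched

def changed_in_branch(base: str = "main") -> list[str]:
    """List files changed on this branch relative to the base branch."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

if __name__ == "__main__":
    if needs_context_patch(changed_in_branch()):
        sys.exit("Code changed under src/ but ai/context/ was not patched.")
```

Run in CI, this makes the context patch a gate rather than a convention: the PR cannot merge until the developer has applied the architect's proposed documentation changes.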
The Result: A Repo That Onboards Itself
When you combine these pillars, the repository becomes a maintainable knowledge system. A new developer can clone the repo and ask: "Read the ai/ folder. What should I work on next, and what constraints must I respect?"
The AI can then produce a high-context briefing comparable to a senior engineer's onboarding walkthrough.
The Road Ahead: From Research to Framework
What started as a weekend project to scratch an itch — trying to make AI more reliable for my own tasks — has opened a door. Throughout this research, it became clear that the "AI problem" isn't about the models being too weak; it’s about our repositories being too quiet.
By giving the AI a memory centre, distinct operational roles, and rigid engineering standards, we’ve moved past the "unpredictable chatbot" phase. We’ve built the foundations of a system that doesn't just write code, but actually understands the intent and the integrity of the software it’s building.
The Final Ecosystem
This diagram shows how the pieces work together, with separate channels for grounding, enforcement, and patching. It's no longer just a chat window; it's a closed-loop workflow for high-fidelity software engineering.
```mermaid
graph LR
    %% 1. Input (Far Left)
    Engineer((Software Engineer)) -- "1. Request" --> Arch

    %% 2. The Loop (Center)
    subgraph Loop ["The Integrated Execution Loop"]
        direction TB
        Arch["<b>Architect Role</b><br/>(Audit & Plan)"]
        Approval{<b>Human<br/>Review</b>}
        Dev["<b>Developer Role</b><br/>(Code & Patch)"]
        Arch -- "2. Proposal & Diagrams" --> Approval
        Approval -- "3. Approve" --> Dev
    end

    %% 3. Foundation (Left)
    subgraph AI_Dir ["ai/ Directory"]
        direction TB
        Context["📂 context/<br/>(Memory & State)"]
        Standards["📂 standards/<br/>(Engineering Standards)"]
        Roles["📂 roles/<br/>(Role Definitions)"]
    end

    %% 4. Output (Far Right)
    Code[("📦 src/<br/>(Code)")]

    %% Connections: order matters for layout
    Context -.->|Grounding| Arch
    Standards -.->|Enforces| Dev
    Roles -.->|Defines| Arch
    Roles -.->|Defines| Dev
    Dev -- "4. Writes" --> Code
    Dev ==>|5. Context Patch| Context

    %% Styling
    style Engineer fill:#f5f5f5,stroke:#9e9e9e,stroke-width:2px
    style AI_Dir fill:#f0f7ff,stroke:#0052cc,stroke-width:1px,stroke-dasharray: 5 5
    style Loop fill:#ffffff,stroke:#01579b,stroke-width:2px
    style Code fill:#fff7e6,stroke:#ffa940,stroke-width:2px
    style Arch fill:#fff,stroke:#01579b
    style Dev fill:#fff,stroke:#2e7d32
    style Approval fill:#fff,stroke:#6200ee,stroke-width:2px
    style Roles fill:#f0f7ff,stroke:#0052cc
```
What’s Next?
I’m genuinely excited about where this is heading. This series was the discovery phase, but the next few weeks are about execution.
I’m currently focused on turning these research notes into a robust system. Over the coming weeks, I'll be:
- **Standardising the File Schemas:** Refining the `ai/` directory structure so it's plug-and-play for any team.
- **Hardening the Roles:** Stress-testing the Architect and Developer personas against legacy codebases and complex refactors.
- **Scaling the Loop:** Automating the context-patching process so that documentation maintenance becomes a zero-effort byproduct of development.
This is the end of the initial series, but it’s really just Version 0.1.
Join the Conversation
I’m sharing this research as a series on LinkedIn to gather feedback:
- What's the fastest-moving part of your system where docs drift first: domain rules, service boundaries, or operational runbooks?
- If you introduced context patches tomorrow, which file would you patch first: `DOMAIN_MODEL.md`, `BOUNDARIES.md`, or `SYSTEM_FACTS.md`, and why?
- Would you accept a PR if the code changed but the repo's source of truth didn't (no context patch)?