We’ve all been there. You start a fresh chat with an AI, and for ten minutes, it’s magic. Then context drift sets in. The conversation grows, the buffer fills up, and the AI starts losing the thread of your specific architectural decisions.
By hour two, it’s suggesting Flask snippets in a Django project or proposing a NoSQL schema for a system built around PostgreSQL. Instead of a collaborator, it becomes a liability you constantly have to course-correct.
The issue isn’t intelligence — it’s context fragmentation.
Your project’s standards live across Slack threads, Jira tickets, and the collective intelligence of senior engineers. Without access to those constraints, AI defaults to global 'best practices,' not your system’s architectural truth. To move from chatting with AI to architecting with it, we need to stop treating it like a chatbot and start treating it like a remote senior engineer who needs proper onboarding.
This weekend, I set myself a task to explore this exact problem. I wanted to see if I could move beyond ephemeral chat prompts and build a "Source of Truth" directly into the repository — a framework that ensures the AI stays grounded in the project's reality, no matter how long the session lasts.
The "Aha!" Moment: From Prompting to Governing
Consider how engineering opinions actually work inside teams. Imagine your team enforces a strict rule: no comments in code. You expect clarity to come from expressive naming and clean structure.
Note: This is an illustrative example — I personally value well-placed comments — but the constraint demonstrates the mechanism.
Most AI tools, trained on billions of public repositories, default to adding comments everywhere because that reflects statistical averages.
The "Standard" Interaction
User
Write a Python function to calculate the area of a circle.
AI
```python
import math


def calculate_circle_area(radius):
    # Check if the radius is valid (cannot be negative)
    if radius < 0:
        raise ValueError("Radius must be non-negative")
    # Calculate the area using the formula: Area = π * r^2
    area = math.pi * (radius ** 2)
    # Return the calculated area
    return area
```

You ask for a function; the AI adds comments to every line, ignoring your team's style.
The "Workflow Method"
Now introduce a durable source of truth. Create a repository file, STYLE_GUIDE.md, that captures the team's conventions: no comments in code, descriptive variable names.
User
Read STYLE_GUIDE.md and write a Python function to calculate the area of a circle.
AI
I'll read the style guide first and then create a Python function following its conventions.
Now I'll create a Python function to calculate the area of a circle following the style guide's rule of no comments and descriptive variable names:
```python
import math


def calculate_area_of_circle(radius_of_circle):
    if radius_of_circle < 0:
        raise ValueError("Radius cannot be negative")
    return math.pi * (radius_of_circle ** 2)
```

By pointing the AI to that file, you didn't just prompt it; you governed it. The constraint moved from a temporary conversation into a durable, version-controlled source of truth.
Scaling for the Team: The Shared Memory
Up until recently, I was using AI like many developers do: pasting snippets into chats and hoping the model stayed aligned. But that doesn't scale. If constraints exist only inside individual conversations, knowledge resets every session.
I started asking bigger questions about a team's workflow:
- How can I build a system that helps everyone — from juniors to experts and sysadmins?
- How do we stop every engineer from re-inventing the context wheel in every new chat?
- How do we make organisational knowledge durable rather than conversational?
The solution is a Memory Centre: a structured directory, ai/context/, that lives in the repository and acts as shared onboarding memory grounded in the system itself.
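One way to wire such a directory into a tool that lacks native project-instruction support is to assemble the files into a single prompt preamble. This is a minimal sketch, not part of the article's framework; the helper name `build_context_preamble` is hypothetical, and the filenames are assumed to follow the Memory Centre layout (e.g. SYSTEM_FACTS.md, DOMAIN_MODEL.md, BOUNDARIES.md):

```python
from pathlib import Path


def build_context_preamble(repo_root: str) -> str:
    """Concatenate every markdown file in ai/context/ into one
    prompt preamble, with a heading naming each source file."""
    context_dir = Path(repo_root) / "ai" / "context"
    sections = []
    for context_file in sorted(context_dir.glob("*.md")):
        sections.append(f"# {context_file.name}\n{context_file.read_text()}")
    return "\n\n".join(sections)
```

Prepending the returned string to each new session reproduces the "Read ai/context/ first" step without relying on the assistant to fetch the files itself.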
Payment Processing Demo
Let’s look at a practical example. We’ll use a Payment Processing demo to show how these concepts come together in code:
1. The Technical Stack and Constraints (SYSTEM_FACTS.md)
This file establishes the environment boundaries, ensuring the AI does not suggest libraries, versions, or patterns that are incompatible with your infrastructure.
2. The Business Logic and Invariants (DOMAIN_MODEL.md)
By defining our core entities and state machines, we prevent the AI from proposing creative logic that violates fundamental business rules.
3. Architectural Standards and Security (BOUNDARIES.md)
This encodes our system governance and separation of concerns, ensuring the AI respects security protocols and reduces architectural drift.
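Boundary rules like "Payments must not import Marketing or Inventory" can also be enforced mechanically, not just documented. Here is a minimal sketch of such a check using Python's standard `ast` module; the function name and the forbidden-module set are illustrative assumptions, not part of the demo repository:

```python
import ast
from pathlib import Path

# Hypothetical boundary rule: the payments module may not import these.
FORBIDDEN_TOP_LEVEL_MODULES = {"marketing", "inventory"}


def forbidden_imports(module_path: str) -> list[str]:
    """Return every import in a Python file that crosses the boundary."""
    tree = ast.parse(Path(module_path).read_text())
    violations = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            imported = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            imported = [node.module] if node.module else []
        else:
            continue
        violations.extend(
            name for name in imported
            if name.split(".")[0] in FORBIDDEN_TOP_LEVEL_MODULES
        )
    return violations
```

Running a check like this in CI turns the context file from a suggestion into a guardrail that holds for humans and AI alike.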
The Onboarding Interaction
Start a completely fresh chat and ask:
User
Read the files in ai/context/. Based on our standards, give me a top-level overview of how we handle a new transaction and what constraints I need to be aware of.
AI
New Transaction Flow
A new payment moves through the PaymentIntent state machine:
Created → Pending → Succeeded / Failed
- A `PaymentIntent` is created with a unique `trace_id` (mandatory before anything is sent to the gateway).
- The request goes through the PaymentService layer — the checkout view never touches the DB directly.
- The intent is submitted to Stripe (API version 2026-01-28) and transitions to Pending.
- Stripe's response (via webhook, handled asynchronously by Celery/Redis) resolves the intent to Succeeded or Failed.
- Succeeded is terminal — once reached it cannot be reverted.
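The state machine described above can be expressed directly in code. This is a minimal sketch under the article's stated rules (unique `trace_id` before transmission, Succeeded as a terminal state); the class and transition-table names are my own illustrative choices:

```python
from enum import Enum


class IntentState(Enum):
    CREATED = "created"
    PENDING = "pending"
    SUCCEEDED = "succeeded"
    FAILED = "failed"


# Succeeded and Failed map to empty sets: they are terminal states.
ALLOWED_TRANSITIONS = {
    IntentState.CREATED: {IntentState.PENDING},
    IntentState.PENDING: {IntentState.SUCCEEDED, IntentState.FAILED},
    IntentState.SUCCEEDED: set(),
    IntentState.FAILED: set(),
}


class PaymentIntent:
    def __init__(self, trace_id: str):
        if not trace_id:
            raise ValueError("trace_id is mandatory before transmission")
        self.trace_id = trace_id
        self.state = IntentState.CREATED

    def transition_to(self, new_state: IntentState) -> None:
        if new_state not in ALLOWED_TRANSITIONS[self.state]:
            raise ValueError(f"Illegal transition {self.state} -> {new_state}")
        self.state = new_state
```

Encoding the transitions as data makes the invariant checkable: an AI-generated change that reverts a Succeeded intent fails at runtime instead of slipping through review.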
Key Constraints
| Area | Rule |
|---|---|
| Money | Store all monetary values as integers (pence/cents). Never use floats. |
| Security | PAN/CVV must never be logged. Redact PII at the service boundary. |
| Architecture | Checkout view → PaymentService → DB. No direct DB access from the view. Payments module must not import Marketing or Inventory. |
| Refunds | A refund cannot exceed amount_captured. |
| Traceability | Every PaymentIntent must have a unique trace_id before transmission. |
| Style | No comments in code — use descriptive variable names instead. |
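Two of the constraints in the table above (integer money, the refund cap, and PII redaction) are easy to sketch as code. These helpers are illustrative assumptions, not the demo's actual implementation, and the PAN regex is a deliberately naive digit-run match:

```python
import re

# Naive pattern: any 13-19 digit run is treated as a card number (PAN).
PAN_PATTERN = re.compile(r"\b\d{13,19}\b")


def redact_pii(log_line: str) -> str:
    """Mask anything that looks like a card number before it reaches a log."""
    return PAN_PATTERN.sub("[REDACTED]", log_line)


def apply_refund(amount_captured_pence: int, refund_pence: int) -> int:
    """Return the remaining captured amount after a refund,
    enforcing the integer-money and refund-cap rules."""
    for value in (amount_captured_pence, refund_pence):
        if not isinstance(value, int) or isinstance(value, bool):
            raise TypeError("Monetary values must be integers (pence), never floats")
    if refund_pence < 0:
        raise ValueError("Refund amount cannot be negative")
    if refund_pence > amount_captured_pence:
        raise ValueError("A refund cannot exceed amount_captured")
    return amount_captured_pence - refund_pence
```

When invariants live in the context directory and in code like this, the AI's output can be checked against them mechanically rather than by eye.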
The AI produces a project-specific briefing distilling stack rules, domain invariants, and architectural boundaries.
Instead of generic advice, you get contextual reasoning aligned to your system from the first interaction.
Making This Work in Real Tools
Different assistants expose repo context in different ways, but the pattern is the same: keep instructions version-controlled and close to the code.
Examples you’ll see in real-world projects include:
- A single project memory file
- A rules directory / scoped instructions
- A dedicated context pack like ai/context/
The key point isn’t the filename — it’s that the repo becomes the onboarding packet.
If your tool supports project instructions, point it at the memory centre. If it doesn’t, you can still adopt the workflow by using a consistent first step:
User
Read ai/context/ first. Then answer.
The Result: High-Fidelity Onboarding
This mirrors how we onboard human engineers. We don’t expect new developers to infer architecture from code alone; we provide context and guardrails.
When you start a fresh chat and point it to the ai/context/ directory, the AI stops guessing and starts operating within documented decisions. It assembles the rules into a Project-Specific Briefing.
By storing AI-readable context in the repo, consistency becomes the default. Whether it’s a Junior dev or a Senior Architect asking a question, the AI responds with the same team-approved standards. Architecture decisions are protected, and knowledge becomes versioned organisational memory.
A Foundation for Growth
This directory isn’t a finished framework; it’s an evolving research protocol (Version 0.1). As a project grows, teams can extend it with testing standards, CI/CD expectations, or observability rules. The core insight from my research is this:
Treat AI context as part of your system’s infrastructure, not an afterthought inside chat windows.
A Mental Model
By moving this knowledge into the repository, we change the relationship between the Engineer, the AI, and the Code. We are no longer just "chatting" with a chatbot; we are grounding an agent in our project’s specific reality.
Instead of relying on the software engineer to manually bridge the context gap in every prompt, that knowledge is now defined in a persistent context directory.
```mermaid
graph TD
    Repo["📦 Repository<br/><br/>📂 ai/context (Memory Centre)<hr/>SYSTEM_FACTS.md<br/>DOMAIN_MODEL.md<br/>BOUNDARIES.md"]
    Tool["🧰 AI Tool / IDE Assistant<br/>(loads project rules)"]
    Agent{AI Agent}
    Code[("📦 src/ (Code)")]
    User((Software Engineer))
    Repo -->|1. Provides versioned context| Tool
    User -->|2. Requests task| Agent
    Tool -.->|3. Injects context| Agent
    Agent -->|4. Generates PR| Code
```
In this model, the ai/context directory acts as a permanent anchor. Even if you start a fresh chat session or switch to a different AI model, the truth of your project remains version-controlled and accessible.
Join the Conversation
I’m sharing this research as a series on LinkedIn to gather feedback:
- How are you currently managing context drift in your longer chat sessions?
- Have you tried moving your standards directly into the repo?
- What file would you put in your repo first — stack facts, domain rules, or architectural boundaries — and why?
- What’s the most frustrating hallucination an AI has given you during a complex build?
Next Step: Operational Roles
We have the memory centre in place, but an AI that tries to be everything at once often ends up being mediocre at everything. In Part 2, we’ll look at how to split the AI’s personality into two distinct operational roles: the Architect and the Developer.