
Turning your AI tool into your pair programming companion

Part 1 of 7 in the Agentic (.NET) developer workflow series
  1. Turning your AI tool into your pair programming companion
  2. Dependency updates that understand your code
  3. Teaching your AI how to write tests with you
  4. Quality gates that actually run: verification and security in the agentic workflow
  5. Documentation as a first-class concern in your agentic workflow
  6. AI-driven usability testing: a think-aloud study with a team of AI testers
  7. Building and evolving your own AI development skills

AI models are trained on billions of lines of “everyone’s” code: good and bad code, generic best practices, the Stack Overflow answer, the textbook approach. But your project probably isn’t that generic; why else would you have created it?

You might have chosen a specific architecture pattern for a reason. Maybe you picked Testcontainers over an in-memory database because your integration tests kept passing locally but failing in CI.

None of that is in the training data. So every session, the AI predicts, or guesses, based on what it learned. Sometimes it guesses right for your use case, sometimes it doesn’t. And tomorrow it starts all over again, from scratch, because it forgot everything you told it today.

A real pair programmer doesn’t work this way. A colleague who sat next to you yesterday remembers the context; they felt the struggle to get that one thing working. They know what you tried, what failed, and what decision you landed on. That shared history is often more valuable than the best guess from a word-prediction machine.

By now, we know the big AI hype isn’t about replacing the developer, but about making the developer more productive. AI is at its best when it handles the repetitive parts, remembers the details you’d forget, and follows the conventions you’ve already decided on, while you focus on the decisions that actually matter.

In this blog post, we’ll look at how to embrace AI tools in your development workflow. Not generic predictions based on everyone else’s codebases, but a partner that learns from what we tried, what worked, and what didn’t in our codebase.

From prediction to partnership

We’ve seen that AI tools like Claude Code and others can write working code, and can do it fast. While the output is getting better and better, the biggest challenge today is the context window: providing the model with enough context and managing what’s inside it. Only a limited amount fits in its working memory, the context window, and when the session ends, it’s gone.

Without persistent memory, the next session falls back to generic patterns that may not match your stack, your conventions, or your architecture. This agentic developer workflow tries to help with that challenge. It’s built on top of plain markdown files that give your AI tool a persistent memory, a workflow providing session rituals, and deep skills for your specific stack.

You install it into any project by running /meta-bootstrap, which sets up the context files, adopts the skills, and configures your docs repository. Updates come through /meta-upgrade, which pulls new skill versions with version-aware diffing.

Everything is version-controlled alongside your code. And the daily memory from working together stays on your machine, ready to learn from and improve your skill system over time.

Four ideas make it work:

  1. Persistent memory: structured files that survive between sessions
  2. Session rituals: a “heartbeat” pattern that loads context at start and saves learnings at end
  3. Stack-aware skills: development workflows that go deeper than any IDE plugin or bot
  4. Documentation as a first-class concern: ADRs, design docs, runbooks, and architecture diagrams generated alongside the code, not as an afterthought

This first post covers the framework itself: memory, heartbeat, hooks, multi-repo support, and self-learning. In the rest of this blog series, we cover the skills and patterns used for .NET and Angular development.

Memory that compounds

The memory system has three layers, each with a different lifespan and purpose.

Layer 1: SOUL.md defines how the agent behaves. Your team’s conventions, quality bar, and opinions about code. This is shared across repositories.

## Core principles

- Have opinions on code. A partner without judgment is just autocomplete.
- Verify before declaring done. Run the command, read the output.
- Write it down. Memory dies with the session unless persisted.
- Respect the codebase. Read existing patterns before changing them.

Layer 2: USER.md holds your personal preferences. It’s gitignored, because it’s private to you. Things like “Don’t explain basic C# to me” or “I’m senior level, skip the fundamentals.”

## Preferences

- I know C# and Angular well, skip basic explanations
- I prefer reading code over long prose explanations
- When suggesting packages, tell me the trade-offs, don't just pick one

Layer 3: MEMORY.md is where the real value builds up. Curated long-term knowledge: architectural decisions, lessons learned, things that bit you before. This file grows over time as you work.

## Architecture decisions

- API uses vertical slice architecture, not clean architecture
- All commands go through MediatR pipeline
- Projections are read-optimized, never rehydrate aggregates for reads

## Lessons learned

- EF Core bulk operations need explicit transaction scope
- Angular signals don't trigger change detection in OnPush components
  when updated from outside the component tree
- The payment gateway sandbox returns different error codes than prod

On top of this, daily session logs capture what happened today. Raw notes: decisions made, things learned, open items.

## 2026-03-21

### Decisions
- Switched PaymentGateway from singleton to scoped lifetime
  (was causing connection pool exhaustion under load)

### Learnings
- HttpClient in .NET 8 has improved connection pooling,
  but scoped IHttpClientFactory is still the right call for our case

### Open items
- [ ] Add retry policy to payment gateway calls
- [ ] Update the runbook for payment service deployments

At the end of each session, the important bits get distilled into MEMORY.md.
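The distillation step can be sketched as a small parser. This is an illustrative Python sketch, not the actual workflow (which does this through the agent itself, not a script); the section names mirror the daily log example above:

```python
def extract_section(daily_log: str, heading: str) -> list[str]:
    """Collect the bullet lines under a '### <heading>' section of a daily log."""
    items, in_section = [], False
    for line in daily_log.splitlines():
        if line.startswith("### "):
            in_section = line[4:].strip().lower() == heading.lower()
        elif in_section and line.lstrip().startswith("- "):
            items.append(line.strip()[2:])
    return items


def distill(daily_log: str) -> dict[str, list[str]]:
    """Pick the sections worth promoting into MEMORY.md (illustrative choice)."""
    return {
        "decisions": extract_section(daily_log, "Decisions"),
        "learnings": extract_section(daily_log, "Learnings"),
    }
```

Open items stay in the daily log; only decisions and learnings are candidates for long-term memory.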

The total startup cost is about 5-8K tokens. That’s less than 4% of a 200K context window, or under 1% if you’re on a 1M token model. The rest is for actual work.

What becomes really enjoyable over time is how this changes the dynamic. Your agent now knows your codebase’s quirks, maybe even better than a new team member you hand your notes to. It feels like working with someone who was there yesterday and took better notes than you did.

The heartbeat: directed sessions

Every session follows a rhythm called the heartbeat. This is what turns chaotic AI sessions into directed development.

On startup, the agent reads its memory files and greets you with context:

“Morning Erik, last Friday we merged the payment refactor. The retry policy is still open, want to tackle that?”

You’re not starting from zero. The agent has an agenda based on what happened last time. During the session, it logs decisions and learnings to today’s daily memory file, like the example shown above. When you say “bye” or “done”, a hook automatically triggers the wrap-up. You can also run /meta-wrap-up directly. Either way, it reviews what was delivered, updates memory, distills learnings, and checks for documentation gaps.

The session closes cleanly. Next time, the heartbeat picks up where you left off.

This is the “directed” part. Instead of the AI going wherever it wants, sessions have structure. You set the direction, the agent stays on track, and progress carries forward.
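To make the session-start ritual concrete, here is a hypothetical Python sketch of how a greeting could be assembled from the previous session’s memory. The function and its inputs are illustrative, not part of the workflow’s actual API:

```python
def build_greeting(name: str, last_summary: str, open_items: list[str]) -> str:
    """Assemble a session-start greeting from the previous session's notes
    (illustrative sketch; the real workflow lets the agent phrase this itself)."""
    greeting = f"Morning {name}, last time {last_summary}."
    if open_items:
        # Surface the first open item as a suggested agenda
        greeting += f" Still open: {open_items[0]}, want to tackle that?"
    return greeting
```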

Self-learning: skills that improve over time

Now that we have our daily programming diary with all our learnings, there’s one more piece missing. We’re writing down what we experience during development, but we’re not learning from it yet. Humans improve by reflecting on what worked and what didn’t, even without keeping a daily diary; the AI can’t do that yet.

The next step is closing that loop: turning daily learnings into better skills, so that both you and your AI companion get better at what you do.

Learning from your own patterns

This is where /meta-continuous-learning comes in. It reads through your daily session logs and notices patterns. Maybe you keep running the same sequence of commands to debug a specific type of issue. Maybe your TDD skill keeps needing the same correction about how you name test classes. It spots these recurring patterns and suggests packaging them into skill updates or entirely new skills.

When it finds something worth improving, it updates the skill right there in the project you’re working in. The change is immediate: next time that skill runs, it uses the improved version.
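The kind of pattern-spotting /meta-continuous-learning does can be approximated in a few lines. A hedged Python sketch, assuming learnings are bullet lines in daily logs and using an arbitrary threshold of three sessions:

```python
from collections import Counter


def recurring_patterns(session_logs: list[str], min_count: int = 3) -> list[str]:
    """Find bullet entries that recur across enough daily logs to suggest
    packaging them into a skill update (illustrative threshold)."""
    counts = Counter()
    for log in session_logs:
        # Count each distinct bullet line once per session
        bullets = {l.strip()[2:] for l in log.splitlines() if l.strip().startswith("- ")}
        counts.update(bullets)
    return [entry for entry, n in counts.items() if n >= min_count]
```

Anything that clears the threshold is a candidate for a skill update; the real command also proposes the edit, which this sketch does not.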

Managing skills across all your repositories

To understand how this all fits together: you maintain one central workflow repository with your base set of skills, templates, and conventions. This is your base curated toolkit, for you or your entire team.

When you start a new project, /meta-bootstrap installs the workflow from the central workflow repository into your project, setting up the context files and skills.

It’s smart about what it installs: based on your project type (api, frontend, library, docs), it only adds the relevant skills. A backend API project gets the .NET TDD, verification, and security skills. A frontend project gets the Angular equivalents. No clutter from skills you’ll never use.
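As a sketch of that selection logic (the skill names and the mapping here are hypothetical; your workflow repository defines the real catalog):

```python
# Hypothetical skill catalog keyed by project type
SKILLS_BY_PROJECT_TYPE = {
    "api":      ["dotnet-tdd", "verification", "security"],
    "frontend": ["angular-tdd", "verification", "security"],
    "library":  ["dotnet-tdd", "verification"],
    "docs":     ["doc-writing"],
}


def select_skills(project_type: str) -> list[str]:
    """Return the skills a bootstrap step would install for a project type."""
    return SKILLS_BY_PROJECT_TYPE.get(project_type, [])
```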

From there, each project can adjust its skills for its own needs. Your API project might tweak the TDD skill to include event sourcing patterns. Your frontend project might adjust the verification skill to run Playwright instead of Vitest for certain tests.

The flow goes both ways. When a project-specific improvement turns out to be generally useful, /meta-contribute-back packages your learnings and submits them to the central workflow repository. From there, /meta-merge-contributions reviews and merges those contributions with proper version bumps.

When the central workflow repository gets updated with new skills or improvements, you pull those into your projects with /meta-upgrade. The central workflow repository stays generic. Each project stays specific. Learnings flow in both directions.

Adopting and managing skills

By default, the central workflow repository I’ve shared ships with the set of skills I use, but it’s designed to grow and shrink with your needs.

The community shares skills online too, through GitHub and other platforms. When you find one that looks useful, /meta-adopt-skill reviews it for you: it checks compatibility with your setup, audits for anything unexpected, and integrates it into your local workflow. Think of it like adding a new tool to your toolbox, but with a review step before it goes in.

To see what you’ve got installed, /meta-skill-catalog gives you an inventory with versions and update status. When the upstream workflow repository publishes improvements, /meta-upgrade pulls them into your projects with version-aware diffing, so you can see exactly what changed before accepting.
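Version-aware diffing boils down to comparing installed skill versions against upstream. A minimal Python sketch, assuming versions are tracked as plain strings per skill (the real diff also shows content changes):

```python
def upgrade_plan(installed: dict[str, str], upstream: dict[str, str]) -> dict[str, str]:
    """Compare installed skill versions against the upstream workflow repo
    and report what an upgrade would change (illustrative sketch)."""
    plan = {}
    for skill, version in upstream.items():
        if skill not in installed:
            plan[skill] = f"new ({version})"
        elif installed[skill] != version:
            plan[skill] = f"{installed[skill]} -> {version}"
    return plan
```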

Because skills are plain files in .claude/skills/, distributing them is as simple as sharing a git repository. No package manager, no registry. Copy the directory, use a git submodule, or maintain a shared skills repo that your projects reference.

Over weeks and months, your skill collection becomes a curated library of your team’s development knowledge. Not documentation that sits in a wiki, but executable workflows that actually run.

Multi-repo support

Most real projects span multiple repositories. Your API, your frontend, shared libraries, documentation.

The workflow handles this with a workflow.json in each repo pointing to a shared docs repository:

{
  "docsRepo": "../my-docs",
  "projectType": "dotnet",
  "templates": "../my-docs/templates",
  "output": {
    "adr": "../my-docs/adr",
    "design": "../my-docs/design",
    "architecture": "../my-docs/architecture"
  }
}
my-api/                    ← .NET API
  └── workflow.json → ../my-docs

my-frontend/               ← Angular app
  └── workflow.json → ../my-docs

my-docs/                   ← Shared documentation
  ├── adr/                 ← ADRs from both projects
  ├── architecture/        ← C4 diagrams
  └── templates/           ← Shared templates

Multiple projects, one documentation home. The wrap-up ritual detects which repo each changed file belongs to and commits separately.
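That repo-detection step can be sketched as grouping changed paths by repository root. An illustrative Python sketch; the real wrap-up ritual works against git, not string matching:

```python
from pathlib import PurePosixPath


def group_by_repo(changed_files: list[str], repo_roots: list[str]) -> dict[str, list[str]]:
    """Group changed file paths by the repository root they live under,
    so each repo can be committed separately (illustrative sketch)."""
    groups: dict[str, list[str]] = {root: [] for root in repo_roots}
    for path in changed_files:
        for root in repo_roots:
            if PurePosixPath(path).is_relative_to(root):
                groups[root].append(path)
                break
    # Drop repos with no changes
    return {root: files for root, files in groups.items() if files}
```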

Hooks for automation

The workflow needs to react to what happens during a session. When you type “bye”, it should trigger the wrap-up. When the AI edits a file, it should run your formatter. You don’t want to remember to do these things manually every time. That’s what hooks are for: small scripts that listen for events and act on them automatically.

PowerShell hooks automate the parts you shouldn’t have to think about. There are eight of them, covering the full session lifecycle:

| Hook | Trigger | What it does |
| --- | --- | --- |
| on-prompt-submit | User types “bye” or “done” | Auto-triggers wrap-up ritual |
| on-quality-gate | After editing code | Runs dotnet format / eslint in background |
| on-doc-file-warn | Writing .md files | Warns if docs land in wrong directory |
| on-suggest-compact | After 50/75/100 tool calls | Suggests manual context compaction |
| on-pre-compact | Context window fills up | Saves working context before compaction |
| on-cost-track | After AI response | Logs token usage and estimated cost to JSONL |
| on-session-end | Session closes | Records end marker and cost in daily memory |
| hook-profile | (utility) | Performance profiling for hook execution |

The quality gate hook is particularly useful. It catches formatting issues in real time instead of waiting for CI to yell at you. The cost-tracking hooks give you visibility into what sessions actually cost, broken down by day.
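As an illustration of the on-prompt-submit logic: the real hooks are PowerShell scripts, but the trigger check amounts to something like this Python sketch (trigger words and normalization are assumptions, not the actual implementation):

```python
# Hypothetical trigger words; the real hook is a PowerShell script.
WRAP_UP_TRIGGERS = {"bye", "done"}


def should_wrap_up(prompt: str) -> bool:
    """Decide whether a submitted prompt should trigger the wrap-up ritual."""
    normalized = prompt.lower().strip(" \t!.?")
    return normalized in WRAP_UP_TRIGGERS
```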

Getting started

The framework is designed to be ready to use out of the box.

But be aware: the included skills reflect my stack and conventions at the time of writing. They improve over time as the workflow learns from daily use, and I actively adjust them as I discover new things. Sometimes a skill might not work perfectly because I broke it while improving something else. That’s part of the process. Your project will have different tools, different concerns, different quality bars. Fork it, adjust it, or use it as inspiration for your own.

To get up and running:

  1. Fork the agentic-dev-workflow repository to make it your own
  2. Customize context/SOUL.md with your team’s conventions
  3. Remove skills you don’t need, adjust the ones you keep
  4. Run /meta-bootstrap in your project to install from your fork
  5. Create your personal context/USER.md (gitignored) and start a session

No build step, no dependencies beyond PowerShell Core for hooks. Just files.

What’s next

This post covered the foundation: memory, heartbeat, self-learning, multi-repo support, and hooks. But the real daily value comes from the skills that build on top of this framework.

The rest of this series goes deeper into specific skills.
