
Two weeks ago I was debugging a module I had written myself.
I sat staring at it for twenty minutes. Everything was unfamiliar. The structure made no sense to me. I had to reverse-engineer my own code as if a stranger had written it.
The uncomfortable truth: an AI had written significant chunks of it. I had reviewed it, merged it, and completely moved on. Two weeks later it was alien to me.
I kept thinking about this. We talk constantly about LLMs having a context window, as if it were some fundamental technical limitation. We never apply the same framing to ourselves.
Developers have a context window too. AI-assisted development is just filling it faster than any human brain can track.
The problem with existing solutions
The obvious answer is "write better documentation." Every team says this. No team actually does it consistently, and not because developers are lazy: documentation written as a separate task from coding immediately starts drifting from reality.
Asking your IDE to document as it goes is worse. Cursor adds a new README for every three lines it touches. Imagine three or four README files just for an auth module. Nobody on earth is enthusiastic about opening AI-generated docs like that.
What I actually needed was something that treated documentation as a continuous output of development, written automatically at the one moment developers never skip:
The commit. (Version-controlled documentation, in other words.)
What I built
DevMem is an open source Go CLI that hooks into your git workflow and maintains a living knowledge base inside your repository.
First run: crawls your entire repo and documents everything
devmem init
After every commit: patches only what changed
devmem capture
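The capture step can be automated so it genuinely runs at every commit. A minimal sketch, assuming you wire it up yourself with a plain git post-commit hook (DevMem may ship its own hook installer; check the README):

```shell
#!/bin/sh
# Save as .git/hooks/post-commit and make it executable:
#   chmod +x .git/hooks/post-commit
# git runs this after every successful commit, so the knowledge
# base patches itself without anyone having to remember.
devmem capture
```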
Ask your codebase anything:
devmem query "how does the auth module work?"
The thing I did not expect
The .devmem/ folder ends up being genuinely useful context for AI coding tools.
When Cursor or Copilot has access to accurate, current, structured documentation of every module (what it does, what its API surface is, what it depends on, what changed recently), it becomes meaningfully better at helping you. It stops making assumptions about your architecture because it is reading your actual architecture.
One knowledge base, useful for your teammates and your AI tools simultaneously. That was not the original goal, but it might be the most valuable outcome.
Honest rough edges
Module detection uses directory heuristics. It works well on standard Go, Node, and Python project layouts. Unconventional structures need a small manual config to define module boundaries explicitly; otherwise the heuristics will miss or misgroup things.
Large messy refactor commits that touch many modules simultaneously stress-test the capture prompt in ways I have not fully solved. The classification is harder and the patches are less surgical than I want them to be.
The query command is only as good as your documentation, which is only as good as your captures. If you skip captures for two weeks, the query answers drift.
Stack
Go + Cobra
Anthropic API (Claude)
Single binary, no runtime dependencies
Installable via go install or direct download
MIT licensed
GitHub: https://github.com/surya-sourav/devmem
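For completeness, the go install route presumably looks like this. The module path is my assumption based on the repo URL; the README is authoritative:

```shell
# Assumes the Go module is declared at the repo root.
go install github.com/surya-sourav/devmem@latest
```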
https://fun-tomato-aiuks1hrge.edgeone.app/
Built this because I was tired of being a stranger in my own codebase. Curious whether anyone else has felt the same way and what approaches you have tried.
Brutal feedback welcome, especially on the module detection and the query command. Those are the two places where real-world codebases will stress-test the assumptions hardest.