About
The company behind Engraph
Engraph is a developer tools company. We build the layer that keeps your team's corrections alive between agent sessions, so every agent learns what your best engineers already know.
Why we started
Engraph started because our friends kept hitting the same wall. They jumped on the vibe coding path early, shipping fast with AI agents. It worked great at first. Then their projects grew. Codebases got bigger. Context files got longer. And the agents kept making the same mistakes no matter how much context you threw at them.
You'd correct your agent, explain a service boundary, point out a naming convention, redirect an architectural decision. It'd get it right. That time. Next session, same mistake. The correction was gone. Adding more to your context files felt safe but didn't actually solve the problem. Agents still did the wrong thing, just with more context to ignore.
The agents weren't broken. We tried RAG, longer context files, better prompts. None of it stuck. The corrections needed a system that could capture them, learn from them, and actually serve them back. We built Engraph to be that system.
What drives us
Corrections should compound
When someone teaches an agent the right way to do something, that lesson should survive the session. It should reach every other agent working in the same codebase. One correction, permanent effect.
Useful from day one, smarter every session
Spec-driven frameworks get it right in theory. But the upfront effort made most of our friends quit early and fall back into the same frustrations. Engraph reads your codebase on first run and proposes ground rules immediately. Then it gets sharper with every correction. No setup sprint required.
Surface everything, automate some things
Most rules just need to be visible at the right moment. That's useful from day one. Automated enforcement comes later, for the subset of rules specific enough to check programmatically. Not every rule needs a gate. Most just need to be seen.
Rules that travel with you
Every new project, same thing. You re-explain the same conventions, re-correct the same mistakes, re-teach the same boundaries. Some rules are specific to a codebase. But plenty of them are just how your team works. Those shouldn't be siloed in one project. They should follow you.
We have opinions on this stuff. Lots of them. We write about them on the blog.
The team
Small team, founded in 2025. We build in public, ship often, and run Engraph on our own codebase. If a rule doesn't survive contact with our own workflow, it doesn't ship.
Remote-first, minimal process. Our roadmap comes from what early adopters actually tell us, not a feature backlog. We'd rather get one agent integration right than ship four half-baked ones.
We're currently in early access, working with a small group of teams to refine how constraints get captured, matured, and served to agents at scale. The goal is a world where every engineering org has a living graph of its own rules, served at the moment it matters. Not another wiki nobody maintains, or a document that was written once and quietly forgotten.
Get in touch
Whether you're interested in early access, have questions about the approach, or just want to talk about how your team uses AI agents, we'd like to hear from you.