Me and You, and You and Me, so Happy Together

There's a pattern that keeps showing up in software engineering: the tools that give us the most leverage also demand the most structure.
Version control unlocked collaboration at scale, but only after teams agreed on branching strategies. Linting and formatting removed a lot of opinion from code reviews, but only after we settled the "tabs vs. spaces" debate. CI/CD pipelines sped everything up, but only once contributors agreed on a definition of done. AI coding tools are following the same arc, and the teams getting the most out of them have already figured this out.
It Starts in Isolation
Most developers start their AI journey the same way: alone. Experimenting with prompts, figuring out what works, building a personal workflow that feels productive. That phase is valuable, but it has a ceiling.
What one developer learns stays with that developer. What the AI produces reflects that developer's habits, their slice of the codebase, their framing of the problem, their domain knowledge. Useful! But local. The moment that output lands in a shared codebase, the cracks start to show.
And here's the thing that makes AI different from a junior developer finding their feet: AI doesn't have guardrails! It makes decisions confidently and at speed, every single time. Without any grounding in your conventions and patterns, it doesn't just introduce one person's preferences: it introduces entirely new ones, consistently, at scale. That compounds fast. Too fast for humans to keep up!
Existing Projects Are Actually the Best Starting Point
There's a common assumption that AI tools work best on fresh, greenfield projects. Clean slate, no legacy baggage, lots of room to set things up properly. In my experience, the opposite is true.
Existing projects already contain most of what you need. The patterns are in the code. The decisions are in the commit history. The conventions exist: they just live in people's heads instead of in writing. That's not a problem; it's a great starting point. The work is extraction and formalization, not invention.
A four-year-old repository with minimal documentation is a perfect example. Nobody's touched it because it works and nobody remembers exactly how. Feed it to an AI agent with the right context and standards in place, and suddenly that dusty codebase is workable again. (Results may vary.)
Give the Agent Focus
When you give an AI agent a task on an undocumented codebase, it will produce something. It will sequence steps, choose libraries, resolve ambiguities, make trade-offs. The question is whether those decisions reflect your team's standards or just... whatever seemed reasonable based on general training data (which is probably bad in the long run).
The difference becomes really clear with something like a major dependency upgrade: the kind of work that touches core packages, forces decisions about what to modernize and what to preserve, and breaks things in ways you don't always anticipate. Without context, an agent will make those calls based on what's common in the wild. With explicit standards in place (which package manager the team uses, which test framework is preferred, which platform constraints apply), the output is far more consistent with the rest of the system.
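To make that concrete, here's a minimal sketch of what such a standards file might look like. Every specific choice below (pnpm, Vitest, the Node version, the on-prem constraint) is an illustrative assumption, not a recommendation:

```markdown
# Agents.md (illustrative sketch)

## Tooling
- Package manager: pnpm (do not introduce npm or yarn lockfiles)
- Test framework: Vitest; every bug fix ships with a regression test
- Node: 20.x LTS; avoid APIs that require a newer runtime

## Conventions
- Follow the existing component style in `src/components`
- Prefer upgrading existing dependencies over adding new ones
- Platform constraint: the app must still build for the legacy on-prem target

## Definition of done
- `pnpm lint` and `pnpm test` pass
- No new warnings in the build output
```

The point isn't the specific entries: it's that the agent now resolves ambiguities against your constraints instead of against whatever is most common in its training data.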
That consistency is what makes the output reviewable. And reviewability is what makes AI-assisted development actually scale. If every contribution needs significant massaging before it can be merged, the productivity gains disappear quickly.
A Concrete Example
I recently ran an experiment on a Vue 2 side project that I hadn't touched in years. Outdated dependencies, minimal documentation (actually, only the default "Readme.md" file), a couple of environment-specific quirks I hadn't written down anywhere. (Problems for future me!)
Starting with a deliberately vague instruction to my favourite AI assistant ("upgrade this"), the agent generated a structured, sequenced plan: which packages to upgrade first, what to do about libraries with limited forward support, which legacy tooling to replace with more current alternatives. It didn't just produce a flat list of tasks: it produced an opinionated sequence that reflected how you'd actually want to approach a migration like this. The list was stored in an "Agents.md" file, which makes it part of the codebase.
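For a Vue 2 project, such a plan might look roughly like the following. These steps and package names are illustrative assumptions about a typical Vue 2 migration, not the agent's actual output:

```markdown
## Upgrade plan (illustrative sketch)

1. Upgrade low-risk dev dependencies first (linters, formatters)
2. Replace legacy tooling: migrate the build from Vue CLI/webpack to Vite
3. Upgrade Vue 2.6 → 2.7 to get the Composition API bridge
4. Audit libraries with limited forward support and decide per library:
   upgrade, replace, or freeze
5. Migrate Vue 2.7 → Vue 3, then follow with the ecosystem (e.g. Vuex → Pinia)
6. After each step: run the test suite and a manual smoke test per deploy target
```

Notice that it's a sequence, not a checklist: each step reduces risk for the next one, which is exactly the kind of ordering knowledge that usually lives only in a senior developer's head.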
It also flagged what it didn't know. The project deployed to two different environments with different constraints, and without documentation on those requirements, the agent surfaced the gap rather than making the wrong call.
Feed the requirements back in, and you get an updated plan. That's standardization through conversation, not through a big upfront documentation effort.
The Feedback Loop Is the Real Win
This is the part that I think gets undersold. Standards aren't static: standards evolve as the codebase evolves. And every interaction with an AI agent is an opportunity to refine them.
When an agent flags a gap, that's the standard telling you what it's missing. When output needs correction, that correction becomes input for the next version of the guidelines. The standard gets smarter with use: it produces better output, which surfaces better refinements. It compounds!
And when you share those standards across projects and teams (there are other tools for that, which are more suited than an "Agents.md" file), individual insight becomes collective knowledge. Improvements by one developer propagate to everyone using the same conventions. That's the "learn once, apply everywhere" lever that makes standardization worth the investment, especially at scale.
What This Looks Like in Practice
It doesn't have to start big. Write down (or generate) the conventions that already exist. Capture the context that lives only in people's heads. Specify the quality bar that contributions are expected to meet. That foundation is enough to meaningfully improve AI output from day one.
From there, the standard grows through use. Gaps get filled, conventions get refined, context accumulates. The codebase, rather than drifting toward entropy under the weight of well-intentioned but inconsistent contributions, starts to converge.
That convergence is the actual promise of AI-assisted development done well. Not just faster code generation (though that's real). Not just less repetitive work (also real). It's a codebase that stays coherent as it grows, because the standards, the tooling, and the contributors are all pulling in the same direction. You're responsible for adding the guardrails and focus that don't come out of the box.
This is definitely worth the effort!
Enjoyed this article? I’m available for part-time remote work helping teams with architecture, web development (Vue/Nuxt preferred), performance and advocacy.