
From Vibes to Process: AI Coding in Production Codebases

Paweł Strzałkowski
Chief Technology Officer

The Prototype Illusion

Starting a new application with an AI coding assistant feels like a superpower. You describe what you want, the AI generates a working prototype, and within an hour you have something running. Then you try the same approach on a production codebase with 200,000 lines of Ruby, a decade of accumulated business rules, and a test suite that takes 40 minutes to run. The AI hallucinates method names, ignores your team's conventions, and produces code that passes in isolation but breaks three other features. The difference is not the AI's capability. It is the absence of context.

At Visuality, we are a Ruby on Rails software agency that has been shipping production code for clients for over 15 years. When AI coding assistants became viable, we saw both the opportunity and the trap. The opportunity: developers can move faster on well-understood tasks. The trap: "vibe coding" on legacy systems creates more problems than it solves. So we built a process around it.

Challenges of AI Coding in Legacy Codebases

A greenfield project is a blank canvas. The AI can make reasonable assumptions about structure because there are no constraints yet. Legacy codebases are the opposite. They carry years of decisions, workarounds, and domain knowledge that exists nowhere but in the code itself.

The challenges are specific:

  • Implicit conventions: The team uses service objects for business logic, but the AI doesn't know that unless it reads the existing ones first.
  • Hidden dependencies: Changing a method in one module breaks a callback chain three files away.
  • Domain complexity: The word "order" means something different in the billing context than in the shipping context.
  • Scale: The AI cannot hold the entire codebase in its context window. It needs to know which parts matter for the current task.
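
To make the "implicit conventions" point concrete, here is a minimal plain-Ruby sketch of the kind of service-object convention an AI assistant can only learn by reading existing services first. All names (Orders::SendConfirmation, the Result struct) are hypothetical illustrations, not code from any real client codebase:

```ruby
# Hypothetical convention: one verb-named service per use case, a single
# .call entry point, and a Result value object instead of raised errors.
Order = Struct.new(:id, :paid, keyword_init: true)

module Orders
  class SendConfirmation
    Result = Struct.new(:success, :error, keyword_init: true)

    def self.call(order:)
      new(order).call
    end

    def initialize(order)
      @order = order
    end

    def call
      return Result.new(success: false, error: "order not paid") unless @order.paid

      enqueue_email
      Result.new(success: true)
    end

    private

    def enqueue_email
      # In a Rails app this might be OrderMailer.confirmation(@order).deliver_later
    end
  end
end
```

An assistant that has not read services like this one will happily inline the same logic into a controller, raise exceptions instead of returning a Result, or skip the class method entirely. Nothing about the convention is written down; it lives in the existing code.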

Giving the AI a vague instruction like "add email notifications to the system" and hoping for the best is not a strategy. It is a recipe for pull requests that look plausible but miss the point entirely.

Separating Research, Planning, and Implementation

The solution is not to avoid AI on legacy codebases. It is to structure how the AI works with them. The key insight comes from context engineering principles developed by HumanLayer and presented in their talk "No Vibes Allowed: Solving Hard Problems in Complex Codebases". The core idea: AI coding agents can handle complex, large codebases effectively, but only through deliberate context management.

At Visuality, we adopted these principles and refined them for our workflow, converting the tooling to Ruby and Rails and adjusting it to fit the way our teams operate across different client projects. Our adapted version is open source at visuality-humanlayer.

The workflow breaks down into three phases that keep the human in control while letting the AI do what it does best.

Phase 1: Research

Before the AI writes a single line of code, it needs to understand what it is working with. Not the entire codebase, just the parts that matter for the task at hand. But not only the codebase. The research phase also looks outward: exploring documentation, web resources, and potential solutions to inform the approach.

The /research_codebase command spawns multiple research agents in parallel. They explore the existing code, trace relationships between files, look up relevant documentation and external resources, and synthesize everything into a structured report.

/research_codebase VIS-1234

# Or without a ticket, describe the feature directly
/research_codebase send email notifications to customers when their order status changes

The output is a research document saved to thoughts/research/ with detailed findings, code references, and architectural context. This becomes a required input for the next phase.

Why this matters: without research, the AI makes assumptions. With research, it works from facts. The difference shows up immediately in the quality of the plan it produces.

Phase 2: Create a Plan

Planning is where the human stays most involved, and where the real value of the process emerges. A good plan turns vague requirements into concrete, actionable steps before any code is written. Every plan starts with a reference to the research output, ensuring the AI builds on verified context rather than guessing.

/create_plan VIS-1234

# Or with a direct reference to the research output
/create_plan @thoughts/research/order-email-notifications.md

The AI reads the research, analyzes the task, and then does something critical: it asks clarifying questions. These are not generic questions. They are specific to what it found in the codebase. "The notification system uses a NotificationService class with a strategy pattern. Should email notifications follow this same pattern, or do you want a separate approach?" This interactive Q&A catches edge cases and requirements gaps early.

The output is a detailed implementation plan saved to thoughts/plans/, with phases, success criteria, and references to specific files. The developer reviews this plan before any implementation begins.
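
To illustrate what the clarifying question above is really asking, here is a hedged sketch of what a NotificationService built on a strategy pattern could look like. The class and strategy names are assumptions for illustration only; the point is that each delivery channel shares the same #deliver interface, so an email channel slots in without touching the service:

```ruby
# Illustrative strategy pattern: the service delegates delivery to
# interchangeable channel objects that all respond to #deliver.
class NotificationService
  def initialize(strategies)
    @strategies = strategies
  end

  def notify(recipient, message)
    @strategies.map { |strategy| strategy.deliver(recipient, message) }
  end
end

class EmailStrategy
  def deliver(recipient, message)
    # In Rails: a mailer call such as NotificationMailer...deliver_later
    "email to #{recipient}: #{message}"
  end
end

class SmsStrategy
  def deliver(recipient, message)
    "sms to #{recipient}: #{message}"
  end
end

# Adding email notifications then means registering one more strategy:
service = NotificationService.new([SmsStrategy.new, EmailStrategy.new])
```

Whether new notifications should extend a structure like this or live in a separate path is exactly the kind of decision the planning phase surfaces for the developer instead of leaving it to the AI's guess.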

The Role of Planning in AI-Assisted Development

Skipping the plan and going straight to implementation is the single most common mistake teams make with AI coding assistants. It feels faster. It is not.

Without a plan, the AI makes decisions about architecture, error handling, and edge cases on the fly. Some of those decisions will be wrong, and you will not discover which ones until code review or, worse, production. With a plan, those decisions are made explicitly, reviewed by a human, and documented before implementation starts.

This is what separates structured AI-assisted development from vibe coding. The human reviews and approves the approach. The AI executes it.

Iterating on the Plan

The plan is often just right. But it doesn't have to be, and that is why iteration exists. Maybe the AI missed some context, or the developer has a preference it could not have known about. The /iterate_plan command lets the developer refine the plan until it accurately reflects their intentions.

/iterate_plan VIS-1234 - the existing NotificationService already supports multiple channels, use that rather than building a new one

The AI makes precise edits while preserving what already works. This process also feeds back into improving the workflow itself. Patterns that come up repeatedly during iteration become inputs for refining the research and planning commands over time.

Phase 3: Implement Against the Plan

With a reviewed and approved plan, implementation becomes straightforward execution.

/implement_plan VIS-1234

# Or with a direct reference to the plan
/implement_plan thoughts/plans/2026-01-15-order-email-notifications.md

The AI reads the plan, loads the referenced files, and works through each phase sequentially. Two features make this reliable on legacy codebases:

Deviation handling: If the AI encounters something that does not match the plan (a method signature changed, a dependency is missing), it stops and asks for guidance rather than improvising.

Manual checkpoints: The AI pauses between phases for human verification. This is not a formality. In a legacy codebase, verifying that phase 1 works correctly before starting phase 2 prevents cascading errors that are painful to untangle.

Example Workflow of an AI-Assisted Coding Process

A developer on our team picks up VIS-1234 to add a new reporting feature to a client's Rails application. The codebase is seven years old with over 300 models.

  • /research_codebase VIS-1234 to understand how existing reports work, what query patterns the application uses, and where the reporting module lives.
  • /create_plan VIS-1234. The AI proposes a three-phase approach and asks whether the new report should use the existing PDF generator or the newer HTML-based one. The developer chooses HTML-based and asks to add caching.
  • /iterate_plan VIS-1234 with a note that the caching should use the existing ReportCache module rather than a new implementation.
  • /implement_plan VIS-1234. The AI works through each phase, pausing for verification after the query layer is built and again after the view layer.

The result is a code change that follows the codebase's existing patterns, handles edge cases identified during planning, and includes the specific implementation choices the developer made. Not the AI's best guess, but the developer's informed decision executed consistently.
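
The caching decision from the iteration step can be sketched as well. Assuming a ReportCache module with a fetch-style interface (a hypothetical stand-in for the client's existing module, shown here with an in-memory store), reusing it rather than inventing a new cache might look like:

```ruby
# Hypothetical stand-in for an existing ReportCache module.
# A real Rails implementation would likely wrap Rails.cache.fetch.
module ReportCache
  @store = {}

  def self.fetch(key)
    @store[key] ||= yield
  end
end

class MonthlySalesReport
  def initialize(month)
    @month = month
  end

  def to_html
    # Reuse the existing cache module, as the iterated plan specifies,
    # instead of introducing a second caching mechanism.
    ReportCache.fetch("monthly-sales-#{@month}") { render }
  end

  private

  def render
    "<h1>Sales for #{@month}</h1>"
  end
end
```

The code itself is trivial; the value is that "use ReportCache, not a new cache" was decided and written into the plan before implementation, so the AI never had the chance to improvise a parallel caching layer.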

Team Alignment for AI-Assisted Development

Process alone is not enough. For AI-assisted development to work across a team, everyone needs a shared understanding of the workflow. At Visuality, this means every developer uses the same set of commands and agents and follows the same research-plan-implement cycle. The workflow is consistent across projects, though the specific configuration may vary to accommodate each project's needs.

This consistency is what makes AI-assisted development predictable rather than dependent on individual developer skill with prompting.

How to Get Started with AI-Assisted Coding

The workflow we use at Visuality is built on top of HumanLayer's context engineering principles, adapted for Ruby and Rails. Our implementation is open source:

Commands and agents

github.com/visualitypl/visuality-humanlayer

Copy the agents/ and commands/ directories to your project's .claude/ folder and you have the full workflow available as slash commands in Claude Code.

Video introduction to AI-Assisted Coding

I gave a talk about this workflow at a recent Ruby User Group meetup, walking through the process with live examples. You can watch the recording on how to get AI to work in complex codebases.

The tools and the process continue to evolve as we learn what works across different projects and team sizes. The fundamental principle stays the same: give the AI the context it needs, keep the human in control of decisions, and treat the plan as the contract between the two.
