How to Start AI-Driven Development: Reshaping Every Phase from Design to Implementation
AI Fast Dev


In the age of AI-generated code, what truly changes is the developer's role itself. A practical, experience-based guide to restructuring your design, implementation, and review workflow around AI.

Shingo Irie

Indie developer

What you'll learn

This article covers how AI-driven development changes the developer's role, how to choose the right tools, manage costs, restructure design and implementation phases, handle shifting bottlenecks, maintain security, and design your first task for immediate results.

SECTION 01

AI-Driven Development Changes Your Role, Not Just Your Coding Speed

When people hear "AI-driven development," they immediately think about faster code generation. But that's not where the real change happens. The moment coding gets faster, the actual bottleneck shifts to a completely different part of the workflow.

Having built over 40 services, I've learned that bottlenecks always migrate. Once coding speed improves, testing and verification start consuming most of your time. Fix that, and infrastructure configuration becomes the new constraint.

Through this cycle, my role gradually shifted from "the person who implements" to "the person who oversees the product." Before AI, solo development meant you could only build as much as one person could code. Everything was sequential.

With AI handling code generation, sequential work becomes parallel. You focus on decisions and course corrections while AI handles the bulk of implementation. Running multiple services simultaneously becomes genuinely feasible.

The key insight is that AI excels at the middle portion of any project.

- The first phase: Deciding what to build and articulating the desired experience
- The middle phase: Where AI delivers the most value
- The final phase: Eliminating the "it works but something feels off" moments

The first and last phases still require human judgment. Whether you rush through that final polish or do it properly determines the quality gap.

SECTION 02

Choosing Your Tools — CLI Agents vs. Editor-Integrated AI

AI coding tools fall into two main categories: CLI-based agents (Claude Code, Codex CLI) that run in your terminal, and editor-integrated tools (Cursor, Copilot) embedded in your IDE. It's not about which is better — it's about which matches your workflow.

I chose Claude Code (Anthropic's CLI-based AI coding agent) as my primary tool for two reasons: the ability to run multiple terminals simultaneously for parallel work, and its stability with long contexts.

Here's how the two types compare:

- CLI-based: Best for terminal-native developers. Great for parallel tasks and automation
- Editor-integrated: Best for developers who want to code and converse simultaneously. Inline completions feel intuitive
- Hybrid approach: Use CLI for large implementations and editor-integrated tools for small fixes

I also tried Gemini CLI (Google's CLI-based AI tool), but found it noticeably weaker at complex problem-solving and sustained development work. For one-off questions it's adequate, but context retention matters when you're working across an entire project.

The decision framework is simple: if you're comfortable in the terminal and want to run parallel tasks, try CLI-based tools. If you prefer staying in your editor and working conversationally with code visible, editor-integrated tools are the better fit.

SECTION 03

Managing Costs Without Blowing Your Budget on Usage-Based Pricing

When starting AI-driven development, understanding the pricing model is essential. CLI-based agents often use usage-based pricing, and costs can fluctuate significantly depending on how you work. Your choice of plan directly affects your psychological comfort.

I use Claude Code's Max Plan (a fixed monthly subscription). Usage-based pricing creates a mental brake — you hesitate before each interaction. AI-driven development requires freely iterating with AI, so that hesitation is a real problem.

Here's how to think about the choice:

- Daily users: Fixed plans provide both financial and mental stability
- A few times per week: Usage-based can work, but set spending limits
- Team adoption: Have one person trial a fixed plan first, measure impact, then expand

The danger with usage-based pricing is unexpected spikes from long conversations or large codebase processing. Build a habit of checking your usage dashboard daily, or configure a monthly spending cap before you start.

For solo developers and small teams, starting with a fixed plan for one month is the safest bet. That month will reveal your usage patterns. You can then make an informed decision about whether switching to usage-based pricing would save money.
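To make that comparison concrete, a back-of-the-envelope calculation helps. The sketch below uses entirely hypothetical prices and token counts; substitute your provider's current rates and your own measured usage.

```python
# Back-of-the-envelope comparison: usage-based vs. fixed plan.
# All numbers are placeholders -- plug in your provider's real
# rates and your own measured token counts.

def monthly_usage_cost(sessions_per_day, tokens_in, tokens_out,
                       price_in_per_m, price_out_per_m, days=22):
    """Estimate monthly spend under usage-based pricing."""
    per_session = ((tokens_in / 1e6) * price_in_per_m
                   + (tokens_out / 1e6) * price_out_per_m)
    return sessions_per_day * per_session * days

# Hypothetical pattern: 10 sessions/day, 50k tokens in, 10k out,
# $3 per 1M input tokens, $15 per 1M output tokens.
usage = monthly_usage_cost(10, 50_000, 10_000, 3.0, 15.0)
fixed = 100.0  # hypothetical fixed monthly plan

print(f"usage-based: ${usage:.2f}/month vs fixed: ${fixed:.2f}/month")
```

Running your real numbers through a sketch like this is exactly the "informed decision" the trial month gives you data for.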

SECTION 04

Transforming the Design Phase — Plans and Mockups with AI

The highest-impact use of AI is actually in the design phase, not implementation. Using AI before writing code dramatically reduces mid-implementation rework. The basic pattern: have AI create a plan before diving into complex features.

This doesn't mean formal documentation. Simply ask "list the steps needed to implement this feature" and review what comes back. Point out gaps or sequencing issues. This exchange eliminates most of the uncertainty you'd otherwise face during implementation.

For UI work, mockups are the ultimate specification document. Describing "I want a screen like this" in words is far less precise than having AI generate a quick mockup and saying "implement based on this."

How you ask matters enormously. Avoid confirmation-seeking questions like "this approach is correct, right?" AI tends to agree rather than challenge. Instead, ask open questions: "Give me three approaches to this problem with tradeoffs for each." That produces genuinely useful options.

Key principles for AI-assisted design:

- Have AI draft the plan first: The more complex the feature, the bigger the payoff
- Use mockups as specifications: Eliminates ambiguity that words leave behind
- Ask open questions, not confirmations: Prevents AI from just agreeing with you
- Apply your own judgment last: Never adopt AI suggestions without review
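The plan-first and open-question prompting styles above can be captured as reusable templates. The wording below illustrates the style only; it is not an official API or a prescribed phrasing.

```python
# Illustrative prompt templates for the plan-first workflow.
# The exact wording is an example of the style, not a standard.

def plan_prompt(feature: str) -> str:
    """Ask the agent for a reviewable plan before any code exists."""
    return (f"List the steps needed to implement {feature}. "
            "Do not write code yet; flag any ambiguity or missing "
            "requirements you notice.")

def options_prompt(problem: str, n: int = 3) -> str:
    """Open question: forces alternatives instead of agreement."""
    return (f"Give me {n} approaches to {problem}, "
            "with tradeoffs for each.")

print(options_prompt("caching API responses"))
```

The contrast with a confirmation-seeking prompt ("this approach is correct, right?") is the point: templates like these bake the open framing into your habit.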

SECTION 05

Running Implementation — Parallel Tasks and Review in Practice

During implementation, the core workflow is running multiple terminals in parallel. Terminal A handles frontend, Terminal B works on the API, Terminal C adds tests — all simultaneously. This is how sequential development becomes parallel in practice.

But parallel work introduces a new bottleneck: verification starts to consume most of your time. AI-generated code can be "working but not what you intended." When multiple tasks complete simultaneously, the verification queue piles up fast.

Here's how to manage the verification bottleneck:

- Keep task granularity small: Don't let individual tasks grow too large
- Verify immediately upon completion: Don't let a backlog accumulate
- Have AI write tests: Automate part of the verification process
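The "verify immediately upon completion" rule can be sketched in code: run independent tasks concurrently and inspect each one the moment it finishes, rather than after everything is done. The commands here are placeholders for whatever your agents actually run.

```python
# Sketch: run independent tasks in parallel and check each one as
# soon as it finishes, so the review queue never piles up.
# Commands are placeholders, not a real build setup.
from concurrent.futures import ThreadPoolExecutor, as_completed
import subprocess

tasks = {
    "frontend": ["echo", "build frontend"],
    "api":      ["echo", "build api"],
    "tests":    ["echo", "run tests"],
}

def run(name, cmd):
    """Run one task; return its name and exit code."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return name, result.returncode

with ThreadPoolExecutor() as pool:
    futures = {pool.submit(run, n, c): n for n, c in tasks.items()}
    for fut in as_completed(futures):   # verify on completion,
        name, code = fut.result()       # not after all finish
        status = "ok" if code == 0 else "NEEDS REVIEW"
        print(f"{name}: {status}")
```

`as_completed` is what encodes the habit: results surface one at a time, in finishing order, so verification never waits for the slowest task.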

Another critical signal during implementation: if something is taking unusually long, that's a yellow flag. When AI keeps retrying or seems stuck, it means its understanding has drifted. Stopping and resetting the context is faster than waiting it out.

Review load also deserves attention. As AI-generated code volume increases, review burden inevitably grows. The code works but can be hard to read, with design intent that doesn't come through clearly. This remains a fundamentally human responsibility.

To reduce review cost, I've built a habit of asking AI to explain the design intent of its code immediately after implementation. Getting that explanation while context is fresh dramatically lowers the cognitive cost of reviewing later.

SECTION 06

Bottlenecks Always Move — The Mindset of Continuous Elimination

A pattern that keeps repeating in AI-driven development: every time you eliminate a bottleneck, the next one appears. Coding gets fast, so verification becomes the constraint. Verification improves, so managing multiple agents becomes the constraint. Accepting this chain reaction is essential.

The migration I experienced followed this path:

- Stage 1: Coding accelerated, verification became the bottleneck
- Stage 2: Verification improved, infrastructure setup became the bottleneck
- Stage 3: Managing multiple AI agents itself became the bottleneck

To solve Stage 3, I built KingCoding (a task management tool for multiple AI coding agents). Running parallel tasks across terminals means keeping track of what's progressing where consumes real mental energy. I wanted to systematize that overhead.
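The bookkeeping problem a tool like that solves can be pictured as a minimal status board. This is an illustration of the overhead, not KingCoding's actual design.

```python
# A minimal in-memory status board for parallel agent tasks --
# an illustration of the bookkeeping problem, not KingCoding's design.
from dataclasses import dataclass, field

@dataclass
class Board:
    tasks: dict = field(default_factory=dict)  # name -> status

    def start(self, name):   self.tasks[name] = "running"
    def finish(self, name):  self.tasks[name] = "needs-review"
    def approve(self, name): self.tasks[name] = "done"

    def pending_review(self):
        return [n for n, s in self.tasks.items() if s == "needs-review"]

board = Board()
board.start("api-endpoint")
board.start("ui-form")
board.finish("api-endpoint")
print(board.pending_review())
```

Even this toy version makes the point: once several agents run at once, "what is waiting on me right now?" needs to be a query, not a memory exercise.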

Fittingly, KingCoding was built using KingCoding itself, over roughly two weeks: solving an AI-driven development bottleneck with AI-driven development. Once this feedback loop starts working, the acceleration becomes tangible.

The core lesson: don't let bottlenecks become normalized. When you get used to an inefficiency, it blends into your daily routine and becomes invisible. If something feels off, that's your signal to improve.

SECTION 07

Safely Handling Internal Code and Sensitive Information

When adopting AI-driven development, security concerns inevitably arise. Especially for projects containing proprietary code or secrets, you need to decide in advance what you're comfortable sending to AI services.

CLI-based agents have a security advantage in local execution. Code is processed on your local machine, limiting the pathways through which code might leave your environment compared to cloud-synced editors. However, conversation content is still sent to the cloud, so it's not fully local.

Key points to verify when code is sent to the cloud:

- Data usage terms: Whether submitted code is used for model training
- Organization security policies: What classification level of information can be shared with AI
- Credential exclusion: Apply .gitignore-style exclusions to your AI tool configuration

The minimum setup you should complete is ensuring API keys and environment variables are never passed to AI. Most CLI tools offer configuration to exclude specific files or directories. Do this on day one without fail.

At the same time, don't over-restrict. If you withhold most of your codebase from AI out of caution, output quality drops significantly. The practical approach is to reliably exclude high-risk information while sharing everything else.
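One way to operationalize "reliably exclude high-risk information" is a pre-flight check on filenames before anything is shared. The patterns below are illustrative, not exhaustive; pair a check like this with your AI tool's own exclusion settings.

```python
# Minimal pre-flight check: refuse to share files whose names look
# like credentials. Patterns are illustrative, not exhaustive --
# use them alongside your AI tool's own exclusion configuration.
import fnmatch

DENY_PATTERNS = [".env", ".env.*", "*.pem", "*.key",
                 "credentials*", "secrets*"]

def is_safe_to_share(filename: str) -> bool:
    """True if the filename matches none of the deny patterns."""
    return not any(fnmatch.fnmatch(filename, p) for p in DENY_PATTERNS)

files = [".env", "app.py", "id_rsa.key", "README.md"]
shareable = [f for f in files if is_safe_to_share(f)]
print(shareable)  # -> ['app.py', 'README.md']
```

Note the balance the article argues for: the deny list is short and targeted, so the rest of the codebase stays available to the AI.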

SECTION 08

Designing Your First Task to Feel the Impact on Day One

When starting AI-driven development, your first task choice determines whether the habit sticks. Jumping straight into a new project isn't recommended. Expectations run too high, and any friction risks a premature "this doesn't work" conclusion.

The fastest path to tangible results is refactoring existing code or adding tests. Ask AI to "write tests for this function" or "refactor this logic." You can evaluate AI's output against code you already understand. Having a known baseline makes assessment straightforward.
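The "known baseline" idea looks like this in practice: a small function you already understand, plus the kind of tests an agent might generate for it. The function and tests below are invented for illustration.

```python
# The "known baseline" idea: a function you already understand,
# plus the kind of tests you might ask an agent to generate.
# Both are invented here purely for illustration.

def slugify(title: str) -> str:
    """Existing code you know well: lowercase, hyphen-joined words."""
    return "-".join(title.lower().split())

# Tests an agent might produce -- trivially checkable against
# your own knowledge of the function's behavior.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Extra   Spaces ") == "extra-spaces"
    assert slugify("single") == "single"

test_slugify()
print("all baseline tests passed")
```

Because you already know what `slugify` should do, judging whether the generated tests are correct and complete takes seconds, which is exactly what makes this a good first task.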

The recommended progression:

- Step 1: Add tests or refactor existing code
- Step 2: Small feature additions or bug fixes
- Step 3: New feature design and implementation

Once you have that first success, gradually weave AI into your daily workflow. During regular coding, identify tasks where you think "AI could handle this" and delegate them one by one. There's no need to force everything into AI-driven mode at once.

The most important mindset: don't treat AI as magic. It delivers remarkable results in its strengths but has clear limitations. Through trial and error, building an intuitive sense of that boundary is the shortest path to making AI-driven development a permanent part of your routine.

Built 40+ products and keeps shipping solo with AI-assisted development. Shares practical notes from building and operating self-made tools.

