Practical Guide to Claude Code × Codex Review Integration
AI Fast Dev

OpenAI's official plugin now lets you run Codex code reviews directly from Claude Code. This guide covers everything from installation to real-world workflow, based on hands-on experience.

Shingo Irie

Indie developer

What you'll learn

You'll learn how to install the Codex review plugin for Claude Code, the differences between /codex:review and /codex:adversarial-review, how to run reviews in the background without interrupting your work, and a practical flow for turning review results into actionable fixes.

SECTION 01

You Can Now Call Codex Reviews from Claude Code — What Changes

OpenAI has released an official plugin called codex-plugin-cc that lets you run Codex code reviews directly within Claude Code. It's open source under the Apache 2.0 license, so anyone can inspect the code.

Previously, you had to run Codex in a separate terminal. Now you can invoke it with a single command from within your Claude Code session.

The biggest advantage of this plugin is that the entire flow — from requesting a review, to receiving results, to issuing fix instructions — happens in one continuous workflow. You could always run Codex CLI on its own for reviews, but the path from reading results to telling Claude Code "fix this" was disconnected.

Another reassuring aspect is that all review commands are read-only. There's no risk of modifying your existing code, which makes it very low-stakes to try out.

The plugin also supports background execution, so you can continue your conversation with Claude Code while a review is running. No more sitting idle waiting for reviews to finish.

SECTION 02

Installation and Initial Setup

This assumes Codex CLI is already installed. If not, install it with npm install -g @openai/codex. Node.js 18.18 or later is required.
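If you want to confirm the version requirement before installing, the check can be sketched in shell. The `node_ok` helper below is illustrative, not part of the plugin or Codex CLI:

```shell
# Sketch: verify the Node.js 18.18+ requirement before installing
# Codex CLI. node_ok takes a version string like "v20.11.1" and
# succeeds when it is 18.18 or later.
node_ok() {
  v="${1#v}"               # strip the leading "v"
  major="${v%%.*}"
  rest="${v#*.}"
  minor="${rest%%.*}"
  if [ "$major" -gt 18 ]; then return 0; fi
  if [ "$major" -eq 18 ] && [ "$minor" -ge 18 ]; then return 0; fi
  return 1
}

if node_ok "$(node --version 2>/dev/null || echo v0.0.0)"; then
  echo "Node.js is new enough"
  # npm install -g @openai/codex   # uncomment to actually install
else
  echo "Upgrade Node.js to 18.18 or later first"
fi
```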

Setup takes four steps. You can paste each command directly into the Claude Code prompt.

  • /plugin marketplace add openai/codex-plugin-cc (add the marketplace)
  • /plugin install codex@openai-codex (install the plugin)
  • /reload-plugins (reload plugins)
  • /codex:setup (verify Codex is ready)

/codex:setup checks your Codex installation status and authentication state. If Codex isn't installed, it may even offer to install it on the spot.

If you haven't authenticated yet, run !codex login to log in. You can use it with a ChatGPT subscription (including the free plan) or an OpenAI API key.
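Which route applies depends on your environment. Here's a minimal sketch of that decision; `auth_route` is a hypothetical helper of mine, not part of Codex CLI, while `codex login` itself is the real command:

```shell
# Two ways to authenticate Codex CLI:
#   1. ChatGPT account (browser flow) — from Claude Code: !codex login
#   2. OpenAI API key exported in the environment
# auth_route (illustrative only) picks the route that applies:
auth_route() {
  if [ -n "${OPENAI_API_KEY:-}" ]; then
    echo "api-key"        # a key is already exported; Codex can use it
  else
    echo "chatgpt-login"  # no key set; use the browser login flow
  fi
}
auth_route
```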

For detailed setup instructions and source code, check the GitHub repository.

SECTION 03

Choosing Between /codex:review and /codex:adversarial-review

There are two review commands. /codex:review is a standard review that conservatively identifies issues, while /codex:adversarial-review is an aggressive review that challenges your design and implementation decisions with improvement suggestions.

Here's a quick guide for when to use each.

  • Final check before committing → /codex:review to catch anything you missed
  • Feature addition where you're unsure about the design → /codex:adversarial-review to stress-test your decisions
  • Not sure which to use → Start with /codex:review — it's the safer choice

/codex:review methodically flags issues in the code you provide. It keeps extra suggestions to a minimum, making it ideal as a final check on your changes. You can also use the --base <ref> option for branch comparisons.
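To see exactly which changes a base-branch review covers, you can preview the equivalent three-dot diff yourself. A self-contained sketch that builds a throwaway repo (assumes git 2.28+ for `init -b`):

```shell
# What `--base main` compares: the changes on your branch since it
# diverged from main, i.e. the three-dot range main...HEAD.
repo=$(mktemp -d)
git -C "$repo" init -q -b main
git -C "$repo" -c user.email=demo@example.com -c user.name=demo \
  commit -q --allow-empty -m "init"
git -C "$repo" checkout -q -b feature
echo "new code" > "$repo/feature.txt"
git -C "$repo" add feature.txt
git -C "$repo" -c user.email=demo@example.com -c user.name=demo \
  commit -q -m "add feature"
# Only the feature branch's own changes appear, which is the same
# scope a base-branch review covers:
git -C "$repo" diff --name-only main...HEAD   # prints: feature.txt
```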

On the other hand, /codex:adversarial-review digs deeper, questioning trade-offs and failure modes. You can append focus areas as plain text after the command, for example "are there any performance concerns?" to narrow the review to performance.

Both commands are read-only and will never modify your code. Reading the feedback and deciding what to act on is entirely up to you.

SECTION 04

Running Reviews in the Background Without Stopping Work

In practice, reviews take a fair amount of time. In my case, changes across about 15 files took roughly 10 minutes. Waiting synchronously isn't realistic, so background execution becomes the default.

Usage is simple — just add the --background option.

  • /codex:review --background (start a review in the background)
  • /codex:status (check running jobs and progress)
  • /codex:result (display results of completed jobs)
  • /codex:cancel (cancel a running job)

While it's running in the background, you can continue your normal conversation with Claude Code. Claude Code detects when the review finishes, so you don't need to keep checking manually.

What works well in practice is making it a routine to run reviews at specific moments. Build a rhythm of triggering reviews at natural breakpoints — before committing, when a feature is wrapped up, before creating a PR — and it becomes effortless to maintain.

SECTION 05

How to Handle Review Results — Why the No-Auto-Fix Design Works

What I appreciated most in practice is that review results are simply returned to you, and Claude Code doesn't automatically apply fixes. Blindly applying everything an AI review suggests is honestly scary, so this behavior is a relief.

For example, sometimes the review flags parts you changed intentionally. But since Claude Code doesn't try to revert those on its own, you can pick and choose at your discretion. The flow is to select only what's needed and tell Claude Code "fix this."

The reason this approach feels sustainable for long-term use is clear.

  • Having a step to evaluate each suggestion prevents unintended changes
  • Getting things auto-fixed without understanding the intent behind your changes causes problems down the line
  • Review results can be directly converted into fix instructions, so the overhead is minimal

A workflow that blindly accepts AI reviews might be convenient short-term, but it's not trustworthy long-term. Having a process where you review results with your own eyes before deciding is exactly what makes it safe to incorporate into your daily routine.

SECTION 06

Why Solo Developers Should Embrace AI Reviews

Across the many services I've built, the moments my programming skills leveled up were the ones when a more skilled programmer reviewed and critiqued my code. That experience is incredibly valuable.

However, as a freelancer or solo developer, code review opportunities are surprisingly rare. In team development, PR reviews happen as a matter of course, but when you're building alone, that entire mechanism simply doesn't exist.

Codex has a thorough, persistent approach to analysis, and its reputation as a reviewer is strong among engineers. It's well-suited for catching oversights and reconsidering design choices. Through trial and error, I naturally settled into a division of labor: Codex for reviews, Claude Code for implementation.

When developing solo, reviews get skipped indefinitely unless you make a conscious effort.

  • Make /codex:review --background a habit before every commit
  • Run it when you finish implementing a feature
  • Pass it through /codex:adversarial-review before creating a PR, including design considerations
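One way to make the pre-commit habit stick is a reminder hook. A minimal sketch, assuming you want a nudge rather than enforcement: `review_reminder` is a hypothetical helper, and the real hook file would be `.git/hooks/pre-commit`, ending with `exit 0` so it never blocks the commit:

```shell
# Hypothetical pre-commit reminder: prints the staged files and the
# review habit you set for yourself, without blocking anything.
review_reminder() {
  echo "Reminder: run /codex:review --background before committing."
  echo "Staged files:"
  for f in "$@"; do
    echo "  - $f"
  done
}
# In the real hook you would pass the staged list:
#   review_reminder $(git diff --cached --name-only)
review_reminder "src/app.ts" "src/app.test.ts"
```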

The practical value of this plugin is that it lets you build a review culture even as a solo developer. As a way to deliberately secure a second pair of eyes, it's remarkably effective.

SECTION 07

How to Get Reviews Returned in Japanese Instead of English

By default, Codex review results come back in English. For digesting the findings or sharing them with your team, Japanese is often more convenient.

The solution is simple — just ask Claude Code to "return reviews in Japanese." Claude Code will adjust the review prompt, and subsequent reviews will come back in Japanese.

The nice part is that you don't need to manually edit any plugin configuration files. Since you can request adjustments through Claude Code, post-installation customization is low-friction.

Codex settings are managed in ~/.codex/config.toml (user level) and .codex/config.toml (project level), and the plugin inherits these settings. Model selection, reasoning effort adjustments, and other configurations are all reflected through the same mechanism.
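As an illustration, a user-level config might look like this. The key names and values are assumptions based on the Codex CLI documentation, so verify them against your installed version:

```toml
# ~/.codex/config.toml — user-level Codex settings the plugin inherits.
# A project-level .codex/config.toml can override them.
# Key names below are assumptions; check the Codex CLI docs for your version.
model = "gpt-5-codex"
model_reasoning_effort = "high"
```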

SECTION 08

Bonus: /codex:rescue Lets You Delegate Tasks That Involve Edits

While review commands are read-only, /codex:rescue can delegate tasks that involve editing to Codex. You can request bug investigations, test failure fixes, design rework, and more in natural language.

Here are some typical use cases.

  • /codex:rescue investigate why the tests started failing (investigate test failures)
  • /codex:rescue fix the failing test with the small model (faster processing with a smaller model)
  • /codex:rescue --background investigate the flaky test (long-running task in the background)

Since rescue works by running Codex as a sub-agent, it's fundamentally different from the review commands. Be aware before using it that it may modify your code.

The recommended adoption order is to start with read-only reviews first, then expand to rescue once you're comfortable. Rather than granting edit permissions right away, it's safer to get a feel for Codex's tendencies before broadening its scope.

SECTION 09

The Recommended Workflow for Daily Development

Based on everything covered so far, here's a workflow for integrating this into your daily development. The key is deciding in advance "when to run reviews" and "how to handle results."

Here's how it looks during the implementation phase.

  • Once a feature implementation is wrapped up, run /codex:review --background
  • Meanwhile, continue other work with Claude Code (documentation updates, designing the next task, etc.)
  • Check completion with /codex:status, then retrieve results with /codex:result
  • Read through the suggestions and only ask Claude Code to fix what actually needs fixing

When you're uncertain about a design decision or you've written complex logic, switch to /codex:adversarial-review. Think of it as getting an outside perspective to challenge whether your approach is really the right one.

The biggest advantage of this plugin is that it eliminates the context-switching cost of managing multiple terminals. The overhead of jumping between tools adds up, so having everything in one screen directly improves daily productivity.

Since it works with any ChatGPT subscription including the free plan, if you're already using both Claude Code and Codex, it's worth trying the review integration first.

For the plugin's source code and latest updates, check GitHub.

Built 40+ products and keeps shipping solo with AI-assisted development. Shares practical notes from building and operating self-made tools.
