How to Use Claude Code Skills: 3 Patterns for Standardizing Your Workflow
AI Fast Dev

Learn how to turn repetitive instructions into reusable Skills in Claude Code, automating reviews, tests, and external integrations with three practical patterns: Quality Gate, Scheduled Execution, and External Integration.

Shingo Irie

Indie developer

What you'll learn

From the basics of Claude Code Skills to three workflow standardization patterns (Quality Gate, Scheduled Execution, and External Integration), common design mistakes and how to fix them, and practical considerations for team-wide adoption — all based on real-world experience.

SECTION 01

What Are Claude Code Skills — Eliminating the Inefficiency of Repeating the Same Instructions

When I first started using Claude Code, my workflow was typing out what I wanted in plain language and copy-pasting it every time. "Review this," "Run tests," "Take a screenshot and verify" — typing these out each time isn't giving instructions; it's just busywork. Skills are the feature that eliminates this repetition entirely.

At their core, Skills are a system that lets you register reusable instruction sets and execute them with a single call. You might think of them as prompt templates, but what sets them apart is that the Claude Code agent automatically invokes the right Skill based on context. Once registered, Skills feel like they just kick in when needed, without you having to think about it.

The first thing you notice after using Skills is that their value lies not in precision but in reproducibility. Calling a pre-built Skill instead of writing instructions from scratch every time reduces mistakes and lowers mental overhead. The biggest change is simply not having to think about work that doesn't require thinking.

Skills have three placement locations, each suited to different purposes. Understanding these upfront saves confusion later.
- Project Skills: Placed in .claude/skills within your repository. Use these to standardize procedures and rules specific to that project
- User Skills: Placed in ~/.claude/skills. Best for general-purpose instructions you want available across all projects
- Slash Commands: Invoked explicitly with /. Ideal for operations you don't want triggered automatically, or tasks where you need to choose the exact moment of execution

<!-- Figure: Simple conceptual diagram showing the three Skill placement locations (Project, User, Slash Commands) -->

The decision of where to place a Skill is straightforward. If it's only for this project, use a Project Skill. If you want it everywhere, use a User Skill. The trick with Slash Commands is to reserve them exclusively for operations you don't want running on their own.
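As a concrete reference point, here is what a minimal Project Skill might look like, assuming the `.claude/skills/<name>/SKILL.md` layout Claude Code uses for skill discovery (the skill name and body below are illustrative, not taken from a real project):

```markdown
---
name: review
description: Use after completing a coding task in this repository to review the diff for security, readability, and performance issues.
---

After an implementation task is finished:

- Re-read the full diff before commenting.
- Flag security issues first (unsafe input handling, leaked secrets).
- Check readability: naming, function length, duplicated logic.
- Report findings as a prioritized list; do not apply fixes until asked.
```

Saved under `.claude/skills/review/` this acts as a Project Skill; the same file under `~/.claude/skills/review/` would be a User Skill available in every project.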

SECTION 02

3 Patterns for Standardization — Which Skills to Build First

Once you start using Skills, you'll inevitably face the question: "What should I turn into a Skill?" Creating Skills at random only leads to management headaches. Classifying your work into three patterns makes priorities clear.

The three patterns are Quality Gate, Scheduled Execution, and External Integration. Each pattern naturally maps to a different type of work. The decision flow is simple:
- Do you have verification tasks that happen every time? → Quality Gate
- Is there work you want triggered on a schedule? → Scheduled Execution
- Do you need to call external APIs or tools? → External Integration

These three patterns are not mutually exclusive — combining them is how real-world operations work. For example, hitting a deploy API after tests pass chains the Quality Gate and External Integration patterns together. Start by identifying the task you repeat most often in your work, then figure out which pattern it falls under.

From my experience, the Quality Gate pattern is what you should build first. Reviews and tests happen every single time, and inconsistency in these procedures directly impacts quality. Locking these down with Skills stabilizes your entire development cycle.

Scheduled Execution and External Integration can wait until your Quality Gate Skills are running smoothly. Not trying to turn everything into Skills all at once is the key to making this approach sustainable.

SECTION 03

Pattern 1: Quality Gate — Semi-Automating Reviews, Tests, and Verification

The Quality Gate pattern is about automating the verification cycle after task completion. In practice, I run three Skills: one that automatically engages plan mode based on task size, one that runs a code-improver agent for post-task review, and one that uses test-verifier for testing and verification. Together, they make the post-task verification loop run semi-automatically.

Previously, I was manually mediating the entire sequential process of "made improvements → verified behavior → issued more fix instructions." By combining Skills with sub-agents, the experience shifted to the AI proposing improvements, implementing them itself, and running tests — all in an internal loop. My role is reduced to decision-making and final confirmation.

The key to designing Quality Gate Skills is separating each step into a single responsibility. Building one monolithic Skill that does "review plus testing plus screenshots" creates problems when you need to stop midway.
- Review Skill: Checks code accuracy, security, and readability
- Test Skill: Verifies execution results and captures screenshots
- Submission Skill: Executes store submission or deployment procedures
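Continuing the single-responsibility split above, the Test Skill might be sketched like this (the frontmatter fields and wording are illustrative assumptions, not the author's actual files):

```markdown
---
name: test
description: Use after code changes are applied to verify behavior — run the test suite, check the results, and capture screenshots for UI changes.
---

- Run the project's test suite and wait for it to complete.
- Confirm all tests pass and note any drop in coverage.
- If the change touches the UI, capture a screenshot of the affected screen.
- Summarize the results; if anything fails, report the failure and stop.
```

Because it does nothing but verification, this Skill can be stopped, rerun, or reused on its own — which a combined review-plus-test-plus-deploy Skill could not.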

For App Store submissions and deployment workflows, I build the processes programmatically in advance and invoke them from Skills. Instead of writing procedures in a document, registering them as Skills eliminates procedural drift. The greatest strength of the Quality Gate pattern is that anyone can run it at any time and get the same result.

Once this pattern is running, development starts shifting from writing code to setting direction. Having more time to focus on design decisions and architecture is the most tangible benefit of building Skills.

SECTION 04

Pattern 2: Scheduled Execution — Running Information Gathering and Reports on a Timer

The Scheduled Execution pattern is about automatically running defined tasks on a time-based trigger. My favorite personal use case is automated daily emails. By combining sub-agents, Skills, and CLI tools, I built a system that pulls from multiple sources — AdMob, YouTube, X, note, RSS — and sends a compiled email automatically every day.

Using Claude Code Cowork (a scheduled execution framework), you can set up workflows like collecting news at fixed times — morning, noon, and night — and distributing them to Discord. For example, you can specify news sites and RSS feeds as sources, filter by your areas of interest, and build a Skill that posts to a channel.

There are three key points when designing Scheduled Execution Skills:
- Explicitly specify information sources: List what to pull within the Skill. Leaving it vague produces inconsistent results every time
- Fix the delivery channel: Decide on the output destination — email, Discord, Slack, etc.
- Set personalization criteria: Writing down your areas of interest and priority filters reduces noise
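Applying those three points, a daily-report Skill along the lines the author describes might look like this sketch (the sources, channel, and filter topics are placeholders — substitute your own):

```markdown
---
name: daily-report
description: Use when the scheduled daily report is triggered — gather updates from the listed sources, filter them, and deliver one summary email.
---

Sources (pull from these only):
- AdMob revenue dashboard
- YouTube channel analytics
- X mentions and replies
- Subscribed RSS feeds

Delivery channel: one email per day, to my own address.

Personalization: prioritize items about AI development tools and
app monetization; drop anything outside those topics.
```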

Another application of Scheduled Execution is automatically generating articles based on what I've tweeted on X. This isn't built for anyone else — it's a personal workflow. As you systematically convert routine tasks into Skills, you get the sense that "everything I used to do manually" is being automated wholesale.

<!-- Figure: Simple flow diagram of the Scheduled Execution pattern (Time Trigger → Information Gathering → Filtering → Delivery) -->

The appeal of Scheduled Execution is that once set up, it keeps running unattended. However, you need to watch for changes in information sources or API specifications. Building a habit of checking your Skill outputs once a month to confirm they're producing expected results gives you peace of mind.

SECTION 05

Pattern 3: External Integration — Turning APIs, Image Generation, and Email into Skills

The External Integration pattern is about calling external APIs and services through Skills. For example, I've registered the API for an image generation service called NanoBanana as a Skill, making it callable directly from Claude Code. Store submissions, deployments, image generation, email delivery — any external integration with a fixed procedure gets turned into a Skill, leaving only the thinking work for me.

You might be thinking, "Why not just use MCP (Model Context Protocol)?" However, I'm currently putting MCP on hold. The reason is simple: I'm concerned about it connecting to various services autonomously and causing unexpected damage.

What makes Skills appealing is that invocations are explicit, and you always know what's going to happen. MCP looks convenient, but honestly it's still hard to judge how much to delegate to it. My stance is to stay within what I can control with Skills and sub-agents, and connect to external services individually through code when needed.

Here are the design principles for External Integration Skills:
- Explicit invocation: Clearly state which API the Skill calls. Never create implicit connections
- Limited blast radius: Each Skill should touch only one external service. If you need to span multiple services, split them into separate Skills
- Defined error behavior: Write into the Skill how it should behave when the API is down
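Putting those three principles together, an External Integration Skill for an image-generation API might be sketched as follows (NanoBanana is the service the author mentions, but the endpoint behavior described here is a hypothetical placeholder, not its real API):

```markdown
---
name: generate-image
description: Use when asked to generate an image — call the NanoBanana image-generation API with the given prompt and save the result locally.
---

- This Skill calls exactly one external service: the NanoBanana API.
- Send the user's prompt to the image-generation endpoint and save
  the returned image under ./generated/.
- On API errors or timeouts: retry at most once, then report the
  error message and stop. Never fall back to a different service.
```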

The most important principle for External Integration is maintaining a state where you always know what's being executed. The more automation progresses, the more things tend to become black boxes. Transparency — being able to read a Skill and understand exactly what it does — is what builds trust.

SECTION 06

What to Do When Skills Don't Work as Expected

You built a Skill but it doesn't get invoked when expected, or it fires at the wrong time — this is a problem most people run into. The cause is almost always in how the Skill file's description is written. Claude Code reads this description to automatically select the right Skill for the current context.

When the description is vague, a different Skill gets called in similar situations, or the right Skill gets skipped when it should fire. The fix is straightforward: be specific about "when this Skill should be used" and "what it does." Instead of "improve code," you need something like "after task completion, review from security, readability, and performance perspectives."
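The difference is easiest to see side by side. These two frontmatter snippets are illustrative, but they capture the contrast between a description that misfires and one that does not:

```markdown
<!-- Vague: fires in too many situations, or gets skipped entirely -->
---
description: Improve code.
---

<!-- Specific: states the trigger and the scope -->
---
description: Use after a coding task is completed to review the changes
  from security, readability, and performance perspectives.
---
```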

Another common failure is Skill granularity. Skills that are too large and Skills that are too small each cause different problems.
- Too large: Packing "review + test + deploy" into one Skill means you can't stop midway. Partial reuse becomes impossible
- Too small: Splitting into "variable name check" and "indentation check" causes management costs to explode. You can't figure out which one to call

Splitting by single responsibility and combining as needed is the practical solution. Review is review, testing is testing, deployment is deployment. Build each as an independent Skill, then call them in sequence as needed. This design balances flexibility with manageability.

The distinction between Slash Command invocation and automatic invocation matters too. Destructive operations or tasks that affect external systems should be explicitly executed via Slash Commands. Let automatic invocation handle tasks you want running every time — like reviews and tests — while deployments and store submissions are triggered manually with commands like /deploy. This separation is what works in practice.
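For the manual-trigger side, Claude Code reads custom slash commands from Markdown files under `.claude/commands/`; a `/deploy` command might look like this (the steps are placeholders for your own release procedure):

```markdown
<!-- .claude/commands/deploy.md -->
Deploy the current branch to production.

- Run the full test suite first; abort if anything fails.
- Build the release artifact and run the deployment script.
- Post a short summary of what was deployed and the version number.
```

Because it lives in `commands/` rather than `skills/`, it only runs when someone types `/deploy` — never automatically.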

SECTION 07

Practical Tips for Skill Design

With the three patterns understood, let's organize the design principles for actually writing Skills. Skills are a simple mechanism — just write instructions in a Markdown file — but how you write them dramatically affects usability.

First and foremost, writing a clear description at the top of the Skill file is critical. Claude Code uses this description to determine whether a Skill applies. Simply stating three things — "when to use it," "what it does," and "what it doesn't do" — drastically reduces misfires.

Next, the trick is to write Skill instructions as "goals and constraints" rather than step-by-step procedures. Rather than writing "Step 1: do this, Step 2: do that..." in granular detail, stating "the end state should look like this" and "these things must not happen" better leverages Claude Code's flexibility.
- Good example: "Confirm all tests pass and coverage has not decreased. If there are UI changes, capture screenshots"
- Bad example: "First run npm test, then parse the results, then if there are errors propose fixes..."
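Expanded into a full Skill body, the goals-and-constraints style might read like this sketch (wording illustrative):

```markdown
---
name: verify-task
description: Use after completing an implementation task to verify the result meets the project's quality bar.
---

End state to reach:
- All tests pass and coverage has not decreased.
- If there are UI changes, screenshots of the affected screens exist.

Constraints:
- Do not modify test expectations to make failures pass.
- Do not deploy; stop and report once verification is done.
```

How the end state is reached is left to the agent, which is exactly the flexibility the goals-and-constraints style preserves.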

<!-- Figure: Simple image contrasting good and bad examples of Skill design -->

Also, avoid creating dependencies between Skills as much as possible. Chains like "after Skill A finishes, call Skill B" are better managed through CLAUDE.md or your workflow layer. Individual Skills should operate independently, with orchestration handled by higher-level configuration — this separation makes operations easier.
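Kept at that higher level, the orchestration stays a matter of sequencing, with each Skill remaining independent — roughly like this hypothetical excerpt:

```markdown
<!-- CLAUDE.md (excerpt) -->
## Workflow

After completing any implementation task:
1. Apply the review skill.
2. Apply the test skill.
3. Deployment is never automatic — wait for an explicit /deploy.
```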

Finally, Skills are not something you create once and forget — they're something you grow. Rather than aiming for a perfect Skill from the start, building something that works first and then fine-tuning the description and instructions through actual use ultimately gets you to a better result faster.

SECTION 08

From Individual to Team — The Idea of Handing Off an Entire Environment

As you build up your personal Skill collection, you'll reach a realization: your current CLAUDE.md, memory, sub-agents, and Skills only work because they're all working together as a system. The seamless flow from review to testing to screenshot verification works not because of any individual Skill, but because the entire environment is designed as a whole.

This naturally leads to the idea of handing off not individual Skills, but the entire environment. When a new team member sets up their development environment, simply copying the CLAUDE.md and Skill files reproduces the same workflow. This is a fundamentally different onboarding experience from reading documentation and memorizing procedures.

However, team deployment raises practical considerations that differ from personal use.
- Naming conventions: As Skills multiply, name collisions and confusion arise. You need rules like classifying with prefixes such as review-security and test-e2e
- Version control: Skill files should be included in the repository and managed with Git. Without change history, you lose track of who changed what and when
- Update workflow: You need a process for how improvement proposals get reviewed and incorporated

Without a mechanism to feed individual improvements back into the team's Skills, operations tend to become siloed. "Person A's Skills work but Person B's are outdated" becomes increasingly common as the number of Skills grows. Building a habit of periodically auditing your Skills prevents this problem.

In the future, a marketplace for sharing and selling Skills and sub-agents built with Claude Code may emerge. Considering the possibility that "how to instruct AI" becomes the next thing people sell after code, investing time in building Skills now could be seen as turning your expertise into an asset.

SECTION 09

Build Your First One — Steps to Get Started

I've covered the three patterns and design principles, but the most important thing is building your first Skill. You'll learn more from getting one small, usable Skill running today than from endlessly planning the perfect design.

The recommended starting point is taking the instruction you type most often and writing it directly into a Skill file. If you find yourself typing "Review this code, check for security and readability" every time, just save that as-is to .claude/skills/review/SKILL.md and your first Skill is done.

The basic structure of a Skill file is simple:
- Location and name: a folder named for what the Skill does, containing a SKILL.md file (e.g., .claude/skills/review/SKILL.md, .claude/skills/test-e2e/SKILL.md)
- description: 1–2 lines in the frontmatter at the top explaining when this Skill applies and what it does
- Body: your instructions to Claude Code, written directly in Markdown. No special syntax required
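Taking the article's own example instruction, the complete first Skill can be as small as this (assuming the SKILL.md frontmatter convention):

```markdown
---
name: review
description: Use after writing or changing code to review it for security and readability.
---

Review this code. Check for security and readability.
```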

Once your first Skill is running, the next step is using it for about a week and adjusting the description. If it fires at unintended times, make the description more specific. If it gets skipped when it should fire, broaden the conditions. This tuning cycle is what improves Skill quality.

From my experience building many products, the most effective way to adopt any tool is to get that first small win as quickly as possible. Skills are no different — once you experience the convenience of your first one, the second and third follow naturally. Start with a Quality Gate review Skill.

SECTION 10

Skills Become Infrastructure, Not Just Instructions

When I first started using Skills, I saw them as convenient shortcuts — nothing more. But as I continued using them, I've come to feel that Skills aren't just abbreviated instructions; they're infrastructure for the development workflow.

Skills qualify as "infrastructure" because development breaks down without them. Remove the review Skill and quality checks revert to manual. Remove the scheduled execution Skills and information gathering stops. What started as a supplementary tool has, at some point, become the foundation of the development process — that's the fundamental transformation Skills bring.

The answer to "what are Skills for" is delegating routine tasks to the agent so you can focus on architecture and design decisions. Rather than making coding faster, Skills function as a system for "removing predetermined work from your hands."

What's become clear through trial and error is a shift in how people relate to Skills — from "discover and use" to "build, refine, and share." Moving from trying off-the-shelf Skills to packaging project-specific rules and procedures into custom Skills. Whether you consciously drive this transition is what separates effective Skill adoption from the rest.

Skills, like code, are something you write, test, improve, and share. Build your first one, organize with the three patterns, and deploy across your team. Through this process, the center of gravity in development shifts unmistakably from "doing the work" to "designing the systems."

Built 40+ products and keeps shipping solo with AI-assisted development. Shares practical notes from building and operating self-made tools.


