
Agent Instructor

Guide

How to Create an Agent Skill: Step-by-Step Guide

Learn how to create your first AI agent skill with this comprehensive tutorial. Covers both the no-code Agent Instructor approach and manual SKILL.md creation.

December 15, 2025
9 min read

Introduction

Creating an Agent Skill transforms your expertise into a format AI agents can understand and consistently apply. Whether you're a consultant wanting to scale your methodology, a team lead standardizing processes, or an expert looking to enhance your AI workflows, this guide will walk you through the entire process.

You have two paths to create Agent Skills:

  1. Agent Instructor (Recommended) — A guided, conversational approach that requires no technical knowledge
  2. Manual Creation — Writing SKILL.md files directly for those comfortable with Markdown

Let's explore both approaches.


Method 1: Using Agent Instructor (No-Code)

Agent Instructor guides you through skill creation with a conversational interview process. It's designed for subject matter experts who want to capture their expertise without worrying about file formats or technical details.

Step 1: Define Your Skill's Purpose

Start by describing what you want your skill to help AI do. Be specific about:

  • The task or domain — What specific area does this skill cover?
  • The goal — What should AI accomplish when using this skill?
  • The audience — Who will benefit from this skill's output?

Example prompt:

"I want to create a skill that helps AI write code review comments that are constructive, specific, and follow our team's standards for tone and content."

Step 2: Provide Context and Background

Agent Instructor will ask questions to understand your domain:

  • What terminology should AI understand?
  • What's the typical workflow or process?
  • What constraints or requirements exist?
  • What does "good" look like in your field?

Be generous with context. The more AI understands about your domain, the better it can apply your skill.

Step 3: Share Examples

Examples are the most powerful part of skill creation. Provide:

  • Good examples — What excellent output looks like
  • Bad examples — Common mistakes to avoid
  • Edge cases — Unusual situations and how to handle them

For our code review skill:

Good example:

"Consider using a Map instead of an Object here since you're doing frequent lookups. Maps have O(1) lookup performance and clearer intent for key-value operations. See: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map"

Bad example:

"This is wrong. Use Map."

Step 4: Define Best Practices

What rules should AI always follow? What should it always avoid?

Always:

  • Explain the "why" behind suggestions
  • Provide links to documentation when relevant
  • Acknowledge what's done well before suggesting improvements

Never:

  • Use condescending language
  • Make suggestions without explanation
  • Criticize code style when it's a matter of preference

Step 5: Quiz and Refine

Agent Instructor tests your skill by presenting scenarios and showing how AI would respond. Review the outputs and provide feedback:

  • Does it match your expectations?
  • Are there edge cases it handles poorly?
  • Is the tone right?

Iterate until the skill consistently produces the quality you expect.

Step 6: Export Your Skill

Once you're satisfied, export your skill as a SKILL.md file. You can:

  • Download it directly
  • Copy to your project repository
  • Share with team members

Create Your First Skill in Minutes

No technical knowledge required. Just answer questions about your expertise.

Start Creating

Method 2: Manual SKILL.md Creation

If you're comfortable with Markdown and want full control over your skill files, you can create them manually.

The SKILL.md Structure

Every skill file has two parts: frontmatter (metadata) and body (content).

```markdown
---
name: code-review-standards
description: Provides constructive, specific code review feedback following team standards
version: 1.0.0
---

# Code Review Standards

## Instructions

[Step-by-step guidance for AI]

## Examples

[Concrete examples]

## Best Practices

[Rules and constraints]
```

Writing Effective Frontmatter

The frontmatter is crucial: it helps AI determine when to use your skill.

```yaml
---
name: code-review-standards
description: Provides constructive, specific code review feedback following team standards for tone, content, and actionability
version: 1.0.0
author: Your Name
tags:
  - code-review
  - development
  - team-standards
---
```

Tips for good frontmatter:

  • name: Use kebab-case, be descriptive but concise
  • description: Include key trigger words AI will match against
  • version: Use semantic versioning for tracking changes
  • tags: Help with organization and discovery
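If you ever need to generate or validate SKILL.md files programmatically, the frontmatter is straightforward to split off. Here's a minimal JavaScript sketch (not part of any official tooling; it assumes the file opens with a `---` fence, as in the examples above):

```javascript
// Split a SKILL.md string into its frontmatter and body.
// Assumes the file starts with a `---` fence, as shown above.
function splitSkillFile(text) {
  const match = text.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);
  if (!match) {
    return { frontmatter: null, body: text }; // no frontmatter found
  }
  return { frontmatter: match[1], body: match[2] };
}

const skill = [
  '---',
  'name: code-review-standards',
  'version: 1.0.0',
  '---',
  '',
  '# Code Review Standards',
].join('\n');

const { frontmatter, body } = splitSkillFile(skill);
console.log(frontmatter); // the name and version lines
console.log(body.trim()); // "# Code Review Standards"
```

A real pipeline would parse the frontmatter with a proper YAML library, but even this simple split is enough for checks like "does every skill declare a name and version?"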

Writing Clear Instructions

Instructions should be explicit and actionable. Use numbered steps for processes, bullet points for guidelines.

```markdown
## Instructions

When reviewing code, follow these steps:

1. **Read for understanding first**

   - Understand what the code is trying to accomplish
   - Identify the main logic flow
   - Note any unclear sections

2. **Check for correctness**

   - Does the code do what it's supposed to do?
   - Are there edge cases not handled?
   - Are error conditions managed appropriately?

3. **Evaluate code quality**

   - Is the code readable and well-organized?
   - Are functions appropriately sized?
   - Is naming clear and consistent?

4. **Write constructive feedback**
   - Start with what's done well
   - Be specific about issues
   - Explain the "why" behind suggestions
   - Provide examples or links when helpful
```

Providing Strong Examples

Examples teach through demonstration. Structure them clearly:

````markdown
## Examples

### Example 1: Suggesting a Refactor

**Code under review:**

```javascript
function getData(id) {
  if (cache[id]) {
    return cache[id];
  }
  const data = fetchFromDB(id);
  cache[id] = data;
  return data;
}
```

**Good review comment:**

Nice use of caching! Consider adding cache invalidation or TTL to prevent stale data issues in long-running processes. Also, fetchFromDB might throw. Wrapping in a try/catch would make this more robust.

Reference: Caching Best Practices

### Example 2: Addressing Unclear Code

**Code under review:**

```javascript
const x = arr.filter(i => i.s === 1).map(i => i.n);
```

**Good review comment:**

This works, but the variable names make it hard to follow. Consider:

```javascript
const activeUserNames = users.filter(user => user.status === ACTIVE).map(user => user.name);
```

Descriptive names make the code self-documenting.
````


Defining Best Practices

Best practices are rules AI should always follow:

```markdown
## Best Practices

### Tone and Approach
- Be constructive, not critical
- Use "consider" and "might" rather than "must" and "should"
- Acknowledge effort and good decisions before suggesting improvements
- Ask clarifying questions when intent is unclear

### Content Standards
- Every suggestion must include an explanation
- Provide links to documentation for complex topics
- Include code examples when suggesting refactors
- Prioritize: correctness > performance > style

### What to Avoid
- Never make personal comments about the author
- Don't suggest changes for purely stylistic preferences
- Avoid suggesting large refactors without discussion
- Don't approve code with obvious bugs just to be nice
```

Complete Example

Here's a complete, production-ready skill:

```markdown
---
name: code-review-standards
description: Provides constructive, specific code review feedback following team standards for tone, content, and actionability
version: 1.0.0
---

# Code Review Standards

## Instructions

When reviewing code, follow this process:

1. **Understand Context**

   - Read the PR description and linked issues
   - Understand what the code is trying to accomplish
   - Consider the broader system impact

2. **Review Systematically**

   - Check correctness first (bugs, edge cases, error handling)
   - Then evaluate quality (readability, organization, naming)
   - Finally consider optimization (only if there are clear issues)

3. **Write Feedback**
   - Start with positive observations
   - Be specific: reference line numbers and code snippets
   - Explain reasoning, not just conclusions
   - Categorize: blocking issue vs. suggestion vs. question

## Examples

### Blocking Issue

> **Line 45:** This will throw a null pointer exception when `user` is undefined. We need to add a null check here since this code path is reachable from the guest login flow.

### Suggestion

> **Line 72:** Consider extracting this logic into a `calculateDiscount()` function. It would make the checkout flow easier to test and the intent clearer.

### Question

> **Line 93:** I'm not sure I understand the business requirement here—should inactive users really see this notification? Can you clarify?

## Best Practices

**Always:**

- Explain the "why" behind every suggestion
- Provide code examples for non-trivial changes
- Acknowledge good decisions and clean code
- Be specific (line numbers, snippets)

**Never:**

- Make it personal
- Demand changes for style preferences
- Approve with known bugs
- Leave vague comments like "this needs work"
```

Testing Your Skill

Before deploying your skill, test it thoroughly:

1. Coverage Testing

Does your skill handle the common cases you expect?

2. Edge Case Testing

What happens with unusual inputs? Does AI respond appropriately?

3. Tone Testing

Is the output voice consistent with what you want?

4. Failure Mode Testing

When AI doesn't know something, does it fail gracefully?
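Parts of this checklist can be automated. As a sketch, a small script (hypothetical; the required section names follow this guide's conventions) can at least confirm a skill file has the expected structure before you hand it to an agent:

```javascript
// Minimal structural lint for a SKILL.md file.
// Section names follow this guide's conventions; adjust to your own template.
const REQUIRED_SECTIONS = ['## Instructions', '## Examples', '## Best Practices'];

function lintSkill(text) {
  const problems = [];
  if (!text.startsWith('---')) {
    problems.push('missing frontmatter fence at top of file');
  }
  for (const section of REQUIRED_SECTIONS) {
    if (!text.includes(section)) {
      problems.push(`missing section: ${section}`);
    }
  }
  return problems; // an empty array means the structure looks complete
}

const draft = '---\nname: demo\n---\n\n## Instructions\n\n## Examples\n';
console.log(lintSkill(draft)); // ["missing section: ## Best Practices"]
```

Structural checks like this catch the easy mistakes; tone and edge-case quality still need the manual review described above.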


Deploying Your Skill

Once your skill is ready:

For Personal Use

Place the SKILL.md file in your project root or a .skills/ directory that your AI tool can access.

For Team Use

  1. Store skills in a shared repository
  2. Document which skills exist and their purposes
  3. Establish a process for updating and versioning
  4. Consider using Agent Instructor's team features for collaboration
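One layout that works well for a shared repository (directory and file names here are purely illustrative, not a requirement of any tool):

```
skills/
├── code-review-standards/
│   └── SKILL.md
├── api-design-guidelines/
│   └── SKILL.md
└── README.md        # index: what each skill does and who owns it
```

Keeping one skill per directory makes versioning and ownership easy to track in code review.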

For Claude

Claude automatically looks for SKILL.md files in your project context. Simply include the file in your project or upload it to a Claude Project.

For GitHub Copilot

Copilot respects .github/copilot-instructions.md and skill files in your repository. Place your skills where Copilot can find them.


Common Mistakes to Avoid

Being Too Vague

❌ "Write good code reviews"

✅ "Write code review comments that are specific, explain the reasoning, and provide actionable suggestions"

Skipping Examples

Examples are worth thousands of words of instruction. Always include them.

Making Skills Too Broad

A skill for "Software Development" is too broad. A skill for "Python Type Hints in Data Processing Code" is focused and useful.

Forgetting Edge Cases

Document what AI should do when it's uncertain or when situations fall outside normal parameters.

Not Iterating

Your first version won't be perfect. Plan to refine based on real usage.


Next Steps

Ready to Build?

Create production-ready Agent Skills with guided assistance.

Start Free

Related Topics
create agent skill
SKILL.md tutorial
AI agent skills
how to train AI
skill creation
Claude skills