I Built a Tool to Fix Claude Code Skills

I audited my own Claude Code skills and found problems in every one. So I built a plugin to do the audit for anyone.

I’ve been using Claude Code daily for months and built five custom skills: session management, content engine, growth tracking, outreach, session closing. They worked. I thought they were solid.

Then Claude Code shipped new skills features, and I started seeing developers talk about capabilities I’d never used. Dynamic context injection. Argument modes. Tool restrictions. I went through my own skills with a checklist.


What was wrong

session-opener made three tool calls at startup that should have been !command dynamic context injections, resolved before the model even responds. Zero round trips instead of three.
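For reference, a dynamic injection looks roughly like this. The !`command` syntax is Claude Code’s; the skill name, commands, and file paths below are illustrative, not the actual contents of my skill:

```markdown
---
name: session-opener
description: Load project state when a session starts
---

## Current state

The output of each command below is injected into context before the model runs,
so no tool round trips are needed:

- Branch and status: !`git status --short --branch`
- Recent commits: !`git log --oneline -5`
- Last session notes: !`cat .session/last.md`
```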

session-closer had no disable-model-invocation flag, so Claude could auto-trigger it mid-conversation and write to three files unprompted.
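The fix is one frontmatter line. With it set, the skill runs only when invoked explicitly (the field name is Claude Code’s; the rest of this frontmatter is a sketch):

```yaml
---
name: session-closer
description: Summarize the session and update state files
disable-model-invocation: true  # never auto-triggered by the model
---
```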

All five skills duplicated the same Project Context block already in CLAUDE.md. Thousands of unnecessary tokens loaded every session.

None of them restricted tool access. session-opener only ever reads files, yet it had access to Write, Edit, Bash, and everything else.
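A read-only skill can declare that up front. A minimal sketch, assuming the allowed-tools frontmatter field (the tool names are Claude Code built-ins; the description is illustrative):

```yaml
---
name: session-opener
description: Load project state when a session starts
allowed-tools: Read, Glob, Grep  # no Write, Edit, or Bash
---
```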


Why I built it

I looked for existing tools to catch this. Anthropic’s quick_validate.py checks YAML frontmatter. The community skill-reviewer does a single-pass best-practices check. Repello and Cisco have security scanners for malicious content. All important work, but a different problem.

None of them answered the question I actually had: are my skills well-built?

So I built skill-doctor.


How it works

Three commands. That’s the whole interface.

/skill-doctor

Reads all your skills, agents, and CLAUDE.md. Uses Claude Code’s built-in claude-code-guide agent to evaluate each skill against current best practices. Run it in checkup mode for a report, or in consult mode to map findings to your actual pain points.

/skill-doctor:treat

Builds upgraded versions in a staging directory. Test them with cc --plugin-dir, then say “migrate” to apply. Your originals are backed up first.

/skill-doctor:rollback

Restores your originals if anything breaks.

How it evaluates

There’s no hardcoded checklist. Each skill gets sent to Claude Code’s claude-code-guide agent, which evaluates it against Claude Code’s current documentation. The agent knows the latest hooks system, script support, frontmatter fields, and mechanism tradeoffs. It decides whether your workflow should be a hook, a script, dynamic injection, or an instruction, and tells you why.

It also checks cross-cutting concerns: skill overlap, CLAUDE.md bloat, and agent-skill wiring gaps.


Install

claude plugin marketplace add amaljithkuttamath/skill-doctor
claude plugin install skill-doctor@skill-doctor-marketplace

No dependencies beyond Claude Code. Optionally integrates with skill-creator for format validation and blind comparison evals.

v1.0. Issues and feature requests welcome.