---
name: self-improvement
description: "Captures learnings, errors, and corrections to enable continuous improvement. Use when: (1) A command or operation fails unexpectedly, (2) User corrects Claude ('No, that's wrong...', 'Actually...'), (3) User requests a capability that doesn't exist, (4) An external API or tool fails, (5) Claude realizes its knowledge is outdated or incorrect, (6) A better approach is discovered for a recurring task. Also review learnings before major tasks."
---

Log learnings and errors to markdown files for continuous improvement. Coding agents can later process these into fixes, and important learnings get promoted to project memory.
| Situation | Action |
|---|---|
| Command/operation fails | Log to .learnings/ERRORS.md |
| User corrects you | Log to .learnings/LEARNINGS.md with category correction |
| User wants missing feature | Log to .learnings/FEATURE_REQUESTS.md |
| API/external tool fails | Log to .learnings/ERRORS.md with integration details |
| Knowledge was outdated | Log to .learnings/LEARNINGS.md with category knowledge_gap |
| Found better approach | Log to .learnings/LEARNINGS.md with category best_practice |
| Simplify/Harden recurring patterns | Log/update .learnings/LEARNINGS.md with Source: simplify-and-harden and a stable Pattern-Key |
| Similar to existing entry | Link with **See Also**, consider priority bump |
| Broadly applicable learning | Promote to CLAUDE.md, AGENTS.md, and/or .github/copilot-instructions.md |
| Workflow improvements | Promote to AGENTS.md (OpenClaw workspace) |
| Tool gotchas | Promote to TOOLS.md (OpenClaw workspace) |
| Behavioral patterns | Promote to SOUL.md (OpenClaw workspace) |
OpenClaw is the primary platform for this skill. It uses workspace-based prompt injection with automatic skill loading.
Via ClawdHub (recommended):

```bash
clawdhub install self-improving-agent
```

Manual:

```bash
git clone https://github.com/peterskoett/self-improving-agent.git ~/.openclaw/skills/self-improving-agent
```
Adapted for OpenClaw from the original repo: https://github.com/pskoett/pskoett-ai-skills (skill source: https://github.com/pskoett/pskoett-ai-skills/tree/main/skills/self-improvement).
OpenClaw injects these files into every session:
```
~/.openclaw/workspace/
├── AGENTS.md          # Multi-agent workflows, delegation patterns
├── SOUL.md            # Behavioral guidelines, personality, principles
├── TOOLS.md           # Tool capabilities, integration gotchas
├── MEMORY.md          # Long-term memory (main session only)
├── memory/            # Daily memory files
│   └── YYYY-MM-DD.md
└── .learnings/        # This skill's log files
    ├── LEARNINGS.md
    ├── ERRORS.md
    └── FEATURE_REQUESTS.md
```
```bash
mkdir -p ~/.openclaw/workspace/.learnings
```
Then create the log files (or copy from assets/):
- LEARNINGS.md — corrections, knowledge gaps, best practices
- ERRORS.md — command failures, exceptions
- FEATURE_REQUESTS.md — user-requested capabilities
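The three files can be bootstrapped in one go. A minimal sketch; the single-line headers here are illustrative stand-ins for the fuller templates in assets/:

```shell
# Create the log directory and seed each file with a placeholder header.
# Headers are illustrative; prefer the templates in assets/ when available.
mkdir -p .learnings
for f in LEARNINGS ERRORS FEATURE_REQUESTS; do
  target=".learnings/$f.md"
  # Never clobber existing entries: only create missing files.
  [ -f "$target" ] || printf '# %s\n\n' "$f" > "$target"
done
```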
When learnings prove broadly applicable, promote them to workspace files:
| Learning Type | Promote To | Example |
|---|---|---|
| Behavioral patterns | SOUL.md | "Be concise, avoid disclaimers" |
| Workflow improvements | AGENTS.md | "Spawn sub-agents for long tasks" |
| Tool gotchas | TOOLS.md | "Git push needs auth configured first" |
OpenClaw provides tools to share learnings across sessions:
For automatic reminders at session start:
```bash
# Copy hook to OpenClaw hooks directory
cp -r hooks/openclaw ~/.openclaw/hooks/self-improvement

# Enable it
openclaw hooks enable self-improvement
```
See references/openclaw-integration.md for complete details.
For Claude Code, Codex, Copilot, or other agents, create .learnings/ in your project:
```bash
mkdir -p .learnings
```
Copy templates from assets/ or create files with headers.
When errors or corrections occur:
1. Log to .learnings/ERRORS.md, LEARNINGS.md, or FEATURE_REQUESTS.md
2. Review and promote broadly applicable learnings to:
- CLAUDE.md - project facts and conventions
- AGENTS.md - workflows and automation
- .github/copilot-instructions.md - Copilot context
Append to .learnings/LEARNINGS.md:
```markdown
## [LRN-YYYYMMDD-XXX] category
**Logged**: ISO-8601 timestamp
**Priority**: low | medium | high | critical
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config

### Summary
One-line description of what was learned

### Details
Full context: what happened, what was wrong, what's correct

### Suggested Action
Specific fix or improvement to make

### Metadata
- Source: conversation | error | user_feedback
- Related Files: path/to/file.ext
- Tags: tag1, tag2
- See Also: LRN-20250110-001 (if related to existing entry)
- Pattern-Key: simplify.dead_code | harden.input_validation (optional, for recurring-pattern tracking)
- Recurrence-Count: 1 (optional)
- First-Seen: 2025-01-15 (optional)
- Last-Seen: 2025-01-15 (optional)

---
```
Append to .learnings/ERRORS.md:
````markdown
## [ERR-YYYYMMDD-XXX] skill_or_command_name
**Logged**: ISO-8601 timestamp
**Priority**: high
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config

### Summary
Brief description of what failed

### Error
```
Actual error message or output
```

### Possible Fix
If identifiable, what might resolve this

---
````
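Appending an entry in this format can also be scripted. A sketch; `log_error` is a hypothetical helper (not part of the skill), and the PID-based ID suffix is just one valid choice:

```shell
# Hypothetical helper: append a minimal ERR entry to .learnings/ERRORS.md.
# Field layout follows the template above; Priority/Status use the defaults.
log_error() {
  name="$1"; summary="$2"; error_text="$3"
  id="ERR-$(date +%Y%m%d)-$(printf '%03d' $(( $$ % 1000 )))"  # PID-based suffix
  ts="$(date -u '+%Y-%m-%dT%H:%M:%SZ')"
  mkdir -p .learnings
  {
    printf '## [%s] %s\n' "$id" "$name"
    printf '**Logged**: %s\n**Priority**: high\n**Status**: pending\n\n' "$ts"
    printf '### Summary\n%s\n\n### Error\n```\n%s\n```\n\n---\n' "$summary" "$error_text"
  } >> .learnings/ERRORS.md
}

log_error "git_push" "Push rejected by remote" "remote: permission denied"
```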
Append to .learnings/FEATURE_REQUESTS.md:
```markdown
## [FEAT-YYYYMMDD-XXX] capability_name
**Logged**: ISO-8601 timestamp
**Priority**: medium
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config

### Requested Capability
What the user wanted to do

### User Context
Why they needed it, what problem they're solving

### Complexity Estimate
simple | medium | complex

### Suggested Implementation
How this could be built, what it might extend

### Metadata
- Frequency: first_time | recurring
- Related Features: existing_feature_name

---
```
Format: `TYPE-YYYYMMDD-XXX`
- TYPE: LRN (learning), ERR (error), FEAT (feature)
- YYYYMMDD: Current date
- XXX: Sequential number or random 3 chars (e.g., 001, A7B)

Examples: `LRN-20250115-001`, `ERR-20250115-A3F`, `FEAT-20250115-002`
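A minimal generator for this format might look like the following; `new_id` is an illustrative name, and the PID-based suffix is one of several valid choices:

```shell
# Print an entry ID as TYPE-YYYYMMDD-XXX.
# The PID-based suffix keeps this POSIX-friendly; a sequential counter also works.
new_id() {
  printf '%s-%s-%03d\n' "$1" "$(date +%Y%m%d)" $(( $$ % 1000 ))
}

new_id LRN
```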
When an issue is fixed, update the entry:
**Status**: pending → **Status**: resolved

### Resolution
- **Resolved**: 2025-01-16T09:00:00Z
- **Commit/PR**: abc123 or #42
- **Notes**: Brief description of what was done
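Flipping the Status line in place can be scripted. A sketch, assuming entries are delimited by `## [` headers as in the templates above; `mark_resolved` is a hypothetical helper:

```shell
# Rewrite only the matching entry's Status line from pending to resolved,
# leaving every other entry in the file untouched.
mark_resolved() {
  id="$1"; file="$2"
  awk -v id="$id" '
    /^## \[/ { in_entry = index($0, "[" id "]") > 0 }   # track the target entry
    in_entry && /^\*\*Status\*\*: pending/ { sub(/pending/, "resolved") }
    { print }
  ' "$file" > "$file.tmp" && mv "$file.tmp" "$file"
}
```

Usage: `mark_resolved ERR-20250115-001 .learnings/ERRORS.md`, then append the Resolution block by hand.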
Other status values:
- in_progress - Actively being worked on
- wont_fix - Decided not to address (add reason in Resolution notes)
- promoted - Elevated to CLAUDE.md, AGENTS.md, or .github/copilot-instructions.md
When a learning is broadly applicable (not a one-off fix), promote it to permanent project memory.
| Target | What Belongs There |
|---|---|
| CLAUDE.md | Project facts, conventions, gotchas for all Claude interactions |
| AGENTS.md | Agent-specific workflows, tool usage patterns, automation rules |
| .github/copilot-instructions.md | Project context and conventions for GitHub Copilot |
| SOUL.md | Behavioral guidelines, communication style, principles (OpenClaw workspace) |
| TOOLS.md | Tool capabilities, usage patterns, integration gotchas (OpenClaw workspace) |
**Status**: pending → **Status**: promoted
**Promoted**: CLAUDE.md, AGENTS.md, or .github/copilot-instructions.md

Learning (verbose):

> Project uses pnpm workspaces. Attempted `npm install` but it failed.
> Lock file is `pnpm-lock.yaml`. Must use `pnpm install`.

In CLAUDE.md (concise):

```markdown
## Build & Dependencies
- Package manager: pnpm (not npm) - use `pnpm install`
```
Learning (verbose):

> When modifying API endpoints, must regenerate the TypeScript client.
> Forgetting this causes type mismatches at runtime.

In AGENTS.md (actionable):

```markdown
## After API Changes
1. Regenerate client: `pnpm run generate:api`
2. Check for type errors: `pnpm tsc --noEmit`
```
If logging something similar to an existing entry, search first:

```bash
grep -r "keyword" .learnings/
```

Then add **See Also**: ERR-20250110-001 in the new entry's Metadata.

Use this workflow to ingest recurring patterns from the simplify-and-harden skill and turn them into durable prompt guidance:

1. Read `simplify_and_harden.learning_loop.candidates` from the task summary.
2. Use `pattern_key` as the stable dedupe key.
3. Check `.learnings/LEARNINGS.md` for an existing entry with that key:

   ```bash
   grep -n "Pattern-Key: <pattern_key>" .learnings/LEARNINGS.md
   ```

4. If an entry exists, increment `Recurrence-Count`, update `Last-Seen`, and add **See Also** links to related entries/tasks.
5. Otherwise, create a new `LRN-...` entry with `Source: simplify-and-harden`, the `Pattern-Key`, `Recurrence-Count: 1`, and `First-Seen`/`Last-Seen`.

Promote recurring patterns into agent context/system prompt files when:

- `Recurrence-Count >= 3`

Promotion targets:
- CLAUDE.md
- AGENTS.md
- .github/copilot-instructions.md
- SOUL.md / TOOLS.md for OpenClaw workspace-level guidance when applicable
Write promoted rules as short prevention rules (what to do before/while coding),
not long incident write-ups.
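The threshold check can be scripted against the template's optional metadata lines. A sketch; `eligible_patterns` is an illustrative name:

```shell
# List Pattern-Keys whose Recurrence-Count meets the promotion threshold (>= 3).
# Assumes the "- Pattern-Key:" and "- Recurrence-Count:" lines from the template.
eligible_patterns() {
  awk '
    /- Pattern-Key:/      { key = $NF }   # remember the most recent key
    /- Recurrence-Count:/ { if ($NF + 0 >= 3 && key != "") print key; key = "" }
  ' "$1"
}
```

Usage: `eligible_patterns .learnings/LEARNINGS.md`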
Review .learnings/ at natural breakpoints:
```bash
# Count pending items
grep -h "Status\*\*: pending" .learnings/*.md | wc -l

# List pending high-priority items
grep -B5 "Priority\*\*: high" .learnings/*.md | grep "^## \["

# Find learnings for a specific area
grep -l "Area\*\*: backend" .learnings/*.md
```
Automatically log when you notice:
Corrections (→ learning with correction category):
- "No, that's not right..."
- "Actually, it should be..."
- "You're wrong about..."
- "That's outdated..."
Feature Requests (→ feature request):
- "Can you also..."
- "I wish you could..."
- "Is there a way to..."
- "Why can't you..."
Knowledge Gaps (→ learning with knowledge_gap category):
- User provides information you didn't know
- Documentation you referenced is outdated
- API behavior differs from your understanding
Errors (→ error entry):
- Command returns non-zero exit code
- Exception or stack trace
- Unexpected output or behavior
- Timeout or connection failure
| Priority | When to Use |
|---|---|
| critical | Blocks core functionality, data loss risk, security issue |
| high | Significant impact, affects common workflows, recurring issue |
| medium | Moderate impact, workaround exists |
| low | Minor inconvenience, edge case, nice-to-have |
Use to filter learnings by codebase region:
| Area | Scope |
|---|---|
| frontend | UI, components, client-side code |
| backend | API, services, server-side code |
| infra | CI/CD, deployment, Docker, cloud |
| tests | Test files, testing utilities, coverage |
| docs | Documentation, comments, READMEs |
| config | Configuration files, environment, settings |
Keep learnings local (per-developer):

```gitignore
.learnings/
```

Track learnings in repo (team-wide): don't add the directory to .gitignore, so learnings become shared team knowledge.

Hybrid (track templates, ignore entries):

```gitignore
.learnings/*.md
!.learnings/.gitkeep
```
Enable automatic reminders through agent hooks. This is opt-in - you must explicitly configure hooks.
Create .claude/settings.json in your project:
```json
{
  "hooks": {
    "UserPromptSubmit": [{
      "matcher": "",
      "hooks": [{
        "type": "command",
        "command": "./skills/self-improvement/scripts/activator.sh"
      }]
    }]
  }
}
```
This injects a learning evaluation reminder after each prompt (~50-100 tokens overhead).
To also catch command failures, add a PostToolUse hook:

```json
{
  "hooks": {
    "UserPromptSubmit": [{
      "matcher": "",
      "hooks": [{
        "type": "command",
        "command": "./skills/self-improvement/scripts/activator.sh"
      }]
    }],
    "PostToolUse": [{
      "matcher": "Bash",
      "hooks": [{
        "type": "command",
        "command": "./skills/self-improvement/scripts/error-detector.sh"
      }]
    }]
  }
}
```
| Script | Hook Type | Purpose |
|---|---|---|
| scripts/activator.sh | UserPromptSubmit | Reminds to evaluate learnings after tasks |
| scripts/error-detector.sh | PostToolUse (Bash) | Triggers on command errors |
See references/hooks-setup.md for detailed configuration and troubleshooting.
When a learning is valuable enough to become a reusable skill, extract it using the provided helper.
A learning qualifies for skill extraction when ANY of these apply:
| Criterion | Description |
|---|---|
| Recurring | Has See Also links to 2+ similar issues |
| Verified | Status is resolved with working fix |
| Non-obvious | Required actual debugging/investigation to discover |
| Broadly applicable | Not project-specific; useful across codebases |
| User-flagged | User says "save this as a skill" or similar |
```bash
# Preview what would be generated
./skills/self-improvement/scripts/extract-skill.sh skill-name --dry-run

# Generate the skill
./skills/self-improvement/scripts/extract-skill.sh skill-name
```

After extraction, update the learning entry: set **Status** to promoted_to_skill and add a Skill-Path.

If you prefer manual creation:

1. Create `skills/<skill-name>/SKILL.md`
2. Start from `assets/SKILL-TEMPLATE.md`
3. Fill in `name` and `description`

Watch for these signals that a learning should become a skill:
In conversation:
- "Save this as a skill"
- "I keep running into this"
- "This would be useful for other projects"
- "Remember this pattern"
In learning entries:
- Multiple See Also links (recurring issue)
- High priority + resolved status
- Category: best_practice with broad applicability
- User feedback praising the solution
Before extraction, verify:
This skill works across different AI coding agents with agent-specific activation.
Claude Code:
- Activation: Hooks (UserPromptSubmit, PostToolUse)
- Setup: `.claude/settings.json` with hook configuration
- Detection: Automatic via hook scripts
Codex:
- Activation: Hooks (same pattern as Claude Code)
- Setup: `.codex/settings.json` with hook configuration
- Detection: Automatic via hook scripts
GitHub Copilot:
- Activation: Manual (no hook support)
- Setup: Add to `.github/copilot-instructions.md`:

  ```markdown
  ## Self-Improvement
  After solving non-obvious issues, consider logging to `.learnings/`:
  1. Use format from self-improvement skill
  2. Link related entries with See Also
  3. Promote high-value learnings to skills
  ```

- Detection: Manual review at session end; ask in chat "Should I log this as a learning?"
OpenClaw:
- Activation: Workspace injection + inter-agent messaging
- Setup: See "OpenClaw Setup" section above
- Detection: Via session tools and workspace files
Regardless of agent, apply self-improvement when you:
For Copilot users, add this to your prompts when relevant:
```
After completing this task, evaluate if any learnings should be logged to
`.learnings/` using the self-improvement skill format.
```
Or use quick prompts:
- "Log this to learnings"
- "Create a skill from this solution"
- "Check .learnings/ for related issues"