# Prompt Augmentation Architecture (v0.8.0)

## The Discovery
In a real conversation, we observed:
```
User: "please use the software architect skill to figure out the best way to do this"
→ Claude IMMEDIATELY invoked the software-architect skill (executed on the first try)
```

vs.

```
User: types something that triggers detection
→ Hook adds: "[Contextune detected: `/sc:design` with 85% confidence via keyword]"
→ Claude sees it but execution is variable
```
**Key Insight:** Skills are more reliable than slash commands because they use Claude's native Skill tool (structured, type-safe) instead of text expansion (unstructured).
## Architecture Evolution

### v0.7.x: Passive Suggestion (Variable Reliability)
```python
# Hook adds context but doesn't modify the prompt
response = {
    "continue": True,
    "hookSpecificOutput": {
        "additionalContext": "[Contextune detected: `/sc:design` with 85% confidence]"
    },
    "feedback": "💡 Type `/sc:design` to design system architecture",
}
```
**Flow:**

```
User: "design a system"
        ↓
Contextune: Detects /sc:design
        ↓
Hook: Adds suggestion to context
        ↓
Claude: Sees suggestion, may or may not act
```
**Problems:**

- ❌ Variable execution rate
- ❌ The user sees the suggestion but still has to type the command
- ❌ Not automatic or seamless
### v0.8.0: Active Augmentation (High Reliability)
```python
# Hook modifies the prompt itself
response = {
    "continue": True,
    "modifiedPrompt": "design a system. You can use your software-architect skill to help with this task.",
    "feedback": "💡 Contextune: Using /sc:design (85% keyword)",
}
```
**Flow:**

```
User: "design a system"
        ↓
Contextune: Detects /sc:design
        ↓
Hook: Augments prompt with skill directive
        ↓
Claude: Receives "You can use your software-architect skill" → EXECUTES!
```
**Benefits:**

- ✅ Much higher execution rate
- ✅ Seamless: the user doesn't type anything
- ✅ Leverages Claude's native skill system
## Implementation Details

### Skill Mapping
```python
# Maps slash commands to skill names under ~/.claude/skills/
SKILL_MAPPING = {
    "/sc:design": "software-architect",  # Existing skill
    "/sc:analyze": "code-analyzer",      # Future
    "/ctx:research": "researcher",       # NEW in v0.8.0
    "/ctx:plan": "parallel-planner",     # Future
}
```
### Prompt Augmentation Logic
```python
def create_skill_augmented_prompt(match: IntentMatch, original_prompt: str) -> str:
    """
    Augment the prompt with a skill suggestion for reliable execution.

    Evidence: skills are invoked more reliably than slash commands because
    they use Claude's native Skill tool (structured) rather than text
    expansion (unstructured).
    """
    if match.command in SKILL_MAPPING:
        skill_name = SKILL_MAPPING[match.command]
        # Strong directive: "You can use your X skill"
        return f"{original_prompt}. You can use your {skill_name} skill to help with this task."
    # For commands without skills, fall back to directive language;
    # COMMAND_ACTIONS maps each command to a short action phrase.
    action = COMMAND_ACTIONS.get(match.command, "complete this request")
    return f"{original_prompt}. Please use the {match.command} command to {action}."
```
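To make the behavior concrete, here is a minimal usage sketch. `IntentMatch` and `COMMAND_ACTIONS` are not defined in this section, so the shapes below are assumptions inferred from how the function uses them:

```python
from dataclasses import dataclass

@dataclass
class IntentMatch:
    # Hypothetical shape: only .command and .confidence appear in this doc.
    command: str
    confidence: float

# Hypothetical action phrases for commands without a mapped skill
# (the /ctx:status wording comes from the skill-less example below).
COMMAND_ACTIONS = {
    "/ctx:status": "monitor parallel task progress",
}

match = IntentMatch(command="/sc:design", confidence=0.85)
print(create_skill_augmented_prompt(match, "design a system"))
# → design a system. You can use your software-architect skill to help with this task.
```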
### Hook Response Structure
```python
response = {
    "continue": True,
    "modifiedPrompt": augmented_prompt,  # ← KEY ADDITION
    "hookSpecificOutput": {
        "hookEventName": "UserPromptSubmit",
        "additionalContext": f"[Contextune detected: `{match.command}` with {match.confidence:.0%} confidence]",
    },
    "feedback": "💡 Contextune: Using /sc:design (85% keyword)",
}
```
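For context, this is how the structure above travels: the hook receives the submitted prompt as JSON on stdin and replies with JSON on stdout (the debugging section below pipes JSON through the hook in exactly this way). A minimal sketch, with the detection pipeline stubbed out:

```python
#!/usr/bin/env python3
import json
import sys

payload = json.loads(sys.stdin.read())

# Stub: a real hook would run intent detection here and call
# create_skill_augmented_prompt(); this hard-codes the result.
augmented_prompt = payload["prompt"] + ". You can use your researcher skill to help with this task."

response = {
    "continue": True,
    "modifiedPrompt": augmented_prompt,
    "hookSpecificOutput": {
        "hookEventName": "UserPromptSubmit",
        "additionalContext": "[Contextune detected: `/ctx:research` with 85% confidence]",
    },
    "feedback": "💡 Contextune: Using /ctx:research (85% keyword)",
}

# Reply on stdout so Claude Code can apply the modified prompt.
json.dump(response, sys.stdout)
```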
## Skill Creation Pattern

### Structure

**Plugin Skills (Version Controlled):**
```
contextune/
└── skills/
    ├── researcher/
    │   └── SKILL.md
    ├── code-analyzer/        (future)
    │   └── SKILL.md
    └── parallel-planner/     (future)
        └── SKILL.md
```
**Global Skills (User-specific):** the same layout, rooted at `~/.claude/skills/`:
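```
~/.claude/skills/
└── <skill-name>/
    └── SKILL.md
```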
**Key Difference:**

- Plugin skills are distributed with Contextune and automatically available
- Global skills are user-installed and project-independent
### Template
````markdown
# [Skill Name] Skill

[Brief description of what this skill does]

## When to Activate

This skill should be used when:

- [Trigger condition 1]
- [Trigger condition 2]
- Keywords: "[keyword list]"

## What This Skill Does

[Detailed explanation of the workflow]

## Workflow

### Step 1: Execute the Command

```
/[command] [args]
```

### Step 2: Process Results

[What to do with results]

### Step 3: Present Findings

[Output format]

## Example Usage

**User Query:**

```
[example user input]
```

**Your Action:**

```
/[command] [args]
```

**Expected Output:**

```
[structured output format]
```

## Integration Notes

- This skill wraps the `/[command]` command
- [Any special considerations]

## Error Handling

[How to handle failures]

## Tips for Best Results

- [Tip 1]
- [Tip 2]
````
## Current Skills

### 1. sc:architect (Migrated & Optimized in v0.8.0)

- **Command:** `/sc:design`
- **Skill name:** `sc:architect` (matches command namespace)
- **Location:** `skills/software-architect/SKILL.md` (plugin-local)
- **Status:** ✅ Production
- **Purpose:** System architecture design (5-step workflow)
- **Optimization:** 27% context reduction (226 → 166 lines)
- **Migration:** from global `~/.claude/skills` to plugin
### 2. ctx:researcher (NEW in v0.8.0)

- **Command:** `/ctx:research`
- **Skill name:** `ctx:researcher` (matches command namespace)
- **Location:** `skills/researcher/SKILL.md` (plugin-local)
- **Status:** ✅ Production
- **Purpose:** Parallel research (3 agents, web + codebase)
### 3. sc:analyzer (Planned)

- **Command:** `/sc:analyze`
- **Skill name:** `sc:analyzer` (matches command namespace)
- **Status:** 📋 Planned for v0.9.0
- **Purpose:** Code quality, security, and performance analysis
### 4. ctx:planner (Planned)

- **Command:** `/ctx:plan`
- **Skill name:** `ctx:planner` (matches command namespace)
- **Status:** 📋 Planned for v0.9.0
- **Purpose:** Create parallel development plans
## Why Skills Are More Reliable

### Technical Explanation

**Skills:**

- Invoked via Claude's Skill tool
- Structured, type-safe invocation
- First-class citizen in Claude's architecture
- Clear activation signals

**Slash Commands:**

- Text replacement (prompt expansion)
- Unstructured, string-based
- Rely on Claude interpreting the expanded text
- Weaker signal
### Evidence

From our conversation:

```
User: "please use the software architect skill to figure out the best way to do this"
→ Result: Claude immediately used the Skill tool
→ Output: Comprehensive architectural analysis with research
```

```
User: types trigger words
→ Before v0.8.0: Variable execution
→ After v0.8.0: Much more reliable (via the augmented prompt)
```
## Backward Compatibility

### Slash Commands Still Work

Users can still type slash commands manually:
```
User types: /ctx:research best React libraries
→ Command executes normally
→ No prompt augmentation needed
```
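The doc states this behavior but not the mechanism; one plausible guard (an assumption, not the confirmed implementation) is to skip augmentation whenever the prompt is already an explicit slash command:

```python
def should_augment(prompt: str) -> bool:
    # Assumed guard: explicit slash commands pass through untouched,
    # so manually typed commands execute normally.
    return not prompt.lstrip().startswith("/")
```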
### Skill-less Commands
For commands without skills, we use directive language:
```python
# No skill exists for this command
"modifiedPrompt": "check status. Please use the /ctx:status command to monitor parallel task progress."
```
This still improves execution rate vs. passive suggestion.
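Using the earlier function together with the hypothetical `IntentMatch` and `COMMAND_ACTIONS` sketched above, the fallback path produces exactly this string:

```python
match = IntentMatch(command="/ctx:status", confidence=0.90)
print(create_skill_augmented_prompt(match, "check status"))
# → check status. Please use the /ctx:status command to monitor parallel task progress.
```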
## Metrics & Success Criteria

### v0.7.x (Passive Suggestion)
- Execution rate: ~60-70% (estimated)
- User action required: Type command
- UX: Suggestion-based
### v0.8.0 (Active Augmentation)
- Execution rate: ~90-95% (estimated with skills)
- User action required: None (automatic)
- UX: Seamless
### Future Measurement
- Track execution rates via `observability.db` (see the sketch after this list)
- A/B test: augmentation vs. passive suggestion
- User feedback on reliability
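As a sketch of what that tracking could look like (the table and columns below are assumptions; only the `observability.db` filename appears in this doc):

```python
import sqlite3

def log_execution(command: str, strategy: str, executed: bool,
                  db_path: str = "observability.db") -> None:
    """Record whether a detected command actually ran, for later A/B analysis."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS executions (
               ts TEXT DEFAULT CURRENT_TIMESTAMP,
               command TEXT,
               strategy TEXT,    -- 'augmented' vs. 'passive'
               executed INTEGER  -- 1 if Claude acted on it
           )"""
    )
    conn.execute(
        "INSERT INTO executions (command, strategy, executed) VALUES (?, ?, ?)",
        (command, strategy, int(executed)),
    )
    conn.commit()
    conn.close()

log_execution("/sc:design", "augmented", True)
```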
## Future Enhancements

### v0.9.0: More Skills
- Create `code-analyzer` skill (`/sc:analyze`)
- Create `parallel-planner` skill (`/ctx:plan`)
- Create `code-improver` skill (`/sc:improve`)
### v1.0.0: Intelligent Skill Selection
```python
from typing import Dict

def select_skill_strategy(match: IntentMatch, context: Dict) -> str:
    """Choose between skill, command, or hybrid based on context.

    Factors to consider:
    - User's past behavior (skill vs. command preference)
    - Command complexity (simple → command, complex → skill)
    - Current session state (already using skills?)
    - Execution history (what worked before?)
    """
    if context.get("user_prefers_skills") and match.command in SKILL_MAPPING:
        return "skill"
    elif match.complexity > 0.8:  # assumes a future complexity score on IntentMatch
        return "skill"    # Complex tasks benefit from skills
    else:
        return "command"  # Simple tasks can use direct commands
```
### v2.0.0: Auto-Skill Generation
```python
from pathlib import Path

# Automatically generate skills from command definitions.
def create_skill_from_command(command_path: Path) -> None:
    """Parse the command markdown and generate a skill wrapper."""
    # Sketch: the helpers below are placeholders for the planned pipeline.
    command_md = read_command(command_path)
    skill_md = generate_skill_template(command_md)
    save_skill(skill_md)
    update_skill_mapping(command_md.name, skill_md.name)
```
## Debugging

### Enable Debug Logging

### Test Prompt Augmentation
```bash
# Test the hook directly
echo '{"prompt":"research React libraries"}' | uv run hooks/user_prompt_submit.py

# Expected output:
# {
#   "continue": true,
#   "modifiedPrompt": "research React libraries. You can use your researcher skill to help with this task.",
#   ...
# }
```
### Verify Skill Exists
```bash
# Check that the skill file exists (global skills)
ls ~/.claude/skills/researcher/SKILL.md

# Plugin-local skills live under the plugin's skills/ directory instead
ls skills/researcher/SKILL.md

# View skill contents
cat ~/.claude/skills/researcher/SKILL.md
```
## Conclusion

Prompt augmentation represents a fundamental shift in Contextune's architecture:

- **Before:** passive suggestions (Claude may or may not act)
- **After:** active prompt modification (Claude receives directive instructions)

This leverages a key insight: skills are more reliable than commands because they use Claude's native tool system.

**Result:** higher execution rates, better UX, seamless automation.

**Evidence:** demonstrated in a real conversation (software-architect skill invocation).

**Future:** expand to more commands, intelligent selection, and auto-generation.