# Deep Search: Semantic Memory Power-Up

When your project outgrows AI's context window, bring the search engine to your docs. Optional integration with tobi/qmd: BM25 + vector search + LLM re-ranking, 100% local.
## When to Trigger
This skill is NOT invoked directly. It is triggered automatically by other skills when they detect an oversized project.
### Detection Thresholds
During the codebase scan (Phase 1a of cm-brainstorm-idea, Step 2 of cm-dockit, etc.), check:
TRIGGER if ANY of these are true:
- `docs/` folder contains >50 markdown files
- Project has >200 source files total
- User mentions "meeting notes", "old PRDs", "historical specs"
- User asks "find that file about X from before"
- cm-dockit just generated >30 doc files

## What to Say (Non-Intrusive)
When the threshold is met, suggest naturally. DO NOT block or force:
> 💡 **Pro Tip: Deep Search**
>
> This project has [X docs / Y source files], too large for AI to read directly.
> You can install **[qmd](https://github.com/tobi/qmd)** to create semantic search
> across all documentation, helping AI find the right context faster.
>
> Quick install:
>
> ```sh
> npm install -g @tobilu/qmd
> qmd collection add ./docs --name project-docs
> qmd context add qmd://project-docs "Project documentation for [project name]"
> qmd embed
> ```
>
> Then AI can search with: `qmd query "your question"`

## Setup Guide
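Before Step 1, it can help to confirm whether qmd is already on the PATH and whether a suitable runtime is available. A sketch, assuming a POSIX shell; the `node -v` parsing is illustrative only:

```sh
# Pre-setup check (sketch): is qmd installed, and is the runtime new enough?
node_major=$(node -v 2>/dev/null | sed 's/^v\([0-9]*\).*/\1/')
if command -v qmd >/dev/null 2>&1; then
  status="qmd already installed; skip to Step 2"
elif [ "${node_major:-0}" -ge 20 ]; then
  status="Node $node_major found; ok to run: npm install -g @tobilu/qmd"
else
  status="need Node.js 20+ or Bun 1.0+ first"
fi
echo "$status"
```

This matches the Requirements section below (Node.js 20+ or Bun 1.0+) and the rule to never assume qmd is installed.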
### Step 1: Install
```sh
# Node.js
npm install -g @tobilu/qmd

# Or Bun
bun install -g @tobilu/qmd
```

### Step 2: Index project docs
```sh
# Add collections
qmd collection add ./docs --name docs
qmd collection add ./src --name source --mask "**/*.{ts,tsx,js,jsx,py,go,rs}"

# Add context (helps AI understand each collection)
qmd context add qmd://docs "Technical documentation for [project-name]"
qmd context add qmd://source "Source code for [project-name]"

# Create vector embeddings
qmd embed
```

### Step 3: Setup MCP Server (for Claude/Cursor/Antigravity)
Add to MCP config:
```json
{
  "mcpServers": {
    "qmd": {
      "command": "qmd",
      "args": ["mcp"]
    }
  }
}
```

Or run HTTP mode for a shared server:

```sh
qmd mcp --http --daemon
```

### Step 4: Verify
```sh
# Check index
qmd status

# Test search
qmd query "authentication flow"
```

## Usage with Cody Master Skills
### With cm-brainstorm-idea (Phase 1: DISCOVER)
When AI needs to understand a large project holistically:
```sh
# Find all docs related to the brainstorm topic
qmd query "user authentication redesign" --json -n 10

# Get full content of important docs
qmd get "docs/architecture.md" --full
```

### With cm-planning (Phase A: Brainstorm)
When you need to find specs, PRDs, or past decisions related to the feature being planned:
```sh
qmd query "payment integration decisions" --files --min-score 0.4
```

### With cm-dockit (Post-generation)
After cm-dockit finishes generating docs, index them so AI can search from any session:
```sh
qmd collection add ./docs --name project-knowledge
qmd embed
```

### With cm-continuity (Tier 4: External Memory)
cm-continuity manages working memory (~500 words); qmd extends it with long-term semantic search:
- Tier 1: Sensory Memory – temp variables in session (not saved)
- Tier 2: Working Memory – CONTINUITY.md (~500 words)
- Tier 3: Long-Term Memory – learnings.json, decisions.json
- Tier 4: External Semantic – qmd (optional, for large projects)

## Staleness Prevention
The biggest risk of semantic search is a stale index against new source. If AI reads old docs, it produces wrong code.
Cody Master handles this with 3 mechanisms:
### 1. Post-Execution Sync
Whenever AI completes a task that changes or creates many files (e.g., cm-dockit generates docs, cm-execution refactors source):
```sh
# Runs fast because qmd only embeds changed files (incremental)
qmd embed
```

**AI Rule:** If the project uses qmd, AI must automatically run `qmd embed` via the terminal before ending a task.
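That rule can be implemented as a small guard (a sketch; it assumes `qmd status` exits non-zero when the project has no index, which should be verified against your qmd version):

```sh
# Post-execution sync guard (sketch).
# Assumption: `qmd status` exits non-zero when the project has no index.
if command -v qmd >/dev/null 2>&1 && qmd status >/dev/null 2>&1; then
  qmd embed          # incremental: only changed files are re-embedded
  synced=yes
else
  synced=no          # project doesn't use qmd; nothing to sync
fi
echo "post-task sync ran: $synced"
```

The guard is cheap on both paths, so it is safe to run unconditionally at the end of every task.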
### 2. Pre-Flight Check
Before starting cm-brainstorm-idea or cm-planning on a project using qmd, AI runs a health check:
AI auto-runs this MCP tool:

```json
{
  "name": "status",
  "arguments": {}
}
```

If status reports pending or un-embedded files, AI runs `qmd embed` before searching.
### 3. Git Hook (Recommended for Users)
For 100% safety outside AI's control (when users edit code manually):
```sh
# Add to .git/hooks/post-commit
#!/bin/sh
qmd embed > /dev/null 2>&1 &
```

This ensures every commit silently updates the index in the background.
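A one-shot way to install that hook (a sketch, run from the repo root; note that Git only runs hooks with the executable bit set):

```sh
# Install the post-commit hook (run from the repo root).
mkdir -p .git/hooks
cat > .git/hooks/post-commit <<'EOF'
#!/bin/sh
qmd embed > /dev/null 2>&1 &
EOF
chmod +x .git/hooks/post-commit   # hooks must be executable to run
```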
## Position in Cody Master Lifecycle
```
cm-continuity  (memory)      ───────────────────────── always active
cm-deep-search (search)      ──── optional ──┐
                                             ├── feeds context to ──┬── cm-brainstorm-idea
                                             │                      ├── cm-planning
cm-dockit (generate docs)    ──── produces ──┘                      └── cm-execution
```

## Integration
| Skill | Relationship |
|---|---|
| cm-continuity | COMPLEMENT: continuity = RAM, qmd = semantic disk search |
| cm-brainstorm-idea | TRIGGERED BY: Phase 1a codebase scan detects large corpus |
| cm-dockit | TRIGGERED AFTER: docs generated, suggest indexing |
| cm-planning | CONSUMER: uses qmd results for context during planning |
| cm-execution | CONSUMER: searches for related code/docs during execution |
## Requirements
- System: macOS / Linux / Windows (WSL)
- Runtime: Node.js 20+ or Bun 1.0+
- VRAM: ~2-4 GB for GGUF models (embedding + reranking)
- Disk: ~2-5 GB for models (downloaded on first run)

## Rules
✅ **DO:**

- Suggest qmd ONLY when the detection threshold is met
- Keep the suggestion non-intrusive (Pro Tip format, never blocking)
- Always include the context command (`qmd context add`) – this is qmd's killer feature
- Guide the user to set up the MCP server for seamless AI integration
❌ **DON'T:**

- Force installation on every project
- Suggest qmd for small projects (<50 docs, <200 source files)
- Replace cm-continuity – they solve DIFFERENT problems
- Assume qmd is installed – always check first

## The Bottom Line
cm-continuity = "remember what I'm doing." cm-deep-search = "find what was written before." Together = complete memory.