# Continuity: Working Memory Protocol
Context persistence across sessions. Mistakes captured. Learnings applied. Inspired by Loki Mode's CONTINUITY.md protocol (Autonomi).
## When to Use
ALWAYS. This is a background protocol, not an explicit invocation.
- Start of every session: Read `.cm/CONTINUITY.md` to orient yourself
- End of every session: Update `.cm/CONTINUITY.md` with progress
- On error: Record it in the Mistakes & Learnings section
- On key decision: Record it in the Key Decisions section
## Setup
```bash
# Initialize working memory for current project
cm continuity init

# Check current state
cm continuity status

# View captured learnings
cm continuity learnings
```

## The Protocol
### AT THE START OF EVERY SESSION:
1. Read .cm/CONTINUITY.md to understand current state
2. Read "Mistakes & Learnings" to avoid past errors
3. Check "Next Actions" to determine what to do
4. Reference the Active Goal throughout your work

### DURING WORK:
PRE-ACT ATTENTION CHECK (before every significant action):
- Re-read Active Goal
- Ask: "Does my planned action serve this goal?"
- Ask: "Am I solving the original problem, not a tangent?"
- If DRIFT is detected → log it → return to the goal

### AT THE END OF EVERY SESSION:
1. Update "Just Completed" with accomplishments
2. Update "Next Actions" with remaining work
3. Record any new "Mistakes & Learnings"
4. Record any "Key Decisions" made
5. Update "Files Modified" list
6. Set currentPhase and timestamp

### ON ERROR (Self-Correction Loop):
```
ON_ERROR:
  1. Capture error details (stack trace, context)
  2. Analyze root cause (not just symptoms)
  3. Write learning to CONTINUITY.md "Mistakes & Learnings"
  4. Update approach based on learning
  5. Retry with corrected approach
  6. Max 3 retries per error pattern before ESCALATE
```

## CONTINUITY.md Template
```markdown
# CodyMaster Working Memory

Last Updated: [ISO timestamp]
Current Phase: [planning|executing|testing|deploying|reviewing]
Current Iteration: [number]
Project: [project name]

## Active Goal
[What we're currently trying to accomplish (1-2 sentences max)]

## Current Task
- ID: [task-id from dashboard]
- Title: [task title]
- Status: [in-progress|blocked|reviewing]
- Skill: [cm-skill being used]
- Started: [timestamp]

## Just Completed
- [Most recent accomplishment with file:line references]
- [Previous accomplishment]
- [etc. (last 5 items)]

## Next Actions (Priority Order)
1. [Immediate next step]
2. [Following step]
3. [etc.]

## Active Blockers
- [Any current blockers or waiting items]

## Key Decisions This Session
- [Decision]: [Rationale] ([timestamp])

## Mistakes & Learnings
### Pattern: Error → Learning → Prevention
- **What Failed:** [Specific error that occurred]
- **Why It Failed:** [Root-cause analysis]
- **How to Prevent:** [Concrete action to avoid this in future]
- **Timestamp:** [When learned]
- **Agent:** [Which agent]
- **Task:** [Which task ID]

## Working Context
[Critical information for current work: API keys/paths,
architecture decisions, patterns being followed]

## Files Currently Being Modified
- [file path]: [what we're changing]
```

## 4-Tier Memory System (Brain-Inspired)
```
Tier 1: SENSORY MEMORY (seconds; within the current tool call)
  → Internal variables, intermediate results
  → NEVER written to file; discarded when the action completes
  → Example: "File X has 200 lines" needs no memory next session

Tier 2: WORKING MEMORY (current session to 7 days)
  → CONTINUITY.md, the active scratchpad
  → Auto-rotates: entries > 7 days promote to Tier 3 or decay
  → Max 500 words (~400 tokens)

Tier 3: LONG-TERM MEMORY (30+ days, only if reinforced)
  → .cm/learnings.json: error patterns with TTL + scope
  → .cm/decisions.json: architecture decisions with supersedence
  → Entries MUST be reinforced (same pattern ≥ 2x) to survive
  → Decay: auto-archive if not relevant after TTL expires

Tier 4: EXTERNAL SEMANTIC MEMORY (optional; for large projects)
  → tobi/qmd: BM25 + Vector + LLM re-ranking, 100% local
  → Indexes entire docs/, src/, meeting-notes folders
  → AI queries via MCP: qmd query "keyword" → relevant snippets
  → See the cm-deep-search skill for setup & detection thresholds
  → ONLY suggested when a project has >50 docs or >200 source files
```

- CONTINUITY.md = "what am I doing NOW?"
- learnings.json = "what mistakes should I avoid?"
- decisions.json = "what architecture rules apply?"
- qmd (optional) = "find what was written across hundreds of docs"
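The tier boundaries map to concrete files, so a session start is just three local reads. Here is a minimal sketch of that load step; the `load_memory` helper and its empty-default behavior are my own illustration, not part of the `cm` CLI:

```python
import json
from pathlib import Path

def load_memory(root: str = ".cm"):
    """Read Tier 2 (CONTINUITY.md) and Tier 3 (learnings/decisions)
    at session start. Missing files yield empty defaults so a fresh
    project still orients cleanly."""
    base = Path(root)
    continuity = base / "CONTINUITY.md"
    text = continuity.read_text() if continuity.exists() else ""

    def read_json(name: str):
        p = base / name
        return json.loads(p.read_text()) if p.exists() else []

    return text, read_json("learnings.json"), read_json("decisions.json")
```

Tier 1 never touches disk, and Tier 4 (qmd) is queried over MCP rather than loaded, so only these three paths are read eagerly.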
## Memory Audit Protocol (Auto, Every Session Start)
When reading CONTINUITY.md at session start, SIMULTANEOUSLY run audit:
### Step 1: Decay Check
```
Scan .cm/learnings.json:
  For each learning where status == "active":
    daysSinceRelevant = today - lastRelevant
    IF daysSinceRelevant > ttl:
      → Set status = "archived"
      → Log: "Archived learning L{id}: {error} (TTL expired)"
    IF reinforceCount ≥ 3 AND ttl < 90:
      → Extend ttl = 90 (proven pattern)
    IF reinforceCount ≥ 5 AND ttl < 180:
      → Extend ttl = 180 (fundamental knowledge)
```

### Step 2: Conflict Detection
```
Scan .cm/decisions.json:
  For each pair of decisions with the same scope:
    IF the decisions contradict each other:
      → Older decision: set supersededBy = newer.id, status = "superseded"
      → Log: "Superseded D{old.id} by D{new.id}"
    IF ambiguous (can't auto-resolve):
      → Flag in CONTINUITY.md Active Blockers
      → Ask the user to clarify
```

### Step 2b: Integrity Scan
```
Scan learnings for red flags that may CAUSE bugs:
  For each active learning in scope:
    IF lastRelevant > 30 days ago AND reinforceCount == 0:
      → Flag as LOW_CONFIDENCE (read but verify before applying)
    IF the prevention pattern conflicts with current codebase patterns:
      → Flag as SUSPECT (do NOT apply blindly; verify first)
    IF multiple learnings for the same scope have conflicting preventions:
      → Flag as CONFLICT (resolve immediately: keep newer, invalidate older)

On flags found:
  LOW_CONFIDENCE → Read, but treat as a suggestion, not a rule
  SUSPECT        → Compare with the actual code before following
  CONFLICT       → Invalidate the older, keep the newer, log the resolution
```

### Step 3: Scope-Filtered Reading
```
When executing a task for module X:
  ONLY load learnings where:
    scope == "global" OR scope == "module:X" OR scope starts with "file:src/X/"
  SKIP learnings for other modules entirely.

Token savings: read 5 relevant learnings (250 tokens)
instead of 50 total learnings (2,500 tokens).
```

### Step 4: Reinforcement (Anti-Duplicate)
```
When recording a new error/learning:
  IF a similar learning already exists in learnings.json:
    → DO NOT create a duplicate
    → UPDATE the existing entry: reinforceCount++, lastRelevant = today, reset TTL
    → Log: "Reinforced L{id} (count: {reinforceCount})"
  IF no similar learning exists:
    → CREATE a new entry with scope, ttl=30, reinforceCount=0
```

## .cm/learnings.json Format (v2, with Smart Fields)
```json
[
  {
    "id": "L001",
    "date": "2026-03-21",
    "error": "i18n keys missing in th.json",
    "cause": "Batch extraction skipped Thai locale",
    "prevention": "Always run i18n-sync test after each batch",
    "scope": "module:i18n",
    "ttl": 30,
    "reinforceCount": 0,
    "lastRelevant": "2026-03-21",
    "status": "active"
  }
]
```

| Field | Purpose |
|---|---|
| scope | Where this applies: global / module:{name} / file:{path} |
| ttl | Days until auto-archive (default: 30) |
| reinforceCount | Times the pattern has repeated (+1 each hit) |
| lastRelevant | Last date this learning was accessed or reinforced |
| status | active / archived / invalidated / corrected |

Status meanings:
- active → Trusted; applied when in scope
- archived → TTL expired; retrievable on demand
- invalidated → Proven wrong (caused a bug); NEVER read again
- corrected → Was wrong, has been fixed; read with caution
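Steps 1, 3, and 4 of the audit all operate on exactly these fields. A Python sketch of how they might look (function names, the date handling, and the exact-match notion of a "similar" learning are my assumptions; a real implementation would match errors more fuzzily):

```python
import datetime as dt

def audit_learnings(learnings: list, today: dt.date) -> list:
    """Step 1 decay check: archive past-TTL entries, extend proven ones."""
    for l in learnings:
        if l["status"] != "active":
            continue
        last = dt.date.fromisoformat(l["lastRelevant"])
        if (today - last).days > l["ttl"]:
            l["status"] = "archived"          # retrievable on demand
        elif l["reinforceCount"] >= 5 and l["ttl"] < 180:
            l["ttl"] = 180                    # fundamental knowledge
        elif l["reinforceCount"] >= 3 and l["ttl"] < 90:
            l["ttl"] = 90                     # proven pattern
    return learnings

def in_scope(learning: dict, module: str) -> bool:
    """Step 3 filter: global, this module, or this module's files only
    (the src/ path convention mirrors the doc's example)."""
    s = learning["scope"]
    return (s == "global" or s == f"module:{module}"
            or s.startswith(f"file:src/{module}/"))

def record_learning(learnings: list, error: str, scope: str,
                    today: dt.date) -> dict:
    """Step 4: reinforce an existing entry instead of duplicating it."""
    for l in learnings:
        if l["error"] == error and l["scope"] == scope:
            l["reinforceCount"] += 1
            l["lastRelevant"] = today.isoformat()  # resets the TTL clock
            return l
    entry = {"id": f"L{len(learnings) + 1:03d}", "date": today.isoformat(),
             "error": error, "scope": scope, "ttl": 30, "reinforceCount": 0,
             "lastRelevant": today.isoformat(), "status": "active"}
    learnings.append(entry)
    return entry
```

The point of keeping all three operations on one flat JSON list is that the whole audit stays cheap enough to run at every session start.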
## .cm/meta-learnings.json Format (Memory Self-Healing Log)
When memory itself causes a bug, record a meta-learning:
```json
[
  {
    "id": "ML001",
    "type": "memory-caused-bug",
    "affectedLearning": "L003",
    "action": "invalidated",
    "reason": "Prevention pattern conflicts with new codebase architecture",
    "bugDescription": "Deploy failed because learning suggested fetch but project uses axios",
    "date": "2026-03-21"
  }
]
```

Meta-learnings are the system learning about its own mistakes. They prevent the same bad-memory pattern from recurring.
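Invalidating the offending learning and logging the meta-learning naturally happen as one step. A sketch under the two formats above (the helper name and ID scheme are illustrative assumptions):

```python
import datetime as dt

def invalidate_learning(learnings: list, meta: list, learning_id: str,
                        reason: str, bug: str, today: dt.date) -> dict:
    """Mark a memory-caused-bug learning 'invalidated' (never read again)
    and append the matching meta-learning entry."""
    for l in learnings:
        if l["id"] == learning_id:
            l["status"] = "invalidated"
            break
    entry = {
        "id": f"ML{len(meta) + 1:03d}",
        "type": "memory-caused-bug",
        "affectedLearning": learning_id,
        "action": "invalidated",
        "reason": reason,
        "bugDescription": bug,
        "date": today.isoformat(),
    }
    meta.append(entry)
    return entry
```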
## .cm/decisions.json Format (v2)
```json
[
  {
    "id": "D001",
    "date": "2026-03-21",
    "decision": "Use React Hook Form over Formik",
    "rationale": "Better performance with uncontrolled components",
    "scope": "module:forms",
    "supersededBy": null,
    "status": "active"
  }
]
```

| Field | Purpose |
|---|---|
| scope | Where this decision applies |
| supersededBy | ID of the newer decision that replaces this one (null if current) |
| status | active / superseded |
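With these two fields, supersedence (audit Step 2) reduces to pointer updates; detecting that two same-scope decisions actually contradict is left to the agent. A hypothetical sketch (helper names are mine):

```python
def supersede(decisions: list, old_id: str, new_id: str) -> dict:
    """Resolve a same-scope contradiction: the older decision points at
    its replacement and is never applied again."""
    old = next(d for d in decisions if d["id"] == old_id)
    old["supersededBy"] = new_id
    old["status"] = "superseded"
    return old

def active_decisions(decisions: list, scope: str) -> list:
    """Only active decisions for the current scope should guide code."""
    return [d for d in decisions
            if d["status"] == "active" and d["scope"] == scope]
```

Keeping superseded entries (rather than deleting them) preserves the decision history while guaranteeing they never reach the agent's working context.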
## Decay Timeline (Ebbinghaus-Inspired)
```
First recorded:            TTL = 30 days
Reinforced 1x (count=1):   TTL resets to 30 from today
Reinforced 2x (count=2):   TTL = 60 days (pattern emerging)
Reinforced 3x+ (count≥3):  TTL = 90 days (proven pattern)
Reinforced 5x+ (count≥5):  TTL = 180 days (fundamental knowledge)
Not reinforced after TTL:  status → "archived" (retrievable on demand)
```

Inspired by the Ebbinghaus Forgetting Curve: un-reinforced memories decay; repeatedly reinforced memories become long-term knowledge.
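The timeline is just a step function of reinforceCount. A one-function sketch (the name is mine):

```python
def ttl_for(reinforce_count: int) -> int:
    """TTL in days for a given reinforcement count, per the decay timeline."""
    if reinforce_count >= 5:
        return 180   # fundamental knowledge
    if reinforce_count >= 3:
        return 90    # proven pattern
    if reinforce_count >= 2:
        return 60    # pattern emerging
    return 30        # first recorded / reset
```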
## Scope Tagging Rules (For All Skills)
When writing to Mistakes & Learnings or Key Decisions, ALWAYS tag scope:
```
scope: "global"         → Applies to the entire project
                          (e.g., "Always run tests before deploy")
scope: "module:{name}"  → Applies to a specific module only
                          (e.g., "module:auth", "module:i18n")
scope: "file:{path}"    → Applies to one file only
                          (e.g., "file:src/api/routes.ts")

RULE: When in doubt, choose the SMALLEST scope.
      file > module > global

WHY: Smaller scope = less noise = the AI only reads what's relevant.
```

## Integration
| Skill | How it integrates |
|---|---|
| cm-execution | RARV Mode D reads CONTINUITY.md in the REASON phase |
| cm-planning | Sets the Active Goal and Next Actions |
| cm-debugging | Records errors in Mistakes & Learnings |
| cm-quality-gate | VERIFY phase updates CONTINUITY.md |
| cm-code-review | Records review feedback as learnings |
| cm-deep-search | Tier 4: extends memory with semantic search for large codebases |
## Rules
✅ DO:
- Read CONTINUITY.md at session start (ALWAYS)
- Run Memory Audit at session start (decay + conflicts + scope filter)
- Update CONTINUITY.md at session end (ALWAYS)
- Tag EVERY learning/decision with scope (global/module/file)
- Reinforce existing learnings instead of creating duplicates
- Keep CONTINUITY.md under 500 words (rotate to Tier 3)
- Be specific: "Fixed auth bug in login.ts:42" not "Fixed stuff"
❌ DON'T:
- Skip Memory Audit ("I'll read everything, it's fine")
- Write learnings without scope ("it applies everywhere" = almost never true)
- Create duplicate learnings (reinforce existing ones instead)
- Let learnings.json grow unbounded (TTL + decay handles this)
- Read ALL learnings regardless of current module (use scope filter)
- Ignore superseded decisions (they cause conflicting code)
- Keep stale context that no longer applies to the current architecture

## The Bottom Line
Your memory is your superpower. Without it, you repeat every mistake forever.