v0.8 addresses a correctness and cost problem that scales with project age. ADD’s learning system is append-only — every /add:verify, /add:tdd-cycle, /add:deploy, and /add:away writes a checkpoint entry. Every subsequent skill reads the full JSON into context during its pre-flight. On new projects that’s fine. On 6-month-old projects it’s punishing, and on 1-year-old projects it’s broken — large files hit Read-tool caps and entries silently drop.
tl;dr — A jq-based PostToolUse hook regenerates a compact learnings-active.md whenever the JSON is written. Agents read the small pre-filtered view instead of the full JSON. The canonical JSON is never modified. A three-layer fallback means no data is ever lost. Install v0.8 and the migration runs automatically from any version ≥ 0.5.0.
The problem: autoload grows monotonically
Before v0.8, every ADD skill invocation did something like this during pre-flight:
1. Read .add/learnings.json # full file into context
2. Apply stack filter # in-context
3. Apply category filter # in-context
4. Rank by severity + date # in-context
5. Cap at top 10 # in-context
6. Proceed with skill # finally
Steps 2–5 were implemented as 60 lines of filtering instructions in an autoloaded rule — so the filtering logic was in context on top of the data being filtered. And because the JSON is append-only, the file grew without bound:
| Project | Entries | JSON size | Tokens | % of 200K context |
|---|---|---|---|---|
| ADD (this repo) | 34 | 19 KB | ~4,820 | 2.4% |
| Projected at 200 | 200 | ~113 KB | ~28,000 | 14.0% |
| Projected at 500 | 500 | ~280 KB | ~70,000 | 35.0% |
At 500 entries, more than a third of the context window is consumed just reading learnings — before the skill starts its actual work. Worse: at ~200 entries in a real project, the file exceeded the Read-tool’s per-read token cap and entries at the tail of the JSON silently failed to load. A performance issue quietly became a correctness issue.
The fix: move filtering out of the LLM context
v0.8 introduces a pre-filtered companion file. Instead of reading the full JSON, agents read learnings-active.md — the top entries already sorted by severity and date, grouped by category, plus a one-line index of everything else so agents can spot tail entries without pulling the whole file.
# Active Learnings (15 of 200)
> Pre-filtered by severity and date. Full data: .add/learnings.json
### anti-pattern
- **[critical]** Bare command names cause namespace drift (L-016, 2026-02-08)
Plugin files are Claude's instructions — whatever pattern is in the
files, Claude reproduces.
### technical
- **[high]** Namespace fix: 205 bare references across 30 files
(L-026, 2026-02-08)
...
## Index (185 more — title only, read JSON for full detail)
- [medium] L-007 process: Dog-fooding catches issues quickly (2026-02-08)
- [medium] L-002 technical: Local marketplace cache sync (2026-02-08)
- [low] L-011 architecture: Hill chart maps to ADD positions (2026-02-07)
...
Regeneration is driven by a PostToolUse hook: whenever .add/learnings.json or ~/.claude/add/library.json gets written, a dispatcher script runs filter-learnings.sh, which runs jq to sort, deduplicate, and cap the result. The LLM does none of the filtering work.
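The core jq step might look roughly like the sketch below. This is a hypothetical reconstruction, not the shipped script: the function name and the field names (`entries`, `id`, `severity`, `date`, `archived`) are assumptions based on the behavior described in this post.

```shell
#!/bin/sh
# Hypothetical sketch of filter-learnings.sh's jq step: sort by
# severity tier, newest first within a tier, dedupe, and cap.
filter_learnings() {
  # $1 = path to learnings.json, $2 = active cap (defaults to 15)
  jq --argjson cap "${2:-15}" '
    [.entries[] | select(.archived != true)]   # drop archived entries
    | unique_by(.id)                           # deduplicate by id
    | sort_by(
        ({"critical":0,"high":1,"medium":2,"low":3}[.severity] // 4),
        (.date | sub("-"; ""; "g") | tonumber * -1)   # newest first
      )
    | .[:$cap]                                 # cap at active_cap
  ' "$1"
}
```

From there the dispatcher would render the capped array as learnings-active.md; the canonical JSON is only ever read, never written.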
The numbers
| Project size | Before (full JSON) | After (active view) | Reduction |
|---|---|---|---|
| 34 entries (ADD today) | ~4,820 tokens | ~1,865 tokens | 62% |
| 200 entries (projected) | ~28,000 tokens | ~5,300 tokens | 82% |
| 500 entries (projected) | ~70,000 tokens | ~11,750 tokens | 83% |
All token counts were measured with anthropic-tokenizer against the real Claude tokenizer, not char-count heuristics.
Data safety: three-layer fallback
The JSON is canonical and is never modified by the filter. If the optimized path fails, agents degrade gracefully:
1. Read learnings-active.md ← fast path (pre-filtered)
│ missing?
▼
2. Run filter-learnings.sh ← generate it on the fly
│ fails? (jq missing, parse error)
▼
3. Read full learnings.json ← pre-v0.8 behavior
+ filter in-context (cap at 10)
In the worst case — no jq, no active view, nothing cached — agents fall back to the v0.7 behavior. Learnings cannot be lost.
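As shell, the chain above is roughly the following. This is a sketch: the file paths match the post, but the interface of filter-learnings.sh and the helper's name are assumptions.

```shell
#!/bin/sh
# Sketch of the three-layer fallback. filter-learnings.sh's
# command-line interface is an assumption.
read_learnings() {
  # $1 = active view (learnings-active.md), $2 = canonical learnings.json
  if [ -r "$1" ] && [ -s "$1" ]; then
    cat "$1"                    # layer 1: pre-filtered fast path
  elif command -v jq >/dev/null 2>&1 \
       && sh filter-learnings.sh "$2" > "$1" 2>/dev/null; then
    cat "$1"                    # layer 2: regenerate on the fly
  else
    cat "$2"                    # layer 3: full JSON, filter in-context (v0.7)
  fi
}
```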
New user-facing pieces
/add:learnings skill — three subcommands:
- `migrate` — generate active views (and convert pre-v0.4 markdown to JSON if needed)
- `archive` — interactive review of low/medium-severity entries older than the configured threshold
- `stats` — counts, sizes, and realized savings per tier
All three support --dry-run.
Configurable thresholds in .add/config.json:
"learnings": {
"active_cap": 15,
"archival_days": 90,
"archival_max_severity": "medium"
}
The migration injects these defaults for upgrading projects. High-volume codebases can tune them without editing plugin files.
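A script consuming the config might read each threshold with jq's `//` alternative operator, so the documented defaults apply when the key or file is missing. The helper below is hypothetical, not shipped code:

```shell
#!/bin/sh
# Hypothetical helper: look up a learnings threshold, falling back
# to the documented default when the key or the config file is absent.
learnings_cfg() {
  # $1 = config path, $2 = key under "learnings", $3 = default
  jq -r --arg d "$3" ".learnings.${2} // \$d" "$1" 2>/dev/null || echo "$3"
}
```

Usage would look like `learnings_cfg .add/config.json active_cap 15`.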
archived field on learning entries — set during retro to exclude stale entries from the active view. The entry stays in the JSON for audit history; it just drops out of what agents see. Critical- and high-severity entries can only be archived with explicit human selection.
run_hook migration action type — new in the migration engine. Lets migrations.json invoke a plugin hook script during version migration. Used here to generate the active view on upgrade, reusable for any future on-demand-generation migration.
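Illustratively, a migrations.json step using the new action type could look like this — the field names below are guesses at the schema, which this post doesn't show:

```json
{
  "from": "0.7.0",
  "to": "0.8.0",
  "actions": [
    { "type": "run_hook", "hook": "hooks/post-write.sh" }
  ]
}
```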
How it was designed: a second community-driven release
v0.8 is the second consecutive release driven by a community contribution. Tomasz (@tdmitruk) opened PR #7 with the full proposal: benchmarks on real projects, the filter script, the skill, fallback chain, and migration glue. The first review raised five structural concerns (wrong layer — edits were in plugins/add/ instead of the v0.7 core/; deleted NEVER markers in human-collaboration.md; skill-side audit gaps; Codex parity; caching behavior under /add:cycle).
Tomasz relocated everything into core/, re-ran scripts/compile.py to regenerate plugins/add/ and dist/codex/, completed the migration chain so projects at any version from 0.5.0 can reach 0.8.0, and got all three CI gates green (compile-drift, frontmatter validation, rule-boundary diff). A follow-up review raised one blocker (the PostToolUse glob didn’t match ~/.claude/add/library.json) and four non-blocking nits. He fixed the blocker and all four nits in a single push: glob extension, inline case logic extracted to a post-write.sh dispatcher, fixture-based tests for filter-learnings.sh, configurable thresholds, and a readable if/else replacing a dense parameter expansion.
The collaboration pattern — contributor brings a proposal with real measurements, maintainer reviews with specific structural asks, contributor relocates and exceeds the asks — is becoming the default shape of ADD releases. v0.6 was driven by four contributors. v0.7 was designed by a 5-agent competing-swarm review. v0.8 is @tdmitruk’s second shipped contribution, with his rules-and-knowledge context-reduction PR (#6) queued for v0.9.
Installing / upgrading
claude plugin install add # fresh install
claude plugin update add # from any 0.5.0+ install
On first skill invocation after upgrade, the migration generates .add/learnings-active.md and ~/.claude/add/library-active.md, and you’ll see a one-time notification. No manual steps, no file edits, no lost data. If something misbehaves, the fallback chain means agents still have access to the full JSON.
Verify the signed release tag:
curl -fsSL https://github.com/MountainUnicorn.gpg | gpg --import
git tag --verify v0.8.0
# Good signature from "Anthony Brooke <anthony.g.brooke@gmail.com>"
Tagged: v0.8 · learnings · context-optimization · community · hooks