v0.2.2 — Context Injection Budgets

@github-actions github-actions released this 27 Mar 03:44
· 24 commits to main since this release

What changed

Context injection was unbounded — oversized L0 abstracts and relational profiles could flood the SessionStart hook, causing Claude Code to truncate output. This violated continuity's core promise of "shape without weight."

Three-layer budget enforcement

Layer 1 — Prompt discipline (internal/llm/prompts.go)

  • Switched from vague token estimates to explicit character limits
  • L0: "MAXIMUM 150 CHARACTERS" (was "~50-80 tokens")
  • Relational profile: "MAXIMUM 800 CHARACTERS" (was "300 words")

Layer 2 — Input validation (internal/engine/validate.go, relational.go)

  • maxL0Chars: 800 → 200 (~50 tokens, one sentence)
  • maxL1Chars: 12,000 → 2,000 (~500 tokens)
  • New maxRelationalChars: 1,200 (dedicated cap for relational profile)

Layer 3 — Output budget (internal/server/context.go)

  • 4,000 char total budget for entire context block
  • 1,000 char cap on relational profile at render time
  • 200 char cap per L0 item at render time
  • Items fill by score order, stop when budget exhausted
  • All truncations log warnings ("extraction may be drifting") so Layer 3 firing is a visible canary

Market context

Researched competitor budgets before choosing limits:

  • Windsurf: 6K chars/file, 12K total hard cap
  • GitHub Copilot (code review): 4K chars hard cap
  • Aider repo map: 1K tokens default
  • Claude Code MEMORY.md: 200 lines / 25KB cap

Continuity's 4K-char total budget is deliberately conservative: we should be the lightest touch in the room.

Full Changelog: v0.2.1...v0.2.2