# Managing Context Windows
Agent context windows are finite. Every token counts. Smart retrieval keeps your agent sharp.
## The Problem
Large language models have fixed context windows. Fill them with irrelevant knowledge and the agent loses focus. Fill them with too little and the agent lacks the wisdom it needs. Context engineering balances these forces.
## Retrieve Only What Matters
The first rule: do not dump the entire graph into the prompt. Use specific queries and filters to pull only the knowledge relevant to the current task.
```json
{
  "tool": "retrieve_memories",
  "arguments": {
    "query": "payment validation edge cases",
    "context_types": { "error": ["payment_validation"] },
    "limit": 5
  }
}
```

Five highly relevant facts beat fifty loosely related ones.
## The Graduation Chain
LocusGraph's admission pipeline naturally compresses knowledge through graduation:
- Mistake — an error event records what went wrong
- Pattern — repeated errors get reinforced, boosting confidence
- Skill — the agent stores the learned fix as a skill event
Each step is denser and more actionable than the last. When your agent retrieves a `skill:` context, it gets the distilled wisdom without needing the full history of mistakes that led there.
Design your agent to graduate knowledge. When it solves the same problem three times, store a skill event that summarizes the solution. Link it to the original errors with extends.
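A graduated skill might be stored like this. The shape follows the `store_event` call shown later in this guide; the `context_id` values, the `skill` event kind, and the error context ids in `extends` are illustrative assumptions, not documented identifiers:

```json
{
  "tool": "store_event",
  "arguments": {
    "context_id": "skill:payment_null_check",
    "event_kind": "skill",
    "source": "agent",
    "payload": {
      "topic": "payment validation fix",
      "value": "Validate payment payloads for null fields before middleware processing; reject early instead of letting malformed requests through."
    },
    "extends": ["error:payment_validation_001", "error:payment_validation_002"]
  }
}
```

The `extends` links preserve provenance: if the skill is ever contradicted, the original error events are still reachable.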
## Summarization
For long-running agents, summarize periodically. Store a summary event that captures the key points from a session or project phase, then link it to the originals with extends.
```json
{
  "tool": "store_event",
  "arguments": {
    "context_id": "session:2025_03_19_summary",
    "event_kind": "observation",
    "source": "agent",
    "payload": {
      "topic": "session summary",
      "value": "Fixed 3 payment bugs. Root cause was missing null checks in middleware. Added validation layer."
    },
    "extends": ["session:2025_03_19"]
  }
}
```

Future retrievals pull the summary instead of replaying the entire session.
## Pruning Stale Knowledge
You do not need to manually delete old knowledge. The wisdom graph handles this through confidence scoring:
- Contradicted loci lose confidence and drop in retrieval ranking
- Unreinforced loci stay at baseline confidence and get outranked by reinforced knowledge
- Reinforced loci rise to the top naturally
The graph acts as a living filter. Relevant, validated knowledge surfaces. Stale or incorrect knowledge fades.
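LocusGraph's actual scoring is internal, but the behavior described above can be sketched in a few lines. This is an illustrative model, not the library's implementation: the `Locus` class, the baseline of 0.5, and the 0.1 step size are all assumptions chosen for the example.

```python
from dataclasses import dataclass

BASELINE = 0.5  # assumed starting confidence for a new locus
STEP = 0.1      # assumed adjustment per reinforcement/contradiction

@dataclass
class Locus:
    topic: str
    confidence: float = BASELINE

    def reinforce(self) -> None:
        # Repeated validation raises confidence (capped at 1.0).
        self.confidence = min(1.0, self.confidence + STEP)

    def contradict(self) -> None:
        # Contradiction lowers confidence (floored at 0.0).
        self.confidence = max(0.0, self.confidence - STEP)

def rank(loci: list[Locus]) -> list[Locus]:
    """Order loci by confidence, highest first, as retrieval would."""
    return sorted(loci, key=lambda l: l.confidence, reverse=True)

a = Locus("null checks in middleware")
b = Locus("retry on timeout")
c = Locus("legacy config flag")

a.reinforce(); a.reinforce()   # validated twice -> 0.7
c.contradict()                 # observed to be wrong -> 0.4

print([l.topic for l in rank([a, b, c])])
# reinforced knowledge surfaces first; contradicted knowledge sinks
```

No deletion happens anywhere: stale knowledge simply loses the ranking contest.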
## Guidelines
| Strategy | When to Use |
|---|---|
| Tight scoping + low limit | Focused tasks with clear context needs |
| Broad query + moderate limit | Exploration and discovery phases |
| Summarization | End of sessions or project milestones |
| Graduation (mistake to skill) | Recurring patterns the agent should internalize |