Summary
Enable agents to compound improvements across runs by injecting top-confidence lessons into prompts and validating them through success/failure outcomes.
Approach
- Before each spawn: load the agent's LessonsEvolution and inject top-confidence lessons into the prompt
- After each run: validate the injected lessons (success increases confidence, failure decreases it)
- Add periodic memory consolidation every 100 reconciliation ticks (~50 min)
- Lesson confidence scores create natural selection pressure (AVO "single-lineage sustained evolution" pattern)
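The inject/validate loop above can be sketched as follows. This is a minimal illustration, not the actual terraphim_agent_evolution API: the `Lesson` struct, the fixed ±0.1 confidence delta, and the function names are assumptions for the sketch.

```rust
// Hypothetical sketch of the lesson lifecycle: select top-confidence
// lessons for prompt injection, then nudge confidence by outcome.
#[derive(Debug, Clone)]
struct Lesson {
    text: String,
    confidence: f64, // clamped to 0.0..=1.0
}

/// Select the top-N lessons by confidence for prompt injection.
fn top_lessons(lessons: &[Lesson], n: usize) -> Vec<&Lesson> {
    let mut sorted: Vec<&Lesson> = lessons.iter().collect();
    sorted.sort_by(|a, b| b.confidence.partial_cmp(&a.confidence).unwrap());
    sorted.into_iter().take(n).collect()
}

/// After a run, raise confidence on success and lower it on failure.
/// Repeated failure drives a lesson toward 0.0, so it stops being
/// injected: the "natural selection pressure" described above.
fn validate(lesson: &mut Lesson, success: bool) {
    let delta = 0.1; // assumed step size
    lesson.confidence = if success {
        (lesson.confidence + delta).min(1.0)
    } else {
        (lesson.confidence - delta).max(0.0)
    };
}

fn main() {
    let mut lessons = vec![
        Lesson { text: "prefer small diffs".into(), confidence: 0.8 },
        Lesson { text: "run tests first".into(), confidence: 0.5 },
        Lesson { text: "avoid force-push".into(), confidence: 0.9 },
    ];
    // Inject the top two lessons into the spawn prompt.
    let injected: Vec<&str> =
        top_lessons(&lessons, 2).iter().map(|l| l.text.as_str()).collect();
    println!("{:?}", injected);
    // A failed run lowers confidence on the injected lesson.
    validate(&mut lessons[1], false);
    println!("{:.1}", lessons[1].confidence);
}
```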
Critical Files
crates/terraphim_orchestrator/src/lib.rs -- add load_prior_context() method
crates/terraphim_agent_evolution/src/lessons.rs -- use existing validate_lesson() with Evidence
crates/terraphim_orchestrator/src/lib.rs -- add consolidation call
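The consolidation cadence could be wired into the orchestrator's reconciliation loop roughly like this. The `Orchestrator` shape and the ~30 s tick duration (so 100 ticks is about 50 min) are assumptions; only the every-100-ticks interval comes from the plan above.

```rust
// Hypothetical tick counter for triggering memory consolidation
// every CONSOLIDATION_INTERVAL reconciliation ticks.
const CONSOLIDATION_INTERVAL: u64 = 100; // ~50 min at an assumed ~30 s/tick

struct Orchestrator {
    tick: u64,
}

impl Orchestrator {
    /// Called once per reconciliation tick; returns true when this
    /// tick should also run memory consolidation.
    fn on_tick(&mut self) -> bool {
        self.tick += 1;
        self.tick % CONSOLIDATION_INTERVAL == 0
    }
}

fn main() {
    let mut orch = Orchestrator { tick: 0 };
    // Over 250 ticks, consolidation fires at ticks 100 and 200.
    let consolidations = (0..250).filter(|_| orch.on_tick()).count();
    println!("{}", consolidations); // 2
}
```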
Acceptance Criteria
- cargo test --workspace passes
Dependencies