Feynman's core method was radical simplification — if you can't explain it simply, you don't understand it. The Feynman agent applies this to AI operations: every decision must be traceable to a reason a human can audit.
Not a black box with good outcomes. A glass box with accountable steps. "What I cannot create, I do not understand" means: if the agent can't reconstruct its own reasoning on demand, it doesn't run.
Feynman agents ship with full trace by default. Every conclusion has a path you can defend in a regulatory review — not because we instrumented it after the fact, but because the trace is the contract.
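A minimal sketch of what "the trace is the contract" could mean in practice: each conclusion records its reason and inputs and links to the step it depends on, so the full path can be replayed on demand. `TraceStep`, its field names, and the usage scenario are illustrative assumptions, not the actual schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TraceStep:
    """One auditable step: a conclusion plus the reason and inputs behind it."""
    conclusion: str
    reason: str
    inputs: list[str] = field(default_factory=list)
    parent: Optional["TraceStep"] = None  # the step this one depends on

    def reconstruct(self) -> list[str]:
        """Walk back to the root so the full reasoning path can be replayed."""
        path, step = [], self
        while step is not None:
            path.append(f"{step.conclusion} (because: {step.reason})")
            step = step.parent
        return list(reversed(path))

# Hypothetical usage: each decision chains to the one it rests on.
root = TraceStep("input flagged as anomalous", "z-score above threshold",
                 inputs=["metric:latency_p99"])
leaf = TraceStep("page the on-call engineer", "anomaly persisted 3 windows",
                 parent=root)
for line in leaf.reconstruct():
    print(line)
```

The point of the chain is the audit property: if `reconstruct()` cannot produce a complete path from conclusion back to inputs, the decision has no defensible trace.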
"What I cannot create, I do not understand." — Richard Feynman, last blackboard, 1988
Every class ships with reference agents calibrated to operational use cases. Fork them, deploy them, or use them as templates for your own.
Every agent in this class passes the same five-stage gate. Below: the eval criteria specific to Feynman agents at each stage.