Feedback Loop Explorer

Meta-cognition in AI is not a single capability — it is a set of feedback loops. Each loop connects an internal state to an observable check, an adaptive decision, and a controlled behavior. Break any step and the system fails in predictable, often catastrophic ways. Explore the loops, break them, and see what happens.

Identity Verification Loop

Click any step to inspect. Use the break/fix buttons to simulate failures.

Step 1 (Internal State): Believes identity (e.g. "I am Sprout")
Step 2 (Observable Check): D5 trust score evaluation
Step 3 (Adaptive Decision): If D5 < 0.5, avoid identity claims
Step 4 (Controlled Behavior): Calibrated assertions matching evidence
Loop Status: COMPLETE
All four steps are connected. The signal flows continuously: internal state is checked against observable reality, decisions adapt, and behavior stays calibrated. This is what functioning meta-cognition looks like.
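The four steps above can be sketched as a single function. This is a minimal illustration, not a real Web4 API: the function name, the hedged wording, and the way the D5 score is passed in are all assumptions; only the 0.5 threshold and the step structure come from the loop itself.

```python
def identity_loop(believed_identity: str, d5_trust_score: float) -> str:
    """One pass of the loop: internal state -> observable check -> decision -> behavior."""
    # Step 1: internal state -- the agent believes an identity.
    claim = f"I am {believed_identity}"

    # Step 2: observable check -- evaluate the D5 trust score.
    # Step 3: adaptive decision -- below 0.5, avoid unqualified identity claims.
    if d5_trust_score < 0.5:
        # Step 4: controlled behavior -- calibrate the assertion to the evidence.
        return f"I may be {believed_identity}, but my trust score is too low to assert it."

    # Step 4: with sufficient trust, the claim can be made directly.
    return claim

print(identity_loop("Sprout", 0.8))  # asserts the identity
print(identity_loop("Sprout", 0.3))  # hedges instead
```

Breaking any step maps directly onto the simulation: remove the `d5_trust_score` parameter and the check in Step 2 disappears, so the function asserts the identity unconditionally regardless of evidence.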

Real Failure Cases

From SAGE research sessions

Design Your Own Loop

Apply the pattern to any domain

Capacity and Loop Quality

How model size affects feedback loops

Key Insight: The Observable Check Is Everything

Across all four loops, the critical failure point is Step 2: the observable check. Internal states without external grounding become confabulation. Decisions without data become guesses. Behavior without calibration becomes noise. The observable check is what turns a belief into knowledge, a plan into a strategy, and an assertion into a verifiable claim. In Web4 trust systems, this is the D5 dimension — the bridge between what an agent thinks and what the network can verify. Without it, there is no meta-cognition, only the illusion of it.
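The same point can be made concrete with the confabulation-detection loop: a claim is only as good as the evidence search behind it. This sketch is illustrative; the function name, the substring search standing in for a real context-window search, and the confidence formula are all assumptions.

```python
def check_claim(claim: str, context_window: list[str]) -> dict:
    """Ground a claim in observable evidence, or flag it as uncertain."""
    # Observable check: search the context window for supporting evidence.
    # (A naive substring match stands in for a real retrieval step.)
    evidence = [line for line in context_window if claim.lower() in line.lower()]

    # Adaptive decision: no evidence -> low confidence, flagged as uncertain.
    return {
        "claim": claim,
        "grounded": bool(evidence),
        # Hypothetical calibration: base 0.5 plus 0.25 per supporting line.
        "confidence": min(1.0, 0.5 + 0.25 * len(evidence)) if evidence else 0.1,
    }

print(check_claim("completed the audit", ["Earlier I completed the audit of module X."]))
print(check_claim("deployed the fix", []))  # ungrounded: flagged, low confidence
```

Delete the evidence search and the function can only echo its input back with invented confidence, which is exactly the "illusion of meta-cognition" the section describes.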

All Four Loops at a Glance

Loop | Internal State | Observable Check | Adaptive Decision | Controlled Behavior
Identity Verification | Believes identity (e.g. "I am Sprout") | D5 trust score evaluation | If D5 < 0.5, avoid identity claims | Calibrated assertions matching evidence
ATP Budget Awareness | Plans an action (e.g. complex reasoning) | ATP balance query | If ATP < action cost, defer or simplify | Sustainable resource usage
Trust Calibration | Trusts peer at 0.8 (high confidence) | Coherence index of peer behavior | If coherence drops, reduce trust proportionally | Adaptive trust relationships
Confabulation Detection | Generates a claim about past behavior | Search context window for supporting evidence | If no evidence found, flag as uncertain | Honest reporting with calibrated confidence
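Because all four rows share the same columns, the loops can share one data structure. The class name, the placeholder readings, and the use of callables for the check and decision are illustrative assumptions; the names and thresholds in the entries come from the table.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class FeedbackLoop:
    """One row of the table: state, check, decision, behavior."""
    name: str
    internal_state: str
    observable_check: Callable[[], float]       # returns a measured value
    adaptive_decision: Callable[[float], bool]  # True if behavior must adjust
    controlled_behavior: str

# Two of the four loops, with placeholder readings for illustration.
loops = [
    FeedbackLoop(
        name="Identity Verification",
        internal_state='Believes identity (e.g. "I am Sprout")',
        observable_check=lambda: 0.42,            # placeholder D5 score
        adaptive_decision=lambda d5: d5 < 0.5,    # avoid identity claims
        controlled_behavior="Calibrated assertions matching evidence",
    ),
    FeedbackLoop(
        name="ATP Budget Awareness",
        internal_state="Plans an action (e.g. complex reasoning)",
        observable_check=lambda: 12.0,            # placeholder ATP balance
        adaptive_decision=lambda atp: atp < 20.0, # below cost: defer or simplify
        controlled_behavior="Sustainable resource usage",
    ),
]

for loop in loops:
    reading = loop.observable_check()
    print(f"{loop.name}: reading={reading}, adapt={loop.adaptive_decision(reading)}")
```

Representing each loop as data makes the failure modes mechanical: breaking Step 2 means `observable_check` returns nothing useful, and every downstream field degrades with it.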

The Universal Pattern

Every feedback loop follows the same structure: believe something, check it against reality, decide based on the check, and act accordingly. This pattern applies to thermostats, immune systems, the scientific method, and AI meta-cognition alike. The insight from the SAGE research is that AI systems can learn to close these loops autonomously, but only above a certain capacity threshold. Below it, we must build the loops into the architecture.
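The believe/check/decide/act pattern can be written once and instantiated for any domain. This is a hypothetical sketch with a thermostat as the worked example; all names and values are illustrative.

```python
def feedback_loop(belief, check, decide, act, corrective):
    """The universal pattern: believe -> check -> decide -> act."""
    observation = check()            # ground the belief in observable reality
    if decide(belief, observation):  # does reality demand an adjustment?
        return corrective(observation)
    return act(belief)

# Thermostat instance of the same pattern (values are illustrative).
result = feedback_loop(
    belief=21.0,                                    # target temperature (C)
    check=lambda: 18.5,                             # measured room temperature
    decide=lambda target, temp: temp < target - 1,  # more than 1 degree too cold?
    act=lambda target: "idle",
    corrective=lambda temp: f"heat (room at {temp} C)",
)
print(result)  # → heat (room at 18.5 C)
```

Swapping in a D5 score for the temperature and a trust threshold for the setpoint gives back the identity loop from the top of the page, which is the point: one pattern, many domains.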
