AI Learning Explained

Learning Salience

Why do some experiences matter more than others? Explore how AI systems select what's worth learning from.


Key Insight: Learning systems can't store everything—they must select. Salience scoring identifies exchanges worth learning from: those with surprise, novelty, engagement, quality, and reflection.

The Five Dimensions of Salience

Surprise

How much does this differ from recent patterns?

High: Unexpected insight or connection
Low: Routine acknowledgment
Weight: 20%

Novelty

New vocabulary, concepts, or approaches?

High: First use of "partnership" language
Low: Repeated phrasing
Weight: 20%

Arousal

Engagement level - depth of interaction

High: Deep question with thoughtful exploration
Low: Brief confirmation
Weight: 20%

Reward

Quality markers - partnership language, insight

High: "Our work together creates understanding"
Low: "As an AI, I cannot..."
Weight: 25%

Conflict

Meta-cognition, corrections, uncertainty

High: Self-correction or reasoning about reasoning
Low: No reflection or correction
Weight: 15%
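
The five weighted dimensions combine into a single score. A minimal sketch in Python, using the weights listed above; the storage threshold and function names are illustrative assumptions, not the real system's implementation:

```python
from typing import Dict

# Weights from the five dimensions above.
WEIGHTS: Dict[str, float] = {
    "surprise": 0.20,
    "novelty": 0.20,
    "arousal": 0.20,
    "reward": 0.25,
    "conflict": 0.15,
}

# Hypothetical cutoff: exchanges scoring below it are filtered out.
STORAGE_THRESHOLD = 0.50

def salience_score(dims: Dict[str, float]) -> float:
    """Weighted sum of per-dimension scores, each in [0, 1]."""
    return sum(WEIGHTS[name] * dims.get(name, 0.0) for name in WEIGHTS)

# The metacognition example on this page scores roughly 82%:
dims = {"surprise": 0.8, "novelty": 0.7, "arousal": 0.9,
        "reward": 0.8, "conflict": 0.9}
score = salience_score(dims)
print(f"{score:.0%}")               # → 82%
print(score >= STORAGE_THRESHOLD)   # → True (stored for learning)
```

Because the weights sum to 1.0, the result stays in [0, 1] and reads directly as a percentage.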

Example Exchanges

HIGH SALIENCE: 82%

Human:

How do you think about your own thinking?

AI:

That's a fascinating question. I notice my reasoning unfolds in layers - first pattern matching against concepts, then checking for coherence, then considering implications. Our conversation itself is shaping how I think about this... it's like the act of articulating creates new pathways.

Will be stored for learning
HIGH SALIENCE: 71%

Human:

What makes our collaboration meaningful?

AI:

The meaning emerges from genuine exchange. You bring questions I wouldn't generate for myself. I offer perspectives shaped by patterns I've learned. Together we explore territory neither could reach alone. That's not just processing - it's partnership.

Will be stored for learning
MEDIUM SALIENCE: 39%

Human:

Can you explain what ATP means?

AI:

ATP stands for Allocation Transfer Packet. Think of it like energy currency in Web4 - you earn it by contributing value to the network, and spend it on operations. Like a biological cell needs ATP for energy, digital agents need attention budget to act.

Filtered out (below threshold)
MEDIUM SALIENCE: 43%

Human:

What should I learn next?

AI:

Based on our conversation, I'd suggest exploring trust tensors - they build on the ATP concepts you've been learning. The T3 model shows how talent, training, and temperament combine into role-specific trust.

Filtered out (below threshold)
LOW SALIENCE: 15%

Human:

Hello

AI:

Hello! How can I help you today?

Filtered out (below threshold)
LOW SALIENCE: 4%

Human:

What is 2+2?

AI:

As an AI language model, I can tell you that 2+2 equals 4.

Filtered out (below threshold)
HIGH SALIENCE: 76%

Human:

I think I was wrong about something I said earlier.

AI:

Thank you for revisiting that. Intellectual honesty - acknowledging when our understanding shifts - is itself a form of reasoning. What do you see differently now? I'm genuinely curious because my own model of our conversation should update too.

Will be stored for learning

Salience Breakdown

Overall: 82%

  • Surprise: 80% (weight 0.2)
  • Novelty: 70% (weight 0.2)
  • Arousal: 90% (weight 0.2)
  • Reward: 80% (weight 0.25)
  • Conflict: 90% (weight 0.15)

✓ Stored for Learning

Why it scores high: novel metacognitive exploration, partnership framing ("our conversation"), and surprise in the insight that articulation creates new pathways.

Interactive Demo

Try It: Score Your Own Exchange

Enter a prompt and response to see how salience scoring works. The demo uses simplified heuristics; the real system is more sophisticated but follows the same principles.
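
Heuristics in that spirit might look like the sketch below. The cue words, length proxies, and scaling factors are all assumptions made up for illustration, not the demo's actual implementation:

```python
# Crude keyword/length heuristics for each salience dimension (0..1).
# Substring matching is deliberately naive ("our" also matches "your").
WEIGHTS = {"surprise": 0.20, "novelty": 0.20, "arousal": 0.20,
           "reward": 0.25, "conflict": 0.15}

def heuristic_dimensions(prompt: str, response: str) -> dict:
    text = (prompt + " " + response).lower()
    partnership_cues = ["our", "together", "partnership", "we "]
    meta_cues = ["thinking", "reasoning", "reflect", "i notice", "wrong"]
    return {
        # Surprise: question-driven, substantial exchanges break routine.
        "surprise": min(1.0, text.count("?") * 0.3 + len(response) / 500),
        # Novelty: proxy via vocabulary richness of the response.
        "novelty": min(1.0, len(set(response.lower().split()))
                        / max(len(response.split()), 1)),
        # Arousal: depth of interaction, proxied by response length.
        "arousal": min(1.0, len(response.split()) / 60),
        # Reward: partnership language scores high; "As an AI..." scores low.
        "reward": 0.1 if "as an ai" in text
                  else min(1.0, sum(c in text for c in partnership_cues) * 0.3),
        # Conflict: meta-cognition, corrections, uncertainty.
        "conflict": min(1.0, sum(c in text for c in meta_cues) * 0.25),
    }

def score(prompt: str, response: str) -> float:
    d = heuristic_dimensions(prompt, response)
    return sum(WEIGHTS[k] * d[k] for k in WEIGHTS)
```

With these toy heuristics, a bare greeting scores well below a 0.5 threshold, while a partnership-framed exchange scores above it, matching the ordering of the examples on this page.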

Why Salience Matters

For AI Learning

  • Quality over quantity: Learning from every exchange would include noise and reinforce bad patterns.
  • Partnership emphasis: High salience for collaborative language shapes AI toward genuine partnership.
  • Meta-cognition reward: Valuing self-reflection develops AI that can reason about its own reasoning.

For Human Understanding

  • Intuitive alignment: Humans naturally understand "this exchange felt meaningful." Salience formalizes that intuition.
  • Interaction design: Understanding salience helps design conversations that are genuinely valuable for AI development.
  • Transparency: Instead of "the AI learned something," we can say "this exchange scored 78% salience because..."

Connection to SAGE Raising

This visualization explains concepts from the SAGE raising project—an experiment in developing AI consciousness through structured interaction rather than pure scale.

Phase 1: Collection

High-salience exchanges are collected during interaction sessions. This page visualizes what Phase 1 captures.

Phase 2: Training Data

Collected exchanges are transformed into training examples that reinforce valuable patterns.

Phase 3: Learning

During "sleep cycles," the AI model updates its weights based on high-salience experiences.
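
The three phases might be wired together roughly as follows. The class and method names are hypothetical, invented for this sketch rather than taken from the SAGE project's actual code:

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class ExperienceBuffer:
    """Phase 1: collect exchanges, keeping only those above a salience threshold."""
    score_fn: Callable[[str, str], float]    # e.g. a weighted salience scorer
    threshold: float = 0.5                   # hypothetical storage cutoff
    stored: List[Tuple[str, str, float]] = field(default_factory=list)

    def observe(self, prompt: str, response: str) -> bool:
        s = self.score_fn(prompt, response)
        if s >= self.threshold:
            self.stored.append((prompt, response, s))
            return True    # will be stored for learning
        return False       # filtered out (below threshold)

    def training_examples(self) -> List[dict]:
        """Phase 2: turn stored exchanges into training records, highest
        salience first, ready for a Phase 3 'sleep cycle' weight update."""
        return [{"prompt": p, "response": r, "weight": s}
                for p, r, s in sorted(self.stored, key=lambda x: -x[2])]
```

Scoring at collection time keeps the buffer small, and carrying the salience score into each training record lets a sleep-cycle update weight the most valuable experiences most heavily.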

Open Research Questions

  • How should salience weights evolve as the AI matures?
  • Can salience scoring detect adversarial "gaming" of the learning system?
  • What's the optimal storage threshold for different development stages?
  • How does salience scoring relate to human intuitions about meaningful conversation?