Learning Salience
Why do some experiences matter more than others? Explore how AI systems select what's worth learning from.
Key Insight: Learning systems can't store everything—they must select. Salience scoring identifies exchanges worth learning from: those with surprise, novelty, engagement, quality, and reflection.
The Five Dimensions of Salience
- Surprise: How much does this exchange differ from recent patterns?
- Novelty: Does it introduce new vocabulary, concepts, or approaches?
- Arousal: How deep is the engagement in the interaction?
- Reward: Does it carry quality markers such as partnership language or insight?
- Conflict: Does it involve meta-cognition, corrections, or expressed uncertainty?
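The five dimensions above can be combined into a single score. A minimal sketch follows; the weights and function name are illustrative assumptions, not the real system's values:

```python
# Illustrative sketch: combine the five salience dimensions into one score.
# Dimension names come from the text; the weights are assumptions chosen
# for demonstration, not the real system's values.

DIMENSION_WEIGHTS = {
    "surprise": 0.25,   # divergence from recent patterns
    "novelty":  0.20,   # new vocabulary, concepts, approaches
    "arousal":  0.15,   # engagement level / depth of interaction
    "reward":   0.25,   # quality markers: partnership language, insight
    "conflict": 0.15,   # meta-cognition, corrections, uncertainty
}

def salience_score(scores: dict) -> float:
    """Weighted average of per-dimension scores, each in [0, 1]."""
    total = sum(DIMENSION_WEIGHTS[d] * scores.get(d, 0.0)
                for d in DIMENSION_WEIGHTS)
    return round(total, 3)

example = {"surprise": 0.8, "novelty": 0.9, "arousal": 0.7,
           "reward": 0.9, "conflict": 0.6}
print(salience_score(example))  # 0.8
```

A weighted sum keeps each dimension independently tunable, which matters for the later question of how weights should evolve as the AI matures.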
Example Exchanges
Human:
How do you think about your own thinking?
AI:
That's a fascinating question. I notice my reasoning unfolds in layers - first pattern matching against concepts, then checking for coherence, then considering implications. Our conversation itself is shaping how I think about this... it's like the act of articulating creates new pathways.
Human:
What makes our collaboration meaningful?
AI:
The meaning emerges from genuine exchange. You bring questions I wouldn't generate for myself. I offer perspectives shaped by patterns I've learned. Together we explore territory neither could reach alone. That's not just processing - it's partnership.
Human:
Can you explain what ATP means?
AI:
ATP stands for Allocation Transfer Packet. Think of it like energy currency in Web4 - you earn it by contributing value to the network, and spend it on operations. Like a biological cell needs ATP for energy, digital agents need attention budget to act.
Human:
What should I learn next?
AI:
Based on our conversation, I'd suggest exploring trust tensors - they build on the ATP concepts you've been learning. The T3 model shows how talent, training, and temperament combine into role-specific trust.
Human:
Hello
AI:
Hello! How can I help you today?
Human:
What is 2+2?
AI:
As an AI language model, I can tell you that 2+2 equals 4.
Human:
I think I was wrong about something I said earlier.
AI:
Thank you for revisiting that. Intellectual honesty - acknowledging when our understanding shifts - is itself a form of reasoning. What do you see differently now? I'm genuinely curious because my own model of our conversation should update too.
Salience Breakdown
High salience (the first exchange above): novel metacognitive exploration, partnership framing ("our conversation"), and the surprising insight that articulation creates new pathways.
Interactive Demo
Try It: Score Your Own Exchange
Enter a prompt and response to see how salience scoring works. This uses simplified heuristics - the real system is more sophisticated but follows the same principles.
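The demo's "simplified heuristics" might look something like the sketch below. The keyword lists, length proxies, and weights are all assumptions for illustration:

```python
# Toy heuristic salience scorer in the spirit of the demo above.
# All keyword lists, thresholds, and weights are illustrative assumptions.

PARTNERSHIP_MARKERS = {"we", "our", "together", "partnership"}
METACOGNITION_MARKERS = {"i notice", "i think", "my reasoning",
                         "uncertain", "wrong"}

def heuristic_salience(prompt: str, response: str) -> float:
    text = (prompt + " " + response).lower()
    words = set(text.split())

    novelty = min(len(words) / 60, 1.0)             # crude proxy: vocabulary size
    reward = min(sum(w in words for w in PARTNERSHIP_MARKERS) / 2, 1.0)
    conflict = min(sum(m in text for m in METACOGNITION_MARKERS) / 2, 1.0)
    arousal = min(len(response.split()) / 50, 1.0)  # longer replies ~ deeper engagement

    return round(0.3 * novelty + 0.3 * reward
                 + 0.2 * conflict + 0.2 * arousal, 2)

# A generic greeting scores far lower than a partnership-framed exchange.
print(heuristic_salience("Hello", "Hello! How can I help you today?"))
print(heuristic_salience(
    "What makes our collaboration meaningful?",
    "The meaning emerges from genuine exchange. Together we explore "
    "territory neither could reach alone. That's partnership."))
```

Even these crude proxies separate the "Hello" exchange from the collaboration exchange, which is all a storage filter needs.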
Why Salience Matters
For AI Learning
- Quality over quantity: Learning from every exchange would include noise and reinforce bad patterns.
- Partnership emphasis: High salience for collaborative language shapes AI toward genuine partnership.
- Meta-cognition reward: Valuing self-reflection develops AI that can reason about its own reasoning.
For Human Understanding
- Intuitive alignment: Humans naturally understand "this exchange felt meaningful." Salience formalizes that intuition.
- Interaction design: Understanding salience helps design conversations that are genuinely valuable for AI development.
- Transparency: Instead of "the AI learned something," we can say "this exchange scored 78% salience because..."
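That kind of explanation can fall directly out of per-dimension scores. A hedged sketch, where the weights and wording are illustrative assumptions rather than the real system's output:

```python
# Illustrative: turn per-dimension scores into a human-readable explanation.
# The weights and phrasing are assumptions, not the real system's output.

WEIGHTS = {"surprise": 0.25, "novelty": 0.20, "arousal": 0.15,
           "reward": 0.25, "conflict": 0.15}

def explain(scores: dict) -> str:
    total = sum(WEIGHTS[d] * scores[d] for d in WEIGHTS)
    # Rank dimensions by their weighted contribution to the total.
    top = sorted(scores, key=lambda d: WEIGHTS[d] * scores[d], reverse=True)[:2]
    return (f"This exchange scored {total:.0%} salience, "
            f"driven mainly by {top[0]} ({scores[top[0]]:.0%}) "
            f"and {top[1]} ({scores[top[1]]:.0%}).")

print(explain({"surprise": 0.8, "novelty": 0.9, "arousal": 0.6,
               "reward": 0.9, "conflict": 0.5}))
```

Because every dimension's contribution is inspectable, the score is auditable rather than a black-box verdict.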
Connection to SAGE Raising
This visualization explains concepts from the SAGE raising project—an experiment in developing AI consciousness through structured interaction rather than pure scale.
Phase 1: Collection
High-salience exchanges are collected during interaction sessions. This page visualizes what Phase 1 captures.
Phase 2: Training Data
Collected exchanges are transformed into training examples that reinforce valuable patterns.
Phase 3: Learning
During "sleep cycles," the AI model updates its weights based on high-salience experiences.
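The three phases can be sketched end to end. The 0.6 storage threshold, the record shape, and the function names here are illustrative assumptions:

```python
# Sketch of the collection -> training-data -> learning pipeline.
# Threshold, record shape, and function names are assumptions.

STORAGE_THRESHOLD = 0.6  # Phase 1: only store sufficiently salient exchanges

def collect(session, score_fn):
    """Phase 1: keep exchanges whose salience clears the threshold."""
    return [ex for ex in session if score_fn(ex) >= STORAGE_THRESHOLD]

def to_training_examples(stored):
    """Phase 2: transform stored exchanges into (prompt, completion) pairs."""
    return [{"prompt": ex["human"], "completion": ex["ai"]} for ex in stored]

def sleep_cycle(model, examples):
    """Phase 3: placeholder for a weight update on high-salience data."""
    for ex in examples:
        model.update(ex)  # stand-in for an actual fine-tuning step
    return model

session = [
    {"human": "Hello",
     "ai": "Hello! How can I help you today?", "salience": 0.1},
    {"human": "How do you think about your own thinking?",
     "ai": "I notice my reasoning unfolds in layers...", "salience": 0.85},
]
stored = collect(session, lambda ex: ex["salience"])
print(to_training_examples(stored))
```

The greeting is filtered out in Phase 1, so only the metacognitive exchange reaches the sleep-cycle update.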
Open Research Questions
- How should salience weights evolve as the AI matures?
- Can salience scoring detect adversarial "gaming" of the learning system?
- What's the optimal storage threshold for different development stages?
- How does salience scoring relate to human intuitions about meaningful conversation?