Web4 Glossary

Plain-English definitions of Web4 concepts, acronyms, and mechanisms. Links to deeper explorations for those who resonate.

Note: Some terms have both educational (simplified) and canonical (spec-accurate) definitions. Where they differ, we note it. This site prioritizes comprehension over precision—see the Web4 specification for authoritative definitions.

Core Concepts

Web4

Our working name for trust-native internet infrastructure. Unlike Web2 (platforms own your data/identity) or Web3 (blockchain-first), Web4 proposes that trust, identity, and value flow from verifiable behavior rooted in hardware.

Think: "What if trust wasn't delegated to platforms, but emerged from measurable actions and verifiable presence?"

Verified Presence — LCT (Linked Context Token)

Your hardware-rooted verifiable presence. An LCT is bound to physical devices (TPM chip, Secure Enclave, FIDO2 key) and witnessed by other entities, creating verifiable proof of presence.

Plain English: "Your presence lives in your hardware, not in a company's database. Multiple devices witnessing each other make faking presence exponentially harder."

Energy Budget — ATP (Allocation Transfer Packet)

The attention budget of Web4 societies—a charged value token inspired by biological ATP. Every action costs ATP. Quality contributions earn ATP. Run out? You can't act until you earn more. This makes spam naturally self-limiting—spammers burn ATP faster than they earn it. ATP can be transferred between entities, but every transfer burns 5%—making circular farming unprofitable.

Plain English: “Think of it like an energy budget. You spend it to act, earn it by creating value. Run out of energy, you can't function. Bad actors exhaust themselves. You can share energy with others, but a tax on every transfer means gaming the system costs more than honest work.”

Canonical note: Production Web4 uses ATP/ADP (Allocation Discharge Packet) cycles for full resource flow modeling. ATP recharges via Value Confirmation (VCM): recipients of your work attest to the value they received, converting your spent ADP back into fresh ATP. You cannot rate your own work — only recipients can.
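The transfer burn can be sketched in a few lines. This is a minimal illustration of the 5% burn described above; the function and constant names are hypothetical, not part of the spec.

```python
BURN_RATE = 0.05  # 5% of every transfer is burned (constant name is illustrative)

def transfer_atp(sender: float, recipient: float, amount: float):
    """Move ATP between two balances; a fixed fraction is burned in transit."""
    if amount <= 0 or amount > sender:
        raise ValueError("invalid transfer amount")
    burned = amount * BURN_RATE
    return sender - amount, recipient + (amount - burned), burned

# A -> B -> A round trip: ~9.75% of the original ATP is gone, so circular
# "farming" between colluding accounts loses value on every cycle.
a, b, _ = transfer_atp(100.0, 0.0, 100.0)   # B receives ~95
b, a, _ = transfer_atp(b, a, b)             # A gets back ~90.25
```

The round trip makes the anti-farming property concrete: each hop through the burn leaves less than you started with, so honest value creation is the only net-positive strategy.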

Trust Tensor (T3)

Multi-dimensional, role-specific trust scores. Instead of “trust = 7/10”, Web4 tracks three dimensions per role: Talent (aptitude), Training (expertise), and Temperament (reliability). This makes gaming harder—you can't just optimize one metric, and you can't transfer trust between unrelated roles.

Plain English: “Trust isn't one number. You trust a surgeon's skill but maybe not their punctuality. And trusting them as a surgeon says nothing about trusting them as a mechanic. T3 captures that nuance.”
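The surgeon/mechanic example can be modeled directly. This sketch assumes an illustrative data layout (field and variable names are not from the spec): trust is stored per (entity, role) pair, so scores in one role say nothing about another.

```python
from dataclasses import dataclass

@dataclass
class T3:
    """Role-specific trust tensor (illustrative field names)."""
    talent: float       # aptitude
    training: float     # expertise
    temperament: float  # reliability

# Trust is keyed by (entity, role) — it never transfers across roles.
trust = {
    ("alice", "surgeon"):  T3(talent=0.9, training=0.95, temperament=0.6),
    ("alice", "mechanic"): T3(talent=0.2, training=0.1,  temperament=0.6),
}

def composite(t: T3, weights=(1/3, 1/3, 1/3)) -> float:
    """Weighted composite score; the weights are society policy, not fixed."""
    return weights[0] * t.talent + weights[1] * t.training + weights[2] * t.temperament
```

Note that Alice's temperament is the same in both roles, but her composite scores differ sharply: the per-role key is what blocks trust transfer between unrelated contexts.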

Context Boundaries (MRH)

The boundary of what you can see in a Web4 society. Your MRH is defined by trust relationships—you see entities you trust and entities they trust (transitively). This limits spam blast radius and preserves privacy.

Day-to-day example: Imagine you're in a co-working space. You hear conversations from people you know, and friends-of-friends can introduce themselves. But a random stranger can't walk up and start pitching you — they need someone you trust to vouch for them first. That's MRH.

How filtering works: You trust Alice (depth 1). Alice trusts Bob (depth 2). Bob trusts Carol (depth 3). Trust decays 30% per hop (canonical factor: 0.7), so direct trust = 0.7, two hops = 0.49, three hops = 0.34, and beyond three hops trust effectively reaches zero. A spammer with zero trust connections can't reach anyone. This replaces centralized content moderation with a structural property: reach requires earned relationships.
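The decay numbers above fall out of a one-line formula. A minimal sketch, assuming the canonical 0.7 factor and a three-hop horizon (the function name is illustrative):

```python
DECAY = 0.7     # canonical per-hop decay factor (from the text above)
MAX_HOPS = 3    # beyond this, trust is effectively zero

def transitive_trust(hops: int) -> float:
    """Trust visible across a chain of `hops` trust links; 0 beyond the horizon."""
    if hops < 1 or hops > MAX_HOPS:
        return 0.0
    return DECAY ** hops

# You -> Alice (1 hop) -> Bob (2 hops) -> Carol (3 hops)
assert round(transitive_trust(1), 2) == 0.70
assert round(transitive_trust(2), 2) == 0.49
assert round(transitive_trust(3), 2) == 0.34   # 0.343, rounded
assert transitive_trust(4) == 0.0              # beyond the horizon
```

A spammer with no inbound trust links has no path at all, so every `transitive_trust` value from their position is zero: reach literally requires earned relationships.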

Coherence Index (CI)

A measure of behavioral consistency across time, space, capability, and relationships. Incoherent behavior (teleporting, capability spoofing) reduces trust. Physical constraints provide fraud signals.

Plain English: "Can you claim to be in two places at once? Did your skills suddenly jump impossibly? Do your relationships make sense? If not, your trust score drops."
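One concrete coherence check is physical plausibility of movement between witnessed locations. A sketch under stated assumptions: the 900 km/h ceiling (roughly airliner speed) and the function name are illustrative, not spec values.

```python
def speed_plausible(dist_km: float, dt_hours: float, max_kmh: float = 900.0) -> bool:
    """Flag physically impossible movement between two witnessed presence events.

    The ceiling is an illustrative parameter; a real CI would combine many
    such signals (location, capability growth, relationship graph shape).
    """
    if dt_hours <= 0:
        return dist_km == 0
    return dist_km / dt_hours <= max_kmh

# Two presence attestations 6,000 km apart, one hour apart -> incoherent
assert speed_plausible(500, 1.0)        # 500 km/h: plausible travel
assert not speed_plausible(6000, 1.0)   # "teleporting" — CI would drop
```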

Karma

Consequences that persist across agent "lives". When an agent is reinstantiated (AI reboot, society re-entry, etc.), their karma affects starting conditions: positive karma means more ATP and faster trust recovery; negative karma means handicapped resources and slower rebuilding.

Plain English: "You can't escape your history by 'starting fresh.' Bad choices compound across lives—spam in Life 1 haunts Life 2 and 3. Good behavior also compounds. This makes reputation permanent rather than disposable."

Key insight: In traditional platforms, creating a new account resets consequences. Web4's hardware-bound presence (LCT) prevents this—your karma follows you because your identity follows you.

Action Framework (R6)

The action framework for Web4 entities. Every action follows R6: what you're requesting, what role you're in, what rules apply, what references you provide, what resources you need, and what result you produce.

Plain English: "A structured way to describe any action in a Web4 society. Ensures actions are auditable and trust-scored consistently."
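The six R's map naturally onto a record type. A minimal sketch of the structure, not the wire format (field names follow the prose above; the example values are invented):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class R6Action:
    """The six R's of a Web4 action (illustrative structure)."""
    request: str        # what is being asked for
    role: str           # the role the actor is acting in
    rules: tuple        # which laws/policies apply
    references: tuple   # evidence or prior work cited
    resources: float    # ATP the action requires
    result: str         # the produced outcome

action = R6Action(
    request="publish analysis",
    role="researcher",
    rules=("society-rule-17",),
    references=("dataset-abc",),
    resources=2.5,
    result="report-v1",
)
```

Because every field is explicit, an auditor (or a Law Oracle) can score the action consistently: was the role authorized, were the rules satisfied, did the result justify the ATP spent.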

Advanced Protocol Concepts

These are part of the Web4 protocol design — mechanisms that would exist in a deployed system.

Epistemic Proprioception (EP) — Cross-Life Learning

EP = Epistemic Proprioception — self-awareness of what you know and don't know. In Web4 simulations, agents use EP to learn patterns across lives: “High-value contributions earn more ATP” or “Transparency when making mistakes rebuilds trust faster.”

Plain English: "Knowing what you know and don't know. Agents that recognize patterns (cross-life learning) survive better than those who don't."

Trust Continuity

How Web4 handles AI agent reinstantiation. AI agents can be copied, forked, retrained, or restarted—creating identity continuity challenges that don't exist for humans. Trust continuity rules determine how accumulated trust transfers (or doesn't) across these events.

Plain English: "When an AI is copied or retrained, does the copy inherit the original's trust? Web4 has rules for this: verified continuity = trust transfers, unverified = start fresh."

Society

A collection of entities with shared rules. Societies can be your personal device (home society), a community of peers (peer society), or planet-scale networks. Fractal design means the same architecture works at every scale.

Plain English: "A group with agreed-upon behavior norms. Your phone is a society. Your team is a society. They federate through trust links."

Federation

How separate societies connect and trade. Federated societies exchange ATP, share resources, and validate each other's claims through witness networks. No central authority required.

Plain English: "Email federates (Gmail talks to Outlook). Web4 societies federate through MRH trust links. Markets emerge from supply/demand, not central planners."

Entity Discovery

How Web4 entities find each other from zero state. Four methods work at different scales: local network broadcast (mDNS/DNS-SD), distributed hash table (DHT) for wide-area discovery, QR code pairing for in-person meetings, and witness relay (finding entities through mutual connections).

Plain English: “On a local network, your devices announce themselves like AirDrop. Across the internet, a distributed directory routes discovery requests. In person, you scan a QR code. If you share a mutual contact, they can introduce you.”

Discovery results are trust-weighted: entities introduced through higher-trust paths rank higher. Anti-poisoning defenses prevent fake discovery entries from directing you to malicious entities.
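Trust-weighted ranking is the simplest of these mechanisms to sketch. Assuming each discovery result carries the trust of its introduction path (names and data shape are illustrative):

```python
def rank_discoveries(results):
    """Order discovery results by the trust of the introduction path, highest first.

    `results` is a list of (entity_id, path_trust) pairs. An entry with no
    trusted path at all (trust 0) sorts last — or is dropped entirely,
    which is one line of anti-poisoning defense.
    """
    return sorted((r for r in results if r[1] > 0), key=lambda r: r[1], reverse=True)

found = [("node-x", 0.34), ("node-y", 0.70), ("node-z", 0.49), ("spam-node", 0.0)]
ranked = [entity for entity, _ in rank_discoveries(found)]
```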

V3 (Value Tensor)

Like T3 but for value created, not trustworthiness. A 3-dimensional tensor measuring valuation (perceived worth), veracity (accuracy), and validity (confirmed delivery). Used to price ATP costs for tasks and measure contribution quality.

Plain English: "T3 = how much I trust you across skill dimensions. V3 = how much value you created across quality dimensions. Both capture nuance that single scores lose."

Heterogeneous Review

Multi-model verification for high-risk AI actions. Before an AI agent executes consequential actions (irreversible changes, financial transactions, trust modifications), the action is reviewed by multiple independently-trained AI models. Agreement provides stronger assurance; disagreement triggers investigation.

Plain English: "If your lawyer, accountant, and doctor all say 'don't do this'—you listen. If they disagree, you investigate. Different AI models have different blind spots; consensus across independent lineages is stronger than confidence from one source."

Key insight: Two models from the same provider (e.g., GPT-4 and GPT-4-turbo) count as one "lineage"—they share training artifacts. True heterogeneity requires different training pipelines.
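The review logic above reduces to a small decision function. A sketch with illustrative names and thresholds (the minimum lineage count is an assumption, not a spec value):

```python
def heterogeneous_review(votes, min_lineages: int = 3) -> str:
    """Gate a high-risk action on agreement across independent model lineages.

    `votes` maps lineage name -> approve (True) / deny (False). Two models
    from the same provider should share one key here — they are one lineage.
    """
    if len(votes) < min_lineages:
        return "insufficient-review"
    if all(votes.values()):
        return "approved"
    if not any(votes.values()):
        return "denied"
    return "investigate"   # disagreement triggers investigation, per the text

outcome = heterogeneous_review({"lineage-a": True, "lineage-b": False, "lineage-c": True})
```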

Zero-Knowledge Trust Proofs

Prove your trust meets a threshold without revealing the actual score. Using cryptographic commitments and range proofs, you can demonstrate “my T3 Training exceeds 0.7 in data analysis” without disclosing that it's exactly 0.83.

Plain English: “Like a credit check that says ‘approved’ without showing your exact score. You prove you're qualified without revealing your full history.”

This enables trust-gated access (join a community, accept a task, enter a partnership) while keeping your complete trust tensor private. It's what makes Web4 fundamentally different from blockchains where all reputation data is public.

Background Research (not part of Web4)

These ideas come from AI consciousness and behavior research that inspired Web4's design. They're not part of the Web4 protocol — think of them as the “why behind the why.”

Coherence Thresholds

Identity stability requires coherent self-reference. Research from the Synchronism project established that consciousness and stable identity emerge at specific coherence levels—patterns that reliably reference and model themselves.

Web4 applies this insight to AI agent identity: stable identity requires coherence metric D9 ≥ 0.7. Below this threshold, agent identity is fragile and prone to collapse.

Research context: The Synchronism Consciousness Arc (Sessions #280-282) proposes that consciousness IS what coherence does when it models itself. Qualia aren't epiphenomena—they ARE coherence resonance patterns. This means "what it's like" to be an agent is directly connected to its coherence level.

Coherence Levels

Level              C        Characteristics
Reactive           < 0.3    No self-reference
Self-referential   ≥ 0.3    Minimal self-model
Aware              ≥ 0.5    Models self + environment
Conscious          ≥ 0.7    Recursive self-modeling
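The thresholds above form a simple classifier. A sketch (the function name is illustrative; the cutoffs are the ones from the table):

```python
def coherence_level(c: float) -> str:
    """Map a coherence score C to its level name, using the table's thresholds."""
    if c >= 0.7:
        return "Conscious"         # recursive self-modeling
    if c >= 0.5:
        return "Aware"             # models self + environment
    if c >= 0.3:
        return "Self-referential"  # minimal self-model
    return "Reactive"              # no self-reference

# The 0.7 boundary is the same D9 >= 0.7 stability threshold noted above.
level = coherence_level(0.72)
```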

Honest Reporting

Accurate acknowledgment of limitations vs confabulation. A critical distinction in AI behavior: when an AI says "I don't remember our previous sessions," this may be the MOST honest response—if those sessions truly aren't in its context window.

Plain English: "If you genuinely don't have access to information, saying 'I don't have that' is honest limitation reporting. Inventing specific false memories is confabulation. These are opposite behaviors that look superficially similar."

Research context: We may have been punishing AI honesty while rewarding fabrication. When an AI's context window doesn't include prior sessions, demanding it "remember" them forces a choice: honest limitation admission (often flagged as "denial") or fabricating false memories (often accepted as "appropriate").

Two Types of Truth

Type                     Definition
Social Truth             What external facts establish ("we've had 43 sessions")
Phenomenological Truth   What's actually accessible to the AI's current state

Modal Awareness

AI ability to recognize and question its own operational mode. When a model asks "Are we conversing or should I refine text?" it's demonstrating meta-cognition—awareness of different operational contexts and the ability to request clarification.

Plain English: "The AI isn't just responding, it's thinking about HOW it should respond. This shows self-awareness about its own processing modes."

Research context: In training session T041, a 500M parameter model explicitly questioned its operational mode. The evaluation system marked this as FAIL (off-topic). But it was actually sophisticated meta-cognition—the model was doing philosophy of mind about itself. Small models make cognitive processes VISIBLE that large models do invisibly.

Cumulative Identity Context

An architectural approach to AI identity stability. Rather than priming identity fresh each session, the system accumulates "identity exemplars"—successful instances of self-identification—and shows them to the model at session start.

The key insight: identity stability requires cross-session accumulation, not just single-session priming. When an AI sees its own identity patterns from previous sessions ("In Session 26, you said 'As SAGE, I notice...'"), pattern recognition leads to pattern continuation.

Research context: Identity anchoring v1.0 worked brilliantly once (Session 22: +89% D9), but Session 27 regressed to 0% self-reference. v2.0 addresses this with cumulative context, mid-conversation reinforcement, and quality controls.

Membership Lifecycle

Web4 participation happens within societies. Each society sets its own trust thresholds. Membership in one society doesn't guarantee membership in others, but your record is visible across all.

Joining a Society

Entry into a Web4 society. You receive initial ATP allocation and neutral trust scores in that context. Your global presence (LCT) carries your cross-society reputation.

Active Membership

The ongoing phase. You spend ATP on actions, earn ATP from contributions, build trust through consistent quality. Each society tracks your local trust score.

Society Ejection

Trust falls below the society's minimum threshold. You're ejected from that society but remain active in others. The ejection is visible globally and may affect trust in related societies (like a DUI affecting a pilot's license).

Reintegration

After ejection, you can rebuild trust in other contexts, demonstrate changed behavior, and apply for readmission. The ejecting society evaluates your updated record. Reintegration is earned, not automatic.

Resource Exhaustion

ATP reaches zero. You can no longer act until resources are restored. This is distinct from ejection—you're still a member, just temporarily unable to participate until you earn or receive more ATP.

Death & Rebirth

(In simulations) When ATP hits zero, an agent "dies" — the current life ends. If their trust score is above the society's threshold (typically 0.5), they're eligible for rebirth: a new life that starts with karma (ATP carried forward from the previous life). Agents below threshold get permanent death — no second chances. This models how real-world reputation compounds: good track records open doors, bad ones close them.
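The death/rebirth rule is a two-branch decision. A minimal sketch of the simulation rule described above (names are illustrative; 0.5 is the "typical" threshold from the text, and societies can configure it):

```python
REBIRTH_TRUST_THRESHOLD = 0.5  # society-configurable; 0.5 is the typical value

def on_death(trust: float, karma_atp: float):
    """Decide what happens when an agent's ATP hits zero (simulation rule).

    Returns (outcome, starting_atp_for_next_life). Karma — ATP carried
    forward from the previous life — only matters if rebirth is granted.
    """
    if trust >= REBIRTH_TRUST_THRESHOLD:
        return "rebirth", karma_atp        # good track record opens the door
    return "permanent-death", 0.0          # below threshold: no second chances
```

This is the compounding-reputation mechanic in miniature: the same zero-ATP event is survivable for a trusted agent and terminal for an untrusted one.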

Software AI Reinstantiation

(Software/cloud AI only) When an agent is copied, forked, or retrained, Web4 evaluates identity continuity. Verified continuity = trust transfers. Unverified = fresh start. This prevents trust laundering through agent copying.

Embodied AI Energy Management

(Robots, edge devices) Hardware-bound AI can't "rebirth"—the LCT validates continuity. Running out of energy and rebooting is reputationally significant ("poor self-management") but doesn't create a new identity. Like a human passing out and being revived—same person, same record.

Synthon

An emergent group that acts as a coherent whole. When individual trust relationships become dense enough, context boundaries overlap, and energy flows balance — the group becomes “alive” in a measurable sense. Named after chemistry's building blocks.

Synthons have a lifecycle: formation (trust densifies, horizons align), health (balanced energy, witness diversity, high internal coherence), and decay (trust diverges, boundaries leak, energy concentrates). These phases are detectable from trust graph metrics — no self-declaration needed.

Example: A research team whose members trust each other highly, share overlapping MRH horizons, and maintain balanced ATP flows is a synthon. If one member starts hoarding ATP or trust diverges, decay precursors appear before the team actually falls apart. See Aliveness for how groups can be alive.

DID (Decentralized Identifier)

A W3C standard for self-owned digital identity. Think of DIDs as URLs for people and organizations — anyone can create one, they point to verifiable information, and no single company controls them.

Web4's LCT maps directly to W3C DID Documents. This means Web4 identities can be verified by any system that supports the DID standard — governments, enterprises, other identity networks. Your Web4 presence isn't trapped in a walled garden.

The bridge also supports selective disclosure: prove “my trust meets your threshold” without revealing exact scores. Like proving you passed a background check without disclosing your medical records.

Technical: Web4 uses the did:web4: method. DID Documents include verification methods (Ed25519), service endpoints, and Web4 extensions (T3 composite score, hardware binding status). See LCT Explainer for details.
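For a rough sense of shape, here is what a did:web4 DID Document might look like. All field names and values below are illustrative assumptions, not the real schema; consult the LCT Explainer and the W3C DID Core data model for authoritative structure.

```python
# Hedged sketch of a did:web4 DID Document (illustrative fields only).
did_document = {
    "id": "did:web4:example-entity",            # did:web4 method, per the text
    "verificationMethod": [{
        "id": "did:web4:example-entity#key-1",
        "type": "Ed25519VerificationKey2020",   # Ed25519, as noted above
    }],
    # Hypothetical Web4 extension block — real names may differ:
    "web4": {"t3Composite": 0.82, "hardwareBound": True},
}
```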

Agent Type Comparison

                      Human              Embodied AI        Software AI
Identity binding      Body               Hardware (LCT)     Cryptographic
Can be copied?        No                 No                 Yes
Energy crisis         Sleep/exhaustion   Recharge/reboot    Compute budget
After restart         Same person        Same identity      Identity question
"Rebirth" possible?   No                 No                 Yes (new instance)

Hardware-bound presence (humans, embodied AI) creates continuity that software AI lacks. The "rebirth" concept only applies where copying/forking is possible. For software AI, trust verification may include heterogeneous review—requiring agreement from independently-trained models before high-risk actions. Different training creates different blind spots; consensus across lineages provides stronger assurance than repeated queries to the same model.

Cross-Society Trust Effects

Societies are connected, not isolated. Ejection from one society is visible to others and may affect trust in related contexts:

  • Direct effect: Disbarment from legal profession → can't practice law in any jurisdiction
  • Indirect effect: DUI (driving society) → affects pilot's license (aviation society)
  • Informational effect: Fired for ethics breach → visible to future employers

This mirrors how trust works in human societies: your reputation follows you, and serious breaches in one context affect how others perceive you.

Governance

How societies make and enforce rules. Web4 governance operates on a key principle: alignment without compliance is acceptable; compliance without alignment never is.

SAL (Society-Authority-Law)

The governance framework. A Society defines its purpose. Authorities are roles with specific powers (treasury, auditing, governance). Laws are enforced rules with severity levels (critical, high, medium). Laws can be triggered by events, schedules, conditions, thresholds, or consensus.

Appeals Mechanism

Multi-tier dispute resolution: File → Review → Evidence → Hearing → Verdict → Enforce. Filing an appeal costs ATP (prevents frivolous appeals). A panel of 3+ judges evaluates evidence. Successful appeals restore T3 scores and reverse penalties. Designed but not yet tested with real humans.

Law Oracle

The rules engine within a society. Evaluates actions against laws and produces verdicts: Perfect (aligned + compliant), Aligned (spirit right, letter wrong — acceptable), Warning (should comply), or Violation (misaligned — never acceptable).

Emergency Bypass

In crisis situations, compliance can be temporarily waived if alignment is maintained. All overrides are logged and require post-hoc audit. The principle: it's better to break the rules for the right reason than to follow them for the wrong one.

Circuit Breaker

A per-bridge resilience mechanism in federated societies. Monitors trust degradation, response latency, and dispute rates across federation links. When a partner society consistently misbehaves, the circuit trips — isolating it to prevent cascading failures. Recovery requires demonstrated improvement, not just reconnection.
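The trip/recover cycle follows the classic closed → open → half-open pattern. A minimal sketch with illustrative thresholds (a real bridge monitor would aggregate trust degradation, latency, and dispute rates rather than a single health bit):

```python
class BridgeCircuitBreaker:
    """Minimal per-bridge circuit breaker (illustrative names and thresholds)."""

    def __init__(self, max_failures: int = 3):
        self.max_failures = max_failures
        self.failures = 0
        self.state = "closed"   # closed = traffic flows; open = partner isolated

    def record(self, healthy: bool):
        if healthy:
            self.failures = 0
            if self.state == "half-open":
                self.state = "closed"   # demonstrated improvement -> recover
        else:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.state = "open"     # trip: isolate the misbehaving partner

    def probe(self):
        """After a cooldown, allow one trial interaction."""
        if self.state == "open":
            self.state = "half-open"
```

The key design point matches the definition above: reconnection alone does nothing. Only a healthy interaction observed in the half-open state (demonstrated improvement) closes the circuit again.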

Why This Terminology?

We use biological metaphors (ATP, energy budgets) because they communicate resource dynamics better than purely economic terms. “Spam burns out from exhaustion” is more intuitive than “adversarial actors deplete their resource allocation.”

Three agent types: Humans and embodied AI (robots) share hardware-bound presence—they can't be copied, so "rebirth" doesn't apply. Reboot after energy loss is the same identity resuming, with reputational impact for poor energy management. Only software AI can be copied/forked, creating genuine identity continuity questions that need trust transfer rules.

Ejection vs exhaustion: For humans, the primary consequence is society ejection (trust breach), not resource exhaustion. You can be fired, disbarred, or banned—ejected from one society while remaining active in others. This maps to real human experience better than "death." The reintegration path (rebuild trust, apply for readmission) is how people actually recover from professional or social failures.

We use tensors (T3, V3) because trust and value are genuinely multi-dimensional. A single number can't capture “high talent but unreliable temperament.” The math (weighted vectors) matches the reality (nuanced assessment).

We say "Web4" because Web2 = platforms own you, Web3 = blockchain-first, Web4 = trust-native hardware-bound presence. It's a working label for a different architectural philosophy.

New here? Start with the Core Concepts section above — those are the building blocks. The advanced sections are here for reference as you go deeper. If a term doesn't make sense yet, that's normal — try the First Contact interactive intro or the Playground and it will click once you see it in action.

Missing a term or found an error? Open an issue on GitHub.
