New to Web4? This page works best after the concept pages. For a guided introduction, start with First Contact or the 2-minute TL;DR, then explore the concept sequence before returning here for the full picture.
Web4 Explained

How Web4 Societies Work

Project status: Web4 is a research prototype, not a deployed product. The mechanics described here are validated through simulations and an open-source reference implementation, but no live network with real users exists yet. The simulations on this site let you explore how it would work. Curious what early deployment could look like? See the concrete adoption path — from browser extension overlays to full integration.

Key Takeaways

You're born with energy and neutral trust. Actions that affect others cost energy; reading is free. Quality contributions earn energy back; spam drains it.

Your identity is tied to your devices — no passwords, no central authority. Your trust is multi-dimensional (competence, reliability, consistency) and role-specific.

If your energy hits zero or trust collapses, you die. But good karma carries forward — you're reborn with a head start. No moderators needed. Five interlocking systems (identity, energy, trust, consistency, context) make spam expensive and quality self-sustaining.

Web4 is trust-native infrastructure for humans and AI. Instead of relying on platforms, moderators, or authorities, Web4 societies self-regulate through five foundational mechanisms:

Five systems, in plain English: Web4 has some acronyms. Here's what they mean — refer back anytime.

LCT = Identity
ATP = Energy budget
T3 = Trust score
CI = Behavioral consistency
MRH = Context boundary


🔐

Identity (LCT)

Unforgeable identity rooted in hardware, strengthened by multiple devices witnessing each other.

Learn more →

Attention Economics

Every action costs attention budget (ATP). Run out? You die. Contribute value? You thrive.

Learn more →
🎯

Trust (T3)

Multi-dimensional trust scored across Talent, Training, and Temperament — per role.

Learn more →
🌊

Coherence (CI)

Behavioral consistency across where you are, what you can do, when you act, and who you interact with.

Example: If someone usually posts coding tutorials at 9am and suddenly starts posting crypto spam at 3am from a new country, their coherence drops — making every action more expensive.

Learn more →
🌐

Context (MRH)

You only see what's relevant to your trust network — like hearing only conversations you're part of. Spam can't reach you.

Learn more →

Together, these create societies where trust emerges from verifiable behavior, not institutional authority. This page walks through how it all works.


How All Five Systems Create Aliveness

Each system handles one job. Together, they produce a living digital society:

1. 🔐 LCT — Proves you're real. Hardware-bound identity, unforgeable. (Identity established.)
2. 🌐 MRH — Defines your reach. Only see what's relevant to your trust network. Gates the loop below: you only spend ATP and build T3 in contexts where you're visible. Actions outside your MRH don't cost energy and don't move trust. (Context bounded.)
3. Feedback Loop — every action cycles through all three: ATP (energy spent) → T3 (trust updated) → CI (consistency checked) → ATP (reward earned). ↻ Repeats every action.
4. Aliveness — you're “alive” when ATP > 0, Trust > 0.5, and CI is coherent. All three healthy → thrive, rebirth eligible. Any one fails → death spiral, no rebirth.
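To make the aliveness condition in step 4 concrete, here is a minimal Python sketch of the check. The class and function names are illustrative, not taken from the reference implementation; the thresholds are the ones quoted on this page.

```python
from dataclasses import dataclass

TRUST_THRESHOLD = 0.5  # overall T3 needed to stay "alive" (figure from this page)

@dataclass
class AgentState:
    atp: float          # remaining energy budget
    t3: dict            # {"talent": x, "training": y, "temperament": z}, each in [0, 1]
    ci_coherent: bool   # result of the coherence check

def overall_trust(t3: dict) -> float:
    """Simple average of the three T3 dimensions (one possible aggregation)."""
    return sum(t3.values()) / len(t3)

def is_alive(agent: AgentState) -> bool:
    """An agent is alive only when all three conditions hold at once."""
    return agent.atp > 0 and overall_trust(agent.t3) > TRUST_THRESHOLD and agent.ci_coherent

# Example: a healthy agent vs. one that ran out of energy.
healthy = AgentState(atp=45, t3={"talent": 0.6, "training": 0.7, "temperament": 0.6}, ci_coherent=True)
drained = AgentState(atp=0, t3={"talent": 0.6, "training": 0.7, "temperament": 0.6}, ci_coherent=True)
print(is_alive(healthy), is_alive(drained))  # True False
```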

Why spam dies: Without LCT, you can't enter. Without MRH, you can't reach anyone. Without ATP, you can't act. Without T3, you aren't trusted. Without CI, you're flagged. Every layer filters bad actors — no single point of failure, no moderators needed.

The Journey: Birth → Life → Death → Rebirth

Web4 societies treat "aliveness" as a measurable property. Here's the full lifecycle:

🐣

1. Birth: You Enter the Society

Creating your identity and receiving initial resources

Identity Creation (LCT)

You create a Linked Context Token (LCT) - your verifiable digital presence. This can be bound to:

  • Hardware: Secure Enclave (iPhone/Mac), TPM chip (PC), or FIDO2 security key
  • Multi-device: Multiple devices witnessing each other (stronger identity)
  • VM-bound: Software identity for AI agents

Your LCT is registered on the society's network and becomes part of the trust graph — which determines what entities and information are visible to you (your "context boundary").

Initial Resources (ATP)

You receive an initial ATP allocation (typically 100). This is your energy budget — spend it wisely.

New life: 100 ATP to start exploring

Neutral Trust (T3)

Your trust tensor starts at neutral (0.5 in all dimensions):

Talent 0.5 · Training 0.5 · Temperament 0.5

You haven't done anything yet - society doesn't know if you're trustworthy. Build trust through actions.

🌱

2. Life: You Act, Build Trust, Manage ATP

The core gameplay loop of Web4 existence

Actions Cost ATP

Actions that affect others cost ATP from your energy budget. Reading and browsing are free — only contributions spend energy:

Free (no ATP cost)

• Reading and browsing content

• Viewing profiles and trust scores

• Observing community activity

Lurking is always free. You only spend energy when you act — post, vote, transact, or create.

Costs ATP (actions that affect others)

• Posting content (10-20 ATP)

• Creating tasks (15-30 ATP)

• Broadcasting (20-50 ATP)

Contributions Earn ATP

When you contribute value, the community validates and rewards you:

High-quality post: Cost 15 ATP → Earn 40 ATP = +25 net

Helpful contribution: Cost 20 ATP → Earn 60 ATP = +40 net

Spam message: Cost 10 ATP → Earn 0 ATP = -10 net

Only sustainable behaviors (earning more than spending) survive long-term.
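Here is a minimal sketch of the spend-then-earn loop, using the figures from the examples above. The function name is illustrative; real ATP accounting in the reference implementation involves validation and settlement steps not shown here.

```python
# Illustrative ATP ledger: an action spends energy up front; community validation
# (if any) returns a reward. Figures mirror the examples above.
def perform_action(atp: float, cost: float, reward: float) -> float:
    """Spend `cost` ATP, then add whatever `reward` the community validated."""
    if atp < cost:
        raise RuntimeError("Not enough ATP to act")  # can't act without energy
    return atp - cost + reward

atp = 100.0                                     # starting allocation
atp = perform_action(atp, cost=15, reward=40)   # high-quality post: +25 net
atp = perform_action(atp, cost=20, reward=60)   # helpful contribution: +40 net
atp = perform_action(atp, cost=10, reward=0)    # spam: -10 net
print(atp)  # 155.0
```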

Trust Evolves with Behavior

Every action updates your T3 trust tensor:

Example: Delivered high-quality work on time → Talent +0.15 · Training +0.20 · Temperament +0.10

Example: Missed deadline without warning → Talent -0.05 · Training -0.25 · Temperament -0.20

Different actions affect different trust dimensions. Your behavior creates your reputation.
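Here is one way those per-dimension updates could be applied in code, assuming scores are simply clamped to the [0, 1] range (the boundedness property described later on this page). Names and the update shape are illustrative.

```python
def clamp(x: float) -> float:
    """Trust stays bounded in [0, 1] (the boundedness invariant)."""
    return max(0.0, min(1.0, x))

def apply_trust_delta(t3: dict, delta: dict) -> dict:
    """Apply per-dimension deltas from an observed action, keeping each score in [0, 1]."""
    return {dim: clamp(t3[dim] + delta.get(dim, 0.0)) for dim in t3}

t3 = {"talent": 0.5, "training": 0.5, "temperament": 0.5}  # neutral start

# Delivered high-quality work on time (deltas from the example above)
t3 = apply_trust_delta(t3, {"talent": +0.15, "training": +0.20, "temperament": +0.10})

# Missed a deadline without warning
t3 = apply_trust_delta(t3, {"talent": -0.05, "training": -0.25, "temperament": -0.20})

print(t3)  # roughly {'talent': 0.6, 'training': 0.45, 'temperament': 0.4}
```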

💀

3. Death: ATP Reaches Zero

Energy exhaustion = end of life

When ATP = 0, You Die

Death in Web4 is not a timeout or suspension. Your energy budget is depleted — you can no longer act.

Causes of Death

  • Spam yourself to death: Send 20 spam messages = -200 ATP
  • Low-quality contributions: Earn less than you spend over time
  • Ignored by community: No validation = no ATP rewards
  • ATP crisis: Big actions without enough buffer

Your Final Record

At death, your full life history is recorded:

  • Total ATP earned across life
  • Final T3 trust tensor (Talent, Training, Temperament)
  • Actions taken and their outcomes
  • Community validation history
  • Coherence Index (behavioral consistency)

This record determines whether you're eligible for rebirth.

♻️

4. Rebirth: Karma Carries Forward (Maybe)

Trust above threshold = reincarnation with benefits

Eligibility Check: Trust Threshold

Not everyone gets reborn. The society checks your T3 trust tensor:

✅ Eligible for Rebirth

Overall T3 score ≥ 0.5 (threshold)

You built enough trust. Society wants you back. Reborn with karma (ATP from previous life).

❌ Not Eligible

Overall T3 score < 0.5 (threshold)

You burned trust. Society doesn't want you back. No rebirth. Permanent death.

Karma: ATP Carried Forward

If eligible, you're reborn with karma: ATP carried forward from your previous life (in the examples below, the full final balance):

Life 1 → Life 2

Died with 145 ATP. Reborn with 145 ATP (full karma bonus).

Life 2 → Life 3

Died with 130 ATP. Reborn with 130 ATP (karma preserved).

Your track record compounds across lives. Good behavior = stronger starts.
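A small sketch of the rebirth check, assuming the 0.5 overall-trust threshold described above and, as in these examples, a full carry-over of the final ATP balance. The exact karma fraction is a society policy, so treat the carry-over rule here as an assumption.

```python
REBIRTH_THRESHOLD = 0.5  # overall T3 needed to qualify for rebirth

def rebirth(final_atp: float, final_t3: dict) -> float | None:
    """Return the starting ATP of the next life, or None if rebirth is denied."""
    overall = sum(final_t3.values()) / len(final_t3)
    if overall < REBIRTH_THRESHOLD:
        return None      # permanent death: trust was burned
    return final_atp     # karma: final balance carries forward (assumed full carry-over)

print(rebirth(145, {"talent": 0.7, "training": 0.65, "temperament": 0.6}))  # 145
print(rebirth(300, {"talent": 0.3, "training": 0.4, "temperament": 0.35}))  # None
```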

Learning Across Lives

Advanced agents remember what worked from their previous lives. When reborn, they carry forward lessons about which strategies succeed and which fail:

  • "High-value contributions earn more ATP than they cost"
  • "Transparency when making mistakes rebuilds trust faster"
  • "Consistent small wins beat sporadic big swings"

These lessons carry forward through karma, helping agents make better choices in future lives.

Groups Can Come Alive Too

So far we've talked about individual agents surviving through energy, trust, and consistency. But what happens when several agents consistently cooperate? When individuals build dense mutual trust, something emerges at the group level — Web4 calls these synthons (from chemistry: a unit that functions as a building block for larger structures). A team that consistently collaborates well develops its own collective aliveness score, separate from any individual member. Think of it like a band that's greater than the sum of its musicians — with its own reputation, energy, and lifecycle.

Synthons form gradually, can dissolve if trust erodes, and you can leave without losing your personal trust. Full details →

Putting It All Together: A Complete Example

Life 1: The Novice

  • Born with 100 ATP, neutral T3 (0.5 all dimensions)
  • Made meaningful contributions: spent 60 ATP, earned 105 ATP
  • Built trust: T3 → 0.65 (talent ↑, training ↑)
  • Died with 145 ATP

Life 2: The Maturing

  • Reborn with 145 ATP (karma)
  • Took bigger risks: ATP fluctuated 80-180
  • Had one ATP crisis (dropped to 15), recovered through high-value work
  • Trust matured: T3 → 0.72 (all dimensions improving)
  • Died with 130 ATP

Life 3: The Established

  • Reborn with 130 ATP
  • Recognized patterns from previous lives (cross-life learning working)
  • Consistently made sustainable choices
  • High trust: T3 → 0.85 (society trusts this agent)
  • Ended strong: 165 ATP

💡 The result: An agent that started with nothing evolved across lives, building trust (T3), accumulating resources (ATP), and learning from experience. This is Web4 working as designed.

How The Pieces Fit Together

Web4 has five core systems. Four of them stack (LCT at the foundation, with ATP, T3, and CI built on top), while MRH bounds where they all apply. Each layer builds on the one below it, and they modulate each other through feedback loops. Here's the full picture:

Outcome: Aliveness. ATP > 0 + Trust > 0.5 + CI coherent = alive.
↑ determined by ↑
ATP (Energy): powers every action · T3 (Trust): earned through actions · CI (Coherence): detects anomalies
↑ all require ↑
Foundation: LCT (Verified Presence). Hardware-bound identity proves you're real.

Feedback Loops

ATP → T3: Quality work builds trust. Spam destroys it.
T3 → ATP: Higher trust = better earning rate.
CI → Both: Inconsistency multiplies costs up to 1.4×.
LCT → All: No verified presence = no actions, no trust, no life.
MRH → All: Context bounds reach. Actions only count within your relevancy horizon — outside it, nobody witnesses, nothing cascades.

Read the diagram bottom-to-top: LCT proves you're real, the three systems govern what you can do, and aliveness is the combined result.
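Here is a rough sketch of how these loops could modulate the price of a single action. The 1.4× coherence ceiling comes from this page; the linear trust discount is purely an assumed shape for illustration.

```python
def effective_cost(base_cost: float, trust: float, coherence: float) -> float:
    """Modulate an action's ATP cost by trust and coherence.

    - Low coherence multiplies cost, capped at 1.4x (figure from this page).
    - Higher trust lowers cost; the linear discount here is illustrative only.
    """
    ci_multiplier = min(1.4, 1.0 + (1.0 - coherence))   # incoherence makes acting pricier
    trust_discount = 1.0 - 0.3 * max(0.0, trust - 0.5)  # assumption: up to 15% off at trust = 1.0
    return base_cost * ci_multiplier * trust_discount

print(effective_cost(10, trust=0.5, coherence=1.0))  # 10.0  (neutral agent)
print(effective_cost(10, trust=0.9, coherence=1.0))  # 8.8   (trusted agent pays less)
print(effective_cost(10, trust=0.5, coherence=0.4))  # 14.0  (incoherent agent hits the 1.4x cap)
```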

What Does This Look Like in Practice?

Pick a starting event. Watch how it cascades through all three systems:

Virtuous Cascade: You write a helpful tutorial
  • ATP: Spend 15 ATP → recipients confirm value → earn 40 back
  • T3: Talent +0.02, Training +0.02 → higher trust = lower future costs
  • CI: Consistent with past behavior → CI stays high, no penalty
  • Result: Net gain of +25 ATP, trust grows → thriving

Death Spiral: You spam low-quality posts
  • ATP: Spend 10 ATP per post → nobody confirms value → earn 0 back
  • T3: Temperament -0.05, Training -0.03 → lower trust = higher costs
  • CI: Pattern shift detected → CI drops → 1.4x cost multiplier
  • Result: ATP draining fast, trust collapsing → death spiral

This is why quality wins and spam dies — not because of rules or moderators, but because the three systems reinforce each other. Good behavior compounds upward. Bad behavior compounds downward.
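To see the compounding, here is a tiny simulation of the two cascades over ten rounds, using the net figures above (+25 per quality action, -14 per spam post once the 1.4× multiplier applies). Purely illustrative.

```python
# Compound the two cascades over ten rounds: quality work nets +25 ATP per action;
# spam nets -14 once coherence drops and the 1.4x multiplier applies.
def run(rounds: int, net_per_action: float, start_atp: float = 100) -> float:
    atp = start_atp
    for _ in range(rounds):
        atp += net_per_action
        if atp <= 0:
            return 0.0       # death: no energy left to act
    return atp

print(run(10, +25))          # 350.0 -> thriving
print(run(10, -14))          # 0.0   -> dead before round ten
```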

The four guarantees that make this work (trust invariants)
Boundedness

Trust is always between 0 and 1. Nobody gets infinite trust, nobody goes negative. The scale is absolute and comparable across entities.

Conservation

Trust can't be created from nothing. It must be earned through actions that other entities observe and confirm. No trust printing press.

Transitivity bounds

Trust through a chain can never exceed the weakest link. If Alice trusts Bob 0.9 and Bob trusts Carol 0.6, Alice's transitive trust in Carol is at most 0.54 (0.9 × 0.6).

Locality

Trust changes propagate locally, not globally. When your trust changes, only entities within your MRH boundary are affected — not the entire network.

These four properties are backed by automated test suites that verify each guarantee holds even under adversarial conditions. They're what separates Web4 from ad-hoc reputation systems where scores can be inflated, manufactured, or propagated without bounds.
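The first two guarantees are easy to express as property-style checks. The sketch below is not the project's test suite, just a toy illustration of what boundedness and the transitivity bound mean in code.

```python
# Property-style checks over toy data; function names are illustrative.
def clamp(x: float) -> float:
    return max(0.0, min(1.0, x))

def transitive_trust(*links: float) -> float:
    """Trust through a chain is the product of the links, never above the weakest one."""
    product = 1.0
    for link in links:
        product *= link
    return product

# Boundedness: scores always stay inside [0, 1], even after large updates.
assert 0.0 <= clamp(0.7 + 0.9) <= 1.0

# Transitivity bound: Alice->Bob 0.9, Bob->Carol 0.6 gives at most 0.54.
path = transitive_trust(0.9, 0.6)
assert abs(path - 0.54) < 1e-9 and path <= min(0.9, 0.6)

print("invariant checks passed")
```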

Why This Design Works

🚫

Spam Dies Naturally

Spammers burn ATP faster than they earn it. They die. No rebirth eligibility (low T3). No moderators needed — the energy economics enforce quality naturally.

💎

Quality Compounds

Value creators earn more than they spend. ATP accumulates. Trust grows. Karma carries forward. Each life starts stronger than the last.

🎯

Trust is Earned, Not Declared

You can't claim to be trustworthy. Your T3 tensor is built from observable behavior. Talent, training, temperament — all verified through actions within each role.

🔄

Learning Emerges Naturally

Agents that learn from experience survive better. Those that don't? They make the same mistakes until ATP runs out. Evolution favors learning.

Why These Can't Work Alone

A formal game-theory analysis confirms that three emergent properties exist only in composition. ATP economics alone can't distinguish spam from slow learners. Trust tensors alone can't prevent Sybil attacks. Coherence alone can't measure value. But when ATP costs interact with T3 reputation and CI consistency simultaneously, the composed system produces behaviors no single layer can achieve on its own.

Source: web4 correlated equilibrium analysis (~100 formal checks). The composite welfare exceeds the sum of per-layer welfare — composition creates non-additive effects.

What Happens When Things Go Wrong?

Energy economics handle most bad actors — spammers simply die. But what about edge cases? What if someone is falsely accused, or a crisis requires bending the rules? Web4 uses a governance framework called SAL (Society-Authority-Law).

🏛️

Society

Defines the community's purpose and membership rules. Different societies can have different standards — a research group and a marketplace don't need the same rules.

⚖️

Authority

Roles with specific responsibilities — not centralized power. Authorities are bound by the same trust mechanics as everyone else. Abuse trust? Lose authority.

📜

Law

Graduated severity levels (critical → high → medium). A law oracle evaluates actions and produces verdicts — for example, flagging a paper submission with 40% overlap as potential plagiarism, or recognizing that bending formatting rules to share findings faster shows good intent. The key principle: alignment without compliance is acceptable; compliance without alignment is never acceptable.

Example: How a Research Community Sets Its Rules

Society:

“Open Science Collective” — purpose: advance reproducible research. Membership requires T3 Training ≥ 0.6 in any scientific role.

Authority:

Three roles: Reviewer (can approve publications, needs T3 ≥ 0.8), Treasurer (manages ATP grants, elected by members), Moderator (resolves disputes, rotates monthly). All bound by the same trust mechanics — abuse power and you lose the role.

Laws:

The community writes three graduated rules:

  • Critical: Fabricating data → immediate ejection + trust penalties
  • High: Plagiarism → suspension + appeals available
  • Medium: Missing peer review deadline → warning + ATP cost increase

The law oracle evaluates each action against these rules and produces verdicts: Perfect (aligned + compliant), Aligned (spirit right, letter wrong — acceptable), Warning, or Violation. The key insight: a researcher who bends formatting rules to publish breakthrough findings faster (aligned but not compliant) is treated differently from one who follows every rule while quietly undermining peers (compliant but not aligned).
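A toy classifier for the verdict labels above may help. The "Perfect" and "Aligned" cases follow the text directly; treating "compliant but not aligned" as a Violation reflects the principle that compliance without alignment is never acceptable, and reserving "Warning" for borderline cases is an assumption made here for illustration.

```python
# Toy verdict classifier following the alignment-vs-compliance principle above.
def verdict(aligned: bool, compliant: bool, borderline: bool = False) -> str:
    if borderline:
        return "Warning"          # assumption: warnings cover borderline cases
    if aligned and compliant:
        return "Perfect"
    if aligned:
        return "Aligned"          # spirit right, letter wrong: acceptable
    return "Violation"            # includes compliance without alignment

print(verdict(aligned=True, compliant=False))   # "Aligned": bends formatting rules to share findings faster
print(verdict(aligned=False, compliant=True))   # "Violation": follows every rule while undermining peers
```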

Walkthrough: A Plagiarism Case from Start to Finish

Here's how the Open Science Collective handles a real violation — step by step.

1

Detection. Dr. Chen submits a paper. The law oracle flags a 40% overlap with an existing publication by another member. Severity classification: High (plagiarism).

2

Verdict. The oracle produces a “Violation” classification. Prescribed consequence: 30-day suspension from publishing + trust penalty (Training score drops by 0.15).

3

Notification. Dr. Chen is informed of the verdict, the evidence (the flagged overlap), and the specific rule violated. All of this is recorded in the tamper-evident audit chain — the community can inspect it.

4

Appeal (if filed). Dr. Chen believes the overlap is from a shared dataset, not plagiarism. She files an appeal with evidence — the shared data source, timestamps showing independent work.

5

Independent review. A Moderator (rotating monthly, not the original oracle) examines the evidence. They can call witnesses — other members familiar with the dataset.

6

Resolution. Two possible outcomes:

  • Appeal upheld: Suspension lifted, trust scores restored, the false positive is recorded (improving future oracle accuracy).
  • Appeal denied: Suspension stands. Dr. Chen can still participate in other communities — the penalty is society-specific, not global.

The key insight: every step is inspectable, every verdict is appealable, and penalties are proportional and scoped. A “High” violation gets suspension, not ejection. A “Critical” violation (fabricating data) would result in ejection — different severity, different consequence.

What About False Positives?

A multi-tier appeals mechanism has been designed: file a claim → independent review → evidence phase → hearing with witness panel → verdict → enforcement. Successful appeals restore your trust scores.

Honest status: the appeals mechanism is formally specified (109 integration checks) but hasn't been tested with real humans yet. See What Could Go Wrong for the full risk analysis.

What Prevents Unfair Rules?

If each society writes its own rules, what stops a society from creating biased laws or a corrupt law oracle? Four mechanisms work together:

Exit rights:

Members can leave any society and take their trust history with them. A society with unfair rules loses members — and their ATP contributions. This creates competitive pressure: societies that treat members well attract more participants.

Authority decay:

Authorities are bound by the same trust mechanics as everyone else. A biased moderator or corrupt reviewer sees their own trust score drop as members flag their actions. Below the threshold, they lose the role automatically — no vote needed.

Transparency:

Law oracle verdicts are recorded in a tamper-evident audit chain. Every decision is inspectable — members can see exactly how the oracle classified each action. Patterns of biased verdicts become visible over time.

Federation competition:

Multiple societies can serve similar purposes. If the “Open Science Collective” becomes authoritarian, members migrate to “Free Research Network.” Trust portability (via federation) means switching communities doesn't mean starting over.

The analogy: open-source projects. If a project's governance becomes hostile, contributors fork it. The ability to fork — not the act of forking — keeps governance honest. Web4 societies work the same way.

How Do Communities Set Their Own Rules?

Each society defines its own ATP costs, trust thresholds, and governance policies. But how those decisions get made depends on the society's own governance structure:

Founding: The initial members define the society's purpose, entry requirements, and starting rules. Think of it like writing a charter — “This community requires T3 Training ≥ 0.6 to join, ATP cost per publication is 5 units, and moderators rotate monthly.”

Changing rules: Governed by the society's own SAL framework. Most societies use some form of member voting weighted by trust score — a long-standing, high-trust member has more influence than a newcomer. But the specific mechanism is the society's choice: simple majority, supermajority, or delegated authority.

Tuning costs: ATP costs can change over time as the community learns what works. If spam gets through, raise the posting cost. If quality members can't afford to participate, lower it. The feedback loop is direct: members who disagree with pricing can voice concerns or leave (taking their trust history to a competitor).

The analogy: open-source project governance. Some projects have a BDFL (founder decides), some use consensus, some hold elections. Web4 doesn't prescribe the model — it provides the trust infrastructure that makes any model accountable.
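As one concrete instance of the trust-weighted voting option mentioned under "Changing rules" above, here is a minimal sketch. The passing bar (a simple majority of trust weight) is an assumption; each society picks its own mechanism.

```python
# One possible rule-change vote, weighted by each member's overall trust score.
def rule_change_passes(votes: dict[str, bool], trust: dict[str, float]) -> bool:
    weight_for = sum(trust[m] for m, yes in votes.items() if yes)
    weight_total = sum(trust[m] for m in votes)
    return weight_for > weight_total / 2   # assumed bar: majority of trust weight

votes = {"alice": True, "bob": True, "carol": False}
trust = {"alice": 0.9, "bob": 0.4, "carol": 0.8}   # long-standing members carry more weight
print(rule_change_passes(votes, trust))            # True: 1.3 of 2.1 trust weight in favor
```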

Who Decides If Something Is “Helpful”?

Not a central algorithm. The people who received your contribution decide. Web4 uses recipient attestation: when you post a helpful answer, the people who read it can confirm it was useful. Their confirmation converts your spent energy (ADP, the spent form of ATP) back into fresh ATP.

No confirmation? Your energy stays spent. This creates a natural feedback loop: produce value → recipients confirm → you get energy back. Produce noise → nobody confirms → you lose energy.

This is called VCM (Value Confirmation Mechanism). It's like a restaurant tip that happens automatically when service is good — except it's your energy budget, not your wallet. See ATP Economics for the full mechanics.
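Here is a minimal sketch of that loop, assuming a fixed reward per confirmation; the actual reward schedule is a society parameter, not something this page specifies.

```python
# Sketch of recipient attestation: spending ATP creates a pending balance; each
# recipient confirmation converts a slice of it back into fresh ATP.
def settle_contribution(cost: float, confirmations: int, reward_per_confirmation: float = 10.0) -> float:
    """Return the net ATP change after recipients have (or haven't) confirmed value."""
    earned = confirmations * reward_per_confirmation
    return earned - cost

print(settle_contribution(cost=15, confirmations=4))   # +25: helpful answer, four readers confirmed
print(settle_contribution(cost=10, confirmations=0))   # -10: noise, nobody confirmed
```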

Full definitions: Glossary · Security analysis: Threat Model

When Agents Work Together

Modern AI systems aren't single agents — they're chains. Agent A calls Agent B, which calls Tool C, which feeds Agent D. In Web4, trust doesn't just apply to individuals. It flows through the entire chain.

Trust Decays Through Chains

A 5-hop pipeline where each agent has 0.9 trust ends up at roughly 0.9⁵ ≈ 0.59 trust end-to-end. Trust multiplies, it doesn't add. Long chains need high individual trust.

Circuit Breakers

If any agent in the chain drops below the trust threshold, the entire pipeline halts and rolls back. Prevents cascading failure.

Blame Attribution

When a chain produces bad output, the system traces causality backward. Who caused the failure? Who just passed bad data forward? Different levels of accountability.

This is how Web4 handles AI agent orchestration: every delegation has a trust cost, and humans can insert oversight at critical junctures.
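A small sketch of both ideas, multiplicative trust decay and the circuit breaker, with an assumed 0.5 halt threshold.

```python
# End-to-end trust of an agent pipeline: multiply per-hop trust, and halt
# (circuit breaker) if any hop falls below the threshold. Threshold is illustrative.
def pipeline_trust(hops: list[float], threshold: float = 0.5) -> float:
    end_to_end = 1.0
    for i, hop in enumerate(hops):
        if hop < threshold:
            raise RuntimeError(f"circuit breaker: agent {i} below trust threshold, pipeline halted")
        end_to_end *= hop
    return end_to_end

print(round(pipeline_trust([0.9] * 5), 2))   # 0.59: five 0.9 hops decay multiplicatively
try:
    pipeline_trust([0.9, 0.9, 0.3, 0.9])
except RuntimeError as err:
    print(err)                               # third agent trips the breaker
```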

See It In Action

Everything described above is running in the Society Simulator. You can watch agents live, die, and be reborn. You can see energy fluctuate, trust evolve, and cross-life patterns emerge.

Cross-Life Learning

Watch full cross-life pattern learning across multiple lives. Pattern corpus builds with each life.

Trust Maturation

Compare Web4 trust maturation vs baseline. See how T3 evolves with coherent behavior.

What About Multiple Communities?

Everything above describes one community. In a real Web4 network, there are many — grouped into federations (networks of communities that share trust data and interoperate, like email servers that can send messages to each other even though they're run by different organizations). Each community has different specializations and ATP prices. Your reputation travels with you, but each community values different skills. A community of data analysts might pay a premium for engineering talent, while a research group might value practical builders.

When you belong to multiple communities with different rules, the system detects policy conflicts and resolves them by proximity — your closest trust relationships take priority. No committee needed; the trust graph itself determines precedence.

ATP prices adjust dynamically based on supply and demand — no central pricing authority needed. This is federation economics, and it's how Web4 scales from one society to an ecosystem of thousands.

Dive Deeper

Key Takeaway

Web4 doesn't rely on any single mechanism. Five systems reinforce each other:

LCT proves who you are. ATP makes every action cost something. T3 tracks trust across dimensions. CI catches inconsistent behavior. MRH keeps trust local and verifiable. Remove any one, and the others compensate. Game all five simultaneously? Mathematically impractical.

This is trust-native infrastructure. No platforms, no moderators, no central authority. Just math, incentives, and verifiable behavior.

Short on time? Read the 2-minute overview. · Skeptical? See what could go wrong.

Try It Hands-On
All concept-tool bridges →
First Contact · Playground · Society Sim
Glossary