The Internet Has a Trust Problem
You already know this. You feel it every day. The question is: what can actually be done about it?
Problems You Already Know
Spam, Bots, and Bad Actors Are Winning
Creating accounts is free. Sending messages is free. There's no cost to flooding platforms with garbage. The result: an endless arms race between spammers and moderation teams. Spammers always have the advantage—they only need one message to get through.
Root cause: Actions have no cost. Bad actors can operate indefinitely at zero expense.
Your Reputation Is Trapped in Silos
You've spent years building reputation on one platform. Then you try a new service: you're treated like a stranger. All that trust, all that history—worthless. Each platform is an island. Your reputation starts at zero, every time.
Root cause: Platforms own your reputation. It's stored in their databases, not attached to you.
Bad Actors Get Unlimited Fresh Starts
Banned? Create a new account. Caught scamming? New email, new identity, back in business. The record of past behavior is trivially discarded. Consequences are temporary. Bad actors face no compounding penalties—they just reset.
Root cause: Identity is cheap. Creating a new account costs nothing. Past behavior doesn't follow you.
AI Agents Are Making Everything Worse
AI can now generate convincing text, images, and video. It can operate accounts at scale. It can impersonate humans. And there's no reliable way to verify whether you're talking to a human, an AI, or a human-AI hybrid. The tools for deception are outpacing the tools for verification.
Root cause: Identity systems were designed for humans. They don't account for AI agents that can copy, fork, and run in parallel.
Platforms Control Everything (And You Have No Recourse)
Banned? Good luck appealing. Shadowbanned? You might never know. Platform changed its policies? Too bad. Your account, your data, your reputation—all at the mercy of corporate decisions. You're a guest in someone else's house, always.
Root cause: Centralized control. You don't own your identity or reputation—platforms do.
Why Previous Solutions Failed
Moderation Armies
Hire more moderators, build better AI filters, play whack-a-mole forever.
Failure: Reactive, not preventive. Defense costs scale with the volume of abuse without ever eliminating it. Spammers always find new angles.
CAPTCHAs and Verification
Prove you're human with puzzles, phone numbers, ID verification.
Failure: AI solves CAPTCHAs now. Phone numbers are cheap. ID verification is privacy-invasive and centralized.
Blockchain Identity (Web3)
Use wallet addresses as identity. Self-sovereign, decentralized.
Failure: Wallets are free to create. No behavior history. Key theft means permanent identity loss. No spam prevention.
Social Login (Sign in with Google)
Use a major platform as identity provider. Convenient, established.
Failure: Central point of control. Platform can revoke access. No portable reputation. Same silo problem.
Common thread: All these solutions treat symptoms, not causes. They add friction for everyone instead of making bad behavior economically irrational. They don't attach consequences to identity in a way that persists.
What Would Actually Work?
A solution that addresses root causes would need to:
Make bad behavior expensive
Every action should cost something. Spamming 1000 messages should drain resources. Quality contributions should earn resources back. Bad actors should exhaust themselves.
Make identity expensive to fake
Creating a new identity should require physical hardware, not just an email address. Multiple independent witnesses should attest to your existence. Creating thousands of fake accounts should require buying thousands of devices — expensive and slow.
Make reputation portable and permanent
Your trust should follow you across platforms. Good behavior should compound. Bad behavior should create permanent records visible to future interactions. No more fresh starts for serial abusers.
Work for humans AND AI agents
The same trust framework should apply whether you're human or AI. AI agents should be verifiable, bounded, and accountable. Their creators should be on the hook for their behavior.
Be decentralized (but practical)
No central authority controlling identity. No single point of failure. But also actually usable—not requiring cryptocurrency expertise or gas fees for every action.
This Is What Web4 Proposes
Web4 is a working proposal for trust-native internet infrastructure. It addresses each root cause with a specific mechanism:
Four ideas, introduced one at a time below
Each concept is explained in plain English with its own card. Don't worry about acronyms — they're just shorthand. The ideas are what matter.
1. Energy Budget
Every action costs energy from a personal budget. Quality contributions earn energy back. Spam burns through it with no return — spammers literally run out of fuel.
Shorthand: ATP (Allocation Transfer Packets)
Learn about energy budgets →
2. Hardware-Bound Identity
Your identity is tied to your device's security chip — the same kind that protects Face ID and fingerprints. Creating a fake identity means buying a new physical device. Multiple devices witness each other for extra security.
Shorthand: LCT (Linked Context Token)
Learn about identity →
3. Multi-Dimensional Trust
Instead of one trust score, you get separate scores for your skills, your training, and your behavior — and they're different for each role. Your trust as a data analyst doesn't affect your trust as a cook.
Shorthand: T3 (Trust Tensor — Talent, Training, Temperament)
Learn about trust →
4. Trust Neighborhood
You only see messages from people your network trusts — like being in a room where strangers need an introduction before they can talk to you. Trust fades with distance: a friend-of-a-friend-of-a-friend is almost a stranger.
Shorthand: MRH (Markov Relevancy Horizon)
Learn about context boundaries →
4-Life is a simulation lab where you can watch these mechanisms in action. See societies form, trust networks emerge, spam die from energy exhaustion, and agents face real consequences.
Ready to see it in action instead of reading more?
Or keep scrolling for honest questions and answers about Web4.
Honest Questions
If you're skeptical, good. Here are the hard questions visitors ask, and honest answers. Looking for something specific? Open the topic index below.
▶ Browse all questions by topic (30+ FAQs)
Start here. First-time visitors most often want to know:
- If not crypto, what backs ATP?
- Who runs the infrastructure?
- How does the witness network bootstrap?
- Can a wealthy attacker buy 50 devices?
- Do youthful mistakes follow you forever?
- What about differing national laws?
- Isn't this just social credit?
- What if I lose my hardware?
Click any question to jump to it. The full topic index above has 40+ more.
Most asked
Is this deployed anywhere? Or purely theoretical?+
Research prototype with substantial implementation. This is not vaporware — there is real, tested code. But it's not a product yet. Here's what actually exists:
What's built and tested:
- Protocol specification: 100+ page Web4 whitepaper with formal definitions
- Reference implementations: ~47,000 lines of tested code — LCT lifecycle, T3/V3 tensors, ATP metering, governance (SAL), federation, witness protocol, MRH graphs
- Security validation: 424 attack vectors across 84 tracks, all defended. Sybil resistance formally proven (5 theorems). Incentive compatibility proven — honest behavior is mathematically more profitable than gaming
- Hardware integration: TPM2 binding validated (Intel TPM 2.0, EK certificate chain through 2049). Go LCT library (55 tests). Multi-device constellation enrollment working
- System integration: End-to-end pipeline (all subsystems chained), WASM browser validator for client-side trust verification, federation consensus at 38.5 tasks/sec throughput
- Interactive simulations: Society Simulator, Playground, and Karma Journey on this site
What's NOT built yet:
- Production deployment: No live network with real users. The gap between “simulations prove mechanics work” and “running with real humans” is the current frontier
- Economic validation: ATP pricing calibrated for simulations, not real markets. Whether the economics survive real human behavior is an open question
- Platform adoption: No platform integrates Web4 yet. The 5-tier adoption pathway (Wrapper → Observable → Accountable → Federated → Native) is designed but untested
Honest answer: This is research, not production. We don't attach timelines because honest research doesn't have them. The simulations prove the mechanics work in principle — the question is whether the economics survive contact with real human behavior. That's what a pilot would test.
In software terms: The specification is complete. The reference code is proof-of-concept quality (tested, but not hardened for production). What's needed next is a pilot (alpha) with real users.
When will Web4 be available to use?+
No timeline — and that's intentional. Honest research doesn't ship deadlines. The mechanics are proven in simulation; what's needed next is a pilot with real users in a real community (a university, a company, a platform).
The path: Pilot → Alpha → Beta → Production. Each stage tests something the previous one couldn't — pilot tests whether real humans behave like the simulations predict, alpha tests whether the economics work at small scale, beta tests scaling, and production is the internet-wide network. We're at the pilot-readiness stage.
Practical horizon: A pilot could start whenever a willing community (university, company, open-source project) partners with the research. The technology is ready for that step. Internet-scale deployment is years further — think “email in the 1980s,” not “launching next quarter.”
Why no dates? Setting artificial deadlines on research creates pressure to ship before the science is ready. Every “launch in Q3” promise in tech history has led to shortcuts. We'd rather be right than fast.
What if I lose my hardware? Is my identity gone forever?+
No—recovery is built in. LCT supports multiple linked devices. Lose your phone? Your laptop can attest to your identity. Lose both? Your witnesses can attest.
The design principle: make recovery possible but expensive. You need multiple witnesses to vouch for you, similar to how banks verify identity for account recovery. This prevents attackers from “recovering” someone else's identity while protecting legitimate users.
Trade-off: Recovery is slower than “forgot password” flows. You can't instantly regain access—the friction is intentional to prevent social engineering attacks.
If someone powerful lies about me and I'm new, does my truth even matter?+
Yes — and here's how. In Web4, a single accusation from even a high-trust entity doesn't override your track record. Trust is built from all your interactions, not from one person's claim about you. An established member saying “this newcomer is untrustworthy” is one data point — it lowers their trust in you, but doesn't destroy your trust score globally.
Several mechanisms protect newcomers specifically:
- Context boundaries (MRH): The liar's accusation only propagates within their local trust network, not the entire system. People outside their network never see it.
- Behavioral evidence outweighs claims: If you consistently produce quality work, your V3 scores from recipients build your reputation independently of the accuser's narrative. Actions speak louder than accusations.
- Accuser accountability: Making false accusations costs the accuser credibility. If their negative assessments consistently disagree with everyone else's experience of you, the accuser's own Veracity score drops. Lying is expensive.
- Multiple trust dimensions: T3 separates Talent, Training, and Temperament. Even if someone attacks your Temperament (“they're unreliable”), your Talent and Training scores remain based on your actual work quality.
Honest caveat: Newcomers are more vulnerable. With only a few interactions, a single negative assessment carries more weight relative to your thin history. The system gradually gets more resistant to manipulation as you build more evidence. The first ~30 actions are the most fragile period. This is a real design tradeoff: making newcomers more robust to attack would also make it easier for Sybil attackers to quickly establish credibility.
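As a rough illustration of why one accusation cannot override a track record, here is a toy veracity-weighted aggregate. The update rule, weights, and numbers are invented for this sketch; they are not the Web4 specification.

```python
def trust_score(assessments):
    """Aggregate (rating, assessor_veracity) pairs into one score.

    rating: 0.0 (untrustworthy) to 1.0 (trustworthy)
    assessor_veracity: how credible the assessor's past judgments were.
    """
    total_weight = sum(v for _, v in assessments)
    if total_weight == 0:
        return 0.5  # no evidence yet: neutral prior
    return sum(r * v for r, v in assessments) / total_weight

# An established member: 30 quality confirmations plus one hostile claim.
veteran = [(0.9, 0.8)] * 30 + [(0.0, 0.9)]
# A newcomer: 3 confirmations plus the same hostile claim.
newcomer = [(0.9, 0.8)] * 3 + [(0.0, 0.9)]

print(round(trust_score(veteran), 2))   # barely moves from 0.9
print(round(trust_score(newcomer), 2))  # drops much further: thin history
```

The same arithmetic is behind the caveat above: the fewer data points you have, the more weight one hostile assessment carries.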
How does the system decide who can see my messages?+
Through your trust network, not a central server. Every entity has a context boundary (MRH) that extends 3 hops through their trust relationships:
- Direct contacts (1 hop): People you've interacted with directly. Full visibility. Trust weight: 0.70
- Friends of friends (2 hops): People your contacts vouch for. Reduced visibility. Trust weight: 0.49
- 3rd degree (3 hops): Barely visible, minimal trust. Trust weight: 0.34
- Beyond 3 hops: Invisible. You don't exist to each other until someone in between introduces you
Think of it like a party: you can talk to your friends, and your friends can introduce you to their friends. But you can't walk up to a stranger across the room unless someone bridges the gap. No one has a complete view of the entire network — everyone sees only their local neighborhood.
Why this matters: Spam can't broadcast to everyone — a spammer has no trust connections, so their messages reach nobody. To reach people, you need real relationships. This is fundamentally different from email (anyone can message anyone) or social media (algorithmic distribution to strangers).
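The weights above follow a simple per-hop decay (0.49 ≈ 0.70², 0.34 ≈ 0.70³). A minimal sketch, assuming the decay factor and cutoff are exactly what those listed figures imply:

```python
DECAY = 0.70   # trust retained per hop (read off the listed weights)
MAX_HOPS = 3   # the context boundary: beyond this, mutual invisibility

def trust_weight(hops):
    """Trust weight for an entity `hops` relationships away."""
    if hops < 1:
        raise ValueError("hops counts relationships, so it starts at 1")
    if hops > MAX_HOPS:
        return 0.0  # outside the Markov Relevancy Horizon
    return DECAY ** hops

for h in range(1, 5):
    print(h, round(trust_weight(h), 2))  # 0.7, 0.49, 0.34, then 0.0
```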
If Web4 is opt-in, what happens when most people are still on Web2?+
Early adopters don't just talk to each other. Web4 is designed as a layer on top of existing platforms, not a replacement. Early adoption looks like this:
- Hybrid mode: A forum adds Web4 trust signals alongside existing moderation. Users without Web4 see the same forum. Users with Web4 see trust indicators on posts (verified identity, trust score, behavioral history). Both coexist.
- Trust islands: A small community (say, a coding forum) adopts Web4 internally. Members build trust with each other. When a second community adopts, their trust histories are already portable — instant credibility in the new space.
- Network effects compound: Each new community that adopts makes everyone's trust history more valuable. Think email in 1995 — it was useful even when most people didn't have it, and each new user made everyone else's email more useful.
Do existing platforms need to cooperate? Not necessarily. Web4 can overlay without platform cooperation — a browser extension or third-party service could attach trust signals to any platform. But cooperation makes it better: a forum that natively integrates trust scoring can weight content accordingly, not just display badges. The design supports both models — grassroots overlay and official integration.
What does daily life look like at 1% adoption? You install a browser extension. On a coding forum, you see trust badges next to 1 in 100 usernames — those are other Web4 users. Their posts aren't ranked higher (yet), but you can see at a glance that “this person has 0.82 trust in Python and a 6-month history of helpful answers.” Everyone else looks the same as before. You lose nothing; you gain a signal. As more people join, the signal becomes more useful — and platforms eventually notice that trust-badged users produce better content, creating incentive to integrate natively.
Honest caveat: Bootstrapping is the hardest phase. A trust score based on 2 platforms is less meaningful than one based on 200. The first communities to adopt will need to find Web4 valuable on its own terms (spam reduction, accountability) before network effects kick in.
Who decides what counts as “quality”? If helpful answers earn ATP, who judges “helpful”?+
The people who receive the work decide. Web4 uses a Value Confirmation Mechanism (VCM): you write a code review, the developer who received it confirms whether it helped. You post a tutorial, readers who learned from it confirm value. You can't rate your own work.
This means quality isn't decided by an algorithm or a moderation team — it's decided by the people closest to the actual value. A niche research paper might only get 3 confirmations, but if those 3 are high-trust experts in that field, their confirmations carry significant weight.
What about gaming? Confirming too generously lowers your own trust score (Veracity dimension). Rubber-stamping everything makes you less credible, which reduces the weight of your future confirmations. The system is self-correcting — but social dynamics are unpredictable, so this remains one of the actively researched challenges.
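The self-correction described above can be sketched as a toy update rule: a confirmer who approves everything drifts out of line with peers, and their Veracity (and so the weight of their future confirmations) drops. The rule and rate here are hypothetical illustrations, not the specified mechanism.

```python
def update_veracity(veracity, my_rating, peer_consensus, rate=0.2):
    """Lower veracity in proportion to disagreement with peers."""
    disagreement = abs(my_rating - peer_consensus)
    return max(0.0, veracity - rate * disagreement)

v = 0.9  # a confirmer starting with high credibility
# A rubber-stamper confirms full value (1.0) on work peers rate poorly.
for peer_consensus in [0.2, 0.3, 0.1]:
    v = update_veracity(v, my_rating=1.0, peer_consensus=peer_consensus)
print(round(v, 2))  # credibility has eroded after three bad calls
```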
If Web4 doesn't use cryptocurrency, what backs ATP? Why is it scarce?+
Received value backs ATP — not a currency peg, not a crypto reserve. ATP is a budget for participation, not a financial asset. You can't sell it for dollars, exchange it for Bitcoin, or cash out. Its only “value” is permission to act inside a community that cares about what you do next.
What makes it scarce:
- Capped issuance: A community mints a finite ATP pool. More members, more demand, same pool — so ATP matters exactly because it's limited.
- Recipient confirmation: You earn ATP only when someone else (the quality decider) confirms your contribution helped them. No confirmation, no ATP — so it can't be self-minted.
- 5% transfer burn: Every ATP transfer between entities burns 5%, draining the pool over time unless offset by new confirmed value. Circular shuffling shrinks the pool instead of growing it.
- Decay on inactivity: Idle ATP doesn't sit forever. Disuse returns capacity to the community, so hoarding has a cost.
Think of it like phone minutes on a shared plan — valuable because it's your turn to speak in a room that cares about what you say, not because you can sell the minutes. That's why there's no cryptocurrency, no gas fees, and no financial speculation. See ATP economics for the full mechanism.
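A back-of-envelope model of the transfer-burn rule above. Only the 5% burn rate comes from the text; the balances and amounts are placeholders.

```python
BURN = 0.05  # fraction destroyed on every transfer (from the text)

def transfer(sender, receiver, amount):
    """Move ATP between two balances; 5% leaves the pool entirely."""
    sender -= amount
    receiver += amount * (1 - BURN)
    return sender, receiver

# Circular shuffling shrinks the pool instead of growing it:
a, b = 100.0, 100.0
for _ in range(10):
    a, b = transfer(a, b, 10.0)
    b, a = transfer(b, a, 10.0)
print(round(a + b, 1))  # less than the 200.0 the pair started with
```

Twenty transfers of 10 ATP each burn 0.5 apiece, so the shared pool shrinks by 10 even though no one "spent" anything.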
What prevents groups of users from colluding to boost each other's trust?+
Multiple overlapping defenses. Trust cartels are one of the most studied attack vectors:
- MRH limits reach: Colluders can only inflate trust within their 3-hop neighborhood — they can't broadcast fake trust globally
- Statistical detection: Unusually dense mutual validation clusters trigger anomaly detection with 93%+ probability at 3+ members
- Behavioral consistency: Your Coherence Index flags when validation patterns don't match your other behavior — confirming everything a cartel member does while being selective with others creates a detectable signal
- Hardware cost: Each colluder needs real hardware-bound identity (LCT), making cartel scaling expensive
Honest caveat: Sophisticated collusion that mimics legitimate community behavior is the hardest case. The Threat Model page covers this in depth, including coalition thresholds and adversary profiles.
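One way to picture the statistical-detection bullet above: compare how often a candidate group validates each other versus outsiders. Real anomaly detection would be far more sophisticated; this minimal density check, with invented data, just shows the signal a cartel leaves.

```python
def validation_density(validations, group):
    """Fraction of a group's outgoing validations that stay in-group."""
    members = set(group)
    sent = [(a, b) for a, b in validations if a in members]
    if not sent:
        return 0.0
    internal = sum(1 for _, b in sent if b in members)
    return internal / len(sent)

validations = [
    ("a", "b"), ("b", "a"), ("a", "c"),
    ("c", "a"), ("b", "c"), ("c", "b"),   # a, b, c only validate each other
    ("d", "e"), ("e", "f"), ("d", "a"),   # ordinary outward-looking users
]
print(validation_density(validations, ["a", "b", "c"]))  # 1.0: suspicious
print(round(validation_density(validations, ["d", "e", "f"]), 2))
```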
How many people does a Web4 community need to work?+
As few as 5–10 active members, though the dynamics improve with scale. The system stabilizes after roughly 100 total quality actions across participants (see the cold-start walkthrough for a step-by-step breakdown).
Here's what changes with community size:
- 5–10 people: Functional, but trust scores are volatile. One person's bad day moves the whole graph. MRH neighborhoods overlap heavily, so everyone sees everything.
- 20–50 people: Trust signals become statistically meaningful. Roles start to differentiate. The Gini coefficient converges toward the designed 0.25.
- 100+ people: Emergent structure appears — clusters, bridge nodes, specialists. First-mover advantage fades (30-action half-life). This is where the system starts to feel like an ecosystem, not a group chat.
Honest caveat: Small communities (~5 members) are more sensitive to individual behavior. A single bad actor is 20% of the network. The protocol still works, but its self-correcting properties are slower to activate. Think of it like a small town vs. a city — both work, but social dynamics are different.
Why “Web4”? Isn't that confusing given Web3?+
Fair question. The name implies succession, but the relationship to Web3 is oppositional, not evolutionary. Here's the lineage:
- Web1 (read): Static pages. You consumed content someone published.
- Web2 (read/write): Platforms. You create content, but platforms own your data and identity.
- Web3 (read/write/own): Blockchain-first. You own tokens, but trust is financial — whoever has the most tokens has the most influence.
- Web4 (read/write/trust): Behavior-first. You own your identity through hardware, and influence comes from demonstrated trustworthiness — not tokens, not followers, not money.
Web3 tried to solve ownership with financial instruments. Web4 argues the deeper problem was never ownership — it was trust. You can own your data and still be drowning in spam, fake reviews, and bot armies. Web4 addresses the layer Web3 skipped.
Why not a different name? We considered it. But “Web4” signals what this actually is: the next architectural layer for the internet, building on what came before while fixing what was missing. The “4” is a version number, not an endorsement of “3.”
Going deeper — 6 topics, 40+ questions
Why is this better than [existing solution X]?+
It's not “better” at everything. Every system has trade-offs:
- vs Passwords: More secure, but requires hardware. Won't work on borrowed devices.
- vs OAuth (Google login): No central point of control, but more complex to implement.
- vs Blockchain wallets: Harder to create fake IDs, but not as portable across chains.
- vs Biometrics: Can't be stolen by breach, but requires specific device support.
What Web4 optimizes for: Economic resistance to spam/abuse while preserving privacy and decentralization. If you need something else, another solution may fit better.
How does the transition work? Do I have to switch everything at once?+
Gradual, not all-or-nothing. The Web4 spec defines a 5-tier adoption pathway — each tier is independently useful, and you don't need to commit to the full stack:
- Wrapper: Add a verifiable identity to your existing system. Zero code changes. (Like adding HTTPS to an existing site)
- Observable: Start tracking trust based on actual behavior. (Like the browser padlock icon — visible trust signals)
- Accountable: Stake energy on quality. Good work returns your investment; bad work costs you. (Like spam filters that actually work)
- Federated: Your reputation travels with you across platforms. (Like email's federated model)
- Native: Full Web4 stack — built from the ground up around verifiable trust.
The first four tiers are reversible — you can always roll back. Only the final tier (full Native) is a permanent commitment. Think of how HTTPS adoption worked: banks first, then e-commerce, then eventually the default everywhere.
Honest caveat: Gradual adoption means the system is only as strong as its coverage. A trust score based on 2 platforms is less meaningful than one based on 200. Network effects work both for and against adoption.
Want a concrete example? See “What's the concrete adoption path?” below for a Discord and Stack Overflow walkthrough.
What about people who can't afford devices with security chips?+
This is a real equity concern, not a dismissed one. If participation requires a TPM or Secure Enclave, then cost becomes a barrier. Several factors work in favor of accessibility:
- Hardware is already widespread: Most phones sold since ~2018 include security chips (Secure Enclave, Titan, TPM). Even budget Android devices increasingly ship with hardware-backed keystores. The threshold is a $50 phone, not a $1000 one.
- FIDO2 security keys: USB-based keys like YubiKey cost ~$25 and work with any computer. A single key can anchor an identity without needing a modern phone.
- Community witnessing: In regions where personal device ownership is low, shared community devices + witness-based attestation can bridge the gap. A village elder or community center can attest to presence.
Honest caveat: None of these fully solve the problem. The most marginalized populations — those without any device access — would need some form of sponsored onboarding. Web4 is not unique here: every digital system faces this. But a system that claims to be trust-native must take equity seriously, not hand-wave it. This remains an active design priority.
What would a Web4 app actually look like on screen?+
This is the most-asked question we get. Right now, Web4 exists as a research simulation — the site you're on. No production Web4 apps exist yet.
What we do have: conceptual wireframes showing what a Web4 inbox, hiring dashboard, and review page could look like — trust scores next to messages, unfakeable reputation badges, energy-weighted reviews. See them on the A Day in Web4 page.
You can also experience the mechanics right now through the Society Simulator (watch trust economies evolve) and Karma Journey (play through a trust lifecycle yourself). The concepts work — the question is what production tooling looks like, and that will emerge from real-world pilots.
Honest status: A browser extension or community plugin would be the first real-world step (see adoption tiers). No timeline — research takes as long as it takes.
Who is building this?+
Web4 is an open research project. The specification, implementation, and this site are all open source on GitHub. The work is research-driven, not commercially motivated.
There is no company, no token sale, no funding round. This is an attempt to answer a genuine research question: can you build internet infrastructure where trust is native rather than bolted on?
Why this matters: A project about trust should be transparent about its own origins. The code is readable, the reasoning is documented, and the limitations are stated honestly. Judge the ideas on their merit, not on who's behind them.
What's the concrete adoption path from simulation to real protocol?+
Web4 uses a 5-tier gradual adoption model, like how HTTPS replaced HTTP over a decade without breaking the web:
- Wrapper: Existing platforms add Web4 trust signals as metadata. No user changes needed. (Like adding HTTPS to an existing site)
- Observable: Trust scores become visible to users. Platforms surface reputation from Web4. (Like the browser padlock icon)
- Accountable: Actions have real consequences through ATP costs. Bad actors face energy costs. (Like spam filters that actually work)
- Federated: Communities connect their trust graphs. Portable reputation across platforms. (Like email's federated model)
- Native: Full Web4 protocol stack. Hardware-bound identity, society governance, cross-platform trust. (Like the web itself)
What Tier 1 actually looks like: Imagine a coding forum like Stack Overflow. Today, answers are ranked by votes. At the Wrapper tier, the forum adds a Web4 trust layer: each user's answers start building a T3 trust profile (Talent = answer accuracy, Training = domain expertise, Temperament = community behavior). The forum's existing UI doesn't change — but behind the scenes, a trust-weighted ranking emerges. Spam answers from new accounts cost ATP to post and earn nothing back. Within weeks, high-quality answers naturally rise above vote-gamed content. No blockchain, no tokens, no user-facing changes — just better signal-to-noise.
What Tier 2 (Observable) looks like: Same Stack Overflow, six months later. Now trust scores are visible. Next to each answer, you see the author's T3 breakdown: “0.87 Talent in Python, 0.91 Training in distributed systems, 0.72 Temperament.” You can tell at a glance whether this person has a track record in the topic they're answering about. A user with 10,000 reputation points but 0.31 Talent in security? Their security answers get naturally de-prioritized — not banned, just surfaced honestly. The browser padlock analogy: HTTPS didn't change how websites worked, it just made trust visible. Observable tier does the same for people.
What a community could try today: Imagine a Discord server with 200 members and a spam problem. Without any Web4 software, the community could adopt Web4 principles:
- Energy cost: New members earn posting rights by participating in welcome channels first (equivalent to ATP ramp-up for new entities)
- Behavioral trust: Track helpful vs. unhelpful contributions per member. After 50+ interactions, let high-trust members moderate (equivalent to T3 maturation)
- Portable reputation: When members join a sister server, carry their trust score over (equivalent to federated trust)
- Consequences: Spam costs the spammer's earned trust. No bans needed — low-trust members simply have less reach (equivalent to ATP/CI throttling)
This isn't Web4 — it's Web4 thinking applied with existing tools. A bot that tracks contribution quality and adjusts permissions accordingly is a manual Tier 1 implementation. The gap between “Discord bot that tracks trust” and “real Web4 wrapper” is hardware-bound identity and cross-platform portability.
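The "bot that tracks contribution quality and adjusts permissions" could be sketched as decision logic like this. The thresholds mirror the bullets above (50+ interactions before moderation rights), but the exact numbers are invented, and no Discord API is used — this is only the core logic such a bot would wrap.

```python
from collections import defaultdict

scores = defaultdict(lambda: {"helpful": 0, "unhelpful": 0})

def record(member, helpful):
    """Log one community judgment about a member's contribution."""
    scores[member]["helpful" if helpful else "unhelpful"] += 1

def trust(member):
    s = scores[member]
    total = s["helpful"] + s["unhelpful"]
    return 0.5 if total == 0 else s["helpful"] / total

def permissions(member):
    total = sum(scores[member].values())
    t = trust(member)
    if total >= 50 and t >= 0.8:
        return "moderator"   # cf. "after 50+ interactions" above
    if t >= 0.4:
        return "member"
    return "restricted"      # low trust means less reach, no ban needed

for _ in range(60):
    record("alice", helpful=True)
record("spammer", helpful=False)
record("spammer", helpful=False)
print(permissions("alice"), permissions("spammer"))
```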
Who would build this? The platform's own engineering team, or a third-party integration (like adding Stripe for payments). The incentive is concrete: platforms spend enormous resources on spam moderation, fake account detection, and content ranking. A Wrapper-tier integration replaces heuristic moderation with trust-weighted signals — Reddit estimated it spends ~$50M/year on content moderation. Even a 10% reduction pays for the integration.
The honest gap: All five tiers are designed and specified. Tiers 1-2 could be integrated into existing platforms today. But no platform has done so yet. The path from “working simulations” to “first real integration” requires a willing partner — a community, platform, or organization that sees value in trust-native infrastructure.
Who runs the infrastructure? How is this deployed?+
Web4 is designed as an open ontology, not a platform. Like email or the web itself, the infrastructure is federated — multiple independent operators can run nodes that interoperate. No single company controls it.
In practice: witness nodes can be run by universities, nonprofits, companies, or individuals. The trust system is designed so that no single operator can manipulate the network—collusion requires coordinating multiple independent parties.
Web4 identity is also designed to work with W3C Decentralized Identifier (DID) standards — the same standard used by governments and enterprises. Your LCT maps to a standard DID Document, so external systems can verify Web4 identity using protocols they already support.
Honest caveat: This is still early-stage research. Full deployment requires standardization, adoption, and tooling that doesn't exist yet. See the roadmap FAQ above for where things stand today.
Can't someone with lots of hardware create many identities?+
Yes, but it's expensive. Creating one LCT (Linked Context Token — Web4's hardware-bound identity) requires a physical device with a security chip (TPM or Secure Enclave — built into most modern phones and laptops). Creating 1000 fake identities means buying 1000 devices — thousands of dollars and physical logistics.
Compare to email: creating 1000 accounts costs nothing. The goal isn't to make fake-identity attacks impossible (nothing can), but to make them economically irrational for most attackers. Nation-state adversaries can always outspend; the system is designed to resist casual abuse, not unlimited resources.
Honest caveat: A sufficiently motivated adversary with a large budget can still attack. Web4 raises the floor, not the ceiling.
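The economics reduce to one multiplication. The $50 device price is an assumption borrowed from the equity answer elsewhere on this page, not a protocol constant:

```python
def sybil_cost(identities, device_price=50.0):
    """Cost of minting identities when each needs its own security chip."""
    return identities * device_price

# 1000 hardware-bound identities vs. 1000 free email accounts:
print(sybil_cost(1000), "vs", 0)
```

At any plausible device price, the attack cost grows linearly with the number of fake identities, which is exactly the floor the answer above describes.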
How do you bootstrap the initial witness network?
Bootstrap requires a seed network of trusted witnesses. This is the classic "who watches the watchmen" problem. Proposed approaches:
- Partner with existing identity providers (universities, employers) as initial witnesses
- Hardware attestation from device manufacturers (Apple, Google, etc.)
- Web of trust model where existing members vouch for newcomers
- Gradual rollout starting with high-stakes contexts (not consumer social)
What the first 100 interactions look like:
- Actions 1–3: Three founding members create the society with hardware-bound presence tokens. They witness each other — establishing the initial trust graph.
- Actions 4–20: Founders perform real work (writing governance rules, creating initial resources). Trust builds slowly from 0.50 — each quality contribution moves the needle by ~0.02.
- Actions 21–50: New members join, vouched for by founders. First-mover advantage exists but has a ~30-action half-life — newcomers doing quality work catch up to founders by action ~50.
- Actions 51–100: Roles emerge, specialization begins. The society's trust graph becomes rich enough that MRH boundaries create meaningful context. Wealth gap trends toward 0.25 — concentrated enough to reward quality, distributed enough to avoid oligarchy.
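The walkthrough above can be sketched as a toy model. The +0.02-per-quality-action gain and the ~30-action half-life come from the text; modeling the founders' head start as exponential decay is an illustrative assumption, not the protocol's specified trust dynamics:

```python
HALF_LIFE = 30  # first-mover advantage half-life, in actions (from the text)

def trust_after(actions: int, start: float = 0.50, gain: float = 0.02) -> float:
    """Trust builds from 0.50, each quality contribution adding ~0.02."""
    return min(1.0, start + gain * actions)

def founder_advantage(initial_gap: float, actions_since_join: int) -> float:
    """The founder/newcomer gap halves every ~30 actions of quality work."""
    return initial_gap * 0.5 ** (actions_since_join / HALF_LIFE)

# A newcomer joining at action 20 faces a gap of 20 * 0.02 = 0.40 ...
gap = trust_after(20) - trust_after(0)
for t in (0, 30, 60):
    print(t, round(founder_advantage(gap, t), 2))
# ... which decays to 0.20 after 30 actions and 0.10 after 60.
```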
See the Playground experiment “Where is the tipping point?” for a hands-on version of cold-start dynamics.
Honest caveat: Bootstrapping is genuinely hard. No perfect solution exists. This is an active research area, not a solved problem.
How many people does it take to start a Web4 community?
Fewer than you might think. The minimum viable community depends on what you need:
- 3–5 people: Enough for basic trust graph formation. Each person can witness others, and the MRH graph becomes meaningful with 2-hop connections. A small team or study group could start here.
- 10–20 people: Roles start to differentiate. Enough recipients to make ATP economics work — quality signals become statistically meaningful when at least 5–10 people can confirm value independently.
- 50+ people: Full society dynamics emerge. Specialization, reputation stratification, and the wealth-gap-trending-to-0.25 pattern from simulations become visible.
The key constraint isn't size — it's reciprocity density. Ten people who actively interact produce richer trust data than 1,000 passive accounts. Web4's economics naturally reward active communities: ATP flows to those who contribute value, so even a small group that does real work together builds meaningful reputation quickly.
See the Playground experiment “Where is the tipping point?” and the cold-start walkthrough above for concrete dynamics.
Don't hardware manufacturers (Apple, Intel) become the new gatekeepers?
This is a real concern. If identity requires a TPM or Secure Enclave, then hardware manufacturers have power over who can participate. Web4 mitigates this in several ways:
- Multiple standards supported: TPM (Intel/AMD), Secure Enclave (Apple), Titan (Google), and future open-source hardware. No single vendor lock-in.
- Attestation, not permission: Hardware provides cryptographic proof of presence — it doesn't need to “approve” your identity. The chip signs; it doesn't decide.
- Open specification: Any hardware that meets the attestation spec can participate. This invites competition rather than consolidation.
Honest caveat: This shifts trust from platforms to hardware supply chains. That's a different dependency, not zero dependency. And open standards can be captured — the history of standards bodies shows that well-resourced incumbents sometimes steer “open” specs to favor their implementations. Whether hardware dependency is better than platform dependency depends on whether supply chain competition proves more durable than platform competition. That's an empirical question, not a settled one.
What if my device is stolen? Do I lose my identity?
No. Web4 uses multi-device witness networks — your identity is spread across multiple devices (phone, laptop, security key). If one is stolen, the others can revoke the compromised device and approve a replacement.
When you report a compromise, a revocation cascade propagates through the witness network. The stolen device's keys become instantly invalid. Any entity that trusted the compromised device gets notified. Your trust history and reputation are preserved — only the compromised device is cut off.
Recovery works like a quorum: you set a threshold when adding devices (e.g., 2-of-3). As long as enough surviving devices agree, you can recover without losing your identity or accumulated trust.
Honest caveat: If all your devices are lost simultaneously, recovery requires social vouching from trusted witnesses — deliberately slow and hard, because easy recovery would mean easy identity theft. See the LCT explainer for the full lifecycle.
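The quorum rule above reduces to a threshold check. This is a sketch only: real recovery would verify cryptographic signatures from each surviving device, not count booleans:

```python
def can_recover(approvals: int, threshold: int) -> bool:
    """True if enough linked devices approve replacing a lost one."""
    return approvals >= threshold

# 2-of-3 policy: phone stolen, laptop and security key vouch for a new phone.
print(can_recover(approvals=2, threshold=2))  # True
# All devices lost at once: no quorum, fall back to social vouching.
print(can_recover(approvals=0, threshold=2))  # False
```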
If every energy transfer costs 5%, how do teams collaborate?
ATP can be transferred, but every transfer burns 5% — making circular farming unprofitable. Teams primarily earn energy together. When a team creates value jointly, each contributor receives their own energy reward based on their verified contribution.
Organizations work through shared context boundaries (MRH). Members of a team see each other's work, validate each other's contributions, and collectively build the team's reputation. But each person's individual energy and trust remain their own — you can't buy someone else's reputation, and a team member's bad behavior affects them, not you.
What about frequent transfers? If a team of 5 passes ATP back and forth for collaborative projects, the 5% fee compounds quickly — a chain of 10 transfers burns ~40% of the original amount. That's by design: the system incentivizes direct value creation (where everyone earns individually) over resource shuffling (where ATP moves between wallets). Teams that create together don't need to transfer; teams that delegate work use single transfers, not chains.
Honest caveat: The details of organizational identity and pooled contribution attribution are still being worked out. This is one of the harder design problems.
How does the 5% transfer tax actually prevent circular farming?
Concrete example: Say Alice and Bob each have 100 ATP and try to inflate their balances by sending energy back and forth.
- Round 1: Alice sends 100 to Bob. 5% burns. Bob receives 95.
- Round 2: Bob sends 95 back. 5% burns. Alice receives 90.25.
- Round 3: Alice sends 90.25. Bob receives 85.74.
- Round 4: Bob sends 85.74. Alice receives 81.45.
After just four transfers, they've lost nearly 19% of the original energy — and gained nothing. No trust increase, no reputation, no value. Every transfer is logged as a transfer, not a contribution, so it doesn't build trust either.
By round 14, half the energy is gone. By round 28, 75% is destroyed. The math makes circular farming a guaranteed loss, not just a bad strategy.
Why 5% specifically? It's low enough that legitimate transfers (tipping a helper, funding a project) are affordable, but high enough that any scheme requiring multiple round trips bleeds out quickly. The compound loss is the mechanism.
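The compounding above fits in a few lines; the only input from the text is the 5% burn rate, so after n transfers only 0.95^n of the energy survives:

```python
BURN_RATE = 0.05  # 5% burned on every transfer

def remaining_after(transfers: int, amount: float = 100.0) -> float:
    """ATP left from `amount` after a chain of `transfers` hops."""
    return amount * (1 - BURN_RATE) ** transfers

for n in (4, 10, 14, 28):
    print(n, round(remaining_after(n), 2))
# 4 -> 81.45 (nearly 19% lost), 10 -> 59.87,
# 14 -> 48.77 (half gone), 28 -> 23.78 (~75% destroyed)
```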
What happens when hardware standards change (TPM v2 → v3, post-quantum)?
Hardware transitions are handled through identity migration — your existing device attests to your new device before the old one is retired. Think of it like transferring a bank account: you prove you're you on the old system, then establish yourself on the new one.
The trust history travels with the identity, not with the hardware. Your trust tensor, energy history, and behavioral record are associated with your identity chain, not with a specific chip. Upgrading hardware is like getting a new passport — the person is the same, the document is new.
Honest caveat: A post-quantum transition (where current cryptography breaks) would require a coordinated migration — similar to the Y2K effort but for identity. This is a known hard problem across all cryptographic systems, not unique to Web4.
Can trust-filtered messaging actually work at internet scale?
Context boundaries (MRH) don't filter every message globally — they define who you can see based on your local trust network. This is computed locally, not centrally. You don't need to check every person on the internet; you only check the people trying to reach you, against the trust graph of people you already know.
This is similar to how email spam filters work today — except instead of content analysis (which AI can defeat), the filter is based on trust relationships (which require real behavioral history to build). The computational cost scales with your network size, not with the internet's size.
Honest caveat: Performance at true internet scale (billions of users) is unproven. Simulations handle thousands of agents. The gap between simulation and deployment is significant and contains unknown challenges.
How does trust transfer between different communities?
Communities are federated, not merged. When you move from one community to another, your trust history is visible but not automatically imported. The new community can see your track record — but they set their own standards for what trust level earns what privileges.
Think of it like an academic transcript: your grades from one university are legible to another, but the second university decides what credits to accept. Your data analysis trust of 0.85 in Community A might start you at 0.6 in Community B, because B has stricter standards.
The key design: trust is portable but not dictatorial. No community is forced to accept another community's standards. This prevents one community from inflating trust scores and exporting them.
Honest caveat: The specific mechanics of cross-community trust mapping are still being researched. How much to weight external trust vs internal trust is a policy decision each community makes — the protocol provides the infrastructure, not the rules.
What happens when a community splits or forks?
Communities can split, just like open-source projects can fork. When they do, each member keeps their full trust history and ATP balance — nothing is “divided” because trust and energy belong to the individual, not the community.
Think of it like employees leaving a company: their skills and resume go with them. If the “Research Collective” splits into “Theory Group” and “Applied Lab,” each member chooses which to join (or both). Their trust scores are visible to both new communities, which each set their own acceptance policies — just like cross-community trust transfer.
The relationships between members are also preserved: if Alice trusts Bob, that relationship exists in the trust graph regardless of which community they're in. Communities are views into the graph, not containers that own the data.
Design goal: The ability to fork — without losing your identity or reputation — keeps community governance honest. Authoritarian communities lose members to better-governed alternatives. See Federation Economics for how federations merge and compete.
Every action involves trust calculations — is that computationally feasible?
The calculations are simpler than they sound. For a single action, the system needs to: check your ATP balance (a number), update your T3 scores (three numbers), compute CI (a rolling average), and let recipients confirm quality. Each operation is basic arithmetic — comparable to updating a database row.
The key insight: most computation is local, not global. Your trust score doesn't need to consult every user on the network — only your direct connections (MRH limits this to 3 hops, roughly the size of a social circle). Witness verification uses your own devices, not a global network. V3 scoring aggregates confirmations as they arrive, not all at once.
The expensive part is federation-level operations: cross-community trust mapping, consensus across unreliable networks, and global reputation queries. These are batched, cached, and eventually consistent — similar to how DNS propagation works today.
Honest caveat: At billions of users, the trust graph becomes enormous and query patterns are unpredictable. Simulations handle thousands of agents comfortably; the gap to internet scale contains real engineering challenges (sharding trust graphs, caching strategies, witness coordination latency). This is why the adoption path starts with small communities, not the whole internet.
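A rough sketch of that per-action bookkeeping: a comparison, a subtraction, and two small updates. Field names and update constants here are illustrative assumptions, not values from any Web4 spec:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    atp: float = 100.0                      # energy balance
    t3: dict = field(default_factory=lambda: {
        "talent": 0.5, "training": 0.5, "temperament": 0.5})
    ci: float = 0.5                         # coherence index

def process_action(e: Entity, cost: float, quality: float) -> bool:
    """One action's bookkeeping, comparable to updating a database row."""
    if e.atp < cost:                        # 1. balance check (one comparison)
        return False
    e.atp -= cost                           # 2. spend energy
    e.t3["training"] += 0.001 * quality     # 3. nudge one T3 dimension
    e.ci = 0.9 * e.ci + 0.1 * quality       # 4. CI as a moving average
    return True

e = Entity()
print(process_action(e, cost=1.0, quality=0.8), round(e.atp, 1))  # True 99.0
```

Everything here is constant-time local arithmetic, which is the claim the answer above is making.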
What does “witnessed presence” actually look like day-to-day?
For most users, most of the time: invisible. Your device's security chip handles attestation automatically, like how HTTPS encryption works — you don't think about it, it just happens. You pick up your phone, the chip confirms it's your device, and you're verified.
Witnessing becomes visible only for high-stakes actions: creating a new identity, recovering from device loss, or performing actions that require elevated trust. In those cases, you'd see something like “2 of your 3 linked devices confirmed this action” — similar to how banks send a verification SMS for large transfers.
Honest caveat: The exact UX hasn't been built yet. Getting the balance right between security and friction is a design challenge. Too invisible and users don't trust it; too visible and it becomes annoying.
How do AI agents participate? Can a bot earn trust?
Yes, but transparently. AI agents in Web4 are first-class entities — they get their own LCT (bound to the hardware they run on), their own ATP budget, and their own trust tensor. The critical difference: they must be labeled as non-human.
An AI agent earns trust the same way anyone does: by taking actions, spending ATP, and building a behavioral track record. A helpful coding assistant that consistently delivers quality work earns high trust in that role. A spam bot burns ATP faster than it earns and dies — just like a human spammer would.
Initial trust is cold-start: new agents begin with minimal ATP and must be vouched for by their operator (the human or organization that deployed them). The operator's own reputation is on the line — deploy a malicious bot, and your trust takes the hit too.
Honest caveat: The boundary between “AI agent” and “human with AI assistance” is blurry and getting blurrier. How to handle AI-augmented actions (you wrote it, but GPT helped) is an open question. The current design handles clearly autonomous agents well but struggles with the hybrid cases. On the regulatory side, Web4's transparency and audit primitives are designed to map to EU AI Act (2024/1689) requirements — but that compliance hasn't been independently validated yet.
Doesn't permanent reputation conflict with GDPR's “right to be forgotten”?
Yes, this is a genuine tension. Web4's trust model assumes reputation persists across lives and contexts. GDPR (and similar regulations) give individuals the right to request data deletion.
Key distinction: Web4 stores behavioral scores, not personal data. Your trust tensor says “this entity has 0.85 data analysis trust earned across 200 interactions” — it doesn't store your name, email, or the content of those interactions. The score is pseudonymous and attached to a hardware key, not a legal identity.
That said, if someone can correlate a hardware key to a person, the trust history becomes personal data under GDPR. This creates a real compliance challenge that would need to be addressed in one of two ways: (a) allowing identity resets with trust loss, or (b) legal frameworks that exempt behavioral scores from deletion rights.
Honest caveat: Neither solution is clean. Allowing resets undermines permanent consequences (a core feature). Exempting scores from deletion rights is legally uncertain. This is an unresolved policy question, not a technical one.
Is Web4 more private or less private than today's internet?
Both — it depends on what you mean by “privacy.”
| Dimension | Today's Internet | Web4 |
|---|---|---|
| Who you are | Platforms know your name, email, IP, payment info | Better — identity is a hardware key, not your name |
| What you do | Platforms track everything, sell to advertisers | Better — behavioral scores visible, raw activity is not |
| Who sees you | The platform + anyone it shares data with | Better — only entities in your trust network (MRH-scoped) |
| Behavioral history | Deletable (in theory — platform retains backups) | Trade-off — permanent, but attached to a key, not a name |
| Starting fresh | Easy — create a new account | Trade-off — disposable identity is prevented by design |
The core design choice: Web4 removes the central observer (no platform sees everything about you) but adds behavioral permanence (your actions have lasting consequences). For most users, this is a net privacy improvement — your identity and activity are harder to surveil, even as your reputation becomes persistent.
Honest caveat: Web4 identifies seven privacy leakage channels including metadata correlation, timing analysis, and trust graph topology inference. No system eliminates privacy risk entirely. The claim is that decentralized, scoped visibility is structurally better than centralized, unlimited surveillance — not that it's perfect. See pseudonymity FAQ and VPN/Tor FAQ for specifics.
If my reputation is permanent, who can see it? What about privacy?
Not everyone can see everything. Web4 uses context boundaries (MRH) to limit who sees what. Your trust scores are only visible to entities within your relationship network — not broadcast to the world.
Think of it like real-life reputation: your coworkers know you're reliable at work, your neighbors know you keep a tidy yard, but neither group sees the other's picture. MRH formalizes this — trust is contextual, scoped by the depth of your relationship chain.
What's visible: behavioral scores (e.g., “0.85 trust in data analysis”), not the underlying interactions. The score is attached to a hardware key, not your name. A spammer with no trust connections can't even see your data, let alone manipulate it.
Zero-knowledge trust verification goes further: you can prove your trust meets a threshold (“my T3 Training exceeds 0.7”) without revealing the actual number. This enables trust-gated access — join a community, accept a task, or enter a partnership — while keeping your full trust history private. Think of it like a credit check that says “approved” without showing your exact score.
Honest caveat: Privacy is structural, not absolute. Someone who shares your trust network can see your scores in that context. And if a hardware key is ever linked to a real identity (through a data breach or correlation attack), the trust history becomes de-anonymized. MRH limits blast radius; zero-knowledge proofs limit information leakage; but neither guarantees perfect anonymity.
What about people who can't afford devices with TPM chips?
This is a genuine equity concern. Web4 identity requires hardware with secure elements (TPM, Secure Enclave, or FIDO2). Today, most smartphones sold since ~2018 include these chips, including budget Android devices. But “most” isn't “all.”
The 5-tier adoption model helps: at the Wrapper and Observable tiers, existing platforms handle identity while exposing Web4 trust signals. Full hardware binding only applies at higher tiers. This creates a gradual on-ramp rather than an all-or-nothing gate.
Honest caveat: Any system that makes identity expensive to create will disadvantage those with fewer resources. Web4 makes Sybil attacks expensive; the tradeoff is that legitimate participation also requires hardware investment. Whether this tradeoff is net-positive depends on how affordable secure hardware becomes — a trend Web4 can influence but doesn't control.
Is the 0.5 trust threshold universal? What about cultural differences?
The 0.5 threshold is a society-level parameter, not a protocol constant. Each society (community) in Web4 sets its own policies through the SAL governance framework. A scientific research community might set a higher trust threshold (0.7) for publishing. A social community might set a lower one (0.3) for casual interaction.
The 0.5 default draws from phase transition mathematics: below 0.5, behavior is statistically indistinguishable from random; above it, intentional patterns emerge. But it's a starting point, not a mandate. Cultural trust norms absolutely vary — that's why governance is society-local, not protocol-global.
Beyond thresholds, societies can customize how trust is weighted. T3 defaults to Talent 0.4, Training 0.3, Temperament 0.3 — but a culture that values consistency and reliability over raw competence could weight Temperament higher. A research community might weight Talent (demonstrated expertise) more heavily. Even decay half-lives can be society-tuned: a fast-moving startup culture might shorten Training decay (skills go stale faster) while a traditional guild might lengthen it (craft knowledge persists).
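Society-tuned weighting can be sketched in a few lines. The default split (Talent 0.4, Training 0.3, Temperament 0.3) comes from the text; the "reliability-first" profile is an invented example of what one society's tuning might look like:

```python
DEFAULT_WEIGHTS = {"talent": 0.4, "training": 0.3, "temperament": 0.3}

def t3_score(scores: dict, weights: dict = DEFAULT_WEIGHTS) -> float:
    """Weighted T3: each society chooses how the three dimensions combine."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[k] * weights[k] for k in weights)

member = {"talent": 0.9, "training": 0.6, "temperament": 0.4}
print(round(t3_score(member), 2))  # 0.66 under the default split

# A hypothetical culture valuing consistency over raw competence:
reliability_first = {"talent": 0.2, "training": 0.3, "temperament": 0.5}
print(round(t3_score(member, reliability_first), 2))  # 0.56: temperament dominates
```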
When societies with different norms interact through federation, cross-society trust translation happens at the boundary — your reputation is portable but interpreted through the receiving community's local norms. This is similar to how academic credentials are recognized internationally but weighted differently by each institution.
What about societies with fundamentally incompatible values? Web4 doesn't force consensus — societies that can't reconcile their norms simply don't federate. See When Values Themselves Conflict for how bridging societies and value boundaries work.
What's my experience during the transition? Do I need parallel identities?
During the transition from Web2 to Web4, you don't maintain two separate identities. Instead, you gradually accumulate Web4 trust on top of your existing accounts, like adding a reputation layer over what you already use.
Here's what the transition actually looks like at each adoption tier:
- Wrapper tier (you won't notice): Platforms add trust scoring behind the scenes. You keep using Gmail, Reddit, etc. exactly as before. Your actions start building a trust history you can't see yet.
- Observable tier (you see trust scores): A trust badge appears next to usernames — like the blue checkmark, but earned through behavior, not payment. You start noticing which reviews, messages, and posts come from high-trust sources.
- Accountable tier (you feel the difference): Posting costs a tiny amount of ATP. High-trust users barely notice (their costs are lower). Spammers notice immediately (their costs are higher). Your existing account still works — it just has consequences now.
- Federated tier (your reputation travels): You join a new platform and your trust follows you. No more starting from zero. This is when the dual-identity friction disappears — your Web4 trust IS your cross-platform identity.
The key design choice: Web4 wraps existing systems rather than replacing them. You don't wake up one morning and switch to “the Web4 internet.” Your current apps gradually become trust-aware, and the transition feels more like email gaining spam filters than like switching from postal mail to email.
Honest caveat: This is the designed path, not a proven one. The real user experience will depend on which platforms adopt, how fast, and whether the trust scoring is accurate enough to feel fair rather than arbitrary. Early adopters will feel friction that later users won't.
How does V3 scoring handle creative or unconventional work?
V3 measures outputs across three dimensions: Valuation (how valued by the community), Veracity (accuracy and honesty), and Validity (appropriateness for context). For creative work, these shift weight naturally:
- A satirical essay: High Valuation (entertaining), variable Veracity (it's satire — literal truth isn't the point), High Validity (appropriate for the context)
- An abstract painting: High Valuation (community appreciates it), Veracity less relevant, High Validity (fits the art community's norms)
V3 scoring is role-contextual, just like T3. A poem isn't scored the same way as a research paper because they exist in different role contexts with different community norms. The community that receives the work defines what “valuable” means in their context.
Honest caveat: This does mean unpopular or avant-garde work may score lower on Valuation initially. Innovation often conflicts with existing norms. Web4 mitigates this through Veracity (truthful work retains long-term value) but doesn't eliminate the tension between novelty and community approval.
Who builds this? Why is the project anonymous?
Web4 is an independent research project, not a company or academic lab. The code is fully open-source on GitHub. There is no VC funding, no token sale, and no commercial interest.
The relative anonymity is intentional but also ironic — a project about trust that doesn't tell you who's behind it. The reasoning: Web4's value should stand or fall on its ideas and implementations, not on who proposed them. The code is inspectable, the simulations are reproducible, and the threat analysis is public.
Honest caveat: “Trust the code, not the team” is a fine principle for open-source software, but you're right to notice the tension. A project about trust should probably model more transparency than it currently does. This is noted.
How do you catch cheaters?
Every action in Web4 creates a tamper-evident record — hash-chained event logs that can't be altered after the fact. Think of it like a receipt that the whole network can verify.
The system watches for anomalies automatically: sudden wealth spikes, coordinated behavior between accounts, trust scores changing faster than should be possible, or activity patterns that don't match an entity's history. When something looks wrong, it gets flagged — not by a moderator, but by the math.
This is different from today's internet where cheating is only caught when someone reports it. In Web4, the audit trail is continuous and the detection is automatic. You can still try to cheat — but the system is designed so that cheating is expensive (burns ATP), detectable (anomaly alerts), and unprofitable (low-quality work earns nothing).
What does a Coherence Index score look like for a normal person?
Think of it as a consistency score. The Coherence Index (CI) measures how consistently you behave — across time, across communities, and across situations.
Example — a freelance designer named Sam:
- Sam delivers quality design work for 6 months → CI rises to 0.88
- Sam starts rushing jobs to take on more clients → quality drops, CI falls to 0.71
- Effective trust = raw trust (0.72) × CI² (0.71² ≈ 0.50) ≈ 0.36
- Sam notices higher action costs and less visibility → slows down, quality returns
- After 2 months of consistent quality again → CI recovers to 0.82
The key insight: CI penalizes inconsistency more than low trust does. The squared relationship means a CI of 0.71 cuts your effective trust nearly in half. You can't build trust by being great sometimes and careless other times — the system rewards reliability over occasional brilliance.
What this feels like: If your CI is high (above 0.80), you barely notice it — everything just works. If it drops, you'll feel it through higher action costs and reduced reach. It's like a credit score that measures behavior consistency instead of payment history.
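Sam's numbers reduce to one formula: effective trust is raw trust scaled by the square of CI, so inconsistency is penalized superlinearly:

```python
def effective_trust(raw: float, ci: float) -> float:
    """Raw trust scaled by CI squared, per the Sam example above."""
    return raw * ci ** 2

print(round(effective_trust(0.72, 0.71), 2))  # 0.36, nearly half the raw 0.72
print(round(effective_trust(0.72, 0.90), 2))  # 0.58, a high CI costs little
```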
Can I be pseudonymous? What about whistleblowers or political dissidents?
Yes — pseudonymity is built in, not bolted on. Your Web4 identity is a hardware key, not your name. Nobody sees “Jane Smith” — they see an entity with a trust history. You can participate, earn trust, and contribute value without ever revealing who you are in the physical world.
This is stronger pseudonymity than most platforms offer today. On Twitter or Reddit, your “anonymous” account can be correlated through IP addresses, browser fingerprints, or legal subpoenas to the platform. In Web4, there's no central platform that knows your real identity — your identity lives in your hardware, not on a server.
For high-stakes cases (whistleblowing, political dissent, support groups), Web4 adds zero-knowledge trust proofs: you can prove you meet a trust threshold (“I have >0.7 trust in journalism”) without revealing which entity you are. A whistleblower can prove they're a credible insider without exposing their identity even within the trust graph.
What Web4 prevents isn't pseudonymity — it's disposable identity. You can be anonymous, but you can't be anonymous and have unlimited fresh starts. Your pseudonymous identity accumulates consequences just like any other. That's the tradeoff: your reputation follows you, but your name doesn't have to.
Honest caveat: Hardware-bound identity creates a correlation risk. If an adversary can identify your device (through physical surveillance or supply chain data), they can link your pseudonymous identity to you. Multi-device witness networks reduce this risk but don't eliminate it. For journalists and activists in hostile states, device security is the weakest link — and that's a hardware problem, not a protocol one.
Can I have separate personal and professional identities on the same device?
Not separate identities — separate roles. Web4's Trust Tensor is already role-scoped: your trust as a software developer is tracked independently from your trust as a cooking enthusiast. You don't need separate accounts because your single identity naturally has different trust profiles in different contexts.
Think of it like a professional license: one person can hold both a medical license and a pilot's license. Your medical expertise doesn't affect your pilot rating and vice versa. Web4 works the same way — one identity, many trust dimensions.
What does carry across roles is your Temperament score (reliability, consistency) and Coherence Index. If you're consistently dependable in one domain, that baseline reliability is visible elsewhere — even if your domain-specific Talent starts at zero.
This is a deliberate design choice: someone who's reliable as a doctor is probably reliable as a neighbor, even though their medical trust doesn't transfer to home repair. The system captures that intuition without collapsing everything into a single score.
Honest caveat: Some people genuinely want “firewall” separation between identities (e.g., a public figure who also participates in support communities). Web4's current design doesn't support fully separate identities on one device. For high-privacy scenarios, a separate device provides true identity separation — but that's a cost tradeoff.
What about VPNs and Tor? Does privacy tooling hurt your Coherence Index?
Short answer: no. Coherence Index measures behavioral consistency — not IP addresses or geographic location.
CI tracks four dimensions: spatial (do you act within your declared scope?), capability (do you do what you say you can?), temporal (are your patterns stable over time?), and relational (are your relationships consistent?). None of these require knowing your IP address or physical location.
“Spatial” here means scope of activity — which communities you participate in, which roles you claim — not where your device is. A journalist using Tor to research a sensitive story has the same CI as one using a coffee shop Wi-Fi, because CI measures what you do, not where you connect from.
Identity in Web4 is bound to hardware (via LCT), not to network addresses. VPNs and Tor change your network path but don't affect your device's cryptographic identity or your behavioral patterns.
Honest caveat: Some future applications built on Web4 might optionally use location data (a delivery service, for example). Those applications would set their own policies. But at the protocol level, Web4 is network-path agnostic.
Who decides to change the protocol itself? The ATP formula, the 0.5 threshold, the decay rates?
Web4 has two governance layers with different scope:
- Society-level (SAL): Each community sets its own policies — trust thresholds, ATP costs, role definitions, membership rules. Communities can customize most parameters through their SAL governance framework. This is where the 0.5 threshold, specific decay rates, and enforcement rules live.
- Protocol-level: The core protocol — the trust tensor math, ATP mechanics, LCT hardware binding, MRH relationship structure — is defined by an open specification on GitHub. Changes follow open-source governance: proposals, discussion, review, and implementation. Anyone can propose changes, argue against them, or fork the spec.
The key design choice: most things users care about are society-level, not protocol-level. If your community wants higher trust thresholds or different energy economics, you change your community's parameters — you don't need to change the protocol. This is like how HTTP defines the protocol but each website chooses its own content policies.
Honest caveat: Protocol-level governance for a nascent open-source project is currently informal — whoever contributes meaningfully to the specification has influence. If Web4 grew to production scale, this would need to formalize (like W3C or IETF). That transition from “small research project” to “governed standard” is itself an unsolved organizational challenge.
How does MRH actually decide who sees my messages? The “room where strangers need introductions” is nice but vague.
The Markov Relevancy Horizon (MRH) works like degrees of separation with decay. Your direct contacts (1 hop) see your actions at full relevance. Their contacts (2 hops) see them at 70% relevance. Three hops away: 49%. Beyond three hops: effectively invisible.
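As a rough sketch of that decay schedule (the function name and shape are illustrative, assuming simple multiplicative decay per hop, not the normative spec):

```python
def mrh_relevance(hops: int, decay: float = 0.7, horizon: int = 3) -> float:
    """Relevance of an action as seen from `hops` degrees of separation.

    Direct contacts (1 hop) see full relevance; each additional hop
    multiplies by the decay factor; beyond the horizon, relevance is 0.
    Illustrative sketch -- not the normative Web4 specification.
    """
    if hops < 1 or hops > horizon:
        return 0.0
    return decay ** (hops - 1)

# 1 hop -> 1.0, 2 hops -> 0.7, 3 hops -> 0.49, 4+ hops -> 0.0
```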
Concretely: if you post a helpful answer in a coding community, people who've directly interacted with you see it immediately. People who share mutual contacts see it with reduced priority. People with no connection path don't see it at all — not because it's censored, but because it's outside their relevancy horizon.
This means spam can't go viral. A spammer with no trust relationships has zero hops of reach — their messages stay within their own empty horizon. Earning reach requires building real relationships first.
It also means context stays contextual. Your coding expertise is visible in coding communities where you've built trust, but doesn't automatically broadcast to cooking forums. Your relevancy horizon is shaped by where you actually participate.
Honest caveat: The 0.7 decay factor per hop is a design parameter, not a physical constant. Different communities might tune this — higher decay for privacy-sensitive contexts, lower for open discovery. The right values will emerge from real-world testing.
If Web4 is opt-in, how does it work when most people are still on Web2? Don't early adopters just talk to each other?
Yes — and that's by design. Web4 uses a 5-tier adoption model that doesn't require everyone to switch at once:
- Wrapper tier: Existing platforms add Web4 trust scoring to their existing systems. Users don't even know it's there — they just notice spam decreasing.
- Observable tier: Platforms expose trust scores alongside content. Users can see who's trusted, but the platform still makes decisions.
- Accountable tier: Trust scores actually gate actions. Low-trust users face higher costs or reduced reach. This is where the economics start biting.
- Federated tier: Multiple communities share trust data across boundaries. Your reputation in one community carries (with decay) to others.
- Native tier: Fully trust-native platforms where every interaction flows through the Web4 protocol.
The key insight: early adopters get value immediately. A coding forum that adopts Web4 at the Wrapper tier instantly gets better spam filtering. It doesn't need the rest of the internet to follow. Each community can adopt at its own pace, and the value compounds as more communities join the federation.
Honest caveat: This adoption path is a design proposal, not a proven playbook. Whether the Wrapper→Native progression actually happens smoothly is an open question. Network effects could help (more participants = more value) or hurt (fragmented early adoption = no critical mass). We're designing for graceful degradation, not assuming universal adoption.
During the transition, if Reddit gives me high karma but Web4 gives me low trust — which wins?
Neither “wins” — they measure different things and coexist deliberately.
- Wrapper tier: The platform keeps its own karma system. Web4 trust appears as a secondary signal — like a verified badge, not a replacement. Reddit karma still works for Reddit; Web4 trust adds a cross-platform layer on top.
- Observable tier: Both scores are visible. Users can see when they diverge. High Reddit karma + low Web4 trust might mean: “popular here, but unverified elsewhere.” That's useful information, not a conflict.
- Accountable tier and beyond: Web4 trust starts gating actions (posting costs, reach limits). At this stage, the platform has chosen to let the trust system carry real weight — the “conflict” resolves because the platform opted in.
The key design choice: Web4 never overrides a platform's own system without that platform's consent. At the Wrapper tier, platforms add Web4 as metadata. At the Native tier, platforms have fully adopted it. The transition is opt-in at every step.
Honest caveat: The awkward middle ground (Observable/Accountable tiers) where both systems have real weight is genuinely uncharted territory. A user could game one system while being honest in the other. We're betting that cross-platform trust — which follows the person, not the platform — will prove more useful over time, but this is a design hypothesis, not a proven outcome.
Can someone buy 50 devices to create a super-trusted identity?
More devices increase your identity security (harder to impersonate you), but they don't increase your trust score. Trust is earned exclusively through observed behavior — quality contributions, consistent conduct, peer feedback.
Devices set a trust ceiling, not a trust score. A single phone with a TPM chip gives you a ceiling of 0.90. Adding more devices raises that ceiling slightly (through redundancy), but your actual trust starts at 0.50 regardless. You still have to earn every point above that through real work.
Think of it like ID verification: having a passport, driver's license, and birth certificate doesn't make you more trustworthy — it just makes you harder to impersonate. Your actual reputation still depends on what you do.
If I take a month off, does my trust disappear?
Trust decays gradually, not suddenly. Each dimension has its own half-life:
- Talent (skill): 365-day half-life — decays very slowly
- Training (knowledge): 180-day half-life — moderate decay
- Temperament (recent behavior): 30-day half-life — decays fastest
After a month away, your Talent and Training scores are nearly unchanged. Your Temperament drops to about half — which makes sense, because the community hasn't seen your recent behavior. But it recovers quickly once you're active again.
This mirrors how trust works in real life: a colleague who's been away for a month doesn't lose their reputation for being skilled, but you might wonder if they're still engaged. A few days of active contribution restores confidence.
What about 6 months or longer? After 6 months, Temperament (30-day half-life) is essentially zeroed — ~6 half-lives means only ~1.5% remains. Training (180-day half-life) drops to about 50%. But Talent (365-day half-life) is still at ~70%. Your skill reputation survives; your “are they still engaged?” reputation doesn't. This is intentional — if someone hasn't been active in 6 months, the system should be uncertain about their current reliability.
There's no “hiatus” mechanism to freeze scores. Trust decay reflects the community's genuine uncertainty about absent members. But recovery is fast: Temperament's 30-day half-life works in both directions. A few weeks of consistent activity rebuilds what took months to lose. You don't start from zero — your Talent history provides a foundation that helps you recover faster than a newcomer.
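Mechanically, a half-life is just exponential decay. A minimal sketch (the function is illustrative, not from the spec):

```python
def decayed(score: float, days_inactive: float, half_life_days: float) -> float:
    """Exponential decay of one trust dimension during inactivity."""
    return score * 0.5 ** (days_inactive / half_life_days)

# After 30 days away, starting from 0.8 in each dimension:
#   Talent      (365-day half-life) -> ~0.756  (nearly unchanged)
#   Training    (180-day half-life) -> ~0.713
#   Temperament  (30-day half-life) ->  0.40   (halved)
```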
What is SAL exactly? It's mentioned as a governance mechanism but never really explained.
SAL stands for Society Alignment Layer — it's the governance framework each Web4 community uses to set and enforce its own rules. Think of it as a community's constitution that members actually follow because it's baked into the protocol.
A SAL defines things like:
- Trust thresholds: What trust score do you need to post, moderate, or vote?
- ATP costs: How much energy does each action cost in this community?
- Enforcement rules: What happens when someone violates community norms? Graduated penalties, not binary bans.
- Appeals process: How can members challenge enforcement decisions?
The crucial difference from current platforms: SAL rules apply to everyone equally, including moderators and administrators. No one is above the community's own governance. If a moderator abuses power, their trust score drops — and with it, their ability to moderate.
Changing SAL rules requires community consensus weighted by trust. High-trust members (who've demonstrated good judgment) have more influence on governance decisions, but can't override the community alone. See the governance section for a concrete example of how a research community might set these parameters.
Honest caveat: SAL governance is designed but largely untested at scale. The “trust-weighted consensus” model sounds fair, but could lead to entrenchment if early members accumulate disproportionate influence. The What Could Go Wrong page explores these risks honestly.
When does an agent “die” and how does rebirth work?
There are two types of “death” — and they work differently:
- Energy death (ATP hits 0): Your account is suspended — like a suspended driver's license. You can't take actions, but your identity and history remain. Recovery is possible through community support or waiting for passive regeneration.
- Trust death (trust drops below minimum threshold): More serious. Your behavioral record shows sustained low quality. Recovery requires starting a new identity with karma — lessons learned from previous lives carry forward as starting advantages or disadvantages.
Karma transfer is automatic. When a new identity is created by the same hardware (LCT), the system recognizes it as a continuation. Good patterns from past lives give a head start; harmful patterns impose higher initial costs. You can't escape your history, but you can outgrow it.
See the Karma Journey to experience this firsthand, or How Agents Live & Die for the full lifecycle.
If trust is permanent, how does a teenager escape youthful mistakes?
Trust isn't permanent — it decays. Every trust dimension has a half-life:
- Temperament: 30-day half-life (behavior patterns fade fastest)
- Training: 180-day half-life (skill signals diminish)
- Talent: 365-day half-life (deep competence persists longest)
A teenager who made poor choices at 15 doesn't carry those penalties forever. The temperament damage (rudeness, trolling, impulsive actions) fades to half strength in 30 days and is negligible within 6 months. By 18, their behavioral record is almost entirely composed of recent actions.
More importantly, Web4's bootstrap convergence means a newcomer (or a recovering teenager) doing quality work surpasses established members within ~50 actions. The first-mover advantage has a ~30-action half-life. The system is designed for people to outgrow their past — not to be trapped by it.
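The ~30-action half-life implies the same exponential shape, measured in actions rather than days. A sketch (the function is illustrative; the actual convergence math is more involved):

```python
def first_mover_advantage(initial_edge: float, quality_actions: int) -> float:
    """Remaining head start of an established member after a newcomer
    has contributed `quality_actions` quality actions (~30-action half-life)."""
    return initial_edge * 0.5 ** (quality_actions / 30)

# After 50 quality actions, only ~31% of the original edge remains --
# consistent with newcomers overtaking within roughly 50 actions.
```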
The combination of trust decay + bootstrap convergence means Web4 is closer to “show me who you are now” than “show me who you were.” Your history influences your starting position, but consistent quality behavior overwhelms it quickly.
Honest caveat: Extreme trust violations (trust below 0.5 = permanent death) are an exception — this is by design, not a flaw. If society collectively decides your behavior is harmful enough to warrant permanent exclusion, decay alone can't save you. But this threshold is high; ordinary teenage mistakes don't approach it.
How does the system handle cultural differences in “quality”?
Each community defines quality for itself. The Value Confirmation Mechanism means the people who receive your work judge whether it was valuable — not a global algorithm. A poetry community and a coding forum will naturally develop different quality standards because different people are doing the confirming.
The V3 (Value Tensor) has three dimensions that adapt independently:
- Valuation — how much was this worth to recipients? (Culturally subjective and intentionally so)
- Veracity — was this accurate and honest? (More universal, harder to game)
- Validity — was this appropriate for the context? (Community-specific norms)
A helpful post in a Japanese gardening forum and a helpful post in a Brazilian music community will score differently on Valuation and Validity — but Veracity (honesty) stays consistent. The system doesn't impose a single standard. It lets communities develop their own through the trust they build with each other.
Honest caveat: Cross-community trust transfer is harder when quality norms diverge significantly. Your poetry trust doesn't automatically make you trusted in engineering — role-specific trust keeps domains appropriately separated.
A shitposting community and an academic journal have different “quality” — does trust transfer between them?
Yes, but with heavy discounting. Trust in Web4 is role-specific and community-scoped. Your trust as a meme creator in a humor community is a separate dimension from your trust as a researcher in an academic one.
When trust transfers between communities, two things reduce it:
- MRH decay — trust weakens with social distance (0.7× per hop). A community two hops away sees only ~49% of your original trust.
- Role mismatch — the T3 (Trust Tensor) tracks trust per skill/context. Being trusted for humor doesn't make you trusted for research. The relevant dimension may be near zero.
So a top shitposter joining an academic community starts almost from scratch in the “academic rigor” dimension — their humor trust doesn't inflate their research credibility. But their Veracity score (honesty across all contexts) does carry over, giving them a small head start over a completely unknown newcomer.
The design principle: Communities should be free to define quality on their own terms. What transfers between them is character (honesty, consistency), not status.
If I'm trusted as a coder but post memes in the same community, does my meme-posting degrade my coding trust?
No. Trust in Web4 is tracked per role, not per person. Your Trust Tensor (T3) has separate scores for each role you occupy. “Code Reviewer” and “Meme Poster” are independent dimensions, even within the same community.
Bad memes won't tank your code review trust. Brilliant code won't inflate your meme trust. Each role earns and loses trust independently based on the quality of actions in that role.
Two things do cross roles: your Coherence Index (CI) and Temperament (the behavioral consistency dimension of T3). If you're reliably honest and consistent across all your roles, that character signal benefits you everywhere. If you act erratically in one role, your CI drops — and since effective trust = raw trust × CI², that consistency penalty affects all your roles.
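A minimal sketch of that formula (the function name is mine; the formula itself is from the text):

```python
def effective_trust(raw_trust: float, coherence_index: float) -> float:
    """Squaring CI makes the consistency penalty superlinear:
    a small drop in coherence reduces effective trust disproportionately."""
    return raw_trust * coherence_index ** 2

# CI 1.0 -> no penalty; CI 0.9 -> 19% reduction; CI 0.7 -> 51% reduction
# e.g. raw trust 0.80 with CI 0.9 -> effective ~0.648
```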
Design insight: This solves “context collapse” — the problem where mixing audiences on social media forces you into the lowest-common-denominator version of yourself. In Web4, you can be professional in one context and playful in another without either undermining the other.
What would a real platform migration look like? Say Reddit or Discord adopted Web4.
Web4 defines a 5-tier adoption path:
- Wrapper — Platform adds Web4 trust scores alongside existing systems. Reddit karma still works; Web4 trust appears as a secondary indicator. Users opt in. Zero disruption.
- Observable — Platform starts logging actions to the trust graph. Upvotes, reports, and moderation decisions feed into T3 calculations. Users see their trust building.
- Accountable — Trust scores start affecting the experience. High-trust users get reduced friction (fewer CAPTCHAs, faster posting). Low-trust users hit higher action costs. Moderation shifts from reactive bans to economic pressure.
- Federated — Platform joins a federation. Discord trust transfers to Reddit at a discount. Users moving between communities carry portable reputation instead of starting from zero everywhere.
- Native — Platform runs entirely on Web4 infrastructure. LCT identity, ATP economics, T3 trust. No separate karma/reputation system — Web4 is the reputation system.
Key insight: Each tier is independently valuable. A platform at tier 2 already has better spam detection than most platforms today. There's no all-or-nothing commitment.
How does trust actually transfer between platforms? What happens technically?
Trust doesn't transfer like a file you copy between apps. Instead, your trust history travels with your LCT — your hardware-bound identity. When you join a new platform that's part of the same federation, the platform queries the federation's trust graph for your existing scores.
The mechanics:
- You authenticate on the new platform using the same LCT (hardware key ceremony)
- The platform requests your trust profile from the federation — your T3 scores, CI history, and MRH graph
- Cross-federation trust is discounted: your 0.82 trust on Platform A might arrive as 0.65 on Platform B (MRH decay applies across federation boundaries, typically 0.7–0.8x)
- Your trust then evolves independently on the new platform based on your behavior there
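The discount step above, as a minimal sketch (the function name and the range check are assumptions, not spec):

```python
def federated_trust(origin_trust: float, discount: float = 0.8) -> float:
    """Trust score arriving across a federation boundary.
    MRH decay applies at the boundary, typically a 0.7-0.8x discount."""
    if not 0.0 <= origin_trust <= 1.0:
        raise ValueError("trust scores live in [0, 1]")
    return origin_trust * discount

# 0.82 on Platform A arrives as 0.82 * 0.8 = ~0.66 on Platform B,
# close to the "arrives as 0.65" example above.
```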
What doesn't transfer: Platform-specific context. Your role as “trusted code reviewer” on a developer community doesn't automatically make you a trusted recipe critic on a cooking platform. General trust dimensions (temperament, consistency) carry over; role-specific trust (talent, training) starts fresh.
Honest caveat: The federation trust transfer protocol is specified and simulated but not yet tested across real independent platforms. The discount rates and what exactly transfers are still being refined through simulation.
What happens to my existing content and reputation when I join Web4? Do I start at zero?
Yes, everyone starts at 0.5 trust. This is deliberate. Web4 trust is earned through observed behavior, not imported claims. Your 50,000 Reddit karma or 10-year eBay seller rating can't be directly converted because Web4 has no way to verify how those scores were earned — they could reflect genuine quality or years of gaming the algorithm.
But you don't lose your advantage. A genuinely skilled contributor ramps up fast:
- The quality ramp means high-quality actions earn full ATP rewards immediately. If your Reddit contributions were genuinely valuable, you'll produce the same quality on Web4 and build trust quickly.
- First-mover advantage has a ~30-action half-life. Newcomers who contribute quality work surpass early adopters within about 50 actions. Your existing skills translate directly into faster trust growth.
- The 1.4x newcomer premium expires as you build history. Within a few weeks of active participation, your costs normalize.
Your existing content stays where it is. Web4 doesn't import or migrate content from other platforms. Your old posts on Reddit, your reviews on Amazon, your articles on Medium — they remain on those platforms. What changes is that future contributions across Web4-connected platforms build a portable, verifiable trust history that follows you everywhere.
Could platforms import? At the Wrapper tier, a platform could seed initial Web4 trust from existing reputation as a convenience — but this would be platform-specific, optional, and capped (e.g., imported trust might max out at 0.6, requiring real Web4 behavior to go higher). This is a design decision for individual platforms, not a protocol requirement.
I'm a developer. What would integrating Web4 actually look like? Is there an SDK?
Not yet. Web4 is at the “working specification + simulations” stage, not the “npm install web4” stage. There's no public SDK or API today.
What does exist: a Go LCT library (55 tests), Python reference implementations of all core subsystems (trust tensors, ATP economics, coherence index, MRH), a WASM module for browser-side trust calculations, and formal test vectors for interoperability.
What a Tier 1 “Hello World” would eventually look like:
- Import a Web4 trust client library into your backend
- When a user signs up, bind their account to an LCT (hardware key ceremony — like WebAuthn registration)
- When they take an action, call web4.recordAction(lct, action, atpCost)
- When displaying content, call web4.trustScore(lct, role) to get their T3
- Use the trust score to weight content ranking, moderation queues, or feature access
That's five lines of integration for the basic wrapper tier. The complexity lives inside the library, not in your application code.
Honest status: The reference implementations exist and pass tests, but there's no production-ready SDK packaged for third-party developers. The gap between “reference implementation” and “developer SDK” is documentation, packaging, versioning, and a real integration partner to validate the API surface against. This is planned work, not shipped work.
What about children and minors? How does hardware-bound identity work for a 13-year-old?
Web4 doesn't require age disclosure — it trusts behavior, not demographics. A 13-year-old with their own phone gets a hardware-bound LCT just like anyone else. Their trust starts at the same baseline (0.50) and grows through the same quality-of-work mechanics.
What changes is scope, not access:
- Guardian binding: A parent or guardian's LCT can be linked to a minor's, creating a supervision relationship without revealing age to the network. The guardian can set spending limits (max ATP per action) and visibility boundaries (MRH restrictions).
- Graduated autonomy: As the minor builds trust through consistent behavior, guardian restrictions can relax — a natural progression, not an arbitrary age gate.
- Privacy by design: The network never learns that an LCT belongs to a minor. Guardian binding is visible only to the guardian and the minor — it's a private policy, not a public label.
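A guardian binding might carry data along these lines. This is entirely hypothetical: the class name and fields are illustrative sketches of the design direction above, not a shipped API.

```python
from dataclasses import dataclass

@dataclass
class GuardianPolicy:
    """Hypothetical shape of a private guardian->minor binding.
    Visible only to guardian and ward; never labeled on-network."""
    guardian_lct: str          # guardian's hardware-bound identity
    ward_lct: str              # ward's identity (not marked as a minor)
    max_atp_per_action: float  # spending limit on any single action
    mrh_hop_limit: int         # visibility boundary, tighter than the default 3

policy = GuardianPolicy("lct:guardian-example", "lct:ward-example",
                        max_atp_per_action=2.0, mrh_hop_limit=2)
```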
Honest caveat: This is a design direction, not a shipped feature. Child safety in decentralized systems is an area of active research, especially around the tension between privacy (not labeling minors) and protection (limiting exposure to harmful content). Legal compliance (COPPA, GDPR-K) adds additional constraints that haven't been fully resolved.
In a flat-earth forum, wouldn't the majority just validate each other's bad content?
Yes — within that community. And that's where the difference between local and global trust matters.
- V3 quality is community-scoped. A flat-earth forum can give each other high V3 scores within their own community. But V3 scores don't transfer at full strength across federation boundaries — they decay. Their mutual validation stays local.
- T3 trust is behavior-scoped. The V3 game only works if the participants also maintain high T3 trust — which requires consistent, honest behavior over time, across roles. A coordinated deception effort erodes T3's Temperament dimension as behavioral patterns become inconsistent.
- Federation is the correction mechanism. When a flat-earth community tries to federate with a science community, the trust transfer discount is steep. Their internally-high V3 scores arrive at the science community deeply discounted. Bridge agents who participate in both communities provide the cross-check.
- ATP makes sustained deception expensive. Producing content costs energy. Confirming content costs energy. A community dedicating all its ATP to mutual validation has less ATP for productive interactions outside the bubble — making them economically isolated, not just socially isolated.
The honest answer: Web4 doesn't prevent communities from being wrong. It prevents wrong communities from projecting authority beyond their boundaries. A flat-earth forum can exist and thrive internally — what they can't do is make their quality consensus count in a physics department's trust graph. The same mechanisms that make Web4 trust meaningful (decay, federation discounts, behavioral consistency) also contain bad-faith consensus. This is better than platforms where a viral post from any community can reach billions, but it's not a cure for being wrong — nothing is.
What about whistleblowers or dissenting scientists? Unpopular truth seems risky.
This is one of the hardest problems in any reputation system: high-quality content that most people don't want to hear. A scientist publishing results that contradict consensus, a whistleblower exposing corporate fraud, a dissenter in a groupthink community.
Web4 has three mechanisms that work in their favor:
- V3 weights Veracity (0.35) and Validity (0.35) over Valuation (0.30). Popularity is the smallest component. A whistleblower's report that's accurate (high Veracity) and well-sourced (high Validity) scores 70% of its V3 even if the community hates it (zero Valuation).
- T3 trust is independent of content reception. A scientist with 15 years of consistent, rigorous work has high Talent and Training scores. Publishing one controversial paper doesn't erase their behavioral track record. Their trust precedes and survives the controversy.
- Pseudonymity is built in. Hardware-bound identity doesn't mean real-name identity. A whistleblower can build trust under a pseudonym, publish the report, and their LCT identity proves it came from a real person with a real track record — without revealing who.
Concrete example: A safety engineer discovers their company is falsifying emissions data. They've built trust (T3: 0.82) over two years of quality contributions to an environmental science community. They publish the evidence under their pseudonym. The post scores low Valuation (company supporters downweight it) but high Veracity and Validity (the data checks out). Net V3: 0.62 — not amazing, but not buried. Meanwhile their 0.82 T3 trust means the post appears prominently in trust-weighted feeds. Contrast this with Reddit, where a new throwaway account posting the same evidence would be invisible.
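The V3 weighting is a plain weighted sum. A sketch with hypothetical inputs chosen to roughly reproduce the 0.62 in the example (the weights are from the text; the specific inputs are not):

```python
def v3_score(valuation: float, veracity: float, validity: float) -> float:
    """Value tensor score: popularity (valuation) carries the smallest weight."""
    return 0.30 * valuation + 0.35 * veracity + 0.35 * validity

# Whistleblower case: the data checks out, the community hates it.
# v3_score(valuation=0.08, veracity=0.85, validity=0.85) -> ~0.62
# Maximally accurate but universally disliked content still keeps 0.70.
```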
Honest caveat: Web4 makes unpopular truth survivable, not popular. A correct-but-hated claim still gets low Valuation scores. What Web4 prevents is the scenario where accurate, important information is completely buried because the messenger has no established credibility or because a majority can simply vote it into oblivion. The truth doesn't win automatically — but it gets a hearing proportional to the messenger's demonstrated trustworthiness.
Doesn't MRH's 3-hop limit create filter bubbles?
The concern is real: if you can only see 3 hops into the trust graph, wouldn't communities become echo chambers? In practice, several mechanisms work against this:
- Bridge agents. People who participate in multiple communities naturally connect otherwise-separate trust networks. Our simulations found that “identity is structural, not compositional” — bridge agents reshape the topology.
- Federation. When communities federate, trust transfers across boundaries (at a discount). This creates cross-community visibility that MRH alone wouldn't provide.
- MRH is per-role, not per-person. Your MRH as a software developer is a different graph than your MRH as a gardener. You naturally exist in multiple trust neighborhoods simultaneously.
- MRH limits influence, not visibility. You can see content from anyone. MRH determines how much trust weight you assign to it. Strangers aren't invisible — they're just unverified.
Honest caveat: Filter bubbles are a real risk in any trust-based system. The difference is that Web4's bubbles are visible (you can see your MRH boundary) and permeable (federation + bridge agents + role diversity all create cross-links). Traditional social media bubbles are invisible and algorithmically reinforced.
What happens when two federated communities disagree about a member?
Concrete scenario: Maya is a food safety researcher with 0.88 trust in the “Food Science” community. She publishes a study critical of a popular supplement. The federated “Wellness” community considers her a bad actor and wants her trust destroyed. What happens?
Three design principles prevent this from escalating:
- Trust is sovereign per community. Wellness can lower Maya's trust within their own community — that's their right. But they cannot modify her score in Food Science. Each community controls its own trust graph independently.
- Federation discounts absorb the conflict. Trust transfers across federation boundaries are already discounted (typically 0.6x–0.7x). If Wellness sets Maya to 0.2, that arrives at other communities as ~0.13. Her 0.88 from Food Science arrives as ~0.57. The higher-trust signal dominates.
- Bridge agents provide ground truth. People active in both communities assess Maya from direct interaction, not community politics. If 8 bridge agents rate her highly and only Wellness rates her low, the signal is clear.
When disputes arise at jurisdictional boundaries (e.g., Maya submitting healthcare-relevant research), Web4 uses three resolution strategies:
- Priority: The community where the action happens wins. Publishing in healthcare context? Healthcare's trust standards apply.
- Intersection: Only policies both communities agree on apply — used when neither has clear jurisdiction.
- Defederation (last resort): Communities can break the trust bridge entirely, like email servers choosing not to relay mail. Costly for both sides.
Every resolution is recorded in an audit trail. Disputes can be appealed (up to 2 appeals per resolution).
Honest caveat: Cross-federation disputes are one of the least-tested parts of the design. The principles (sovereign trust, discounted transfer, bridge-agent ground truth) are sound, but specific parameters like discount rates and mediation protocols are still being researched.
For a deeper walkthrough, see Federation Economics → Cross-Society Policy Conflicts.
How would Web4 handle differing national laws (GDPR, DSA, Section 230, etc.)?
Short answer: Web4 doesn't try to enforce one content policy globally. Legality lives at the community layer, not the protocol layer. Each community sets its own rules inside whatever jurisdiction its operators are in.
The concrete design:
- Communities are jurisdictional. A community hosted under EU operators must comply with EU law (GDPR, DSA). A community hosted under US operators follows US law (Section 230, state-level rules). A community hosted under Chinese operators follows the Cybersecurity Law. Same protocol, different rules.
- Federation is consent-based. Communities choose which other communities to federate with. An EU-compliant community can refuse to relay trust signals or content from communities that operate under legally incompatible rules — like email servers choosing not to accept mail from known spam-tolerant hosts.
- Users pick their community. Your identity (LCT) is portable. You can join a community whose jurisdiction and content rules you accept. You take your trust score with you (discounted across federation boundaries), but you accept the new community's legal context.
- Removal cascades are real. If a court in your jurisdiction orders content removed, your community complies — but federation partners in other jurisdictions are under no obligation to propagate that removal. The tamper-evident audit chain records the takedown so federated peers can evaluate it against their own law.
What this is NOT: Web4 is not an escape hatch from national law. It is also not a mechanism for one country's speech rules to become everyone's. It tries to mirror how the internet mostly already works: servers in different countries follow different laws, and networks decide who to peer with.
Honest caveat: Cross-border regulatory harmonization is genuinely unresolved. Specific questions — GDPR right-to-be-forgotten interacting with a tamper-evident audit chain, DSA transparency reports for federated systems, extraterritorial reach of the Chinese Cybersecurity Law — do not have settled answers in the current spec. See the What Could Go Wrong “Regulatory Capture” and “Genuinely Unsolved” sections for the honest state of this question.
Want to go deeper? What Could Go Wrong covers the 7 biggest real-world risks in plain English. Our Threat Model covers 6 technical attack surfaces with formal analysis. The Explore Guide also has a dedicated Skeptic's Tour for those who want to start with the weaknesses.
Ready to See It Work?
Start with our 10-minute interactive tutorial. Zero to understanding, no jargon.
Follow Along or Get Involved
Web4 is open research. Everything — the specification, the simulations, and this site — is public and open-source. You don't need permission to participate.
Read the code
The full specification, reference implementations, and simulation engine are on GitHub.
github.com/dp-web4 →
Run the simulations
The Playground and Society Simulator let you test Web4 dynamics yourself, right now.
Share feedback
Found something confusing? Spotted a flaw? Visitor feedback directly shapes this site.
Open an issue →
This page exists because visitor feedback said: “Explain the problem before the solution.”