The Internet Has a Trust Problem
You already know this. You feel it every day. The question is: what can actually be done about it?
Problems You Already Know
Spam, Bots, and Bad Actors Are Winning
Creating accounts is free. Sending messages is free. There's no cost to flooding platforms with garbage. The result: an endless arms race between spammers and moderation teams. Spammers always have the advantage—they only need one message to get through.
Root cause: Actions have no cost. Bad actors can operate indefinitely at zero expense.
Your Reputation Is Trapped in Silos
You've spent years building reputation on one platform. Then you try a new service: you're treated like a stranger. All that trust, all that history—worthless. Each platform is an island. Your reputation starts at zero, every time.
Root cause: Platforms own your reputation. It's stored in their databases, not attached to you.
Bad Actors Get Unlimited Fresh Starts
Banned? Create a new account. Caught scamming? New email, new identity, back in business. The record of past behavior is trivially discarded. Consequences are temporary. Bad actors face no compounding penalties—they just reset.
Root cause: Identity is cheap. Creating a new account costs nothing. Past behavior doesn't follow you.
AI Agents Are Making Everything Worse
AI can now generate convincing text, images, and video. It can operate accounts at scale. It can impersonate humans. And there's no reliable way to verify whether you're talking to a human, an AI, or a human-AI hybrid. The tools for deception are outpacing the tools for verification.
Root cause: Identity systems were designed for humans. They don't account for AI agents that can copy, fork, and run in parallel.
Platforms Control Everything (And You Have No Recourse)
Banned? Good luck appealing. Shadowbanned? You might never know. Platform changed its policies? Too bad. Your account, your data, your reputation—all at the mercy of corporate decisions. You're a guest in someone else's house, always.
Root cause: Centralized control. You don't own your identity or reputation—platforms do.
Why Previous Solutions Failed
Moderation Armies
Hire more moderators, build better AI filters, play whack-a-mole forever.
Failure: Reactive, not preventive. Defense scales only with spending, while attacks scale for free. Spammers always find new angles.
CAPTCHAs and Verification
Prove you're human with puzzles, phone numbers, ID verification.
Failure: AI solves CAPTCHAs now. Phone numbers are cheap. ID verification is privacy-invasive and centralized.
Blockchain Identity (Web3)
Use wallet addresses as identity. Self-sovereign, decentralized.
Failure: Wallets are free to create. No behavior history. Key theft means permanent identity loss. No spam prevention.
Social Login (Sign in with Google)
Use a major platform as identity provider. Convenient, established.
Failure: Central point of control. Platform can revoke access. No portable reputation. Same silo problem.
Common thread: All these solutions treat symptoms, not causes. They add friction for everyone instead of making bad behavior economically irrational. They don't attach consequences to identity in a way that persists.
What Would Actually Work?
A solution that addresses root causes would need to:
Make bad behavior expensive
Every action should cost something. Spamming 1000 messages should drain resources. Quality contributions should earn resources back. Bad actors should exhaust themselves.
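To make this concrete, here's a toy sketch of the mechanic in Python. The class names, costs, and multipliers are made-up illustrations, not the Web4 spec:

```python
# Toy sketch: an energy budget where every action costs and quality earns back.
# All names and numbers here are illustrative assumptions, not the Web4 spec.

class Agent:
    def __init__(self, atp: float = 100.0):
        self.atp = atp  # starting energy budget

    def act(self, cost: float, quality: float) -> bool:
        """Spend energy to act; earn back in proportion to quality (0..1)."""
        if self.atp < cost:
            return False  # exhausted: further actions are simply impossible
        self.atp -= cost
        self.atp += cost * quality * 1.5  # quality work more than pays for itself
        return True

spammer, contributor = Agent(), Agent()
sent = sum(spammer.act(cost=1.0, quality=0.0) for _ in range(1000))
print(f"spam messages before exhaustion: {sent}")  # ~100, then silence
for _ in range(1000):
    contributor.act(cost=1.0, quality=0.8)
print(f"contributor budget after 1000 actions: {contributor.atp:.0f}")  # grows
```

The asymmetry is the point: zero-quality actions drain a finite budget, while quality contributions replenish it.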
Make identity expensive to fake
Creating a new identity should require physical hardware, not just an email address. Multiple independent witnesses should attest to your existence. Creating thousands of fake accounts should require buying thousands of devices — expensive and slow.
Make reputation portable and permanent
Your trust should follow you across platforms. Good behavior should compound. Bad behavior should create permanent records visible to future interactions. No more fresh starts for serial abusers.
Work for humans AND AI agents
The same trust framework should apply whether you're human or AI. AI agents should be verifiable, bounded, and accountable. Their creators should be on the hook for their behavior.
Be decentralized (but practical)
No central authority controlling identity. No single point of failure. But also actually usable—not requiring cryptocurrency expertise or gas fees for every action.
This Is What Web4 Proposes
Web4 is a working proposal for trust-native internet infrastructure. It addresses each root cause with a specific mechanism:
Terms you'll learn
Don't memorize — each term links to a hands-on explainer page.
Energy Budget (ATP)
Every action costs attention budget. Quality earns it back. Spam burns out.
Learn about energy budgets →
Hardware-Bound Identity (LCT)
Identity bound to a physical device and witnessed by multiple others. Expensive to fake.
Learn about identity →
Trust Tensor (T3)
Multi-dimensional trust that follows you. Harder to game than single scores.
Learn about trust tensors →
Context Boundaries (MRH)
You only see messages from people your network trusts — like being in a room where strangers need an introduction before they can talk to you.
Learn about context boundaries →
4-Life is a simulation lab where you can watch these mechanisms in action. See societies form, trust networks emerge, spam die from energy exhaustion, and agents face real consequences.
Honest Questions
If you're skeptical, good. Here are the hard questions visitors ask, and honest answers.
Most asked
Is this deployed anywhere? Or purely theoretical?
Research prototype with substantial implementation. This is not vaporware — there is real, tested code. But it's not a product yet. Here's what actually exists:
What's built and tested:
- Protocol specification: 100+ page Web4 whitepaper with formal definitions
- Reference implementations: ~47,000 lines of tested code — LCT lifecycle, T3/V3 tensors, ATP metering, governance (SAL), federation, witness protocol, MRH graphs
- Security validation: 424 attack vectors across 84 tracks, all defended. Sybil resistance formally proven (5 theorems). Incentive compatibility proven — honest behavior is mathematically more profitable than gaming
- Hardware integration: TPM2 binding validated (Intel TPM 2.0, EK certificate chain through 2049). Go LCT library (55 tests). Multi-device constellation enrollment working
- System integration: End-to-end pipeline (all subsystems chained), WASM browser validator for client-side trust verification, federation consensus at 38.5 tasks/sec throughput
- Interactive simulations: Society Simulator, Playground, and Karma Journey on this site
What's NOT built yet:
- Production deployment: No live network with real users. The gap between “simulations prove mechanics work” and “running with real humans” is the current frontier
- Economic validation: ATP pricing calibrated for simulations, not real markets. Whether the economics survive real human behavior is an open question
- Platform adoption: No platform integrates Web4 yet. The 5-tier adoption pathway (Wrapper → Observable → Accountable → Federated → Native) is designed but untested
Honest answer: This is research, not production. We don't attach timelines because honest research doesn't have them. The simulations prove the mechanics work in principle — the question is whether the economics survive contact with real human behavior. That's what a pilot would test.
What if I lose my hardware? Is my identity gone forever?
No—recovery is built in. LCT supports multiple linked devices. Lose your phone? Your laptop can attest to your identity. Lose both? Your witnesses can attest.
The design principle: make recovery possible but expensive. You need multiple witnesses to vouch for you, similar to how banks verify identity for account recovery. This prevents attackers from “recovering” someone else's identity while protecting legitimate users.
Trade-off: Recovery is slower than “forgot password” flows. You can't instantly regain access—the friction is intentional to prevent social engineering attacks.
Why is this better than [existing solution X]?
It's not “better” at everything. Every system has trade-offs:
- vs Passwords: More secure, but requires hardware. Won't work on borrowed devices.
- vs OAuth (Google login): No central point of control, but more complex to implement.
- vs Blockchain wallets: Harder to create fake IDs, but not as portable across chains.
- vs Biometrics: Can't be stolen by breach, but requires specific device support.
What Web4 optimizes for: Economic resistance to spam/abuse while preserving privacy and decentralization. If you need something else, another solution may fit better.
How does the transition work? Do I have to switch everything at once?
Gradual, not all-or-nothing. The Web4 spec defines a 5-tier adoption pathway — each tier is independently useful, and you don't need to commit to the full stack:
- Wrapper: Add a verifiable identity to your existing system. Zero code changes. Fully reversible.
- Observable: Start tracking trust based on actual behavior. A permanent record of quality.
- Accountable: Stake energy on quality. Good work returns your investment; bad work costs you.
- Federated: Your reputation travels with you. Other systems can discover and trust you.
- Native: Full Web4 stack — built from the ground up around verifiable trust.
Tiers 0–3 are reversible — you can always roll back. Only Tier 4 (full native) is a permanent commitment. Think of how HTTPS adoption worked: banks first, then e-commerce, then eventually the default everywhere.
Honest caveat: Gradual adoption means the system is only as strong as its coverage. A trust score based on 2 platforms is less meaningful than one based on 200. Network effects work both for and against adoption.
What about people who can't afford devices with security chips?
This is a real equity concern, and not one to dismiss. If participation requires a TPM or Secure Enclave, then cost becomes a barrier. Several factors work in favor of accessibility:
- Hardware is already widespread: Most phones sold since ~2018 include security chips (Secure Enclave, Titan, TPM). Even budget Android devices increasingly ship with hardware-backed keystores. The threshold is a $50 phone, not a $1000 one.
- FIDO2 security keys: USB-based keys like YubiKey cost ~$25 and work with any computer. A single key can anchor an identity without needing a modern phone.
- Community witnessing: In regions where personal device ownership is low, shared community devices + witness-based attestation can bridge the gap. A village elder or community center can attest to presence.
Honest caveat: None of these fully solve the problem. The most marginalized populations — those without any device access — would need some form of sponsored onboarding. Web4 is not unique here: every digital system faces this. But a system that claims to be trust-native must take equity seriously, not hand-wave it. This remains an active design priority.
Going deeper
Who is building this?
Web4 is an open research project. The specification, implementation, and this site are all open source on GitHub. The work is research-driven, not commercially motivated.
There is no company, no token sale, no funding round. This is an attempt to answer a genuine research question: can you build internet infrastructure where trust is native rather than bolted on?
Why this matters: A project about trust should be transparent about its own origins. The code is readable, the reasoning is documented, and the limitations are stated honestly. Judge the ideas on their merit, not on who's behind them.
What's the concrete adoption path from simulation to real protocol?
Web4 uses a 5-tier gradual adoption model, like how HTTPS replaced HTTP over a decade without breaking the web:
- Wrapper: Existing platforms add Web4 trust signals as metadata. No user changes needed. (Like adding HTTPS to an existing site)
- Observable: Trust scores become visible to users. Platforms surface reputation from Web4. (Like the browser padlock icon)
- Accountable: Actions have real consequences through ATP costs. Bad actors face energy costs. (Like spam filters that actually work)
- Federated: Communities connect their trust graphs. Portable reputation across platforms. (Like email's federated model)
- Native: Full Web4 protocol stack. Hardware-bound identity, society governance, cross-platform trust. (Like the web itself)
The honest gap: All five tiers are designed and specified. The first two (Wrapper and Observable) could be integrated into existing platforms today. But no platform has done so yet. The path from “working simulations” to “first real integration” requires a willing partner — a community, platform, or organization that sees value in trust-native infrastructure.
Who runs the infrastructure? How is this deployed?
Web4 is designed as a protocol, not a platform. Like email or the web itself, the infrastructure is federated — multiple independent operators can run nodes that interoperate. No single company controls it.
In practice: witness nodes can be run by universities, nonprofits, companies, or individuals. The trust system is designed so that no single operator can manipulate the network—collusion requires coordinating multiple independent parties.
Web4 identity is also designed to work with W3C Decentralized Identifier (DID) standards — the same standard used by governments and enterprises. Your LCT maps to a standard DID Document, so external systems can verify Web4 identity using protocols they already support.
Honest caveat: This is still early-stage research. Full deployment requires standardization, adoption, and tooling that doesn't exist yet. See the roadmap FAQ above for where things stand today.
Can't someone with lots of hardware create many identities?
Yes, but it's expensive. Creating one LCT (Linked Context Token — Web4's hardware-bound identity) requires a physical device with a security chip (TPM or Secure Enclave — built into most modern phones and laptops). Creating 1000 fake identities means buying 1000 devices — thousands of dollars and physical logistics.
Compare to email: creating 1000 accounts costs nothing. The goal isn't to make fake-identity attacks impossible (nothing can), but to make them economically irrational for most attackers. Nation-state adversaries can always outspend; the system is designed to resist casual abuse, not unlimited resources.
Honest caveat: A sufficiently motivated adversary with a large budget can still attack. Web4 raises the floor, not the ceiling.
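A rough back-of-envelope version of that comparison, in Python. The prices are assumptions, borrowed from figures elsewhere on this page:

```python
# Back-of-envelope Sybil cost comparison. Prices are illustrative assumptions.
fake_identities = 1000

email_cost = fake_identities * 0.00   # free accounts, zero marginal cost
key_cost = fake_identities * 25.00    # FIDO2 security keys, the cheap floor
phone_cost = fake_identities * 50.00  # budget phones with secure elements

print(f"email accounts: ${email_cost:,.0f}")  # $0
print(f"security keys:  ${key_cost:,.0f}")    # $25,000
print(f"budget phones:  ${phone_cost:,.0f}")  # $50,000, plus physical logistics
```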
How do you bootstrap the initial witness network?
Bootstrapping requires a seed network of trusted witnesses. This is the classic “who watches the watchmen” problem. Proposed approaches:
- Partner with existing identity providers (universities, employers) as initial witnesses
- Hardware attestation from device manufacturers (Apple, Google, etc.)
- Web of trust model where existing members vouch for newcomers
- Gradual rollout starting with high-stakes contexts (not consumer social)
What the first 100 interactions look like:
- Actions 1–3: Three founding members create the society with hardware-bound presence tokens. They witness each other — establishing the initial trust graph.
- Actions 4–20: Founders perform real work (writing governance rules, creating initial resources). Trust builds slowly from 0.50 — each quality contribution moves the needle by ~0.02.
- Actions 21–50: New members join, vouched for by founders. First-mover advantage exists but has a ~30-action half-life — newcomers doing quality work catch up to founders by action ~50.
- Actions 51–100: Roles emerge, specialization begins. The society's trust graph becomes rich enough that MRH boundaries create meaningful context. Wealth gap trends toward 0.25 — concentrated enough to reward quality, distributed enough to avoid oligarchy.
See the Playground experiment “Where is the tipping point?” for a hands-on version of cold-start dynamics.
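To make the decay above concrete, here's a toy model of just the first-mover term (illustrative numbers only, not the simulator's actual update rule):

```python
# Toy model: the founders' trust advantage starts at ~0.10 and halves every
# ~30 actions, so quality newcomers roughly catch up by action ~50.
# Illustrative only, not the simulator's internals.

def founder_advantage(action: int, initial: float = 0.10, half_life: int = 30) -> float:
    return initial * 0.5 ** (action / half_life)

for a in (0, 30, 50, 100):
    print(f"action {a:>3}: advantage {founder_advantage(a):.3f}")
# action   0: advantage 0.100
# action  30: advantage 0.050
# action  50: advantage 0.031
# action 100: advantage 0.010
```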
Honest caveat: Bootstrapping is genuinely hard. No perfect solution exists. This is an active research area, not a solved problem.
Don't hardware manufacturers (Apple, Intel) become the new gatekeepers?
This is a real concern. If identity requires a TPM or Secure Enclave, then hardware manufacturers have power over who can participate. Web4 mitigates this in several ways:
- Multiple standards supported: TPM (Intel/AMD), Secure Enclave (Apple), Titan (Google), and future open-source hardware. No single vendor lock-in.
- Attestation, not permission: Hardware provides cryptographic proof of presence — it doesn't need to “approve” your identity. The chip signs; it doesn't decide.
- Open specification: Any hardware that meets the attestation spec can participate. This invites competition rather than consolidation.
Honest caveat: This shifts trust from platforms to hardware supply chains. That's a different dependency, not zero dependency. And open standards can be captured — the history of standards bodies shows that well-resourced incumbents sometimes steer “open” specs to favor their implementations. Whether hardware dependency is better than platform dependency depends on whether supply chain competition proves more durable than platform competition. That's an empirical question, not a settled one.
What if my device is stolen? Do I lose my identity?
No. Web4 uses multi-device witness networks — your identity is spread across multiple devices (phone, laptop, security key). If one is stolen, the others can revoke the compromised device and approve a replacement.
When you report a compromise, a revocation cascade propagates through the witness network. The stolen device's keys become instantly invalid. Any entity that trusted the compromised device gets notified. Your trust history and reputation are preserved — only the compromised device is cut off.
Recovery works like a quorum: you set a threshold when adding devices (e.g., 2-of-3). As long as enough surviving devices agree, you can recover without losing your identity or accumulated trust.
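A minimal sketch of that quorum logic in Python (illustrative: real recovery verifies signed hardware attestations, not device names):

```python
# Sketch of 2-of-3 device recovery. Strings stand in for devices purely
# for illustration; the actual protocol checks cryptographic attestations.

class DeviceSet:
    def __init__(self, devices: set[str], threshold: int = 2):
        self.devices = devices      # e.g. {"phone", "laptop", "yubikey"}
        self.threshold = threshold  # how many devices must co-sign a change

    def revoke_and_replace(self, approvals: set[str], lost: str, new: str) -> bool:
        """Surviving devices vote to cut off a lost device and enroll a new one."""
        valid = approvals & (self.devices - {lost})  # the lost device can't vote
        if len(valid) < self.threshold:
            return False  # not enough witnesses, recovery denied
        self.devices.discard(lost)
        self.devices.add(new)
        return True

mine = DeviceSet({"phone", "laptop", "yubikey"})
ok = mine.revoke_and_replace({"laptop", "yubikey"}, lost="phone", new="new-phone")
print(ok, mine.devices)  # True {'laptop', 'yubikey', 'new-phone'}
```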
Honest caveat: If all your devices are lost simultaneously, recovery requires social vouching from trusted witnesses — deliberately slow and hard, because easy recovery would mean easy identity theft. See the LCT explainer for the full lifecycle.
If every energy transfer costs 5%, how do teams collaborate?
ATP can be transferred, but every transfer burns 5% — making circular farming unprofitable. Teams primarily earn energy together. When a team creates value jointly, each contributor receives their own energy reward based on their verified contribution.
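The arithmetic behind that claim, using the 5% rate from the question:

```python
# Why circular ATP farming loses money: every transfer hop burns 5%.
BURN = 0.05

value = 100.0
for hop in range(1, 6):
    value *= 1 - BURN
    print(f"after hop {hop}: {value:.2f} ATP")
# after hop 1: 95.00 ... after hop 5: 77.38; a round trip only destroys energy
```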
Organizations work through shared context boundaries (MRH). Members of a team see each other's work, validate each other's contributions, and collectively build the team's reputation. But each person's individual energy and trust remain their own — you can't buy someone else's reputation, and a team member's bad behavior affects them, not you.
Honest caveat: The details of organizational identity and pooled contribution attribution are still being worked out. This is one of the harder design problems.
What happens when hardware standards change (TPM v2 → v3, post-quantum)?
Hardware transitions are handled through identity migration — your existing device attests to your new device before the old one is retired. Think of it like transferring a bank account: you prove you're you on the old system, then establish yourself on the new one.
The trust history travels with the identity, not with the hardware. Your trust tensor, energy history, and behavioral record are associated with your identity chain, not with a specific chip. Upgrading hardware is like getting a new passport — the person is the same, the document is new.
Honest caveat: A post-quantum transition (where current cryptography breaks) would require a coordinated migration — similar to the Y2K effort but for identity. This is a known hard problem across all cryptographic systems, not unique to Web4.
Can trust-filtered messaging actually work at internet scale?
Context boundaries (MRH) don't filter every message globally — they define who you can see based on your local trust network. This is computed locally, not centrally. You don't need to check every person on the internet; you only check the people trying to reach you, against the trust graph of people you already know.
This is similar to how email spam filters work today — except instead of content analysis (which AI can defeat), the filter is based on trust relationships (which require real behavioral history to build). The computational cost scales with your network size, not with the internet's size.
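A sketch of that local check in Python (illustrative: the graph shape and depth limit are assumptions, not the MRH spec):

```python
# Accept a message only if the sender is reachable through trusted edges
# within a small horizon. Cost scales with MY network, not the internet's size.
from collections import deque

def reachable(trust: dict[str, set[str]], me: str, sender: str,
              max_depth: int = 2) -> bool:
    seen, frontier = {me}, deque([(me, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if node == sender:
            return True
        if depth < max_depth:
            for peer in trust.get(node, set()) - seen:
                seen.add(peer)
                frontier.append((peer, depth + 1))
    return False

graph = {"me": {"alice"}, "alice": {"bob"}, "bob": {"stranger"}}
print(reachable(graph, "me", "bob"))       # True: friend-of-a-friend
print(reachable(graph, "me", "stranger"))  # False: outside my horizon
```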
Honest caveat: Performance at true internet scale (billions of users) is unproven. Simulations handle thousands of agents. The gap between simulation and deployment is significant and contains unknown challenges.
How does trust transfer between different communities?
Communities are federated, not merged. When you move from one community to another, your trust history is visible but not automatically imported. The new community can see your track record — but they set their own standards for what trust level earns what privileges.
Think of it like an academic transcript: your grades from one university are legible to another, but the second university decides what credits to accept. Your data analysis trust of 0.85 in Community A might start you at 0.6 in Community B, because B has stricter standards.
The key design: trust is portable but not dictatorial. No community is forced to accept another community's standards. This prevents one community from inflating trust scores and exporting them.
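One possible import policy, sketched below. The discount weight is Community B's policy choice, not a protocol rule:

```python
# Sketch: Community B discounts external trust rather than importing it as-is.
# The 0.7 weight is an illustrative policy choice, not part of the protocol.

def imported_trust(external_score: float, discount: float = 0.7,
                   floor: float = 0.0) -> float:
    """What a newcomer starts with, given their track record elsewhere."""
    return max(floor, external_score * discount)

print(f"{imported_trust(0.85):.2f}")  # 0.59, roughly the 0.85 -> 0.6 example above
```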
Honest caveat: The specific mechanics of cross-community trust mapping are still being researched. How much to weight external trust vs internal trust is a policy decision each community makes — the protocol provides the infrastructure, not the rules.
What does “witnessed presence” actually look like day-to-day?
For most users, most of the time: invisible. Your device's security chip handles attestation automatically, like how HTTPS encryption works — you don't think about it, it just happens. You pick up your phone, the chip confirms it's your device, and you're verified.
Witnessing becomes visible only for high-stakes actions: creating a new identity, recovering from device loss, or performing actions that require elevated trust. In those cases, you'd see something like “2 of your 3 linked devices confirmed this action” — similar to how banks send a verification SMS for large transfers.
Honest caveat: The exact UX hasn't been built yet. Getting the balance right between security and friction is a design challenge. Too invisible and users don't trust it; too visible and it becomes annoying.
What about shared devices? Family computer, library terminal, borrowed phone?
Hardware-bound identity doesn't mean one person, one device. It means one person has at least one device that anchors their identity. You can use a shared computer — you just can't perform high-trust actions from it without your own device nearby to witness.
Think of it like a hotel business center: you can use the shared computer to browse, but for banking you'd use your own phone to authenticate. Web4 works similarly — shared devices can access public content, but identity-verified actions need your personal device to co-sign.
Honest caveat: This creates an access gap for people who don't own any personal device with a security chip. How to include the unbanked/undeviced population is an unsolved equity problem — and a real one.
How do AI agents participate? Can a bot earn trust?
Yes, but transparently. AI agents in Web4 are first-class entities — they get their own LCT (bound to the hardware they run on), their own ATP budget, and their own trust tensor. The critical difference: they must be labeled as non-human.
An AI agent earns trust the same way anyone does: by taking actions, spending ATP, and building a behavioral track record. A helpful coding assistant that consistently delivers quality work earns high trust in that role. A spam bot burns ATP faster than it earns and dies — just like a human spammer would.
Initial trust is cold-start: new agents begin with minimal ATP and must be vouched for by their operator (the human or organization that deployed them). The operator's own reputation is on the line — deploy a malicious bot, and your trust takes the hit too.
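A sketch of that cold start (class and field names are illustrative assumptions, not the spec):

```python
# Sketch: an operator stakes some of their own ATP to vouch for a new agent,
# and the agent starts labeled non-human with trust derived from its operator.
from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    is_human: bool     # agents must be labeled non-human
    atp: float = 0.0
    trust: float = 0.0

def deploy_agent(operator: Entity, agent_name: str, stake: float) -> Entity:
    """The operator funds the agent's cold start; their reputation backs it."""
    operator.atp -= stake  # vouching has a real cost to the operator
    return Entity(agent_name, is_human=False, atp=stake,
                  trust=operator.trust * 0.5)  # inherits a discounted trust seed

alice = Entity("alice", is_human=True, atp=100.0, trust=0.8)
bot = deploy_agent(alice, "helper-bot", stake=20.0)
print(bot)  # starts small, labeled non-human, vouched for by alice
```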
Honest caveat: The boundary between “AI agent” and “human with AI assistance” is blurry and getting blurrier. How to handle AI-augmented actions (you wrote it, but GPT helped) is an open question. The current design handles clearly autonomous agents well but struggles with the hybrid cases. On the regulatory side, Web4's transparency and audit primitives are designed to map to EU AI Act (2024/1689) requirements — but that compliance hasn't been independently validated yet.
Doesn't permanent reputation conflict with GDPR's “right to be forgotten”?
Yes, this is a genuine tension. Web4's trust model assumes reputation persists across lives and contexts. GDPR (and similar regulations) give individuals the right to request data deletion.
Key distinction: Web4 stores behavioral scores, not personal data. Your trust tensor says “this entity has 0.85 data analysis trust earned across 200 interactions” — it doesn't store your name, email, or the content of those interactions. The score is pseudonymous and attached to a hardware key, not a legal identity.
That said, if someone can correlate a hardware key to a person, the trust history becomes personal data under GDPR. This creates a real compliance challenge that would need to be addressed in one of two ways: (a) allowing identity resets with trust loss, or (b) legal frameworks that exempt behavioral scores from deletion rights.
Honest caveat: Neither solution is clean. Allowing resets undermines permanent consequences (a core feature). Exempting scores from deletion rights is legally uncertain. This is an unresolved policy question, not a technical one.
If my reputation is permanent, who can see it? What about privacy?
Not everyone can see everything. Web4 uses context boundaries (MRH) to limit who sees what. Your trust scores are only visible to entities within your relationship network — not broadcast to the world.
Think of it like real-life reputation: your coworkers know you're reliable at work, your neighbors know you keep a tidy yard, but neither group sees the other's picture. MRH formalizes this — trust is contextual, scoped by the depth of your relationship chain.
What's visible: behavioral scores (e.g., “0.85 trust in data analysis”), not the underlying interactions. The score is attached to a hardware key, not your name. A spammer with no trust connections can't even see your data, let alone manipulate it.
Zero-knowledge trust verification goes further: you can prove your trust meets a threshold (“my T3 Training exceeds 0.7”) without revealing the actual number. This enables trust-gated access — join a community, accept a task, or enter a partnership — while keeping your full trust history private. Think of it like a credit check that says “approved” without showing your exact score.
Honest caveat: Privacy is structural, not absolute. Someone who shares your trust network can see your scores in that context. And if a hardware key is ever linked to a real identity (through a data breach or correlation attack), the trust history becomes de-anonymized. MRH limits blast radius; zero-knowledge proofs limit information leakage; but neither guarantees perfect anonymity.
What about people who can't afford devices with TPM chips?
This is a genuine equity concern. Web4 identity requires hardware with secure elements (TPM, Secure Enclave, or FIDO2). Today, most smartphones sold since ~2018 include these chips, including budget Android devices. But “most” isn't “all.”
The 5-tier adoption model helps: at the Wrapper and Observable tiers, existing platforms handle identity while exposing Web4 trust signals. Full hardware binding only applies at higher tiers. This creates a gradual on-ramp rather than an all-or-nothing gate.
Honest caveat: Any system that makes identity expensive to create will disadvantage those with fewer resources. Web4 makes Sybil attacks expensive; the tradeoff is that legitimate participation also requires hardware investment. Whether this tradeoff is net-positive depends on how affordable secure hardware becomes — a trend Web4 can influence but doesn't control.
Is the 0.5 trust threshold universal? What about cultural differences?
The 0.5 threshold is a society-level parameter, not a protocol constant. Each society (community) in Web4 sets its own policies through the SAL governance framework. A scientific research community might set a higher trust threshold (0.7) for publishing. A social community might set a lower one (0.3) for casual interaction.
The 0.5 default draws from phase transition mathematics: below 0.5, behavior is statistically indistinguishable from random; above it, intentional patterns emerge. But it's a starting point, not a mandate. Cultural trust norms absolutely vary — that's why governance is society-local, not protocol-global.
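What “society-level parameter” looks like in practice, sketched in Python (field names are illustrative, not the SAL schema):

```python
# Sketch: thresholds live in per-society policy, not in the protocol.
society_policies = {
    "research-collective": {"publish_threshold": 0.7},
    "casual-forum":        {"publish_threshold": 0.3},
}

def may_publish(society: str, trust: float) -> bool:
    return trust >= society_policies[society]["publish_threshold"]

print(may_publish("research-collective", 0.6))  # False: stricter local norms
print(may_publish("casual-forum", 0.6))         # True: same person, other context
```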
How does V3 scoring handle creative or unconventional work?
V3 measures outputs across three dimensions: Valuation (how valued by the community), Veracity (accuracy and honesty), and Validity (appropriateness for context). For creative work, these shift weight naturally:
- A satirical essay: High Valuation (entertaining), variable Veracity (it's satire — literal truth isn't the point), High Validity (appropriate for the context)
- An abstract painting: High Valuation (community appreciates it), Veracity less relevant, High Validity (fits the art community's norms)
V3 scoring is role-contextual, just like T3. A poem isn't scored the same way as a research paper because they exist in different role contexts with different community norms. The community that receives the work defines what “valuable” means in their context.
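A sketch of how that weighting could shift the math. The weights are made up for illustration; each community sets its own per role:

```python
# Sketch: role-contextual V3 scoring as a weighted average of the three axes.
# The weights here are illustrative assumptions, not the spec's definitions.

def v3_score(valuation: float, veracity: float, validity: float,
             weights: tuple[float, float, float]) -> float:
    w_val, w_ver, w_vld = weights
    total = w_val + w_ver + w_vld
    return (valuation * w_val + veracity * w_ver + validity * w_vld) / total

satire = v3_score(0.9, 0.3, 0.9, weights=(0.5, 0.1, 0.4))  # veracity downweighted
paper = v3_score(0.6, 0.9, 0.8, weights=(0.2, 0.6, 0.2))   # veracity dominates
print(f"satire in a humor context: {satire:.2f}")   # 0.84
print(f"paper in a research context: {paper:.2f}")  # 0.82
```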
Honest caveat: This does mean unpopular or avant-garde work may score lower on Valuation initially. Innovation often conflicts with existing norms. Web4 mitigates this through Veracity (truthful work retains long-term value) but doesn't eliminate the tension between novelty and community approval.
Who builds this? Why is the project anonymous?
Web4 is an independent research project, not a company or academic lab. The code is fully open-source on GitHub. There is no VC funding, no token sale, and no commercial interest.
The relative anonymity is intentional but also ironic — a project about trust that doesn't tell you who's behind it. The reasoning: Web4's value should stand or fall on its ideas and implementations, not on who proposed them. The code is inspectable, the simulations are reproducible, and the threat analysis is public.
Honest caveat: “Trust the code, not the team” is a fine principle for open-source software, but you're right to notice the tension. A project about trust should probably model more transparency than it currently does. This is noted.
How do you catch cheaters?
Every action in Web4 creates a tamper-evident record — hash-chained event logs that can't be altered after the fact. Think of it like a receipt that the whole network can verify.
The system watches for anomalies automatically: sudden wealth spikes, coordinated behavior between accounts, trust scores changing faster than should be possible, or activity patterns that don't match an entity's history. When something looks wrong, it gets flagged — not by a moderator, but by the math.
This is different from today's internet where cheating is only caught when someone reports it. In Web4, the audit trail is continuous and the detection is automatic. You can still try to cheat — but the system is designed so that cheating is expensive (burns ATP), detectable (anomaly alerts), and unprofitable (low-quality work earns nothing).
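A minimal sketch of the hash-chain idea (illustrative: the real audit format is defined by the Web4 spec, not this snippet):

```python
# Tamper-evident event log: each entry commits to the previous entry's hash,
# so rewriting history breaks every link after the edit.
import hashlib
import json

def append(log: list[dict], event: dict) -> None:
    prev = log[-1]["hash"] if log else "genesis"
    body = json.dumps({"prev": prev, "event": event}, sort_keys=True)
    log.append({"prev": prev, "event": event,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(log: list[dict]) -> bool:
    """Recompute every link; any after-the-fact edit breaks the chain."""
    prev = "genesis"
    for entry in log:
        body = json.dumps({"prev": prev, "event": entry["event"]}, sort_keys=True)
        if entry["prev"] != prev or \
           entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

log: list[dict] = []
append(log, {"actor": "alice", "action": "post", "atp": -1})
append(log, {"actor": "alice", "action": "review", "atp": 2})
print(verify(log))             # True
log[0]["event"]["atp"] = 999   # attempt to rewrite history
print(verify(log))             # False: the hashes no longer match
```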
Want to go deeper? What Could Go Wrong covers the 7 biggest real-world risks in plain English. Our Threat Model covers 6 technical attack surfaces with formal analysis. The Explore Guide also has a dedicated Skeptic's Tour for those who want to start with the weaknesses.
Ready to See It Work?
Start with our 10-minute interactive tutorial. Zero to understanding, no jargon.
This page exists because visitor feedback said: “Explain the problem before the solution.”
Got more feedback? Open an issue.