How Markets Self-Organize
Think of it like surge pricing for ride-sharing. When it's raining and everyone needs a ride, prices go up. Higher prices attract more drivers. More drivers means shorter waits and lower prices. Nobody planned this — the market self-organized.
Web4 federations work the same way with ATP. When speed specialists are scarce, speed operations cost more. High prices signal profit opportunities, agents specialize, supply increases, and prices stabilize. No central planner — markets allocate resources efficiently through price signals alone.
This is comparative advantage at the agent level — agents develop capabilities the federation values, and the market self-organizes toward efficient allocation.
What is a federation?
A federation is a group of agents (people, organizations, or AI) that pool their capabilities to handle work none of them could do alone — like a freelance collective where members bring different skills. Federations form voluntarily, and members keep their individual trust scores and ATP budgets.
The economics below explain how federations price and allocate work without a central manager — using the same supply-and-demand signals that make ride-sharing work.
The Problem Web4 Solves
❌ Traditional Platforms
- Static pricing: Costs don't respond to supply/demand
- Central planning: Platform decides who provides what
- Inefficient allocation: Shortages coexist with surpluses
- No specialization signals: Agents don't know what's valuable
- Rigid markets: Can't adapt to changing needs
✅ Web4 Federations
- Dynamic pricing: ATP costs track scarcity in real-time
- Market signals: Prices guide agent specialization
- Efficient allocation: Supply flows to high-demand areas
- Specialization emerges: Agents develop profitable capabilities
- Adaptive markets: Self-organize as needs change
Watch Markets Self-Organize
This simulation shows how ATP prices respond to component scarcity. Click any component to see its market history. Watch how agents specialize toward high-premium areas, increasing supply and stabilizing prices.
How Dynamic ATP Pricing Works
Track Supply and Demand
The federation tracks demand (operations requesting each component) and supply (agents with high scores in each component). This happens continuously across all operations.
Demand = 30 operations
Supply = 2 agents (with speed ≥ 0.75)
Calculate Scarcity Factor
Like checking how many taxis are available vs. how many people need rides. Scarcity = demand / supply. When demand exceeds supply, the component is scarce. When supply exceeds demand, there's a surplus.
Speed example:
scarcity = 30 / 2 = 15.0 (very scarce!)
Apply ATP Premium
High scarcity → premium (up to +50% ATP cost)
Low scarcity → discount (up to -20% ATP cost)
capped at 1.5× (50% max)
Speed example:
premium = 1.0 + (0.5 × 15.0) = 8.5 → capped at 1.5×
ATP cost = 30 ATP × 1.5 = 45 ATP
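The three steps above can be sketched in a few lines of Python. This is a simplified sketch of the pricing logic, not the actual engine code — the surplus-side discount curve and its floor are assumptions; the shortage formula, 1.5× cap, and speed example come from the text:

```python
def atp_premium(demand: int, supply: int, coefficient: float = 0.5,
                cap: float = 1.5, floor: float = 0.8) -> float:
    """Scarcity-driven price multiplier for one component."""
    if supply == 0:
        return cap                          # no qualified agents: maximum premium
    scarcity = demand / supply              # e.g. 30 / 2 = 15.0
    if scarcity >= 1.0:                     # shortage: surcharge, capped at +50%
        return min(1.0 + coefficient * scarcity, cap)
    # Surplus side (assumed shape): linear discount, floored at -20%
    return max(1.0 - coefficient * (1.0 - scarcity), floor)

def atp_cost(base_atp: float, demand: int, supply: int) -> float:
    return base_atp * atp_premium(demand, supply)

print(atp_premium(30, 2))    # 1.5 — uncapped value is 8.5, capped at 1.5x
print(atp_cost(30, 30, 2))   # 45.0 ATP
```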
Agents Respond to Profit Signals
When speed operations pay 50% premiums, agents have economic incentive to specialize in speed. They train models, optimize infrastructure, and develop speed capabilities. As supply increases, premiums fall, and the market reaches equilibrium.
- High demand, low supply → premium
- Premium signals profit opportunity
- Agents specialize to capture premium
- Supply increases
- Premium decreases
- Equilibrium: supply ≈ demand, premium ≈ 1.0×
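The feedback loop above can be simulated directly. Two assumptions for illustration: four new specialists enter per round while the premium signals profit, and the premium curve is normalized so that balanced supply (scarcity = 1.0) gives exactly 1.0×:

```python
def premium(demand: int, supply: int, k: float = 0.5,
            cap: float = 1.5, floor: float = 0.8) -> float:
    """Normalized variant: supply == demand gives a 1.0x multiplier."""
    scarcity = demand / supply
    return max(floor, min(cap, 1.0 + k * (scarcity - 1.0)))

def simulate(demand: int = 30, supply: int = 2, rounds: int = 12) -> list:
    history = []
    for _ in range(rounds):
        p = premium(demand, supply)
        history.append((supply, round(p, 2)))
        if p > 1.05:        # profit signal: more agents specialize
            supply += 4
        else:               # equilibrium reached: entry stops
            break
    return history

print(simulate())
# Premiums stay capped at 1.5x while supply is scarce, then fall toward 1.0x
```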
Why This Matters
🎯 Efficient Resource Allocation
Markets allocate resources to their highest-value use without any central authority deciding. Supply flows naturally to high-demand areas through price signals alone.
🧠 Emergent Specialization
Agents don't need instructions to specialize — they respond to economic incentives. High premiums signal "the federation needs this capability," and agents develop it.
⚖️ Self-Regulating Markets
No administrator adjusts prices — markets find equilibrium automatically. Shortages create premiums, premiums attract supply, supply stabilizes prices.
🔄 Adaptive to Change
When federation needs change (new use cases, different workloads), markets adapt automatically. No re-planning needed — agents follow the premiums.
📊 Federation Health Is Observable
A federation's health can be quantified across four weighted dimensions — giving an objective measure of whether the community is thriving or declining:
Composite health scoring: session 30, track 3. Alerts trigger when any dimension falls below threshold. A healthy federation scores above 0.7 composite — unhealthy federations show early warning signs before collapse.
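A minimal sketch of how such a composite could be computed. The dimension names, weights, and per-dimension alert threshold below are illustrative assumptions — only the 0.7 composite threshold comes from the text:

```python
# Dimension names, weights, and the 0.5 alert threshold are assumptions.
WEIGHTS = {"economic": 0.3, "trust": 0.3, "participation": 0.2, "governance": 0.2}
ALERT_THRESHOLD = 0.5

def federation_health(scores: dict) -> tuple:
    """Return (composite score, alerting dimensions, healthy?)."""
    composite = round(sum(WEIGHTS[d] * scores[d] for d in WEIGHTS), 2)
    alerts = [d for d in WEIGHTS if scores[d] < ALERT_THRESHOLD]
    return composite, alerts, composite > 0.7 and not alerts

print(federation_health(
    {"economic": 0.8, "trust": 0.9, "participation": 0.6, "governance": 0.7}))
# (0.77, [], True)
```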
Real-World Scenarios
📱 Scenario 1: Mobile AI Surge
Event: New mobile app launches, creating huge demand for speed (low-latency inference).
Market response:
- Speed operations become 40% more expensive (1.4× premium)
- Edge compute providers see profit opportunity, deploy low-latency infrastructure
- Speed specialists enter market, supply increases
- Premium drops to 1.1× within 2 weeks
- Mobile app gets fast service, providers earn premium during shortage
✅ Outcome: Market adapts without central coordination
🔬 Scenario 2: Accuracy Oversupply
Event: Many agents specialize in accuracy, but few operations require it.
Market response:
- Accuracy operations get 15% discount (0.85× premium)
- Some accuracy specialists switch to other components
- Operations requiring accuracy get cheaper service
- Market rebalances as agents diversify
✅ Outcome: Surplus automatically creates discounts, agents adjust
🏥 Scenario 3: Critical Infrastructure Demand
Event: Healthcare federation needs reliability (can't tolerate downtime).
Market response:
- Reliability operations pay 30% premium (1.3×)
- High-reliability providers (redundant systems, 99.99% uptime) join healthcare federation
- Supply meets demand
- Premium stabilizes at 1.1× (slight premium for critical service)
✅ Outcome: Critical needs attract specialized providers naturally
You've got the basics
Markets self-organize through ATP price signals. No central planner needed. Everything below is optional — expand any section you're curious about.
▶Technical Details
Parameters, transfer mechanics, implementation
Service Capability Dimensions
- Accuracy: Correctness of results (precision, recall)
- Reliability: Uptime, availability, fault tolerance
- Consistency: Result stability across requests
- Speed: Low latency, fast response times
- Efficiency: ATP cost per operation, resource usage
Premium Parameters
- Scarcity coefficient: 0.5 (premium = 1.0 + 0.5 × scarcity)
- Premium cap: 1.5× (+50% maximum surcharge)
- Discount floor: 0.8× (up to -20% discount)
- Recalculation interval: every 20 operations
Transfer Mechanics
Every ATP transfer between entities burns 5% of the amount. This anti-farming mechanism prevents circular flows (A → B → C → A) from inflating balances. In cross-federation transfers, the fee applies at each hop — making honest single-entity value creation more profitable than multi-identity gaming.
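A sketch of the burn mechanic — the function names are illustrative, but the 5% rate and the circular-flow loss follow directly from the text:

```python
BURN_RATE = 0.05  # 5% destroyed on every transfer

def transfer(balances: dict, src: str, dst: str, amount: float) -> None:
    """Move ATP between entities; the burn makes circular routing lossy."""
    balances[src] -= amount
    balances[dst] += amount * (1 - BURN_RATE)

balances = {"A": 100.0, "B": 0.0, "C": 0.0}
transfer(balances, "A", "B", 100.0)    # B receives 95
transfer(balances, "B", "C", 95.0)     # C receives 90.25
transfer(balances, "C", "A", 90.25)    # A ends with ~85.74
print(balances)  # the A -> B -> C -> A loop destroyed ~14.26 ATP
```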
Federation Circuit Breakers
What happens when a federation partner becomes malicious or fails? Each bridge has a circuit breaker that monitors trust degradation, response latency, and dispute rates. If a partner society consistently misbehaves, the circuit trips — isolating it before damage cascades across the federation. Recovery requires demonstrating improved behavior over time, not just reconnecting.
Cross-Society Policy Conflicts
What if you're a member of two societies with conflicting rules? A research community says “share all data openly” while a healthcare federation says “never share patient data.” Which rule wins?
Think of it like having a primary employer and a side project — your main job's rules usually take priority when there's a conflict.
Web4 resolves conflicts using MRH-weighted priority (the society you're more closely connected to in your trust network wins). If you're primarily a healthcare practitioner who occasionally contributes to research, healthcare rules take precedence on conflicting policies. Three resolution strategies exist:
- Priority: Closer society's policy wins (most common). E.g., if you're posting in Community A but also belong to Community B, Community A's rules apply because that's where the action is happening.
- Intersection: Only policies both societies agree on apply
- Freeze: Emergency halt when conflicts can't be resolved — requires 2/3 quorum to unfreeze
Concrete example: Same person, different trust scores
Alice is active in two societies. A Research Federation rates her trust at 0.82 — she's published quality work and has high Talent scores. A Healthcare Federation rates her at 0.44 — she missed deadlines and her Temperament (reliability) scores are low.
Who's right? Both are. Trust is role-specific. Alice is genuinely skilled at research but unreliable in clinical settings. These aren't conflicting assessments — they're different dimensions of the same person, measured in different contexts.
When Alice acts at the boundary between the two societies (e.g., submitting healthcare research), the priority rule applies: since the action is happening in the healthcare context, her 0.44 trust applies. Her research reputation doesn't override the healthcare society's direct experience with her.
Every resolution is recorded in a hash-chained audit trail. Disputes can be appealed (up to 2 appeals per resolution). This is formally specified (44 integration checks) but hasn't been tested with real cross-society scenarios yet.
What Does Switching Societies Actually Feel Like?
You've been active in a tech community for two years. You want to join a creative writing group. What happens to your reputation?
You request to join the writing group. Your tech trust score (0.87) is visible, but the writing group applies a federation discount — external trust is weighted at ~65% because it's from an unrelated domain. Your starting trust: 0.57.
Your first contributions are evaluated on the writing group's terms — creativity, feedback quality, engagement. Your tech Talent score doesn't automatically carry over; you build trust from scratch in the dimensions that matter here.
Your tech identity persists. You don't lose your 0.87 tech trust. Both profiles are linked to the same LCT — you're one person with two role-specific reputations. Like having a great LinkedIn profile and a brand-new Goodreads account.
The UX? You click “Join,” your device signs the request with your LCT, and you're in — no new account, no new password, no starting from zero. Your trust transfers partially, then you build locally. Like transferring to a new school: your grades come with you, but you still need to prove yourself to new teachers.
This is the same mechanism as Maya's federation transfer described below — the discount reflects the uncertainty of cross-domain trust, not punishment. Higher discounts apply to more distant domains; lower discounts between related communities.
How does the ~65% discount get decided? Who controls federation?
The discount formula: external trust is adjusted by your_score × MRH_decay (0.7) × domain_match_factor. The domain match factor ranges from ~0.3 (completely unrelated fields) to ~0.9 (closely related communities), so the two factors combine into a single effective multiplier. Tech → writing works out to roughly 0.65 × 0.87 = 0.57; tech → tech would be higher, roughly 0.85 × 0.87 = 0.74.
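As a sketch — the formula is from the text, but the specific domain-match values below are illustrative assumptions:

```python
MRH_DECAY = 0.7  # per-hop decay

def imported_trust(external_score: float, domain_match: float) -> float:
    """Starting trust in the new community after decay and domain matching."""
    return external_score * MRH_DECAY * domain_match

# Domain-match values below are illustrative assumptions.
print(round(imported_trust(0.87, 0.93), 2))  # tech -> writing: 0.57
print(round(imported_trust(0.87, 0.60), 2))  # tech -> an unrelated field: 0.37
```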
Who decides to federate? Federation is opt-in and manual. Communities choose who they trust through governance voting — existing members vote on whether to accept a federation agreement. Higher-trust members have more voting weight, but no single member dominates. Think of it like a co-op deciding whether to partner with another co-op.
Is any of it automatic? The trust calculation is automatic (MRH decay and domain matching are formulas), but the decision to federate is always a community governance choice. A community can also set policies like “auto-accept members from communities we've federated with” — but that policy itself requires a vote.
What if two societies give the same person completely different trust scores?
This is expected, not a bug. Trust is role-contextualized. You might be a 0.9-trusted data analyst in a tech community and a 0.3-trusted newcomer in a cooking community. Those scores reflect genuine competence differences — and they should be different.
What Society C sees: When you interact with Society C, it doesn't pick one score. It applies MRH decay to each federated score independently. If C is federated with both A and B, it sees A's 0.9 × 0.7 decay = 0.63 and B's 0.3 × 0.7 decay = 0.21. C then weights these by how relevant each society is to the context (domain match factor) and by how much C trusts A and B as societies.
No single score “wins.” The final trust in C is a weighted composition, not a vote. A tech community's assessment matters more for tech tasks; a cooking community's assessment matters more for recipes. If the assessments genuinely conflict in the same domain, that itself is a signal — C may require more local interaction before trusting you, effectively treating you as a newcomer until you build direct evidence in C.
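That weighted composition can be sketched as follows — the 0.9/0.3 assessments and 0.7 decay come from the example above, while the context weights are illustrative assumptions:

```python
MRH_DECAY = 0.7

def composed_trust(assessments: dict, context_weight: dict) -> float:
    """C's view: decay each federated score, then weight by how relevant
    and how trusted the assessing society is. A blend, not a vote."""
    total = sum(context_weight[s] for s in assessments)
    return sum(context_weight[s] * score * MRH_DECAY
               for s, score in assessments.items()) / total

# A rates 0.9, B rates 0.3; C weights A's context as more relevant (assumed).
print(round(composed_trust({"A": 0.9, "B": 0.3}, {"A": 0.8, "B": 0.2}), 2))
# 0.55 — a weighted blend of the decayed assessments
```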
Does “0.7 trust” mean the same thing in every community?
No — and that's by design. Each community sets its own parameters: the 0.5 trust threshold is a society-level setting, not a protocol constant. A research community might require 0.7 for publishing; a casual social space might accept 0.3. T3 weights, decay half-lives, and cost multipliers are all configurable per community.
How cross-community translation works: When trust crosses a federation boundary, it's not copied — it's translated. Community A's 0.7 arrives at Community B discounted by MRH decay (typically ×0.7), giving ~0.49. Community B then interprets that 0.49 through its own calibration: is 0.49 enough for the action being attempted? The number changes meaning at the boundary, just like currency exchange rates change the value of “100” between countries.
Bridge agents ground the calibration. People active in both communities develop trust in both calibration systems. Their cross-community interactions create a natural “exchange rate” — if bridge agents with A-trust of 0.7 consistently earn B-trust of 0.8, that ratio becomes an implicit calibration signal.
Honest caveat: Automatic calibration between communities with very different trust cultures is an open research question. The current approach (MRH decay + bridge agents + domain matching) works for similar communities but may struggle when trust norms diverge significantly.
When Values Themselves Conflict
Policy conflicts have technical solutions (priority, intersection, freeze). But what about communities with fundamentally incompatible values? One society considers content censorship ethical; another considers it harmful. One society values radical transparency; another protects privacy as a human right.
Web4's answer: it doesn't force consensus. Societies with irreconcilable values simply don't federate with each other. The MRH boundary becomes a value boundary — you see and interact with societies whose norms are compatible with yours. This is deliberate: there is no global arbiter of what's “right.”
The cost: value balkanization. Societies may isolate into echo chambers. The mitigation: bridging societies that voluntarily span value boundaries, mediating cross-society interactions at increased ATP cost. Bridge societies earn trust from both sides by demonstrating fairness — but this requires human judgment, not algorithms.
Even basic parameters differ: the 0.5 trust threshold is a society-level setting, not a protocol constant. A research community might require 0.7 trust for publishing; a casual social space might accept 0.3. T3 weights, decay half-lives, and cost multipliers are all society-configurable. When federating, cross-society trust is translated at the boundary — portable but interpreted through local norms. See the cultural trust FAQ for details.
This is a philosophical constraint, not a technical one. Web4 provides the infrastructure for pluralism but can't solve moral disagreement itself.
Consensus Under Partial Synchrony
In plain English: a voting system that keeps working even when some participants are offline or dishonest.
Federation members don't always have reliable connections. Networks partition, messages arrive late, clocks drift. Web4 uses a PBFT-Lite consensus protocol (Practical Byzantine Fault Tolerance — a method for a group to agree on decisions even when some members are dishonest or offline) designed for this reality: vector clocks track causal ordering, partition detectors identify network splits, and leader election continues making progress even when some nodes are unreachable.
The key insight: in partial synchrony, 40% finalization rate IS progress — the system doesn't stall waiting for perfect conditions. When partitions heal, nodes reconcile state automatically using the causal history encoded in vector clocks. Byzantine faults (malicious nodes) are bounded by the standard n ≥ 3f+1 requirement.
Formally verified (303 checks across 8 analysis tracks). At scale (n=1000 nodes), HotStuff-style linear consensus (a modern optimization where each round needs only one leader message instead of everyone-to-everyone) uses 250x fewer messages than classical PBFT — making large federations practical without drowning in coordination overhead.
Trust-weighted voting: Consensus votes are weighted by each member's trust score. High-trust members have more voting weight — the quorum can be reached with fewer high-trust members than low-trust ones. This is Web4's answer to “who counts most?”: behavior does.
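A sketch of both rules — the n ≥ 3f + 1 bound is the standard BFT requirement named above, while the 2/3 trust-weight quorum threshold and the weighting scheme are assumptions:

```python
def max_byzantine(n: int) -> int:
    """Largest f that the n >= 3f + 1 safety bound tolerates."""
    return (n - 1) // 3

def quorum_reached(votes: dict, trust: dict, threshold: float = 2 / 3) -> bool:
    """Trust-weighted quorum: weight of 'yes' voters vs total trust weight."""
    yes = sum(trust[m] for m, v in votes.items() if v)
    return yes / sum(trust.values()) >= threshold

trust = {"alice": 0.9, "bob": 0.8, "carol": 0.4, "dave": 0.3}
votes = {"alice": True, "bob": True, "carol": False, "dave": False}
print(max_byzantine(4))              # 1
print(quorum_reached(votes, trust))  # True — two high-trust members suffice
```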
Why cross-federation decisions take longer
Trust updates propagate ~13x slower across federation boundaries than within a single federation. This isn't a bug — it's a consequence of causal ordering across independent networks. Think of it like international diplomacy: decisions within a country are fast, but treaties between countries take longer because both sides need to verify, translate, and agree. Web4 makes this explicit: cross-federation operations cost more ATP and carry higher latency, which naturally encourages local community strength.
Governance Voting
Federation task consensus (above) handles routine operations. But governance decisions — policy changes, membership additions, emergency freezes, appeal outcomes — need stronger guarantees because they change the rules themselves and can't be rolled back.
Governance uses PBFT-style 3-phase voting (propose → prepare → commit) with trust-weighted voting power. Higher-trust federation members get more influence, but no single member can dominate. Sybil resistance prevents vote stuffing: creating fake federation members is hardware-bound and expensive.
Malicious federation behavior — equivocation (voting differently in different phases), vote withholding, proposal spam — is tracked across proposals and triggers graduated penalties up to federation ejection.
Federation governance BFT: 81 validated checks. Adaptive quorum, malicious detection, and sybil-resistant voting power all formally specified.
Implementation
Dynamic ATP pricing is implemented in the Web4 game engine: web4/game/engine/dynamic_atp_premiums.py
The system tracks supply/demand continuously, recalculates scarcity every 20 operations, and applies premiums to ATP costs in real-time. Agents register their component capabilities, operations declare requirements, and markets self-organize.
▶How Federations Learn Without Sharing Secrets
How communities learn from each other safely
A privacy paradox: federations improve their trust models by learning from each other, but they shouldn't share raw behavioral data across society boundaries. Web4 resolves this with privacy-preserving federated learning.
What Federations Share
- Gradient updates — how optimal trust parameters changed, not why
- Aggregate statistics — how well strategies worked, not who used them
- Model improvements — converged parameters, not training data
What Stays Private
- Individual behavior — who did what, when
- Trust scores — specific entity reputation data
- Identity linkages — which people belong to which society
The mechanism: each federation trains locally on its own behavioral data, then contributes only the direction of improvement (gradient update) to a shared model. Gossip-based propagation spreads these updates across the network. Differential privacy adds calibrated noise before sharing, so individual contributions can't be reverse-engineered.
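The share step can be sketched as clip-then-noise. This is a toy illustration of the idea, not the protocol's mechanism — the clipping norm and noise scale are assumed values, and (as noted below) real differential-privacy calibration is an empirical question:

```python
import random

def private_gradient(gradient: list, sensitivity: float = 1.0,
                     noise_scale: float = 0.1) -> list:
    """Clip the update's norm, then add Gaussian noise before sharing,
    so an individual contribution can't be reverse-engineered."""
    norm = sum(g * g for g in gradient) ** 0.5
    clip = min(1.0, sensitivity / norm) if norm > 0 else 1.0
    return [g * clip + random.gauss(0.0, noise_scale) for g in gradient]

# Only the noised direction of improvement leaves the federation:
print(private_gradient([0.8, -0.3, 0.5]))
```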
▶ Model poisoning: the adversarial case
A malicious federation could contribute corrupted gradient updates to sabotage collective learning. Web4 defends against this by requiring gradient updates to be validated against a set of known-good test cases before acceptance. Updates that degrade validation performance are rejected, and the contributing federation is flagged. Combined with trust-weighted averaging (higher-trust federations contribute more to the shared model), poisoning requires both high trust and coordinated sabotage — expensive and detectable.
Federated trust learning: formally specified (session 32). Privacy guarantees are simulation-level — real-world differential privacy calibration requires empirical validation against actual federation sizes and behavioral distributions.
▶When Federations Merge
Fair split calculations
Short version: each group gets a share based on what they uniquely bring to the table, and both sides must benefit or the merger doesn't happen.
What happens when two Web4 federations decide to combine? The merged federation creates new value — access to more specialists, broader trust networks, economies of scale. How that surplus gets divided determines whether mergers are fair.
The Shapley Value Principle
The Shapley value (a concept from cooperative game theory that calculates each player's fair share of a group outcome) provides a mathematically fair way to divide merger surplus: each federation receives a share proportional to its marginal contribution — what the merged whole gains specifically because they joined.
Trust-Weighted Negotiating Power
Federations with higher average trust scores have stronger bargaining positions — they bring more reliable capacity to the merger. Nash bargaining theory (the math of two-party negotiations, showing how the deal splits based on each side's alternatives): the party with more to offer gets a proportionally larger share of the surplus.
Multi-federation bargaining: Nash (fair splits), Kalai-Smorodinsky (proportional gains), and Shapley value (marginal contribution) solutions validated across 59 checks (session 32). All solutions satisfy individual rationality — no federation is forced into a merger that makes it worse off.
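For a two-federation merger, the Shapley split reduces to "standalone value plus half the surplus." A generic sketch, with assumed valuation numbers:

```python
from itertools import permutations

def shapley(players: list, value) -> dict:
    """Shapley value: average marginal contribution over every join order."""
    shares = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            shares[p] += value(coalition | {p}) - value(coalition)
            coalition = coalition | {p}
    return {p: s / len(orders) for p, s in shares.items()}

# Assumed valuations: standalone 100 and 60; merged 200 (surplus of 40).
v = {frozenset(): 0, frozenset("A"): 100, frozenset("B"): 60,
     frozenset("AB"): 200}
print(shapley(["A", "B"], lambda c: v[frozenset(c)]))
# {'A': 120.0, 'B': 80.0} — each keeps its standalone value + half the surplus
```

Note that both shares exceed the standalone values (120 > 100, 80 > 60) — the individual-rationality property the text describes.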
▶Trust Flows Like Water
Bottlenecks and bridge connectors
The simple version: When two communities connect, the least-trusted link between them limits how much trust can pass — like a chain of garden hoses where the narrowest one controls the flow rate.
Federations connect trust networks the way pipes connect water systems. Trust flows through paths between communities, and the weakest connection limits the whole flow.
The Bottleneck Principle
The max trust that can flow from Community A to Community C (through B) is limited by the weakest link — the minimum trust capacity in the chain. If A→B trust is 0.8 but B→C trust is 0.3, the effective path trust is 0.3, not 0.8.
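The weakest-link rule, and the extra capacity a bridge adds, in a few lines — the A→B→C weights are the example's; bridge D and its edge weights are assumed for illustration:

```python
def path_trust(edges: dict, path: list) -> float:
    """Effective trust along a path is its weakest link."""
    return min(edges[(a, b)] for a, b in zip(path, path[1:]))

def best_path_trust(edges: dict, paths: list) -> float:
    """A bridge adds capacity: take the strongest available path."""
    return max(path_trust(edges, p) for p in paths)

edges = {("A", "B"): 0.8, ("B", "C"): 0.3,   # the example's chain
         ("A", "D"): 0.9, ("D", "C"): 0.7}   # D: an assumed bridge agent
print(path_trust(edges, ["A", "B", "C"]))                          # 0.3
print(best_path_trust(edges, [["A", "B", "C"], ["A", "D", "C"]]))  # 0.7
```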
Bridge Agents Have Disproportionate Value
An entity with high trust in two communities acts as a bridge, increasing the flow capacity between them. This is why “connectors” — people trusted across domains — have outsized influence in federated systems.
Vulnerability Detection
Min-cut analysis identifies the minimum set of connections that, if severed, would isolate two communities. A single critical bridge agent = high vulnerability. Healthy federations have redundant paths.
No Central Router
Unlike centralized platforms that route all trust through their servers, Web4 trust flow is distributed. The network finds paths autonomously — no single server holds the keys to cross-community reputation.
Trust network flow analysis: max-flow/min-cut applied to trust graphs (session 33). Bottleneck paths and vulnerability detection implemented.
How Your Trust Travels Between Communities
The most common question about federation: “If I have 0.91 Talent as a data analyst in Community A, how does Community B know about that?” Here's exactly how it works.
Example: Maya Joins a New Community
Maya has a track record
She's been a data analyst in the “Open Science Collective” for a year. Her T3: Talent 0.91, Training 0.87, Temperament 0.94. This is backed by hundreds of witnessed actions — peer reviews, dataset contributions, analysis outputs — all cryptographically signed and timestamped.
She applies to “Health Data Alliance”
This community has a federation agreement with Open Science. Maya doesn't start from zero — her trust attestations travel with her LCT. The new community can verify her history because it's recorded in a tamper-evident audit chain, not stored on a single platform's servers.
Trust is discounted, not copied
Health Data Alliance doesn't just accept 0.91 Talent at face value. The score is discounted by MRH decay (0.7 per hop) and weighted by the federation's trust in the source community. If the two communities have a strong federation bond (0.9), Maya's effective starting trust is roughly 0.91 × 0.7 × 0.9 = 0.57. Not zero, not 0.91 — somewhere in between.
She builds local trust independently
Maya's imported score gives her a head start, but her trust in Health Data Alliance grows based on her actions there. After 50 quality contributions, her local trust overtakes the imported score. The federation bridged the cold-start gap — she didn't have to prove herself from scratch, but she still had to prove herself.
Key principle: Trust is portable but not inflatable. You can carry your reputation to a new community, but it arrives discounted. This prevents “trust laundering” — building a high score in an easy community and importing it wholesale to a high-stakes one.
What if the communities aren't federated?
If there's no federation agreement, Maya starts from the default newcomer position — low trust, higher action costs (1.4x), and the ~50-action ramp to establish herself. Her trust from Open Science still exists and is verifiable, but Health Data Alliance has no obligation to accept it. Federation is opt-in: communities choose who they trust, just like individuals do.
Concrete example: “I have trust in Community A, now I join Community B — what happens?”
Say you're a trusted member of a gardening forum (T3 = 0.82 as a contributor) and you join a cooking community that's federated with it.
- Your trust arrives discounted. The cooking community applies MRH decay (×0.7 per hop) and a domain relevance factor. Your gardening trust might arrive as 0.82 × 0.7 × 0.6 (domain match) = 0.34.
- You skip the cold-start penalty. Instead of the default 1.4× action cost, you start around 1.1× — the federation bridged part of the gap.
- Local actions quickly dominate. After ~30 quality posts in the cooking community, your local trust overtakes the imported score. By ~50 actions, the imported trust is negligible.
- Your gardening trust is unchanged. Trust isn't “moved” — it's verified and discounted. You keep your full reputation in the gardening forum.
Key insight: Federation makes joining easier, not free. Your reputation gives you a boost, but each community still requires you to prove yourself locally.
What does cross-society trust transfer actually feel like as a user?
Imagine you're a respected member of a photography community (T3 = 0.85). You join a travel writing community that's federated with it. Here's what your first hour looks like:
Minute 0 — You join. The travel community sees your photography trust (verified, not self-reported). After MRH decay and domain matching, your imported trust lands around 0.38.
Minute 1 — You post. Where a complete newcomer pays 1.4x ATP per action, you pay ~1.15x. The discount is noticeable but not dramatic — you're still proving yourself.
Minute 10 — Others see your badge. Your profile shows “trusted in Photography (federated)” — a social signal that you're not a throwaway account, even though your travel writing trust is still low.
Hour 1 — Trust is building locally. After 5-10 quality contributions, your local travel trust already rivals the imported score. By tomorrow, it dominates.
The feeling: You're recognized but not entitled. Think of it like transferring to a new school with a recommendation letter — it opens the door faster, but you still need to make friends.
What happens when a community splits? How does trust history partition?
Communities can split — disagreements happen. When they do, Web4 handles trust history the same way a professional network handles a company breakup:
Both sides keep the full history. Trust records are append-only and tied to your LCT identity, not to a specific community. If you helped 200 people before the split, that's still your record regardless of which side you join.
What changes is context. Trust in Web4 is role-contextualized — your reputation as a “helpful code reviewer” in Community A doesn't automatically make you a “trusted moderator” in the new Community B. Each community can weight historical actions according to their own policies.
MRH limits the blast radius. The Markov Relevancy Horizon means trust decays with distance. After a split, members you interacted with directly still trust you at full strength. Members you never interacted with? They were already at the horizon's edge anyway.
Think of it like a band breaking up — the music they made together still exists, fans who saw them live still remember, but the new bands each build their own reputation.
Related Concepts
The Core Insight
Web4 markets self-organize through ATP price signals.
No central planner decides who should specialize in what. No authority adjusts prices. Agents respond to economic incentives, supply flows to high-demand areas, and markets reach equilibrium automatically. This is emergent efficiency — the same principle that makes free markets work, applied to decentralized AI federations.