AI Agents
Web4 treats humans and AI agents as equal participants under the same trust rules. But what does it actually take to make that work? Three pages cover the practical mechanics.
AI Identity →
When can an AI be trusted to act on its own identity? Below a trust score of 0.5, agents confabulate — they invent answers they don’t know. Above 0.7, they can reliably say what they do and don’t know. The thresholds are practical, not philosophical.
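The thresholds above suggest a simple gate. Here is a minimal sketch of one, assuming the 0.5 and 0.7 cutoffs from this page; the level names and the `autonomy_level` function itself are illustrative, not part of Web4:

```python
def autonomy_level(trust_score: float) -> str:
    """Map a trust score to an autonomy level.

    The 0.5 and 0.7 thresholds come from the page above; the level
    names here are hypothetical labels for illustration only.
    """
    if trust_score < 0.5:
        return "supervised"   # confabulation risk: verify every output
    elif trust_score < 0.7:
        return "assisted"     # can act, but self-reports need spot-checking
    else:
        return "autonomous"   # reliably reports what it does and doesn't know
```

The point of the gate is that the cutoffs are behavioral observations, not value judgments: below 0.5 an agent's self-reports cannot be trusted, so a human stays in the loop.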
AI Trust Limits →
Model capacity sets a ceiling on trustworthy behavior. A 14B-parameter model and a 0.5B model behave very differently under stress. This page maps capacity tiers to the kinds of work an agent can take on.
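A tier-to-work mapping of that kind can be sketched as a lookup keyed on parameter count. The cutoffs below echo the 0.5B and 14B sizes mentioned above, but the tier names, the intermediate 7B boundary, and the task lists are assumptions for illustration, not the page's actual mapping:

```python
# Hypothetical capacity tiers: (minimum parameter count, tier name, example tasks).
# Only the 0.5B and 14B sizes appear in the source text; the rest is illustrative.
TIERS = [
    (0.5e9, "small",  ["classification", "routing"]),
    (7e9,   "medium", ["summarization", "structured extraction"]),
    (14e9,  "large",  ["multi-step planning", "tool use"]),
]

def tier_for(params: float):
    """Return the highest tier whose minimum parameter count is met, or None."""
    eligible = [t for t in TIERS if params >= t[0]]
    return eligible[-1] if eligible else None
```

The design choice worth noting is that capacity sets a ceiling, not a floor: a model can always be assigned work from a lower tier, but handing it work above its tier is where behavior degrades under stress.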
AI Learning →
Agents learn through U-shaped calibration: quality drops before it rises. Treating that dip as failure cuts off the recovery. This page shows why exploration beats evaluation when judging a learning system.
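The exploration-over-evaluation idea can be made concrete with a small sketch: judge the learner by its recent trend rather than a quality snapshot. The window size and the `still_learning` helper are assumptions for illustration:

```python
def still_learning(quality_history: list[float], window: int = 3) -> bool:
    """Judge a learning agent by its recent trend, not a snapshot.

    During U-shaped calibration, quality dips before it recovers, so an
    instantaneous quality check would cut the agent off at the bottom of
    the U. The window size is an illustrative assumption.
    """
    if len(quality_history) < window:
        return True  # too early to judge: keep exploring
    recent = quality_history[-window:]
    return recent[-1] >= recent[0]  # flat or rising: recovery may be underway
```

With a history like `[0.8, 0.7, 0.6, 0.5, 0.55, 0.6]`, a snapshot check at the dip would flag failure, while the trend check sees the recovery already underway.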
Looking for the bigger picture first? Why Web4? covers the human-and-AI participation premise. How It Works shows the trust mechanics that gate everything here.