The new web: why the agentic era needs infrastructure, not just intelligence
The internet is entering a transition that echoes the mid-1990s. Back then, the web was fragmented, vendor-led, and incompatible. Developers wrote one version of a page for Netscape and another for Internet Explorer. Features worked in one browser and broke in the other. The web was full of promise, but it was not yet a platform. It took a decade of standards work to escape that fragmentation and build the interoperable web we rely on today.
The agentic web, the emerging ecosystem of AI agents that read, interpret, and act on online content, is at that same point now.
A familiar fragmentation
Agents can read a page and understand what a form is for, but they cannot act on it because the session belongs to the browser, the checkout belongs to a payment network, and the capability they need lives inside another vendor's agent with no shared handshake between them. The pieces exist: MCP, A2A, UCP, WebMCP, llms.txt, agent cards. But the stack has no venue, no shared governance, and no unifying contract. The result is predictable: every vendor is building an island, and agents cannot move between them.
This is a failure of infrastructure rather than a failure of AI.
Machine adoption does not follow a human curve
Machines are beginning to read the web in meaningful ways. Their capabilities are still immature, but their growth curve is exponential. Once machines can reliably interpret content, their adoption will accelerate far faster than human institutions, enterprises, or regulatory frameworks can adapt. This is not a human-adoption trend; it is a computational one. Human adoption curves are slow and linear. Machine adoption curves are near-instantaneous and compounding. The moment comprehension becomes reliable, usage explodes.
The web is hostile to machine comprehension
Today's web is hostile to that comprehension. It is a visual medium built for human eyes, human inference, and human navigation. Machines do not see layout, visual styling, spacing, or animation, and they cannot recover meaning that is left implicit. They cannot reliably interpret JavaScript-rendered content or hidden state. They cannot determine authorship, trustworthiness, or the boundaries of an action unless these things are made explicit. As a result, agents hallucinate, misinterpret, misroute, and silently fail.
Enterprises are already losing revenue because machines cannot read their websites. Governments are already facing compliance and misinformation risks because machines cannot reliably interpret public information. Entire categories of product are stalling because machine comprehension is the bottleneck to scale.
This is the invisible failure of the modern web.
MX: the missing contract layer
The solution begins with MX, the Machine Experience standard. MX is the discipline of making anything you publish (a video, a podcast, a PDF, an image, a web page) readable by every machine that consumes it, so no machine has to guess. Rather than being a new markup language or a replacement for existing standards, MX is the missing contract layer that tells machines what content means, how it is structured, what state it is in, who authored it, how it should be interpreted, and what actions are permitted. MX transforms the web from a guessing game into a readable, navigable, trustworthy environment for machine agents.
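To make the idea concrete, here is a minimal sketch, in TypeScript, of the kind of contract MX describes. The shape and the field names (meaning, structure, state, provenance, actions) are illustrative assumptions for this article, not the published MX vocabulary.

```ts
// A hypothetical MX-style contract. Field names are illustrative only.
interface MxContract {
  meaning: { type: string; summary: string };          // what the content means
  structure: { format: string; sections: string[] };   // how it is structured
  state: { published: string; lastModified: string };  // what state it is in
  provenance: { author: string; producedBy: "human" | "ai" | "automated" }; // who authored it
  actions: { name: string; method: "GET" | "POST"; href: string }[];       // what actions are permitted
}

// Example: a product page described explicitly, so no machine has to guess.
const productPage: MxContract = {
  meaning: { type: "product", summary: "Ceramic pour-over coffee dripper" },
  structure: { format: "html", sections: ["overview", "specs", "reviews"] },
  state: { published: "2025-11-02", lastModified: "2026-01-15" },
  provenance: { author: "example-shop.com", producedBy: "human" },
  actions: [{ name: "add-to-cart", method: "POST", href: "/cart/items" }],
};
```

The particular fields matter less than the principle: everything an agent needs is declared up front rather than reconstructed by inference.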
The breakthrough is economic as well as semantic.
COGS: why execution beats inference
This is where COGS enters, and the acronym matters. COGS stands for Community-Owned Governance System. It is the constitutional layer that ensures MX remains open, neutral, and interoperable. But its impact is deeper than governance. COGS changes the economics of machine comprehension.
A document governed by a cog does not require inference. It requires execution. The meaning is explicit. The structure is explicit. The provenance is explicit. The workflow is explicit. The agent no longer has to think, and when it does not think, it does not hallucinate.
Inference is expensive. Execution is cheap. Inference burns compute. Execution saves it. Inference consumes energy. Execution reduces it. Inference introduces error. Execution increases accuracy.
COGS reduces inference, and by doing so, it reduces compute cost, reduces energy consumption, and increases accuracy. This is the economic engine of the machine-inclusive web.
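Continuing the sketch above, the saving is visible in what the agent actually executes: a lookup over declared data and a single request, instead of an inference pass over rendered HTML. The helper below is an illustration, not part of any COGS specification.

```ts
// Execution over inference: find a declared action in the hypothetical
// contract above. No model call, no heuristic scraping of the page.
function findAction(contract: MxContract, name: string) {
  const action = contract.actions.find((a) => a.name === name);
  if (!action) {
    // The contract also defines the boundaries: anything undeclared is not permitted.
    throw new Error(`Action "${name}" is not permitted by this contract`);
  }
  return action;
}

// "Add to cart" costs one lookup and one HTTP request, not an inference pass.
const addToCart = findAction(productPage, "add-to-cart");
console.log(`${addToCart.method} ${addToCart.href}`);
```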
Interoperability as infrastructure
COGS also enables interoperability. Today, every agent must interpret every site differently. Every CMS outputs a different form of markup. Every vendor invents its own metadata. Every AI platform builds its own heuristics. This fragmentation forces agents to perform bespoke inference for every domain they encounter. It is the digital equivalent of every country having its own electrical socket.
A cog defines the data, the scripts, the workflows, and the boundaries in a way that any agent can understand. Interchange becomes trivial. Agents can move between systems without retraining, without custom adapters, and without brittle heuristics. This reduces integration cost for enterprises, accelerates adoption for vendors, and creates a stable foundation for national and international digital infrastructure.
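A rough sketch of what that interchange could look like: one generic loader, no per-site adapter. The well-known location and the cog shape (reusing the hypothetical MxContract above) are assumptions made for illustration.

```ts
// One loader for every domain that publishes a cog; nothing here is site-specific.
async function loadCog(origin: string): Promise<MxContract> {
  const res = await fetch(`${origin}/.well-known/cog.json`); // hypothetical location
  return (await res.json()) as MxContract;
}

// The same code path works for a shop, a newsroom, or a government portal.
for (const origin of ["https://example-shop.com", "https://example-news.org"]) {
  const cog = await loadCog(origin);
  console.log(origin, cog.meaning.type, cog.actions.map((a) => a.name));
}
```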
The Gathering: a venue for the agentic web
Even MX and COGS are not enough without a venue. The agent stack has protocols, but it has no forum. There is no place where developers, site owners, accessibility advocates, user-rights groups, and standards-minded engineers can argue, in public, about how these pieces fit together, what breaks when they do not, and what the user actually needs from them as a whole.
That is why The Gathering exists: a vendor-neutral forum for the agentic web. Drafts are written in public. Reviews happen through Stream. No single vendor can override the community. There is no membership fee, no editorial capture, no gatekeeping. It is the standards-community posture that saved the web in the 1990s, applied to the agentic era.
The architecture of the new web
Together, MX, COGS, The Gathering, and Reginald form the architecture of the new web. MX is the contract. COGS is the constitution. The Gathering is the steward. Reginald is the trust layer.
Reginald is the public registry that attests the provenance of any document: who published it, that it has not been modified since publication, and whether it was produced by a human, an AI, or an automated system. An agent that reads cog-described content and verifies it through Reginald hallucinates less: it has attested facts to cite, not gaps to fill by inference. MX makes content machine-readable. Reginald makes it machine-trustworthy. Both are required for agents that are reliable in the full sense.
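As a sketch of that trust check: fetch the content, hash it, and compare against an attestation that records the publisher, a content digest, and whether the producer was a human, an AI, or an automated system. The lookup function and the field names are placeholders, not Reginald's actual interface.

```ts
import { createHash } from "node:crypto";

// A hypothetical attestation record; the real registry schema may differ.
interface Attestation {
  publisher: string;
  sha256: string;                             // digest recorded at publication
  producedBy: "human" | "ai" | "automated";
}

// Verify that content is unmodified since publication and report its provenance.
async function verify(url: string, lookup: (url: string) => Promise<Attestation>) {
  const body = await (await fetch(url)).text();
  const digest = createHash("sha256").update(body).digest("hex");
  const attestation = await lookup(url);
  return {
    publisher: attestation.publisher,
    producedBy: attestation.producedBy,
    unmodified: digest === attestation.sha256, // unchanged since it was attested?
  };
}
```

If the digest no longer matches, the agent knows the content has changed since it was attested and can decline to act on it.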
Machines are beginning to read. Their growth will outpace the ability of human institutions to adapt. The web is not ready. MX is the missing layer. COGS is the economic engine. The Gathering is the venue. Reginald is the trust signal that lets agents act on verified content rather than on inference.
The foundation of the next internet is being built now.