The web is just the start: what AI agents actually need from your documents
Google's developer platform published a guide to AI agent UX. The article, at web.dev/articles/ai-agent-site-ux, asks developers to think about how AI agents experience their sites. Reduce friction. Use clear headings. Avoid ambiguous navigation. Make content semantically predictable.
The advice is sound. And the fact that Google is publishing it signals something worth noting: AI agent readiness is now a mainstream concern, no longer a niche interest of accessibility engineers or structured-data specialists. This is the direction the web is moving, and Google is telling developers to move with it.
The document problem
The guide focuses on websites. That makes sense - Google indexes websites. But the documents AI agents read extend beyond the browser:
- Contracts
- Policy documents
- Product specifications
- Technical handbooks
- Regulatory filings
- Internal knowledge bases
Every one of these is now being consumed by AI agents. Every one of them carries the same problems the web.dev guide describes - ambiguity, implicit structure, missing provenance - and none of them sits on a web page that a developer can adjust for UX.
The challenge is not a web problem. It is a document problem.
What machines need from any document
When a machine reads a document, it needs to answer ten questions - not just the four concerns the web.dev guide lists:
- What is this thing - its identity, category, and role?
- What is inside it - its structure, sections, and fields?
- What state is it in - draft, live, deprecated, complete, or partial?
- Who created it, and who stands behind it?
- How did it come to be - was it written by a human or generated by an agent?
- What is the reader allowed to do with it?
- What should happen next - which workflow transition is valid from here?
- What other documents or standards does it depend on?
- What does a correct output look like, if one is expected?
- What is the safe thing to do when something is unclear?
Most documents answer none of these today. An agent reading a contract or a product specification has to infer all of it. That inference is expensive in compute terms. It introduces error. And it makes provenance impossible to verify - which matters as AI-generated content multiplies and regulators begin to require proof of origin.
COGs: what a document says about itself
This is the gap that COGs address. COG stands for Community Owned Governance System. A COG is a small set of declarations a document makes about itself - carried in plain text, in the file header, before the prose begins. It answers the ten questions directly, so no machine has to infer them.
The core declarations:
- Identity - what this is, who wrote it, who stands behind it, and what version it is.
- State - whether the document is draft, live, or deprecated, so an agent does not treat a provisional draft as a signed contract.
- Provenance - whether it was human-directed or agent-generated, and the full authorship chain.
- Conformance - which standards it promises to follow.
- Permissions and failure mode - what actions are allowed, which require human approval, and what the safe default is when something is unclear.
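To make the shape concrete, here is a minimal sketch of how these declarations might appear at the top of a Markdown file. The field names and values are illustrative assumptions, not the published COG schema - the standard governed at tg.community defines the actual fields.

```yaml
# Hypothetical COG header - field names are illustrative, not the real schema.
---
cog_id: contracts/master-services-agreement
cog_version: 2.3
cog_author: legal@example.com
cog_state: draft                 # draft | live | deprecated
cog_provenance: human-directed   # or agent-generated, with authorship chain
cog_conformance: [tg-core-1.0]
cog_permissions: read-only       # actions beyond this need human approval
cog_failure_mode: escalate-to-human
---
```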
A document with a COG does not require inference. It requires execution. The meaning is explicit. A machine can verify the provenance, check the conformance claims, and act - without guessing, without re-reading.
COGs are not a new format. They sit inside existing document formats - Markdown, HTML, PDF, YAML. They travel with the file. They require no new runtime, no proprietary tooling, no installation.
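As a sketch of what "execution, not inference" could look like on the consuming side, the following assumes a front-matter-style header with hypothetical field names (`cog_state`, `cog_permissions`, `cog_failure_mode` - none of these come from a published spec). The agent reads the declarations directly and, when a declaration is missing or the state is not live, falls back to the declared safe default rather than guessing:

```python
# Minimal sketch of an agent-side reader for a hypothetical COG header.
# Field names are illustrative assumptions, not a published COG schema.

DOCUMENT = """\
---
cog_id: policy/refund-policy
cog_state: draft
cog_provenance: human-directed
cog_permissions: read-only
cog_failure_mode: escalate-to-human
---
# Refund Policy
...
"""

def read_cog_header(text: str) -> dict:
    """Parse key: value declarations between the leading '---' fences."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}  # no header: the document declares nothing about itself
    header = {}
    for line in lines[1:]:
        if line.strip() == "---":
            break
        key, _, value = line.partition(":")
        header[key.strip()] = value.strip()
    return header

def allowed_action(header: dict, action: str) -> str:
    """Decide what an agent may do, using the declared safe default."""
    safe_default = header.get("cog_failure_mode", "escalate-to-human")
    if header.get("cog_state") != "live":
        return safe_default  # never act on a draft or deprecated document
    if action == "read" or header.get("cog_permissions") == "read-write":
        return "proceed"
    return safe_default

header = read_cog_header(DOCUMENT)
print(allowed_action(header, "modify"))  # draft + read-only -> escalate-to-human
```

The point of the sketch is that every branch reads a declaration; nothing is inferred from the prose, and an absent declaration routes to the safe default rather than to a guess.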
Beyond the web
The web.dev guide is a useful prompt for any web team. But the content most enterprises rely on sits mostly off the web - inside content management systems, intranets, document management systems, regulatory archives, and manufacturing databases.
Machine Experience (MX) extends the discipline the web.dev guide describes to all of those surfaces. The question it asks is the same one Google is asking about web pages: can any machine that reads this understand what it means, who made it, and what it is allowed to do?
For most enterprise content today, the answer is no.
What this means for your content
If you are responsible for content or web experience, the web.dev guide is worth reading. After you have read it, ask a harder question: what happens when an AI agent reads your documents, not your web pages?
Your contracts, your product documentation, your policy files, your service specifications - do they declare their own identity? Do they carry provenance? Do they specify what an agent is allowed to do with them?
If not, agents will guess. Sometimes they will guess correctly. Often they will not.
COGs are the infrastructure for documents that do not leave machines to guess. They are governed openly at tg.community as a community standard - no single vendor, no licensing, no proprietary runtime.
The web.dev guide describes what good looks like on a web page. COGs describe what good looks like in a document. The web is just the start.