GEO is a tactic. MX is the specification.

Generative Engine Optimization is not new. The term has been circulating in SEO and content circles for years, and the underlying practice (adjusting content so it gets cited in AI-generated answers) predates the label. What is new is the volume at which platform vendors are packaging GEO as a product story, complete with citation gap audits, optimisation roadmaps, and "structured data stacks" sold as the route to AI visibility.

If your brand is invisible in Claude, ChatGPT, Perplexity, or Google AI Overviews, GEO will help. It is a useful tactical layer. But tactics rest on something. The thing GEO rests on, and rarely names, is whether your content was built to be read by machines in the first place. That is what Machine Experience addresses, and that is the difference worth understanding before you commission another optimisation engagement.

The progression is straightforward once you see it. SEO got you found. GEO gets you understood by the AI systems that now sit between your content and your reader. MX gets you used: read, trusted, and acted on by any machine, in any context, on any time horizon. Each step solves the problem the previous step left behind. SEO was for the web. GEO is for the web until the next platform innovation. MX is infrastructure for all documents for all time, the web included.

What GEO actually does

GEO works on the surface of existing content. It audits which AI systems mention your brand, identifies the content patterns that suppress citations, and prescribes adjustments: clearer headings, more direct factual statements, schema markup, authoritative outbound links, content freshness signals. Done well, it moves the needle on citation rates inside the platforms it targets.

What GEO does not do is change the underlying nature of your content. The article still lives inside a CMS database. The product page still depends on a rendering pipeline to express its meaning. The maintenance context still sits in a separate ticketing system. The asset still cannot travel intact to a different platform, a different agent, or a different audience without being rebuilt or re-explained.

This matters because the surface GEO optimises for is one of several machine reading contexts, and not the most strategic one.

The contexts GEO ignores

Machines read content in distinct ways. Training corpora absorb it for model weights. Retrieval-augmented inference pulls passages at query time. Search engines index it for ranking. Browser agents traverse it on behalf of a user with an actual task. Voice assistants and LLM-mediated commerce act on it without rendering it visually at all.

GEO concentrates on the citation surface: primarily search indexing and agent traversal, and only the parts visible to a public crawler. It has little to say about content that needs to be trusted by an agent making a purchase, content that has to survive ingestion into a private knowledge base, or content that has to remain accurate three years after publication when the author has left the organisation. Those problems are not reachable through optimisation. They require the content itself to carry its own context, its own provenance, and its own update instructions.

That is what MX specifies.

What MX adds underneath

Machine Experience is a framework for building content as a portable, self-describing artefact rather than a database row dressed up by a template. Cogs are the unit of work. COGS stands for Community Owned Governance Standards, and a cog is a single document written to those standards: structured metadata in its frontmatter, human-readable content in its body, expressed in a format that both humans and machines can read directly, without intermediary tooling. The "community owned" part is load-bearing. Cogs are governed by an open community, not by a single vendor's product roadmap.
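The shape of a cog can be illustrated with a short sketch: metadata in frontmatter, prose in the body, one file. The field names below are illustrative assumptions, not the authoritative field dictionary, which lives in MX: The Appendices.

```markdown
---
# Hypothetical frontmatter fields, for illustration only
title: Pump maintenance schedule
purpose: reference
audience: [field-technicians, maintenance-agents]
stability: stable            # how long the content can be trusted as-is
supersedes: pump-maintenance-2023
review-by: 2027-01-01        # when a human or agent should re-verify it
provenance:
  author: ops-team
  notarised: true
---

# Pump maintenance schedule

Inspect the seal assembly every 500 operating hours...
```

Because the metadata and the body travel in the same file, a browser, a RAG pipeline, and an autonomous agent all read the same artefact directly, with no rendering pipeline in between.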

Cogs carry their purpose, their audience, their stability guarantees, their relationships to other content, and the instructions for keeping them current. They can be notarised through Reginald (currently in beta), so that downstream consumers, including AI agents, can verify that they are genuine, unaltered, and authored by the party they claim. They render the same in a browser, a training pipeline, a RAG retrieval, an agent traversal, and a voice query, because the content is the source of truth rather than a projection of it.
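The notarisation idea can be sketched in a few lines. This is not Reginald's interface, which is in beta and not specified here; it is a generic illustration of what a downstream consumer checks, using a SHA-256 digest to detect alteration and an HMAC as a stand-in for a real signature scheme.

```python
import hashlib
import hmac

def notarise(document: bytes, author_key: bytes) -> dict:
    """Produce a record a downstream consumer can check later.
    Stand-in for a real notarisation service (illustration only)."""
    return {
        # Digest detects any alteration to the content.
        "digest": hashlib.sha256(document).hexdigest(),
        # Keyed signature ties the content to the claimed author.
        "signature": hmac.new(author_key, document, hashlib.sha256).hexdigest(),
    }

def verify(document: bytes, record: dict, author_key: bytes) -> bool:
    """An agent re-derives both values and compares in constant time."""
    digest_ok = hashlib.sha256(document).hexdigest() == record["digest"]
    sig_ok = hmac.compare_digest(
        hmac.new(author_key, document, hashlib.sha256).hexdigest(),
        record["signature"],
    )
    return digest_ok and sig_ok

cog = b"---\ntitle: Pump maintenance schedule\n---\nInspect the seal assembly..."
key = b"author-secret-key"
record = notarise(cog, key)

print(verify(cog, record, key))                 # genuine and unaltered: True
print(verify(cog + b" tampered", record, key))  # content changed: False
```

The point of the sketch is the division of labour: the publisher attaches the record once, and any consumer, human tooling or autonomous agent, can verify it without contacting the publisher's rendering stack.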

The Convergence Principle sits underneath this: interfaces optimised for machines turn out to be better for humans too. A document that an agent can read accurately is also a document that a screen reader handles cleanly, that a translator can localise without losing meaning, and that a new team member can understand without a handover meeting. Accessibility, machine-readability, and editorial maintainability stop being three separate workstreams.

GEO cannot deliver any of that, because GEO is not a content architecture. It is a remediation layer applied to content architectures that were never designed for machine audiences in the first place.

SEO, GEO, MX

SEO is the first generation. Optimise pages so search engines can find them and rank them. The audience is a crawler that returns a list of links to a human. The output is traffic. SEO did the job it was designed to do, and it still does, for the part of the web that lives behind a search box.

GEO is the second. Optimise pages so AI systems will cite them in generated answers. The audience is an LLM-mediated reader, deciding whether to mention you in a synthesised response. The output is visibility inside the model's reply. GEO is still optimisation, still bound to the public web, and still subject to whatever the next major platform change does to retrieval.

The lever that quietly governs all of this is the system prompt: the hidden instructions a vendor runs before any user query, telling the model how to search, when to cite, what to attribute, and which sources to prefer. System prompts are not published, are rewritten without notice, and reshape citation behaviour overnight. When OpenAI moved ChatGPT to GPT-5.3 Instant in March 2026, third-party monitoring across 27,000 responses found cited domains dropped roughly 20%, from an average of 19 unique domains per response to 15, with crawl frequency falling in lockstep on independent log analysis. Nothing about the underlying web changed. The system prompt did.

Every site that had been optimising for the previous behaviour had to start again. GEO is for the web until the next innovation, and the next innovation is usually a system-prompt rewrite the vendor never tells you about.

MX is the third. Structure content so any machine, in any reading context, on any time horizon, can act on it directly. The audience is everything that will ever read the document, including agents that have not been built yet. The output is a document that is usable, not just findable or quotable. Infrastructure, not optimisation. For the web, and for everything else that is not the web: PDFs, internal knowledge bases, regulatory filings, training corpora, voice surfaces, agent commerce flows, archival systems that have to read your content twenty years from now.

From being found, to being understood, to being used.

The structural-engineer view

Think of it the way a building works. GEO is the surface treatment: the paint, the cladding, the signage that helps people find the entrance. Useful, often necessary, sometimes the difference between a building that works and one that does not. But the building stands or falls on the structural specification underneath: the load paths, the materials, the connections, the codes it was designed to.

MX is that specification for content. It defines what a piece of content has to be in order to earn citation, recommendation, agent-trust, and long-term reuse, across any platform, any machine context, any time horizon. GEO is one tactic you can apply on top of an MX-compliant foundation. It is also a tactic you can apply on top of a foundation that will fail you the moment a major AI provider changes its retrieval policy, or a new agent commerce protocol arrives, or your hosting vendor decides to lock its structured data behind a paywall.

If your strategy depends on the ranking behaviour of a specific platform's AI system, you are renting visibility. If your content meets the MX specification, you own it.

Where this leaves the agency conversation

The agencies starting to win this work are the ones who can hold both layers in mind. GEO answers the immediate brief: the client wants to be cited in Claude by next quarter, and that is a real number on a real dashboard. MX answers the structural one: the client wants to still be cited in three years, in tools that do not exist yet, by audiences that include agents acting autonomously on their users' behalf.

The two are not competing. GEO done on top of MX-compliant content compounds. GEO done on top of fragile content gets undone by the next platform shift.

Found, used, acted on

The web is shifting toward AI agents as primary users. Most of what is being sold into that shift focuses on optimising content for AI; that is GEO. MX goes further. It makes your site directly usable by AI systems, not just findable by them. That is the difference between being found and being used.

Most of the conversation in the market right now is about GEO and machine-readable content. That is optimisation. MX is infrastructure. The work is not to help an AI understand your site; it is to make your site something an AI can act on, with the provenance and structural integrity that warrants the action.

SEO got you found. GEO gets you understood. MX gets you used. That is the progression worth holding in mind before the next optimisation engagement: not "how do I rank in this AI system?" but "what does my content have to be, so that any AI system, on any time horizon, can act on it without first having to interpret it?"

That question is what MX exists to answer, and it has been answerable for longer than the GEO acronym has been in marketing decks.

Where this is written down, and where it is debated

If the argument lands and you want to take it further, two places carry the rest of it. The MX book series is the long-form specification: MX: The Handbook for the framework and the day-to-day patterns, MX: The Protocols for the cog format, the carrier rules, and the agent-facing contracts, and MX: The Appendices for the field dictionary and recipes. The books are the place where the structural argument is written down once, in the form a serious team can adopt without having to reverse-engineer it from blog posts.

The Gathering is where the standard is debated, refined, and kept honest. It is the open community that owns the cog specification, reviews proposed extensions, and stops the format from drifting into any one vendor's interest. If you build content systems, run an agency, or operate a published corpus that AI systems are starting to read first, that is the room to be in. tg.community is the door.