Adobe just bought the dashboard. The work is upstream.

Adobe has announced it will acquire Semrush for $1.9 billion: twelve dollars a share, all cash, expected to close in the first half of 2026 subject to the usual approvals. Semrush slots into Adobe Experience Cloud alongside Adobe Experience Manager, Adobe Analytics, and Adobe Brand Concierge.

If you have been watching this space, the price tag matters less than the framing.

The sentence that re-prices the category

Adobe's stated objective is a comprehensive solution that gives marketers a holistic understanding of how their brands appear across owned channels, LLMs, traditional search and the wider web.

Read that sentence twice. The order is doing work. Owned channels first, the things you control. LLMs second, things you do not control but increasingly cannot ignore. Traditional search third, the thing the SEO industry has been working on for two decades. The wider web fourth.

Anil Chakravarthy, who runs Adobe's Digital Experience business, put it more directly: "We're unlocking GEO for marketers as a new growth channel alongside their SEO." Bill Wagner, Semrush's CEO, named the customer concern: "With the advent of LLMs and AI-driven search, brands need to understand where and how their customers are engaging in these new channels."

This is the moment generative engine optimisation crosses from a niche topic discussed at SEO conferences into an enterprise budget line. The largest customer experience vendor on earth has just spent nearly two billion dollars to say so.

What the dashboard tells you, and what it cannot

A measurement platform is, by construction, a rear-view mirror. Semrush has built a good one for AI answer visibility: which LLMs surface your brand when asked about your category, what they say, how often, in what context. That is genuinely useful. Most enterprise marketing teams have been flying blind on that question for at least two years.

But measurement tells a CMO something like: "your brand appears in twelve percent of relevant LLM answers in your category."

It does not tell them what to publish, in what shape, with what governance metadata, so that the figure becomes forty.

That is upstream work. It happens at the carrier layer, the source documents that LLMs and AI agents read before they form an answer. It happens in the structured data, the descriptive metadata, the licensing signals, the agent-readable instructions on each page. Once a brand's content has been indexed and inferred over, no dashboard can retrofit clarity that was not there at publication.
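Concretely, that at-publication work looks like ordinary structured data. A minimal sketch, assuming a generic article page; the organisation name, dates and URLs are placeholders, not anything from the Adobe or Semrush announcement:

```html
<!-- Minimal sketch: descriptive metadata plus a licensing signal,
     embedded at publication. Names, dates and URLs are placeholders. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "What our product actually does",
  "datePublished": "2025-11-20",
  "dateModified": "2025-11-20",
  "author": { "@type": "Organization", "name": "Example Corp" },
  "publisher": { "@type": "Organization", "name": "Example Corp" },
  "license": "https://example.com/content-licence",
  "isAccessibleForFree": true
}
</script>
```

Every property here is standard Schema.org. The governance load is carried by fields like license and dateModified, and a dashboard can only report on them if they existed when the page shipped.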

The dashboard is downstream of the decision that determines its reading.

Where established standards leave a gap

The web has been here before. SEO did not invent visibility; it operationalised standards that already existed: HTML, sitemaps, robots.txt, structured data, canonical links. Accessibility did not invent inclusion; it operationalised WCAG. Each of those movements succeeded because it sat on top of a standard, not in front of it.

The same is true now. Schema.org tells you how to describe a product. WCAG tells you how to make a page accessible. llms.txt and robots.txt tell crawlers and AI agents what they may and may not consume. sitemap.xml tells them what exists. Each of these is well defined and widely understood, and none of them is in dispute.
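For readers who have not opened these files recently, a minimal sketch of the two consent files. The paths are placeholders; GPTBot is one real example of an AI crawler user agent:

```text
# /robots.txt — which agents may fetch which paths
User-agent: GPTBot
Allow: /docs/
Disallow: /drafts/

User-agent: *
Allow: /

Sitemap: https://example.com/sitemap.xml
```

And /llms.txt, which the llms.txt proposal specifies as a markdown file: a title, a one-line summary, then curated links:

```markdown
# Example Corp
> One-paragraph, plain-language summary of what this site covers.

## Documentation
- [Product overview](https://example.com/docs/overview)
```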

What has been missing is the governance layer for AI and agent traffic specifically. Who is allowed to read this content? On what terms? With what attribution? With what verification that the document is current and from the named source? These are not questions the existing standards answer, because they were not designed to.

That is the layer Machine Experience operates on. It does not replace Schema.org or WCAG or llms.txt or sitemap.xml. It adds the small set of governance fields where they leave gaps. A well-built MX page is, by construction, a well-built SEO page, an accessible page, and a GEO-ready page. The economic argument for caring about that just got a $1.9bn floor under it.
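What would those added governance fields look like in practice? A hypothetical sketch only: the mx-prefixed names below are invented for this article to show the shape, not the published Machine Experience vocabulary; only the Schema.org parts are real.

```html
<!-- Hypothetical sketch: the "mx:" fields are illustrative inventions,
     not a published vocabulary. They map the four governance questions
     (who may read, on what terms, with what attribution, with what
     verification) onto machine-readable fields beside ordinary
     Schema.org data. -->
<script type="application/ld+json">
{
  "@context": ["https://schema.org", { "mx": "https://example.org/mx#" }],
  "@type": "Article",
  "headline": "What our product actually does",
  "mx:permittedAudience": "any-agent",
  "mx:terms": "https://example.com/ai-terms",
  "mx:attribution": "Example Corp, with a link to the source URL",
  "mx:lastVerified": "2025-11-20"
}
</script>
```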

What changes for an author

Practically, very little. Authors who care about how machines read their content were already doing this work. Headings in the right order. Descriptive alt text. Schema.org for the things Schema.org covers. Honest llms.txt. A sitemap that reflects what exists. Source documents written so that the answer to "what does this say" is the same whether a human reads it, an SEO crawler reads it, or an answer engine summarises it.
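As markup, that checklist is deliberately unremarkable. A sketch with placeholder content and filenames:

```html
<article>
  <h1>What this page says</h1>          <!-- one h1; headings in document order -->
  <h2>The short answer</h2>
  <p>The same answer a human, a crawler, and an answer engine should extract.</p>
  <img src="/img/pipeline.png"
       alt="Flow from source document to indexed answer">  <!-- descriptive alt text -->
  <h2>The detail</h2>
  <p>Supporting material, in the order a reader needs it.</p>
</article>
```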

What changes is the conversation around the work. A year ago, explaining to a marketing director that the source document matters required a fifteen-minute preamble. This week, the largest CX vendor on earth said the same thing in a press release. The argument can now start at the second sentence.

The work has not changed. The market has caught up.

I have been working on this for two years: drafting the standard, writing the books, building the audit tools, sitting in front of CMOs who needed the fifteen-minute preamble before the conversation could begin. That preamble is now redundant. Adobe has just delivered it on my behalf, in a press release, with a $1.9bn signature at the bottom. If you have been waiting for a moment to take this seriously, this is it.