
Google Publishes Guide to Building Agent-Friendly Websites

Google's web.dev published guidance on building agent-friendly websites. Learn the semantic HTML and accessibility-tree changes SEOs and devs should make now.

New web.dev guidance outlines how AI agents interpret websites and what developers and SEOs should change to accommodate this emerging traffic source.

Google’s web.dev has published new guidance on building websites that work for AI agents, not just human visitors. The article explains how autonomous AI systems interpret web pages and offers concrete recommendations for making sites agent-ready.

The guidance arrives as AI agents increasingly act on behalf of users, browsing sites, filling out forms, and completing purchases. For SEOs and developers, this signals a shift: sites optimized only for human eyes may be, as Google’s guide puts it, “functionally broken” for the agents that are becoming a meaningful traffic source.

How AI Agents See Your Website

According to the web.dev guide, AI agents don’t view websites on a monitor. They operate on machine-readable representations of a page, and the quality of that representation determines how well the agent performs.

The guide identifies three primary ways agents interpret a site: screenshots, raw HTML, and the accessibility tree.

With screenshots, an agent takes a visual snapshot and uses a vision model to identify elements. It can recognize a search bar at the top right as global search, or treat a large “Delete” button with more caution than a small “Help” link. However, the guide notes that screenshot analysis is slow and token-expensive, making it better suited as a fallback than as a primary input.

With raw HTML, the agent reads the DOM to understand how elements are nested, their logical hierarchy, and attributes like IDs and classes. If a “Buy Now” button sits inside a product container, the agent infers that the button belongs to that specific product.
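To make that concrete, here is a hypothetical product card (illustrative markup, not taken from the guide) where nesting tells an agent which product a button acts on:

```html
<!-- Hypothetical product card: nesting gives the button its context -->
<article class="product" id="product-1234">
  <h2>Trail Running Shoes</h2>
  <p class="price">$89.00</p>
  <!-- An agent reading the DOM can infer this button buys this product,
       because it sits inside the product container -->
  <button type="button">Buy Now</button>
</article>
```

The same inference breaks down if every button floats in a flat list of siblings with no containing structure.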

The accessibility tree, described in the guide as a “browser-native API,” distills the DOM into roles, names, and states of interactive elements. For an AI agent, it functions as a “high-fidelity map that ignores the visual noise of CSS to focus on pure utility.”
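As a rough, illustrative sketch (the exact rendering varies by browser), the accessibility tree for a simple checkout form might distill down to roles, names, and states like:

```
form "Checkout"
  textbox "Email address" (focusable)
  checkbox "Save my details" (checked)
  button "Place order" (enabled)
```

Chrome DevTools shows the real tree for any page in the Accessibility pane of the Elements panel.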

“Relying on a single input creates a semantic gap. Modern agents, therefore, combine multiple modalities. They use the DOM and accessibility tree to get a clean, structured list of interactive elements, and then cross-reference that with a visual rendering to understand layout, grouping, and visual cues. Our job is to provide clean signals across all these channels.”

Kasper Kulikowski and Omkar More, Google (web.dev)

In other words, agents piece together understanding from multiple sources. A <div> styled to look like a button might pass a screenshot analysis, but the DOM won’t reveal its intended function, and the accessibility tree might miss it entirely. Clean signals across all three channels are what matter.

What Google Recommends Changing

The web.dev guidance includes a set of concrete recommendations for making sites agent-friendly. Several of these overlap directly with long-standing accessibility and SEO best practices.

First, the guide emphasizes using semantic HTML. Developers should prefer <button> and <a> tags over styled <div> and <span> elements, because agents recognize semantic elements as interactive. When semantic HTML isn’t possible, the guide recommends adding the appropriate role and tabindex attributes (for example, <div role="button">).
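A minimal before-and-after sketch of that recommendation (the addToCart handler is a hypothetical placeholder):

```html
<!-- Agent-friendly: the elements announce their own roles -->
<button type="button" onclick="addToCart()">Add to cart</button>
<a href="/checkout">Proceed to checkout</a>

<!-- Harder for agents: a styled <div> exposes no role or focusability -->
<div class="btn" onclick="addToCart()">Add to cart</div>

<!-- Fallback when the <div> can't be replaced -->
<div role="button" tabindex="0" onclick="addToCart()">Add to cart</div>
```

Note that a <div role="button"> still needs keyboard handling (Enter and Space) to match a real <button> for human users; the role attribute only fixes what agents and assistive technology can see.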

Second, layout stability matters. Agents that rely on screenshots will be confused if your site layout constantly shifts. The guide specifically calls out scenarios where an “Add to cart” button appears in different locations across product categories.

Third, the guide warns against “ghost” elements or transparent overlays that might hide interactive elements. Visual analysis by agents may discard nodes that appear covered, even if the overlay is transparent.

Additional recommendations include setting cursor: pointer in CSS as a signal for actionability, adding the for attribute on <label> tags to link them to inputs, and ensuring interactive elements have a visible area larger than 8 square pixels to avoid being filtered out by visual analysis.
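As a small illustrative sketch of the first two of those recommendations (the id and class names are hypothetical):

```html
<style>
  /* cursor: pointer signals that this element is actionable */
  .promo-card { cursor: pointer; }
</style>

<!-- The for attribute links the label to its input by id, so an agent
     (or a screen reader) knows what the field is for -->
<label for="email">Email address</label>
<input id="email" type="email" name="email">
```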

What This Means for Your Work

  • Audit your accessibility tree using Chrome DevTools. The guide specifically recommends this as a starting point. If your site’s hierarchy isn’t machine-readable and stable, agents will struggle to interpret it.
  • Replace non-semantic interactive elements. Swap <div> and <span> elements acting as buttons or links for proper <button> and <a> tags. Where that’s not possible, add role and tabindex attributes.
  • Stabilize your page layouts. Ensure key interactive elements like “Add to cart” or “Submit” buttons appear in consistent positions across similar page types.
  • Remove transparent overlays and ghost elements that could block agent interaction with underlying content.
  • Look into WebMCP, a proposed web standard for helping websites interact with agents. Google’s guide links to an early preview program for experimentation.
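The first two checklist items can be approximated with a small script. The sketch below is not an official tool; it works on plain element descriptors so it runs anywhere, but on a live page you would derive the same fields from the DOM:

```javascript
// Illustrative audit sketch (not an official Google tool): flag elements
// that behave like buttons or links but expose no semantics to agents.
function auditInteractiveElements(elements) {
  const semanticTags = new Set(["a", "button", "input", "select", "textarea"]);
  return elements
    .filter((el) => el.hasClickHandler)         // acts interactive...
    .filter((el) => !semanticTags.has(el.tag))  // ...but isn't a semantic tag
    .filter((el) => !el.role)                   // ...and declares no ARIA role
    .map((el) => `${el.tag}.${el.className}: use <button>/<a> or add role and tabindex`);
}

// Example: one semantic button, one styled <div> acting as a button
const findings = auditInteractiveElements([
  { tag: "button", className: "buy", hasClickHandler: true, role: null },
  { tag: "div", className: "buy-fake", hasClickHandler: true, role: null },
]);
console.log(findings);
```

Running this flags only the <div>; the semantic <button> passes because its tag already carries the role.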

Why This Matters

The web.dev guide frames agent-readiness not as a separate discipline but as an extension of existing web fundamentals. As the authors note, “Everything we suggest to make a site ‘agent-ready’ also makes sites better for humans.” Making websites agent-friendly is, in their words, “an incentive to recommit to foundational principles of building well-structured, accessible, and semantic websites.”

For SEOs, the practical takeaway is that the same structural issues that hurt accessibility scores and crawlability (non-semantic markup, unstable layouts, hidden interactive elements) now also degrade performance for a growing class of automated visitors. Sites that already follow accessibility best practices have a head start.

The mention of WebMCP, a proposed standard for website-to-agent communication, suggests Google sees this as an evolving area. The early preview program signals that more formal tooling and standards are likely on the way. SEOs and developers should monitor this space closely, as agent traffic patterns and capabilities will continue to shift.


AI-generated first-pass scaffolding. This draft was produced by Search Engine Journal’s newsroom automation as a starting point for a writer. Rewrite before publishing.


Research notes (review and remove before publishing)

The bot collected this context while writing. Skim, verify, then delete this whole section before publish.

Headline alternatives

  1. Build agent-friendly websites | web.dev

Practitioner pulse

no data

Image search query

“robot browsing website on computer screen”

Flags

degraded research: synthesis

Fact-check flags

  • ⚠️ HIGH — “setting cursor: pointer in CSS as a signal for actionability” — The provided source excerpt does not mention cursor: pointer as a recommendation; this may be hallucinated or drawn from a portion of the source not included in the excerpt — needs verification against the full article.
  • ⚠️ HIGH — “adding the for attribute on <label> tags to link them to inputs” — The provided source excerpt does not mention the ‘for’ attribute on label tags; this recommendation may be hallucinated or from an unsupplied portion of the source — needs verification.
  • ⚠️ HIGH — “ensuring interactive elements have a visible area larger than 8 square pixels to avoid being filtered out by visual analysis” — The 8 square pixel threshold is not mentioned anywhere in the provided source excerpt; this is a hallucination candidate and must be verified against the full article.
  • · low — “Kasper Kulikowski and Omkar More, Google” — The source lists both authors but does not explicitly label them as Google employees, though web.dev is a Google property — low risk but worth confirming titles. (source: https://web.dev/articles/ai-agent-site-ux)
  • ◐ MED — “Google’s guide links to an early preview program for experimentation” — The source excerpt does not mention an early preview program for WebMCP; this claim may come from the full article but cannot be verified from supplied materials.
  • ◐ MED — “WebMCP, a proposed web standard for helping websites interact with agents” — The source excerpt does not mention WebMCP at all; this may exist in the full article but cannot be verified from the supplied excerpt and the characterization as a ‘proposed web standard’ needs confirmation.
  • ◐ MED — “The guide specifically recommends adding the appropriate role and tabindex attributes (for example, <div role=”button”>)” — The source excerpt does not mention role or tabindex attributes; this may be in the full article but cannot be verified from supplied materials.
  • ◐ MED — “The guide specifically recommends this as a starting point” — The source mentions you can preview the accessibility tree in Chrome DevTools but does not frame it as a recommended starting point for auditing — the draft overstates the source’s framing.

Drafter’s writer notes


Degraded research note: The synthesis stage was flagged as degraded. The brief contained no prior SEJ coverage, no competitor coverage, no social pulse data, and no practitioner implications. All recommendations in the ‘What This Means for Your Work’ section are drawn directly from the web.dev source article. Verify independently whether any SEJ articles have previously covered AI agents as a traffic source, WebMCP, or accessibility-tree optimization, and add internal links if so.

Dateline note: The `dateline_age_tag` is unknown. Verify when this web.dev article was published and adjust tense accordingly. The article currently uses present tense.

WebMCP link: The source article links to WebMCP as a proposed standard with an early preview program. The exact URL for the WebMCP page and signup should be verified and potentially linked directly rather than through the web.dev article.

Suggested follow-up angles:

  • A deeper technical walkthrough of auditing the accessibility tree for agent readiness.
  • Coverage of WebMCP as it develops, especially once the early preview program produces results.
  • Testing how current AI agents (ChatGPT browsing, Gemini, etc.) actually perform on sites with good vs. poor semantic HTML.

Fact-check pass: Several specific technical recommendations (cursor: pointer, 8 square pixel threshold, for attribute on labels, role/tabindex) and the WebMCP/early preview program claims cannot be verified from the supplied source excerpt and are high-risk hallucination candidates; the writer must check these against the full web.dev article before publishing.


Category: SEO

Matt G. Southern, Senior News Writer at Search Engine Journal

Matt G. Southern has been with Search Engine Journal since 2013. With a bachelor’s degree in communications, ...