
Mueller: Use Domain-Level Disavows, Not URL-by-URL

Google's John Mueller recommends domain-level and even TLD-level disavows instead of URL-by-URL cleanup, adding that spammy backlinks are unlikely to be the cause of ranking issues.

A three-sentence Reddit comment quietly rewrites the playbook on how most SEOs manage their disavow files.

A site owner hit by a large-scale spam link attack asked r/bigseo for help managing their disavow file. Google Search Advocate John Mueller showed up with a short answer that challenges how most practitioners handle the tool.

His advice boils down to two moves: switch to domain-level disavows, and stop assuming those spam links are hurting you.

The Comment That Started It

The original Reddit thread describes a site targeted by what the poster calls a “Casibom attack” — automated spam backlinks created at scale. The site’s team had been disavowing URLs in small batches, worried about hitting the disavow tool’s 100,000-line limit or causing unintended consequences from a large submission.

Mueller’s reply was three sentences:

“I’d prioritize and use domain level dismemberment. You can even do it by top level domain if you see they’re all from the same TLDs. I would be surprised if they’re the cause of issues though.”

— John Mueller, Google Search Advocate (via Reddit)

“Dismemberment” appears to be a playful word choice for “disavowal.” The practical advice is clear: instead of listing thousands of individual spam URLs, use the domain: prefix in your disavow file to cover entire root domains at once.

He goes further by suggesting you can group disavows by top-level domain when the spam is concentrated on specific extensions like .xyz or .tk.

How Domain-Level Disavows Work

Google’s disavow tool documentation supports a domain: prefix that tells Google to ignore all links from a given root domain. A single line like domain:spamsite.com covers every page on that domain, present and future.
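For illustration, a disavow file with domain-level entries is just a plain-text list, one entry per line. The domain names below are invented; the comment and `domain:` syntax follow Google's documented file format:

```
# Lines beginning with # are comments and are ignored
domain:spamsite.com
domain:other-spam.xyz

# Individual URLs can still be listed alongside domain entries
http://junk-links.tk/single-bad-page.html
```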

This is far more efficient than listing individual URLs, especially during a large-scale spam attack where new pages are constantly being created on the same domains. A file that previously listed 50,000 individual spam URLs from 200 domains can be reduced to 200 lines.

Mueller’s suggestion to disavow by TLD is less commonly discussed. The official documentation does not explicitly describe a TLD-level wildcard syntax (such as domain:.xyz). His comment likely means manually listing all offending domains that share a TLD rather than using a single wildcard entry.

Practitioners dealing with spam concentrated on cheap TLDs would still need to identify and list each domain individually, but the domain-level prefix keeps the file manageable, and the 100,000-line limit becomes much less of a concern.
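As a rough sketch of that workflow, the grouping step could look like this in Python. The function name and spam domains are hypothetical, and the naive TLD split would need extra handling for multi-part extensions like .co.uk:

```python
# Hypothetical sketch: bucket spam domains by TLD so a disavow file can be
# reviewed one extension (.xyz, .tk, .top) at a time. Domain names invented.
from collections import defaultdict

def group_by_tld(domains):
    """Map each TLD to a sorted list of domain: disavow entries."""
    groups = defaultdict(list)
    for d in domains:
        tld = d.rsplit(".", 1)[-1]  # naive split; multi-part TLDs (.co.uk) need extra care
        groups[tld].append(f"domain:{d}")
    return {tld: sorted(entries) for tld, entries in groups.items()}

spam = ["link-farm-a.xyz", "link-farm-b.xyz", "casino-spam.tk", "junk-links.top"]
for tld, entries in sorted(group_by_tld(spam).items()):
    print(f"# .{tld} domains")
    print("\n".join(entries))
```

Grouping the output under per-TLD comment lines keeps the file auditable when the same extensions keep producing new spam domains.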

Why Mueller Thinks the Spam Links Aren’t Your Problem

The more significant part of Mueller’s comment is the last sentence: “I would be surprised if they’re the cause of issues though.”

Google’s SpamBrain system is designed to identify and neutralize spammy link signals algorithmically, without requiring manual intervention from site owners. Mueller and other Google staff have stated that Google can simply ignore bad links rather than penalizing the sites they point to.

As Search Engine Roundtable reported, Mueller reconfirmed that Google may ignore outbound links from sites that violate spam policies. The implication: if your site has no manual action in Search Console, the ranking problem is almost certainly somewhere else.

For sites with a confirmed manual action related to unnatural links, disavowal remains part of the recovery process. But for the far more common scenario — a site owner who sees thousands of junk backlinks in a third-party tool and panics — Mueller’s advice is to stop treating those links as the root cause.

Who Should Change Their Workflow

  • If you maintain a disavow file with individual URL entries, switch to domain-level entries using the domain: prefix. This reduces file size, avoids the 100,000-line limit, and automatically covers future spam pages on the same root domains.
  • When your backlink audit shows spam concentrated on specific cheap TLDs (.xyz, .tk, .top), list all offending domains from those TLDs in your disavow file. Mueller suggested this approach for exactly this scenario.
  • If your site has no manual action in Search Console, treat Mueller’s comment as permission to stop spending hours on reactive link cleanup. The ranking issue is more likely elsewhere.
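For the first of those moves, collapsing an existing URL-level list into domain-level entries is easy to script. A minimal Python sketch with invented URLs; note it keeps the full hostname rather than resolving the registrable root domain, which would need a public-suffix library:

```python
# Hypothetical sketch: collapse a URL-by-URL disavow list into one
# domain: entry per host, using only the standard library. URLs invented.
from urllib.parse import urlparse

def to_domain_entries(urls):
    """Deduplicate URLs down to sorted domain-level disavow lines."""
    hosts = {h for h in (urlparse(u).hostname for u in urls) if h}
    return sorted(f"domain:{h}" for h in hosts)

urls = [
    "http://spamsite.com/page-1",
    "http://spamsite.com/page-2",
    "https://other-spam.xyz/x?y=1",
]
print("\n".join(to_domain_entries(urls)))  # two domain lines replace three URLs
```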

What This Doesn’t Change

Mueller’s comment reinforces a message Google has been sending for years: most sites do not need to worry about spammy backlinks hurting their rankings. Google’s systems are built to ignore them. The disavow tool exists for edge cases, not routine maintenance.

For practitioners who do need to use the tool, the domain-level approach is faster, cleaner, and more future-proof than URL-by-URL submissions.


AI-generated first-pass scaffolding. This draft was produced by Search Engine Journal’s newsroom automation as a starting point for a writer. Rewrite before publishing.


Research notes (review and remove before publishing)

The bot collected this context while writing. Skim, verify, then delete this whole section before publish.

Headline alternatives

  1. Mueller: Use Domain-Level Disavows, Not URL-by-URL
  2. Why Spammy Backlinks Probably Aren’t Hurting You
  3. Mueller’s Disavow Advice: Prioritize Domains, Stop Panicking

Practitioner pulse

No practitioner discussion found on LinkedIn or X specifically addressing this Mueller comment; LinkedIn results were entirely off-topic (WordPress vulnerabilities, GEO/AEO, LinkedIn growth tips). X results surfaced generic disavow guides, not reactions to this specific thread.

Background

Google’s Disavow Links tool, launched in 2012, allows webmasters to submit a text file of URLs or domain-level entries (using the `domain:` prefix) they want Google to ignore when assessing their backlink profile. The tool has a 100,000-line limit per file. Over the past two years, Google’s SpamBrain system has been increasingly cited by Google staff as capable of automatically neutralizing spammy link signals, reducing the practical necessity of manual disavowal for most sites (support.google.com). Mueller has repeatedly stated that Google can ignore bad links algorithmically, and this latest comment extends that guidance by suggesting TLD-level disavowal — a less commonly discussed approach — for sites hit by large-scale spam attacks (seroundtable.com). The original Reddit thread describes a ‘Casibom attack’ where a site was flooded with automated spam backlinks, and the site’s SEO team was slowly batch-disavowing individual URLs out of fear that a bulk submission would trigger penalties.

Open questions for follow-up coverage

  • Does Google’s disavow tool actually support wildcard TLD-level entries (e.g., `domain:.xyz`), or does Mueller’s ‘by top level domain’ comment mean manually listing all domains sharing a TLD? The official docs don’t document TLD syntax.
  • Has Google ever confirmed that SpamBrain fully neutralizes negative SEO attacks at scale, or is the ‘we ignore them’ guidance aspirational? Case studies of sites that recovered only after disavowal would complicate this narrative.
  • The original poster mentions a ‘Casibom attack’ — is this a named/known spam campaign pattern worth documenting for readers?
  • Mueller’s phrasing ‘domain level dismemberment’ appears to be a playful word choice (vs. ‘disavowal’) — worth confirming this isn’t a reference to a different tool or feature.

Image search query

“person reviewing backlink data on computer screen”

Flags

dateline=fresh · degraded research: exa.sej, wp_401

Fact-check flags

  • · low — “disavow tool’s 100,000-line limit” — The research brief states the 100,000-line limit; confirm it matches current Google documentation exactly. (source: https://support.google.com/webmasters/answer/2648487?hl=en)
  • ◐ MED — “Search Engine Roundtable reported in April” — The research brief dates this article to 2026-04-13, but the draft says only ‘in April’ without specifying the year; verify the year is correct and consider whether omitting it could mislead readers into assuming a different year. (source: https://www.seroundtable.com/google-may-ignore-links-from-sites-that-spam-41148.html)
  • · low — “Mueller reconfirmed that Google may ignore outbound links from sites that violate spam policies” — Supported by the SERoundtable prior coverage citation, but the draft attributes this specifically to Mueller — verify the SERoundtable article attributes it to Mueller and not another Google staffer. (source: https://www.seroundtable.com/google-may-ignore-links-from-sites-that-spam-41148.html)
  • ◐ MED — “Google’s SpamBrain system is designed to identify and neutralize spammy link signals algorithmically” — SpamBrain is referenced in the research brief’s background section but no specific source URL is cited for this claim; the original Reddit comment and other cited sources do not mention SpamBrain by name.
  • ◐ MED — “Stop submitting disavow files in small cautious batches. Google’s documentation and Mueller’s guidance confirm there is no penalty risk from submitting a large disavow file at once.” — Mueller’s comment does not explicitly address penalty risk from large submissions; the draft extrapolates ‘no penalty risk’ from his general tone. Google’s documentation should be checked for explicit language on this point.
  • ◐ MED — “Mueller explicitly endorsed this approach” — Mueller said ‘You can even do it by top level domain if you see they’re all from the same TLDs’ — this is a suggestion, not an explicit endorsement of listing all offending domains from those TLDs in a disavow file. The draft’s bullet overstates the specificity of his advice. (source: https://www.reddit.com/r/bigseo/comments/1t11ss9/spammy_links_removal_limit_for_seo/ojj7ooh/)
  • ◐ MED — “Google’s Disavow Links tool, launched in 2012” — The 2012 launch date for the disavow tool appears in the research brief background but no source URL is cited to verify it; this is likely correct but should be confirmed.
  • · low — “A file that previously listed 50,000 individual spam URLs from 200 domains can be reduced to 200 lines” — This is an illustrative example created by the draft writer, not sourced from Mueller or Google documentation; it’s mathematically sound but should be flagged as editorial illustration, not a sourced claim.

Drafter’s writer notes


Degraded research stages: exa.sej and wp_401 were flagged as degraded. No prior SEJ coverage was found for this topic, which may be a gap caused by the exa.sej degradation. Writer should manually check whether SEJ has previously covered Mueller’s disavow guidance or negative SEO topics and add internal links if relevant.

Open question on TLD syntax: Mueller says “you can even do it by top level domain” but Google’s official disavow documentation does not describe a TLD-level wildcard syntax (e.g., domain:.xyz). The article notes this ambiguity. If the writer can test or confirm whether such syntax works, that would strengthen the piece.

Mueller’s word choice: He wrote “domain level dismemberment” rather than “disavowal.” This appears to be a joke or autocorrect artifact. The article flags it briefly. Writer may want to confirm or cut that note depending on editorial preference.

“Casibom attack”: The original poster references this term. It may refer to a known spam campaign pattern (Casibom is a gambling brand associated with Turkish-language spam). Could be worth a brief mention or footnote if the writer can verify.

No social pulse: No practitioner reactions were found on LinkedIn or X for this specific Mueller comment. The article does not reference social discussion.

Unknown sources: Several unknown-tier sources appeared in research (12amagency.com, inbound-seo.uk, seoatlantic.com, martech.org). None were used in the article.


Fact-check pass: Most claims track to Mueller’s Reddit comment, but several assertions — particularly about SpamBrain, no penalty risk from large disavow submissions, and the 2012 launch date — lack direct support in the cited sources and should be verified before publication.



Editorial self-critique pass: Revised dek to hook rather than summarize. Tightened intro to two paragraphs that set the table without information-dumping. Replaced generic headings (‘What Mueller Said’, ‘What To Do Now’, ‘The Bottom Line’) with specific ones. Removed the takeaway bullet about no penalty risk from large submissions (extrapolated beyond Mueller’s actual words). Removed the generic ‘redirect time toward content quality’ bullet (would exist without this announcement). Softened ‘Mueller explicitly endorsed’ to ‘Mueller suggested this approach.’ Removed double-linking of Reddit source in intro + blockquote (kept blockquote link only, moved intro link to thread generally). Split multi-topic paragraphs. Removed speculative ‘open question’ about TLD wildcard syntax from closing section. Dropped ‘in April’ date reference that lacked year specificity.


Editorial review (applied): Rewrote dek to hook instead of summarize; tightened intro to two paragraphs; replaced four generic headings with specific ones; cut two takeaway bullets that failed the announcement-specificity test (no-penalty-risk extrapolation and generic ‘redirect time to content quality’); softened overstated attribution (‘explicitly endorsed’ → ‘suggested’); fixed double-linking of Reddit source; split multi-topic paragraphs.

  • dek at `dek` — The dek is two ideas joined by a comma (‘faster approach’ + ‘questions whether spam links are the problem’) — it summarizes rather than hooks; framework says dek should plant a question, not close it.
  • intro at `intro_html` — The intro is three paragraphs but paragraph 1 already delivers the full lede (domain-level disavows + spam links probably not the problem); framework says intro sets the table, doesn’t serve the meal — too much detail in sentence 1.
  • heading at `outline[0].h2` — ‘What Mueller Said’ is a generic heading that doesn’t sub-divide meaningfully — the entire article is about what Mueller said; framework says headings must promise something specific.
  • paragraph_coherence at `outline[2].paragraphs[3]` — The paragraph starting ‘This does not mean the disavow tool is useless’ mixes two topics: (1) manual actions still need disavowal, (2) panicking site owners should stop treating junk links as root cause. Framework requires one topic per paragraph.
  • ⚠️ takeaway at `outline[3].bullets[2]` — ‘Stop submitting disavow files in small cautious batches’ — Mueller’s comment doesn’t explicitly address penalty risk from large submissions; this extrapolates beyond the announcement. Framework: takeaways must derive from THIS announcement specifically.
  • takeaway at `outline[3].bullets[3]` — ‘Redirect time toward content quality, technical SEO’ is generic advice that existed before this announcement; framework’s takeaway test says if the advice would exist without the announcement, cut it.
  • · heading at `outline[3].h2` — ‘What To Do Now’ is a generic template heading; framework says avoid ‘What this means’ / ‘Key takeaways’ style headings without an attached object.
  • · heading at `outline[4].h2` — ‘The Bottom Line’ is a generic wrap-up heading; framework says headings must promise something specific.
  • · ai_tell at `outline[2].paragraphs[1]` — ‘This is consistent with what Google has said repeatedly over the past several years’ is a filler transition that restates rather than advances; framework flags restating with synonyms and filler transitions.
  • linking at `outline[0].paragraphs[1] and outline[0].paragraphs[0]` — The Reddit source is linked in the intro AND in the blockquote attribution — framework says one link per claim per quote-block; pick one.
  • industry_depth at `outline[4].paragraphs[1]` — ‘The open question is whether Google will ever support true TLD-level wildcard syntax’ — this is speculative filler that any working SEO would already wonder; framework says don’t ask obvious questions.
  • · paragraph_coherence at `outline[1].paragraphs[1]` — The paragraph about TLD-level disavows mixes (1) what Mueller likely meant, (2) what the documentation says, and (3) the practical workflow — three sub-topics in one paragraph.
Category: SEO
Roger Montti, SEJ Staff · Owner, Martinibuster.com

I have 25 years hands-on experience in SEO, evolving along with the search engines by keeping up with the latest ...