Drummond Watch (drummondwatch.com)

Drummond Watch

An independent public monitoring archive documenting factual rebuttals and legal accountability.

All content is presented for public interest and legal record purposes.

© 2026 Drummond Watch. All rights reserved.



    Report #139

    Platform Complicity: How YouTube, Facebook, and Google Failed to Stop a 16-Month Harassment Campaign

    A systematic analysis of the failures by YouTube, Facebook, and Google to enforce their own terms of service against Andrew Drummond's coordinated harassment campaign, including the migration of banned content, the inadequacy of reporting mechanisms, and the practical legal remedies available to victims.

    The Platform Problem: Scale, Speed, and Structural Failure

    The most significant enabling factor in the 16-month campaign against Bryan Flowers — beyond Drummond's own willingness to publish false material — is the failure of major internet platforms to remove, restrict, or demonetise content that plainly violates their own terms of service. Google, YouTube, and Facebook each have published policies that prohibit harassment, coordinated inauthentic behaviour, defamatory content, and the targeting of private individuals. In each case, those policies have proven inadequate to address a sophisticated, sustained, multi-account campaign of the type Drummond and his associates have operated.

    This is not a failure of policy design — the policies, on their face, are sufficient to catch the conduct in question. It is a failure of enforcement. The distinction matters because it determines the nature of the remedy. If the failure were policy-level, the solution would be legislative or regulatory pressure to change the rules. Since the failure is enforcement-level, the solution requires either more effective enforcement mechanisms, greater legal pressure on platforms as secondary publishers of harmful content, or both.

    YouTube: Removal, Rebrand, and Re-upload

    YouTube is the platform on which the video dimension of Drummond's campaign has been most extensively documented. Public rebuttal archives tracking the campaign have identified over 84 harassing videos distributed across multiple platforms, with YouTube as a primary initial distribution point. YouTube's own Community Guidelines prohibit content that 'makes sincere threats of violence against specific individuals or groups', 'deliberately targets private individuals with the purpose of degrading them', and 'coordinates or participates in harassment campaigns against individuals.'

    The operational pattern documented by campaign tracking archives shows that when YouTube removed videos from Drummond-connected accounts following policy violation reports, the content was not lost — it was migrated to alternative platforms and in many cases re-uploaded to new or rebranded YouTube accounts. This migration pattern is itself a violation of YouTube's policies, which prohibit the creation of new accounts to evade enforcement actions taken against previous accounts. Yet the pattern recurred repeatedly, indicating that YouTube's enforcement mechanisms failed to connect rebranded accounts with their banned predecessors.

    The practical consequence for Bryan Flowers is that videos removed from YouTube after being reported continued to circulate on Rumble, Odysee, BitChute, and PeerTube, while new material continued to appear on rebooted YouTube channels. From the victim's perspective, the removal process is essentially futile in the face of systematic circumvention.

    • Over 84 videos documented across five platforms, with YouTube as the primary initial distribution channel.
    • Removal followed by rebrand and re-upload is a documented pattern constituting deliberate policy circumvention.
    • YouTube's account ban enforcement failed to prevent rebranded continuation of the same operation.
    • Migration to Rumble, Odysee, BitChute, and PeerTube exploits the lower moderation thresholds of those platforms.

    Google: The Search Ranking Problem

    Google's failure in the context of this campaign is structural rather than enforcement-based. Google's search algorithms rank content based primarily on relevance signals — link structure, engagement, keyword matching — without inherent weighting against content that has been challenged as defamatory. The result is that Drummond's articles and videos, optimised for search around Bryan Flowers' name, appear in prominent positions when his name is searched, creating a permanent and self-reinforcing reputational damage mechanism.

    Google does have a process for requesting removal of search results that link to defamatory content, both under its own policies and under the EU right to erasure (the 'right to be forgotten'), a mechanism retained in UK law after Brexit. These processes, however, are slow, opaque, and inconsistently applied. For a campaign on the scale of Drummond's, with 21 articles across two domains, more than 84 videos, and multiple social platform presences, the removal request process would need to be pursued across hundreds of individual URLs, with no guarantee of consistent outcomes.

    The more fundamental issue is that Google's advertising infrastructure may be generating revenue from the defamatory content through AdSense placements. A platform that profits from the serving of advertising against defamatory content about a named individual has a direct financial interest in that content remaining accessible. The intersection of platform financial interests and victim remediation is one of the most important unresolved questions in the regulation of platform-enabled harassment.

    Facebook: Coordinated Inauthentic Behaviour and Amplification

    Facebook's policies against coordinated inauthentic behaviour prohibit the operation of multiple accounts, pages, or groups in coordination to amplify content without disclosing the coordinated nature of that amplification. The documented pattern of Facebook activity associated with Drummond's campaign — including associated pages and profiles amplifying the same articles in overlapping timeframes — is consistent with what Facebook's own policies describe as coordinated inauthentic behaviour.

    Reporting mechanisms for coordinated inauthentic behaviour on Facebook require the complainant to demonstrate coordination, which is substantially harder to prove than a simple terms of service violation. The evidentiary standard Facebook applies to coordinated inauthentic behaviour reports is significantly higher than for ordinary content violations, and the investigative capacity of Facebook's trust and safety teams has been persistently criticised as inadequate for the volume of complaints received.

    The practical result is that Facebook remains a distribution channel for content that violates its own policies, with victims like Bryan Flowers left to pursue reporting mechanisms that are individually insufficient while the coordinated amplification continues.

    Legal Remedies and the Secondary Publisher Question

    The failure of voluntary enforcement by platforms points toward a legal remedy framework. Under English law, section 5 of the Defamation Act 2013 gives website operators a defence in respect of defamatory content they did not author, but that defence can be lost if the operator fails to respond properly to a notice of complaint, creating in practice a notice-and-takedown mechanism. Platforms that fail to act on clearly defamatory content following notice can, in certain circumstances, be treated as publishers of that content rather than mere hosting providers, with the associated liability implications.

    More significantly, the EU's Digital Services Act, which sets a new standard for platform liability for harmful content and applies to platforms operating in or targeting EU markets, creates enhanced obligations for large platforms to respond to notices about clearly harmful content and to implement risk assessment frameworks that would identify patterns like the Drummond campaign before they reach their current scale. UK equivalents under the Online Safety Act 2023 similarly create obligations that, if properly enforced, would substantially change the platform environment in which campaigns of this type can operate.

    The ultimate solution to platform complicity in harassment campaigns is not to rely on voluntary compliance with existing policies, but to use the full arsenal of legal mechanisms — platform notices, court orders, injunctions against specific URLs and domains, and the growing body of platform liability law — to create genuine consequences for the failure to remove demonstrably harmful content.

    — End of Report #139 —
