Spain has opened a new front in the fight against harmful synthetic media, ordering a criminal probe into how major social platforms may be enabling the spread of AI-generated child sexual abuse deepfakes. The move reflects a growing recognition that deepfakes are no longer a niche technical problem; they are a mass-distribution problem shaped by platform design, recommender systems, and weak enforcement.
On 17 February 2026, multiple outlets reported that Prime Minister Pedro Sánchez directed prosecutors to investigate X, Meta and TikTok over alleged AI-generated child sexual abuse material circulating on their services. Sánchez said, “These platforms are undermining the mental health, dignity, and rights of our children,” and argued that the “impunity” of platforms must end, framing the issue as both a criminal matter and a question of corporate responsibility.
1) What Spain announced on 17 February 2026
According to Reuters, Spain’s government instructed prosecutors to investigate X, Meta and TikTok to determine whether the platforms helped spread AI-generated child sexual abuse deepfakes. The focus is not merely on individual offenders creating illegal material, but on whether the systems and policies of large platforms contributed to distribution and persistence.
Al Jazeera’s coverage emphasized Sánchez’s political message: that platform “impunity” must end. In practical terms, that language signals a willingness to look beyond takedown promises and scrutinize whether platforms have adequate detection, reporting channels, and safeguards, especially where minors are involved.
The Guardian described the move as a criminal investigation into potential platform liability for AI-generated child sexual abuse material and deepfake imagery, referencing expert reporting and Spain’s broader push to tighten accountability. Taken together, the reporting positions Spain’s action as part of a broader shift: from viewing deepfakes as “content moderation challenges” to treating them as potential criminal facilitation and compliance failures.
2) Why the probe centers on platforms, not only creators
Deepfakes that sexualize minors can spread quickly because social platforms enable frictionless sharing, algorithmic amplification, and re-uploading. Even when a single upload is removed, near-identical copies can proliferate across accounts, groups, and mirrored links, especially if hashing and matching systems are incomplete or inconsistently applied.
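The hash-and-match idea mentioned above can be sketched in a few lines. The difference-hash (dHash) approach below is purely illustrative: production systems rely on robust industry tools such as PhotoDNA or Meta's open-source PDQ, and the tiny pixel grids here stand in for downsampled grayscale images.

```python
# Illustrative sketch of perceptual hashing for near-duplicate
# detection. The grids below stand in for downsampled images;
# real systems use hardened tools (e.g. PhotoDNA, PDQ).

def dhash(pixels):
    """Build a bit string by comparing each pixel to its right
    neighbor; re-encodes, crops, and noise rarely flip many bits."""
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# An original upload, downsampled to a small grayscale grid.
original = [[10, 50, 40], [200, 100, 150]]
# A re-upload with slight compression noise on each pixel.
reupload = [[12, 52, 39], [198, 101, 149]]

h1, h2 = dhash(original), dhash(reupload)
# A small Hamming distance flags a likely re-upload of known material.
print(hamming(h1, h2))  # → 0
```

The design point is that exact file hashes (MD5, SHA-256) break under any re-encoding, while perceptual hashes tolerate small perturbations; that tolerance is exactly what re-uploaders try to defeat, which is why coverage and consistency of matching systems matter.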
By investigating platforms, prosecutors can examine whether internal processes actually work at scale: how quickly reports are actioned, whether moderation teams are adequately resourced, and how recommendation systems might inadvertently steer users toward exploitative material. This approach also acknowledges a reality: creators may be hard to identify, while platforms are known entities with compliance duties and audit trails.
TIME’s write-up added detail about the modern AI context, highlighting concerns about AI tools and guardrails, including xAI’s Grok being hosted on X. The point is not that one tool alone causes abuse, but that an expanding ecosystem of generative AI increases the volume of synthetic sexual imagery and raises the stakes for rigorous preventive controls.
3) The legal and policy backdrop: Spain’s tightening stance on deepfakes
Spain’s 2026 investigation did not appear out of nowhere. In March 2025, Reuters reported that Spain approved a draft bill that would allow fines up to €35 million (or 7% of global turnover) for companies failing to properly label AI-generated content, aiming to curb deepfakes and increase transparency.
Euronews provided additional detail on the proposed fine ranges: €7.5 million to €35 million, or between 2% and 7% of global turnover, depending on severity, framing the draft law as aligned with the EU AI Act’s transparency goals. The central logic is that synthetic content should not masquerade as authentic, and that mislabeling or non-labeling can cause real harms, from misinformation to targeted abuse.
Separately, The Guardian reported in February 2026 that Sánchez was pushing tougher child online protections, including discussion of a proposed under-16 social media ban. Whether or not such a ban advances, it shows Spain’s political direction: treating online child safety as a regulatory priority and linking it to platform accountability.
4) How AI deepfakes enable child sexual abuse material at scale
AI-generated child sexual abuse material (CSAM) deepfakes can be created by synthesizing a minor’s face onto explicit imagery, generating entirely synthetic bodies, or producing sexualized non-consensual images that depict, or appear to depict, minors. Even when images are fully synthetic, the harm can be severe: normalizing exploitation, creating blackmail leverage, and driving demand for increasingly extreme content.
The distribution layer matters as much as the generation layer. Once an image exists, it can be propagated through public feeds, private messages, and cross-platform reposting. In many cases, the same piece of material will bounce between mainstream services and smaller, less moderated sites, with mainstream visibility acting as an accelerant.
For investigators, deepfake CSAM raises complex questions about intent, knowledge, and control: Did platforms implement reasonable safeguards? Did they respond promptly to notices? Did they allow repeat offenders to return? These are the types of issues a prosecutor-led probe can explore by compelling documentation and examining operational practices.
5) Real-world cases in Spain show the harm is not hypothetical
Spain has already faced concrete incidents involving sexualized deepfakes of minors. In August 2024, Spanish outlet SpainEnglish reported that the Guardia Civil investigated five youths accused of creating AI-generated nude deepfakes of 20 underage girls and sharing the images on social media. The case illustrated how readily available tools can be weaponized in peer contexts, including schools and local communities.
Such cases demonstrate a painful dynamic: the victims are often identifiable, the content spreads among acquaintances, and the social harm (shame, harassment, anxiety) can persist long after the images are removed. For minors, the impact can include long-term reputational damage and mental health consequences, particularly when peers participate in distribution.
These domestic incidents also help explain why Spanish leaders are emphasizing dignity and children’s rights. The political argument is that enforcement must address both the individuals who create the material and the infrastructures that allow it to circulate repeatedly.
6) Platforms, paid ads, and the “evasion economy” around deepfakes
Deepfakes are not limited to illegal sexual imagery; they are also used in scams and manipulation, revealing how coordinated actors exploit platform systems. In June 2024, DFRLab documented campaigns using audio deepfakes of public figures in Meta ads targeting Spanish-speaking audiences in Spain and Latin America, noting takedowns and removals alongside continued evasion by advertisers.
This matters because it highlights a broader enforcement challenge: malicious actors rapidly iterate on creative assets, rotate accounts, and exploit ad infrastructure and moderation gaps. If platforms struggle to prevent repeated deepfake scam ads in paid channels, critics argue it raises questions about whether they can effectively suppress even higher-priority harms without stronger oversight.
While scam deepfakes and CSAM deepfakes are distinct problems, they share a common pattern: industrialized production, fast distribution, and “whack-a-mole” enforcement. Spain’s probe can be read as an attempt to pressure platforms to move from reactive takedowns to systemic prevention and repeat-offender disruption.
7) What accountability could look like after the investigation
Depending on what prosecutors find, Spain’s approach could set expectations for demonstrable safeguards: robust reporting and escalation paths, faster removal timelines for high-severity content, and improved detection methods such as perceptual hashing, classifier-based signals, and cross-platform collaboration where legally feasible.
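The "reporting and escalation paths" and "removal timelines" above can be made concrete with a small triage sketch. Everything here is hypothetical: the categories, queue names, and hour limits are invented for illustration, not drawn from any platform's actual policy or any regulatory requirement.

```python
# Hypothetical report-triage sketch: routing user reports by
# severity with removal deadlines. Categories and SLA hours are
# illustrative assumptions, not any platform's real policy.

from dataclasses import dataclass

# Assumed severity-to-deadline mapping (hours until required removal).
SLA_HOURS = {"csam": 1, "non_consensual_imagery": 24, "other": 72}

@dataclass
class Report:
    content_id: str
    category: str

def triage(report: Report) -> tuple[str, int]:
    """Return (queue, removal_deadline_hours) for a report.
    High-severity categories go to a priority queue with a
    short, auditable deadline; everything else is standard."""
    hours = SLA_HOURS.get(report.category, SLA_HOURS["other"])
    queue = "priority" if hours <= 24 else "standard"
    return queue, hours

print(triage(Report("abc123", "csam")))  # → ('priority', 1)
```

The point of such a structure is auditability: a prosecutor-led probe can compare logged triage decisions and timestamps against the deadlines a platform claims to enforce.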
Accountability may also mean interrogating product decisions: how recommendations work, how virality is throttled for sensitive categories, and how private sharing is monitored within privacy constraints. The core tension is clear: platforms want to protect user privacy and enable expression, while governments demand measurable protections for minors and evidence that safety promises translate into outcomes.
Finally, the investigation interacts with Spain’s transparency push on AI-generated content from 2025. If labeling and provenance become more common, it could help moderation teams and users identify synthetic media earlier. But labeling alone is not enough for CSAM; effective safety requires prevention, rapid removal, and coordinated action against networks that generate and distribute abuse.
Spain’s decision to probe X, Meta and TikTok over AI deepfakes signals a shift from debating harms in the abstract to testing platform responsibility in criminal and regulatory terms. With Sánchez arguing that these platforms undermine children’s dignity and rights, the Spanish government is positioning child safety as a non-negotiable constraint on growth and engagement.
For the rest of Europe, and for platforms operating globally, the case will be watched closely. If Spain’s prosecutors treat distribution systems, not just individual uploaders, as central to the problem, it may accelerate a new era of enforcement where deepfake harms are met with higher legal risk, stricter compliance expectations, and far less tolerance for “impunity.”