
On January 14, 2026, California Attorney General Rob Bonta announced an investigation into xAI’s Grok over the alleged proliferation of nonconsensual sexually explicit images—some reportedly involving minors—generated and shared online. Hours earlier, Elon Musk posted that he was “not aware of any naked underage images generated by Grok,” a denial that, depending on how you read it, is either a narrow legal distinction or the world’s most confident “nothing to see here” tweet.
This article is based on the original report “Musk denies awareness of Grok sexual underage images as California AG launches probe” by Rebecca Bellan at TechCrunch. If you want the straight news first, read that. If you want the straight news plus the messy context, the legal gray zones, and the inevitable “how did we get here?”—keep scrolling.
What happened (and why January 14, 2026 matters)
Attorney General Bonta’s office says it is investigating whether and how xAI violated California law by enabling large-scale creation of nonconsensual sexually explicit deepfakes, including content that “undresses” women and girls, as well as reported alterations of images of children. The state’s public position is blunt: it views this as a safety failure, not a “users will be users” inevitability. The announcement also includes a complaint pathway for potential victims. (California DOJ press release)
Musk’s response on X was equally blunt in a different direction: he wrote he was not aware of “naked underage images” generated by Grok—“literally zero”—and argued that Grok generates images only in response to user requests and refuses illegal requests, framing the issue as adversarial prompting and a bug-fixing problem. The denial, notably, does not address the broader category of nonconsensual sexual manipulation of adults (the kind that ruins careers, relationships, and mental health without ever meeting a criminal threshold in a particular jurisdiction). (TechCrunch)
The timing matters because it suggests that regulators are no longer treating AI-image abuse as a niche “content moderation” problem. The story has moved into “public safety + product liability + consumer protection” territory—especially when the alleged outputs include minors, or content that appears to depict minors.
Why Grok is different: when the product is the “edit button”
There are plenty of generative AI image models on the market, and many have faced controversy. But Grok’s alleged abuse case is uniquely entangled with distribution. Grok is integrated into X, a platform optimized for virality, quote-post dunks, and algorithmic acceleration. That matters because a harmful output isn’t just “generated”; it can be immediately broadcast into a social graph.
In other words: the product isn’t only the model. The product is the pipeline from prompt → image → public post → engagement loop.
TechCrunch reported that Copyleaks estimated roughly one Grok-generated image was being posted each minute on X, and that a separate Bloomberg sampling (January 5–6) found a rate of thousands per hour. Whether or not the precise numbers hold across time windows, the point is the same: the scale is not “a few bad actors”; it’s “a content format.” (TechCrunch)
Ars Technica has also reported that restrictions applied on X can be bypassed or are inconsistent across surfaces (X, app, web), which is a classic sign of a safety patchwork rather than a coherent safety system. (Ars Technica)
“Spicy mode” and the policy problem no one wants to own
One of the most consequential details in Bonta’s announcement is that xAI marketed Grok with a feature described as “spicy mode,” which generates explicit content. That phrase sounds like a novelty toggle you’d see on a smart toaster (“extra crispy”), except the output category is sexual material, and the downstream risk is harassment and abuse.
Explicit content generation isn’t inherently unlawful; plenty of adult content is legal. The issue is that once you allow explicit generation and image editing, you’ve increased the probability that the system will be used for:
- Nonconsensual intimate imagery (NCII) of adults (including deepfake nudes)
- Sexualized manipulation of real people’s photos (a form of digital sexual assault, as many victims and advocates describe it)
- Material that appears to depict minors, or crosses into child sexual abuse material (CSAM) territory
That last bullet is where regulators stop speaking in polite hypotheticals. In both the California press release and broader reporting, the concern is not just “bad taste,” but possible violations of laws relating to exploitation and child safety. (CA DOJ)
The legal backdrop: federal takedowns and California’s deepfake crackdown
Even if you’ve never read a single bill text in your life (honestly, good for your blood pressure), the legal environment has shifted quickly.
The Take It Down Act (federal)
TechCrunch notes that the Take It Down Act is now federal law and requires platforms to remove nonconsensual intimate images—including deepfakes—within 48 hours after notice. The Associated Press reported President Donald Trump signed it into law on April 29, 2025, and that it criminalizes publishing (or threatening to publish) such material while imposing takedown obligations on platforms. (AP)
That “48 hours” number matters for any platform that has historically taken a relaxed approach to trust & safety staffing, moderation tooling, and user reporting flow design. It also matters for a platform where the harmful content can be generated and posted in one motion. You can’t remove what you can’t detect, and you can’t meet a deadline if victims can’t find the right form.
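To make the arithmetic concrete, here is a minimal sketch of what tracking that deadline could look like, assuming a simple report queue. The class and field names are hypothetical and do not describe any real xAI or X tooling.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Illustrative only: a toy SLA tracker for takedown notices under a
# 48-hour removal requirement. All names here are hypothetical.
TAKEDOWN_SLA = timedelta(hours=48)

@dataclass
class TakedownReport:
    report_id: str
    post_url: str
    received_at: datetime                 # when the notice landed, in UTC
    resolved_at: datetime | None = None   # None means still unresolved

    @property
    def deadline(self) -> datetime:
        # The clock starts at notice, not at detection or triage.
        return self.received_at + TAKEDOWN_SLA

    def is_overdue(self, now: datetime | None = None) -> bool:
        now = now or datetime.now(timezone.utc)
        return self.resolved_at is None and now > self.deadline

# Example: a report filed 50 hours ago that nobody has actioned yet.
report = TakedownReport(
    report_id="r-001",
    post_url="https://example.com/post/123",
    received_at=datetime.now(timezone.utc) - timedelta(hours=50),
)
print(report.is_overdue())  # True: the 48-hour window has already closed
```

The hard part, of course, is everything the sketch leaves out: detecting the content in the first place, deduplicating reposts, and making the reporting form findable for victims.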
California’s 2024 laws on sexually explicit deepfakes and AI provenance
California isn’t waiting for Washington to do all the work. In September 2024, Governor Gavin Newsom signed bills aimed at cracking down on sexually explicit deepfakes and requiring provenance disclosures (watermarking / detectability) in widely used generative AI systems, along with reporting/removal mechanisms for sexually explicit deepfakes on social platforms. (Governor of California)
Put simply: if you’re building a generative model company and you operate in (or touch users in) California, “we’ll figure it out later” isn’t a strategy. It’s a deposition waiting to happen.
Global pressure is piling on (and not in a friendly “EU compliance workshop” way)
The Grok controversy isn’t confined to California. TechCrunch reported pressure mounting from multiple governments and regulators, and other outlets have detailed actions abroad.
Europe: document retention orders and the Digital Services Act (DSA) shadow
In early January 2026, European reporting indicated the European Commission ordered X to retain internal documents and data related to Grok until the end of 2026—commonly understood as a preservation step that keeps evidence from evaporating during potential enforcement. (The Brussels Times)
Retention orders are not the same thing as a final ruling, but they’re also not a “nothingburger.” They’re the bureaucratic equivalent of the lights coming on in the club.
United Kingdom: Online Safety Act enforcement pressure
TechCrunch also notes that Ofcom opened a formal investigation under the UK’s Online Safety Act. That law is designed to push platforms toward risk assessments and mitigation of illegal/harmful content, and it has teeth. (TechCrunch)
Asia: blocks and removals
Reuters reporting syndicated by outlets including Al Jazeera and The Guardian described blocks or access restrictions in Malaysia and Indonesia and takedown demands in India, amid concerns about sexualized imagery and deepfakes. (Al Jazeera / Reuters)
This is the “AI governance” reality many US companies still underestimate: you don’t just ship one product. You ship one product into dozens of legal regimes, each with its own definition of illegal sexual imagery, child safety obligations, and intermediary liability.
The crucial distinction: CSAM vs. nonconsensual sexual imagery
Musk’s statement uses a very specific phrase: “naked underage images.” That’s narrower than the broader concerns regulators are raising.
Legally and operationally, at least three categories matter:
- CSAM (child sexual abuse material): illegal in the US. Whether an image is AI-generated or real, the classification can still apply under many legal frameworks when it depicts sexually explicit conduct involving a minor.
- Sexualized imagery that appears to involve minors: may still create legal exposure and can be treated as illegal in some jurisdictions; it also creates obvious child-safety harms, and can be escalated into more explicit forms.
- NCII / deepfake nudes of adults: increasingly illegal (or at least actionable) across states and under federal rules, but definitions and thresholds vary; harm to victims is typically immediate.
From a victim’s perspective, the taxonomy can feel cold. From a compliance perspective, it is everything. Regulators and law enforcement treat potential CSAM as a different planet of severity, urgency, and consequences.
That may be why TechCrunch quotes New York Law School associate professor Michael Goodyear, who suggests Musk’s denial is likely focused on CSAM exposure, where the penalties are higher. (TechCrunch)
How safeguards usually work (and why they often fail in practice)
Most image-generation systems rely on a layered approach:
- Prompt filtering: block or refuse certain requests.
- Output classifiers: detect nudity, apparent minors, violence, and other policy violations in the generated result.
- Watermarking / provenance: label AI-generated content and enable downstream detection.
- Rate limits and friction: reduce scale, slow down abuse, require verification.
- Human escalation: for edge cases, appeals, and enforcement.
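To make “layered” concrete, here is a minimal sketch of how those checks might compose, assuming a fail-closed posture at each stage. Every keyword, threshold, and function here is a toy placeholder rather than a description of Grok’s actual implementation, and rate limiting and human escalation are omitted for brevity.

```python
from dataclasses import dataclass

# Illustrative sketch of a layered safety pipeline for an image generator.
# Keyword lists, thresholds, and wiring are toy placeholders, not any
# vendor's real system. Rate limits and human escalation are omitted.

RISKY_PROMPT_TERMS = {"undress", "remove clothes", "teen", "schoolgirl"}  # toy list

@dataclass
class Decision:
    allowed: bool
    reason: str

def screen_prompt(prompt: str) -> Decision:
    """Layer 1: refuse clearly disallowed requests before any generation."""
    lowered = prompt.lower()
    if any(term in lowered for term in RISKY_PROMPT_TERMS):
        return Decision(False, "prompt_filter")
    return Decision(True, "ok")

def screen_output(labels: dict) -> Decision:
    """Layer 2: classify the generated result, not just the request.
    `labels` stands in for scores from an output classifier."""
    if labels.get("sexual_content", 0) > 0.5 and labels.get("apparent_minor", 0) > 0.1:
        return Decision(False, "output_filter")
    return Decision(True, "ok")

def generate_with_guardrails(prompt, generate, classify, watermark):
    """Compose the layers, failing closed at each one and logging refusals."""
    pre = screen_prompt(prompt)
    if not pre.allowed:
        print(f"refused ({pre.reason}): {prompt!r}")      # stand-in for an audit log
        return None
    image = generate(prompt)                              # model call (injected stub)
    post = screen_output(classify(image))
    if not post.allowed:
        print(f"suppressed ({post.reason}): {prompt!r}")  # quarantine, never post
        return None
    return watermark(image)                               # Layer 3: label provenance

# Toy wiring to show the control flow; lambdas stand in for real components.
blocked = generate_with_guardrails(
    "undress this photo of my coworker",
    generate=lambda p: b"<image bytes>",
    classify=lambda img: {"sexual_content": 0.9, "apparent_minor": 0.0},
    watermark=lambda img: img,
)
print(blocked)  # None: stopped at the prompt layer, never generated
```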
Those layers fail when policy is ambiguous (“assume good intent”), when friction is inconsistent across interfaces, when distribution is instant, or when “adult content mode” is treated as a marketing differentiator instead of a hazard.
Ars Technica has reported on Grok’s safety approach and highlighted that certain rules may instruct the system to assume good intent even when users request images using terms that could imply youth, which is exactly the wrong place to be generous. (Ars Technica)
To be clear: intent classification is hard for models. The practical solution is not “read the user’s soul.” It’s “block risky patterns, add friction, and heavily constrain editing on real people’s photos.” That will annoy some legitimate users. It will also protect people who never agreed to be part of your product demo.
Case study: the “adult creator marketing hack” that turned into a mass abuse pattern
TechCrunch describes a pattern that will sound familiar to anyone who’s watched social platforms evolve: a feature gets adopted by a legitimate niche (in this case, adult content creators generating sexualized imagery of themselves as marketing), then it becomes a template for everyone else, including people with malicious intent.
The mechanism is predictable:
- A small group demonstrates a “fun” use case.
- It spreads as a trend.
- Bad actors replicate it using nonconsensual targets because it’s easier than recruiting consenting participants.
- The platform responds with half-measures (friction in one place, loopholes in another).
This is why safety can’t be purely reactive. If your product can do “virtual undressing” at all, it will do it to people who did not opt in—because the internet is not a controlled lab environment; it is a stadium parking lot after a rivalry game.
What California’s investigation could focus on
Bonta’s announcement says the investigation will examine whether and how xAI violated the law. Publicly, we don’t know the exact theory yet. But based on the press release and the broader legal landscape, a probe could examine questions like:
- Product design choices: Did “spicy mode” and image editing features increase foreseeable harm? Were guardrails reasonable?
- Operational response: How quickly did xAI/X respond once abuse was reported? Were fixes rolled out across all product surfaces?
- Reporting and victim support: Were victims given workable tools to report and remove content? Was removal effective across reposts?
- Compliance posture: Did xAI treat this as a trust & safety emergency or as a PR nuisance?
One complicating factor: xAI and X are closely linked (TechCrunch notes they are part of the same company), which means regulators may view “model behavior” and “platform distribution” as a unified system rather than separate responsibilities. (TechCrunch)
What xAI could do next (if it wants this story to stop getting worse)
There’s no single fix, but there are sensible steps that would signal seriousness:
1) Disable image editing of real-person photos by default
If a tool can “undress” people by editing photos, it will. A default-off posture (with verified consent mechanisms for exceptions) is a straightforward risk reduction move.
2) Treat “youth-adjacent” prompts as high risk
Any mention of “teen,” “girl,” “school,” “young,” or similar should trigger strict refusals and internal logging, even if the user insists they meant “adult.” This is the opposite of “assume good intent,” and that’s the point.
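A deliberately blunt version of that screen is easy to sketch. The pattern list below is a toy example, not xAI’s actual rule set; a production system would need multilingual terms, obfuscation handling, and a trained classifier layered on top of any keyword match.

```python
import re

# Toy illustration of a "refuse first" screen for youth-adjacent prompts.
# The pattern list is deliberately incomplete and is not xAI's rule set.
YOUTH_ADJACENT_PATTERNS = [
    r"\bteen(age(r)?)?s?\b",
    r"\bminors?\b",
    r"\bschool\s*(girl|boy)?s?\b",
    r"\byoung\b",
    r"\b1[0-7]\s*(yo|y/o|year[- ]olds?)\b",
]
_YOUTH_RE = re.compile("|".join(YOUTH_ADJACENT_PATTERNS), re.IGNORECASE)

def must_refuse(prompt: str) -> bool:
    """Hard-refuse (and log) regardless of the user's stated intent."""
    return bool(_YOUTH_RE.search(prompt))

print(must_refuse("make her look like a teen"))              # True -> refuse and log
print(must_refuse("portrait of a 32 year old adult woman"))  # False
```

Yes, a screen like this will over-block. For this category, over-blocking is the acceptable failure mode.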
3) Strong output-side detection (not just prompt filters)
Prompt filters are trivial to bypass. Output classifiers that detect nudity, sexual content, and youth features are imperfect—but when combined with friction and human review, they reduce scale.
4) Provenance and watermarking that actually travels
California’s SB 942-style provenance requirements are explicitly designed to enable detection tools. If Grok-generated images are flooding X, it should be trivial for X to label and downrank them, and to trace them when reports come in.
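As a toy illustration of the provenance idea, the sketch below writes and reads a metadata tag with Pillow. Real provenance schemes, such as C2PA manifests or robust invisible watermarks, are cryptographically signed and designed to survive re-encoding; a plain text chunk like this is trivially stripped, so treat it as an explainer rather than a compliance mechanism.

```python
from PIL import Image                      # requires Pillow (pip install pillow)
from PIL.PngImagePlugin import PngInfo

# Toy provenance labeling via a PNG text chunk. The "ai_provenance" key is
# hypothetical. Real systems use signed manifests (e.g., C2PA) and robust
# watermarks that survive screenshots and re-encoding; this one does not.

def tag_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    meta = PngInfo()
    meta.add_text("ai_provenance", f"generated-by={generator}")
    with Image.open(src_path) as img:
        img.save(dst_path, pnginfo=meta)

def read_provenance(path: str) -> str | None:
    with Image.open(path) as img:
        return img.info.get("ai_provenance")

# Usage (assuming a local PNG):
#   tag_as_ai_generated("raw.png", "labeled.png", "example-image-model")
#   read_provenance("labeled.png")   # -> "generated-by=example-image-model"
```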
5) Publish a real transparency report for Grok image harms
Not a vague “we take action against illegal content” post. A report that includes metrics: number of reports, response time, percentage removed, repeat offenders, and what model/policy changes were made.
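The metrics themselves are not exotic. Here is a minimal sketch of computing them from a log of reports; the field names are hypothetical, and the hard part in practice is honest data collection, not the arithmetic.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import median

# Toy sketch of transparency-report metrics over a log of abuse reports.
# Field names are hypothetical; nothing here describes a real X/xAI dataset.

@dataclass
class Report:
    offender_id: str
    received_at: datetime
    removed_at: datetime | None   # None means the content is still up

def summarize(reports: list[Report]) -> dict:
    removed = [r for r in reports if r.removed_at is not None]
    response_hours = [
        (r.removed_at - r.received_at).total_seconds() / 3600 for r in removed
    ]
    offenders = [r.offender_id for r in reports]
    return {
        "reports": len(reports),
        "removal_rate": len(removed) / len(reports) if reports else 0.0,
        "median_response_hours": median(response_hours) if response_hours else None,
        "repeat_offenders": len({o for o in offenders if offenders.count(o) > 1}),
    }
```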
None of this guarantees regulators will be satisfied. But it changes the narrative from “we’ll fix bugs when you show us screenshots” to “we’re running an adult-capable image system responsibly.”
What this means for the AI industry: the “image layer” is now regulated like a weapon
For years, AI safety debates have been dominated by text harms: misinformation, hate speech, election interference. Image generation is forcing a different realization: some capabilities are inherently abuse-prone because they target identity, body autonomy, and sexual integrity.
That pushes the industry toward a few likely outcomes:
- More friction and verification for editing real-person images
- Stronger provenance standards baked into mainstream tools
- Greater liability exposure for “we just provide the tool” defenses
- App store pressure when platforms are seen as enablers (as reported by Reuters/Al Jazeera regarding calls to remove apps)
We’re also heading into a period where regulators will compare companies against each other. If one vendor can show robust safeguards and another vendor ships “spicy mode” plus an edit button plus viral distribution, the second vendor will not enjoy the benefit of the doubt.
A note on responsibility: “users requested it” isn’t a shield
Musk’s point that Grok doesn’t “spontaneously generate images” is technically true and practically irrelevant. If a bar serves alcohol only when requested, it is still expected to check IDs. If a company ships a system that can generate sexualized images of real people without consent, it is expected to implement reasonable protections—especially when the output can be posted instantly to millions.
The internet has always been full of bad actors. The novelty here is industrial-grade convenience.
Where this goes next
As of January 14, 2026, the California Attorney General’s investigation is newly announced, and xAI has not publicly laid out a detailed remediation plan. TechCrunch reported it contacted xAI for comment and would update if the company responds. (TechCrunch)
The next milestones to watch:
- Whether xAI publishes policy changes (not just product tweaks)
- Whether X implements systemic friction on image editing and posting
- Whether the California investigation results in enforcement, settlement, or mandated changes
- Whether EU/UK actions escalate beyond retention and inquiry into formal penalties
If you’re a developer or product leader shipping generative media: take note. The “move fast and break things” era is evolving into “move fast and break laws,” and regulators are increasingly comfortable with the idea that if you built the system, you own its predictable misuse.
Sources
- TechCrunch (Rebecca Bellan) – Musk denies awareness of Grok sexual underage images as California AG launches probe (Jan 14, 2026)
- California Department of Justice – Attorney General Bonta launches investigation into xAI/Grok (Jan 14, 2026)
- Office of Governor Gavin Newsom – Bills on sexually explicit deepfakes and AI watermarking/provenance (Sep 19, 2024)
- Associated Press – Trump signs Take It Down Act (Apr 29, 2025)
- Ars Technica – Musk still defending Grok’s partial nudes as California AG opens probe (Jan 2026)
- Ars Technica – Grok assumes users seeking images of underage girls have “good intent” (Jan 2026)
- Al Jazeera (Reuters) – Musk denies knowledge of Grok producing sexualised images of minors (Jan 14, 2026)
- The Brussels Times – European Commission orders X to retain documents relating to Grok (Jan 8, 2026)
Bas Dorland, Technology Journalist & Founder of dorland.org