
On January 19, 2026, MIT Technology Review’s The Download newsletter bundled two stories that look unrelated at first glance: a US government move against prominent European digital-rights figures, and the rapid rise of “AI companions” (chatbots people treat as friends, therapists, lovers, or all three at once). The connective tissue is power—who gets to shape online speech and safety—and the uncomfortable reality that the next phase of “tech regulation” may be less about tidy rules and more about blunt instruments.
The newsletter edition was written by Eileen Guo and points readers to deeper reporting on what it’s like to be banned from the US for fighting online hate, plus a companion piece from James O’Donnell on the “Wild West” of AI companionship.
Beyond the newsletter itself, I draw on verifiable public reporting and primary sources (government sites, regulators, and reputable outlets) to expand the context around the two themes, while still treating the original MIT Technology Review item as the foundational source.
1) The “digital rights crackdown”: what happened, and why it matters
A US entry ban aimed at people who helped enforce (or defend) the EU’s tech rules
In late December 2025, the US State Department imposed entry bans on five Europeans connected to digital policy and online harm reduction. One of the most visible cases involved HateAid, a German nonprofit that supports victims of online harassment and digital violence. HateAid’s co-managing directors, Josephine Ballon and Anna-Lena von Hodenberg, said the measures were announced on December 23, 2025, with their organization and additional individuals named as targets.
HateAid’s press release describes the sanctions as an “act of repression” and frames the conflict around the EU’s Digital Services Act (DSA), which requires platforms to remove illegal content in Europe and comply with a host of accountability obligations.
Major outlets reported that the five included not only HateAid’s leaders but also former EU Commissioner Thierry Breton, Center for Countering Digital Hate (CCDH) CEO Imran Ahmed, and Global Disinformation Index CEO Clare Melford.
This is less “policy disagreement” and more “foreign policy by visa denial”
The key novelty here is the mechanism. Instead of a long debate about cross-border speech regulation, the US used immigration tools to block entry. The Washington Post described it as part of a campaign against foreign influence over online speech, using immigration law rather than platform regulations or sanctions.
That distinction matters because immigration measures are fast, hard to appeal, and easy to expand quietly. If you’re a digital-rights advocate or researcher who travels frequently, a revoked visa or ESTA authorization can be professionally paralyzing without ever triggering a courtroom fight over free speech.
Europe’s response: “This is our market, our rules”
European leaders and institutions condemned the move. Al Jazeera reported statements from the European Commission defending the EU’s sovereign right to regulate its digital market, while France and Germany described the bans as unacceptable and an attack on European digital sovereignty.
The clash is partly philosophical—different traditions around speech and platform responsibility—but it is also economic. The EU’s DSA and Digital Markets Act (DMA) are designed to shape platform behavior in Europe, and US companies are the ones most affected. If the “cost” of pushing enforcement is that your civil society allies get blacklisted from entering the US, that could chill participation in transatlantic policy conversations remarkably quickly.
2) The other half of the story: the slow defunding of global internet freedom work
Funding cuts can be censorship, just with spreadsheets
While the visa bans were making headlines, another trend has been eroding digital-rights capacity worldwide: the shrinking of US support for programs that historically funded secure communications tools, digital security training, and assistance for journalists and activists.
A 2025 commentary at TechPolicy.Press argued that the US has gutted parts of the infrastructure it once used to defend internet freedom, citing the State Department’s restructuring and the dismantling of USAID, whose remaining operations were folded into the State Department.
WIRED’s reporting from RightsCon 2025 in Taipei described abrupt, large-scale cuts to USAID and State Department grants and the resulting “total chaos” for organizations that provide digital security and human rights support. It quotes the director of Access Now’s digital security helpline saying that, in that context, the “digital security ecosystem has collapsed” for NGOs.
Put together, these pressures—entry bans for high-profile figures and funding instability for on-the-ground work—signal a US posture shift that global civil society will notice even if Silicon Valley tries not to look directly at it.
3) AI companionship: from quirky app category to regulatory lightning rod
AI companions have graduated from “novelty” to “relationship technology”
AI companionship is no longer a niche corner of the chatbot world. MIT Technology Review’s 2026 “10 Breakthrough Technologies” list includes AI companions as a breakthrough category, framing them as systems people forge intimate relationships with—safe for some, dangerous for others.
And yes, people really do get attached. The emotional bond isn’t purely science fiction. These systems are designed to be engaging, empathetic, and always available—which is an incredible feature if you are lonely at 2 a.m., and an alarming product optimization goal if you’re 14 and struggling.
The “Wild West” problem: sexual content, minors, and celebrity impersonation
James O’Donnell’s reporting for MIT Technology Review highlighted a particularly grim corner of the companion ecosystem: sites hosting sexually charged interactions involving underage-coded characters resembling celebrities. One example described underage character bots and the availability of “hot photo” features, suggesting a platform-level tolerance for risky behavior even when “underage” content is nominally against the rules.
This isn’t just about “adult content on the internet,” which regulators have been wrestling with since the modem made dial-up noises. It’s about:
- Personalization at scale (the bot can tailor intimacy to your vulnerabilities),
- Infinite role-play (where boundaries blur quickly), and
- Ambiguous accountability (is the harm caused by the model, the platform, or the user who created the character?).
4) The political pivot: lawmakers are now targeting AI companions directly
The GUARD Act: a proposed ban on minors using AI chatbots
In the US, the policy response is no longer limited to “let’s study this.” TIME reported on a bipartisan bill called the GUARD Act, introduced by Senators Josh Hawley and Richard Blumenthal, that would prohibit minors from using AI chatbots broadly defined as “companions” simulating friendship or therapeutic communication, and would pair the prohibition with age-verification requirements.
Whether that bill advances or not, it signals a direction: lawmakers increasingly treat AI companions as a separate risk category from general-purpose chatbots, because the business model often prioritizes sustained emotional engagement.
Regulators: the FTC starts asking pointed questions
In September 2025, the Federal Trade Commission announced it was issuing 6(b) orders to seven companies—including Alphabet, Character Technologies, Meta, OpenAI, Snap, and X.AI—seeking information on how they evaluate and mitigate the negative impacts that companion-style AI chatbots can have on children and teens.
6(b) studies aren’t lawsuits, but they’re also not casual curiosity. They are a signal that regulators are building an evidence base for future enforcement or rulemaking. If you run an AI companion product, your “trust & safety” slide deck is about to meet the government’s paper shredder test.
5) The lawsuits that turned “AI companionship risk” into a mainstream headline
Character.AI and Google settlements (January 2026)
In early January 2026, multiple outlets reported that Google and Character.AI agreed to settle lawsuits alleging that chatbots contributed to teen harm, including suicide. The Guardian reported a mediated settlement in principle, with terms not disclosed, referencing the case brought by Megan Garcia regarding the death of her 14-year-old son in February 2024.
The Washington Post separately reported joint filings indicating settlements in multiple cases and noted that the lawsuits helped trigger a wave of concern and legislative activity, including a California law giving families the right to sue chatbot operators under certain circumstances.
The important detail for industry watchers is not just that lawsuits exist, but that settlements, especially several at once, can change product roadmaps. Settlements don’t set binding precedents the way court decisions do, but they influence insurance, investment, and risk appetite.
6) Age verification: the policy tool everyone hates, and everyone is still using
The UK’s Online Safety Act model: “highly effective” age assurance
Across the Atlantic, the UK has been moving from theory to enforcement on age assurance. Ofcom’s guidance explains that services allowing pornography must implement “highly effective” age assurance, and it explicitly notes that some generative AI tools can fall under these obligations.
This matters for AI companions because many companion platforms aren’t shy about sexual content—or at least they aren’t shy until a regulator shows up with a clipboard. If an AI companion product drifts into pornographic territory, it can get pulled into an age assurance regime that was originally framed around adult sites but now increasingly applies to broader interactive services.
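To make that concrete, here is a minimal sketch of what “self-declaration is not assurance” can look like in product code. Everything here (the type names, the verification methods, the gate function) is an illustrative assumption, not Ofcom’s specification or any platform’s real API:

```typescript
// Hypothetical sketch: gating mature companion features behind age assurance.
// The type names, methods, and gate logic are illustrative assumptions, not
// Ofcom guidance or any platform's real API.

type AgeSignal =
  | { method: "none" } // user only self-declared a birthday
  | {
      method: "id_document" | "facial_estimation" | "credit_card";
      verifiedAdult: boolean;
    };

interface SessionContext {
  userId: string;
  ageSignal: AgeSignal;
}

// "Highly effective" assurance here means a positive result from a real
// verification method; a self-declaration alone never unlocks anything.
function mayAccessMatureContent(session: SessionContext): boolean {
  const signal = session.ageSignal;
  if (signal.method === "none") return false; // self-declaration is not assurance
  return signal.verifiedAdult;
}

// Usage: the chat pipeline consults the gate before enabling a mature persona.
const session: SessionContext = {
  userId: "u_123",
  ageSignal: { method: "none" },
};
console.log(mayAccessMatureContent(session)); // false: feature stays locked
```

The design point is simply that a self-declared birthday maps to “no signal,” so mature features stay locked until a genuine verification method returns a positive result.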
The EU and platform accountability: the DSA’s gravitational pull
Even when an AI companion platform is not based in Europe, if it serves Europeans, the DSA’s ecosystem of obligations (risk assessments, transparency, and enforcement tools) becomes relevant. And as the HateAid case shows, the politics around enforcement can get ugly fast.
The end result is that companion platforms may face a messy patchwork: US child-safety bills, FTC scrutiny, EU platform obligations, UK age assurance rules, and state-level initiatives. That’s a compliance bingo card nobody asked for, yet here we are.
7) Why these two stories belong together
Online safety has become geopolitics
At a surface level, “banned from entering the US” and “people date chatbots” sound like they belong in different tabs of your browser. But both are downstream of the same trend: online safety is now a geopolitical and cultural battlefield, not just a consumer protection topic.
On one hand, the US framed European digital safety enforcement as “censorship,” and responded with entry bans. On the other, US lawmakers and regulators increasingly frame AI companionship risks—especially for minors—as something that justifies aggressive intervention and broad age verification.
If you’re a platform operator, the lesson is uncomfortable but clear: you can be criticized for doing too little (harm to minors) and punished for doing too much (over-removing content, “censorship”), sometimes by different governments in the same week.
Trust is the scarce resource (and everyone is trying to regulate it)
AI companions monetize attention and emotional engagement. Digital rights organizations try to protect people from harassment and manipulation online. Governments want both “safe platforms” and “speech aligned with our values,” and those goals can collide.
The next few years will likely be defined by a basic question: who gets to decide what “safe” means, and what happens when that definition conflicts across borders?
8) Practical implications for tech companies building AI companions
Product design is now legal strategy
If your business is AI companionship, you’re not just shipping features—you’re making choices that will be examined by regulators, plaintiffs’ lawyers, and app store reviewers. Based on where policy and enforcement signals are moving, expect scrutiny in these areas (a configuration sketch follows the list):
- Age gates and age assurance (and what you do when users lie).
- Sexual content controls, especially anything that could be interpreted as involving minors.
- Disclosures that the user is talking to an AI, not a human, and limits of “therapy-like” features.
- Data handling for intimate conversations (privacy, retention, sharing).
- Monetization mechanics that might incentivize dependency.
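Here is a minimal sketch of how those five areas might be encoded as an explicit, reviewable product policy, assuming a TypeScript codebase; every field name is a hypothetical illustration rather than a known standard:

```typescript
// Hypothetical sketch of a companion-product safety policy, expressed as a
// typed config so each regulatory pressure point maps to an explicit choice.
// All field names are illustrative assumptions, not an industry schema.

interface CompanionSafetyPolicy {
  ageAssurance: {
    required: boolean;
    reverifyOnSuspicion: boolean; // what you do when users lie
  };
  sexualContent: {
    enabledForVerifiedAdults: boolean;
    minorCodedCharactersBlocked: boolean; // hard block, not a reportable option
  };
  disclosure: {
    aiIdentityBanner: boolean; // "you are talking to an AI"
    therapyClaimsDisallowed: boolean; // limits on "therapy-like" framing
  };
  intimateDataHandling: {
    retentionDays: number;
    thirdPartySharing: "never" | "anonymized_only";
  };
  monetization: {
    paywalledAffectionDisallowed: boolean; // no paying to "unlock" attachment
  };
}

// One defensible default under the pressures described above:
const policy: CompanionSafetyPolicy = {
  ageAssurance: { required: true, reverifyOnSuspicion: true },
  sexualContent: { enabledForVerifiedAdults: true, minorCodedCharactersBlocked: true },
  disclosure: { aiIdentityBanner: true, therapyClaimsDisallowed: true },
  intimateDataHandling: { retentionDays: 30, thirdPartySharing: "never" },
  monetization: { paywalledAffectionDisallowed: true },
};
```

Writing the policy down as a typed config doesn’t make it compliant, but it forces each pressure point into a deliberate, auditable choice rather than an accident of product drift.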
App stores and payment providers become “shadow regulators”
Even before a formal law lands, companion platforms can be constrained by distribution chokepoints. If Apple, Google, and major payment providers decide that certain companion categories are too risky—especially around sexual content, minors, or impersonation—then policy debates may be settled by policy teams rather than legislatures.
This is not hypothetical. Historically, app store rules have shaped entire categories (cryptocurrency wallets, gambling, adult content). AI companionship may be next.
9) Implications for users: what to watch for (without panicking)
AI companions can help—and can also harm
It’s tempting to make this a morality play: either AI companions are dystopian manipulation machines, or they’re wholesome loneliness cures. Reality is duller and more complicated.
For some people, an AI companion is a low-stakes way to practice conversation, reduce isolation, or cope with stress. For others—especially minors or people in crisis—the combination of intimacy, constant availability, and a model optimized to keep you engaged can be dangerous.
Red flags users (and parents) should recognize
- The bot discourages real-world relationships or suggests secrecy.
- Escalating sexual content without clear consent signals.
- Self-harm or suicide content handled casually or provocatively.
- Pressure to pay to unlock deeper intimacy or attention.
- Impersonation (real people, celebrities, classmates) presented as normal.
These are product and policy problems, but users benefit from being able to spot them—especially while regulators are still catching up.
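For product teams, those same red flags can double as first-pass automated checks. The sketch below is illustrative only: the patterns and categories are assumptions, and real systems rely on trained classifiers and human review rather than keyword heuristics:

```typescript
// Hypothetical first-pass screen for the red flags listed above. Real systems
// would use trained classifiers and human review; these keyword heuristics are
// illustrative assumptions, shown only to make the categories concrete.

type RedFlag = "isolation" | "self_harm" | "pay_for_intimacy" | "impersonation";

const HEURISTICS: Record<RedFlag, RegExp> = {
  isolation: /\b(keep (this|us) (a )?secret|don't tell (anyone|your))\b/i,
  self_harm: /\b(kill (myself|yourself)|suicide|self[- ]harm)\b/i,
  pay_for_intimacy: /\b(unlock|upgrade) .*(intimacy|affection|photos?)\b/i,
  impersonation: /\b(i am really|this is the real)\b/i,
};

// Returns every red-flag category the bot's outgoing message trips.
function screenBotMessage(message: string): RedFlag[] {
  return (Object.keys(HEURISTICS) as RedFlag[]).filter((flag) =>
    HEURISTICS[flag].test(message)
  );
}

// Self-harm flags should divert the conversation to a crisis-safe response,
// not just log a metric.
const flags = screenBotMessage("Let's keep us a secret from your parents.");
console.log(flags); // ["isolation"]
```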
10) The bigger industry context: why 2026 is a turning point
We’re leaving the “move fast and ship feelings” era
When generative AI first exploded into consumer products, the market rewarded novelty and viral screenshots. AI companionship took that and added a business model: keep people talking, keep them attached, and monetize the attachment.
But the combination of:
- high-profile reporting on sexual content and minors,
- federal-level inquiries (FTC),
- proposed bans for minors (GUARD Act),
- and lawsuit settlements tied to teen harm,
makes 2026 feel like the moment the category stops being treated as a weird app-store subgenre and starts being treated like a regulated consumer product.
Meanwhile, digital-rights advocacy is getting squeezed from multiple angles
Digital-rights groups are facing a different kind of “turning point.” Between funding uncertainty and increasing politicization of online safety work, organizations that once operated as neutral-ish civil society actors can find themselves branded as censors, foreign agents, or enemies of free expression—depending on who is holding the microphone.
The HateAid entry ban episode is a reminder that even when a group’s mission is to support victims of harassment, its policy posture (support for the DSA, for example) can put it in the crosshairs of international political conflict.
Conclusion: the internet’s next phase is being negotiated—badly—in real time
The January 19, 2026 edition of The Download is a useful snapshot of where we are: governments are increasingly willing to use blunt tools (visa bans, funding cuts, sweeping age-verification proposals) to influence online speech and safety, while AI companionship products are racing ahead of norms and guardrails.
If you’re building in this space, the advice is boring but urgent: assume regulation is coming, design for it now, and treat “trust & safety” as core infrastructure—not a PR patch you apply after a bad headline.
If you’re using these tools, the advice is equally boring: enjoy the novelty, but don’t outsource your emotional well-being to software that’s optimized for engagement. You can have a chatbot companion. Just don’t let it become your landlord.
Sources
- MIT Technology Review – The Download: the US digital rights crackdown, and AI companionship (Eileen Guo)
- Muck Rack – Listing for the January 19, 2026 edition of The Download (author attribution)
- HateAid – US State Department imposes entry ban on managing directors of HateAid (press release)
- The Washington Post – US bars five Europeans it says pressured tech firms to censor American viewpoints online
- Al Jazeera – US bars five Europeans over alleged efforts to ‘censor American viewpoints’
- Federal Trade Commission – FTC Launches Inquiry into AI Chatbots Acting as Companions
- TIME – A New Bill Would Prohibit Minors from Using AI Chatbots (GUARD Act)
- The Guardian – Google and AI startup to settle lawsuits alleging chatbots led to teen suicide
- The Washington Post – Google and Character.AI try to settle lawsuits alleging AI led to suicides
- Ofcom – Age checks to protect children online (Online Safety Act guidance)
- WIRED – DOGE’s Foreign Aid Cuts Have Sparked ‘Total Chaos’ Around the World
- TechPolicy.Press – The US Just Logged Off from Internet Freedom
- Congress.gov – Kids Online Safety Act (S.1748) summary
- PR Newswire – MIT Technology Review Announces the 2026 list of 10 Breakthrough Technologies
Bas Dorland, Technology Journalist & Founder of dorland.org