Is the Pentagon Allowed to Surveil Americans With AI? The Legal Loopholes, Data Broker Problem, and What Comes Next

AI is doing to surveillance what SSDs did to hard drives: it’s making something that used to be slow, noisy, and expensive suddenly fast, quiet, and—crucially—cheap enough to do at scale.

That’s why the question posed by MIT Technology Review, “Is the Pentagon allowed to surveil Americans with AI?”, isn’t academic. It’s the kind of question that determines whether “oversight” means meaningful friction, or merely a PDF with good intentions and bad footnotes.

Before we get into the weeds, credit where it’s due: this article is inspired by that MIT Technology Review piece (March 6, 2026) and its original creator, Charlie Osborne (as credited by MIT Technology Review). If you want the original framing that sparked this analysis, start there.

Now, let’s answer the question in the most annoying journalist way possible: it depends on what you mean by “surveil,” and on what the Pentagon is doing with the data. Which is not a cop-out—it’s the core problem. In 2026, the US legal and policy landscape still draws big lines around “electronic surveillance,” “search,” and “targeting.” Meanwhile, AI helps agencies do things that feel like surveillance to normal humans but don’t neatly fit older statutory definitions.

1) Surveillance, “Surveillance,” and Why Definitions Are the Battlefield

One of the most important points raised in the Technology Review discussion (and echoed by legal scholars in similar debates) is that many activities people intuitively think of as surveillance are not always treated as surveillance under US law.

For example, collecting or analyzing:

  • Publicly available information (social posts, public records, footage posted online)
  • Commercially available information (data purchased from brokers, marketing datasets, “anonymized” device-location feeds)
  • Information incidentally collected about US persons during foreign intelligence activities

None of these require a Hollywood-style wiretap van. They often don’t require a warrant. And when you add AI to the mix, the same raw ingredients can be turned into a detailed portrait: who you associate with, where you go, what you do at night, what you read, what you buy, and which clinic you visited (whether you told anyone or not).

That’s the “supercharging” effect: AI doesn’t have to create a new surveillance authority to make surveillance more powerful. It makes existing authorities—and existing loopholes—more scalable.

2) The Pentagon’s Mission Isn’t Domestic Policing—But the Data Doesn’t Care

Legally, the Department of Defense is not a domestic law enforcement agency. The US has long-standing norms (and laws like the Posse Comitatus Act) that generally restrict military involvement in domestic law enforcement.

But here’s the modern reality: data doesn’t respect organizational charts. Data moves across agencies, contractors, “intelligence assistance” relationships, fusion centers, and interagency task forces. The practical question becomes less “Is the Pentagon running domestic surveillance?” and more:

  • What kinds of data can DoD intelligence components collect about US persons?
  • What kinds of data can they buy?
  • What can they retain?
  • Who can they share it with?
  • What can they query it for?
  • How does AI change the effective scope of all of the above?

3) The Big Legal Pillars: EO 12333, FISA Section 702, and the “Outside the US” Trick

Two frameworks dominate modern US foreign intelligence collection discussions:

  • Executive Order 12333, the backbone for a large amount of foreign intelligence collection—especially collection that occurs outside the US or otherwise avoids the traditional FISA court process.
  • FISA Section 702, which allows targeting of non-US persons reasonably believed to be outside the US to acquire foreign intelligence, while “incidentally” collecting Americans’ communications when they interact with those targets.

Both have oversight mechanisms. Both have civil liberties language. Both have a persistent, well-documented critique: Americans’ data can be swept up without a traditional warrant, and agencies can often query the resulting datasets for US person information under rules that critics argue are too permissive.

Here’s why AI matters: in older eras, “incidental collection” might still have been limited by what humans could practically sift through. AI makes that constraint vanish. A system can summarize, classify, connect, and surface US-person-related information as a byproduct of analyzing huge collections.

EO 14086 and the “necessary and proportionate” language

In October 2022, President Biden issued Executive Order 14086 to enhance safeguards for US signals intelligence activities, largely driven by international data transfer and privacy concerns. It introduces “necessary and proportionate” framing, objectives, and a redress mechanism for certain complaints.

Critics, including the Brennan Center, argue that the order still leaves room for bulk collection and that the protections for Americans caught in these systems remain limited—particularly when compared with what the Fourth Amendment intuitively suggests to most people.

4) The Data Broker Loophole: “We Didn’t Spy, We Shopped”

If there’s a single theme that keeps popping up whenever lawmakers and civil liberties groups argue about AI surveillance, it’s this: commercial data markets let the government buy access to information it might otherwise need a warrant to compel.

This isn’t speculation; it’s a documented policy concern at the highest levels of government. In February 2024, the Associated Press reported on an executive order aimed at better protecting Americans’ sensitive personal data from foreign adversaries—because commercial data brokers can enable tracking of Americans, including military service members.

But foreign adversaries aren’t the only reason the data broker market matters. The same pipelines that feed advertisers can feed government buyers. And AI makes those pipelines dramatically more useful.

ODNI’s Commercially Available Information (CAI) framework

On May 8, 2024, the Office of the Director of National Intelligence released an Intelligence Community Policy Framework for Commercially Available Information—an attempt to establish baseline standards for how IC elements categorize, acquire, and handle CAI.

Supporters describe this as a privacy step forward because it acknowledges sensitivity categories and calls for safeguards. Critics argue it doesn’t go far enough because it doesn’t categorically prohibit the purchase of certain sensitive datasets about Americans, instead relying heavily on internal processes and reporting.

FTC actions show how sensitive location data gets—and stays—dangerous

The Federal Trade Commission has been increasingly aggressive about geolocation data abuses. For instance, the FTC sued data broker Kochava in 2022, alleging it sold data that could track people to sensitive locations like reproductive health clinics and places of worship.

Separately, the FTC reached a settlement with General Motors/OnStar related to allegations of sharing driver data, including precise geolocation and driving behavior, without proper consent.

These enforcement actions underscore a key point: “anonymized” data is frequently re-identifiable or practically identifying when combined with other datasets. AI accelerates that recombination.
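To make that concrete, here is a minimal Python sketch, with entirely hypothetical data and field names, of how a “pseudonymous” ad-tech device ID plus timestamps collapses into an identity: the most frequent nighttime grid cell is usually a home, and homes map to names through public records.

```python
# Minimal sketch: why "anonymized" location pings re-identify.
# The device ID is a random pseudonym, but the most frequent nighttime
# grid cell is usually a home address, and home addresses map to names
# via public property or voter records. All data here is hypothetical.
from collections import Counter

def likely_home(pings, night_hours=range(0, 6)):
    """Return the ~100 m grid cell a device occupies most often at night."""
    night_cells = [
        (round(p["lat"], 3), round(p["lon"], 3))
        for p in pings
        if p["hour"] in night_hours
    ]
    return Counter(night_cells).most_common(1)[0][0] if night_cells else None

def reidentify(device_pings, address_records):
    """Join pseudonymous devices to named residents via their home cell."""
    matches = {}
    for device_id, pings in device_pings.items():
        home = likely_home(pings)
        if home in address_records:
            matches[device_id] = address_records[home]
    return matches

pings = {"ad-id-7f3a": [{"lat": 38.8951, "lon": -77.0364, "hour": 2},
                        {"lat": 38.8951, "lon": -77.0364, "hour": 3},
                        {"lat": 38.9012, "lon": -77.0400, "hour": 14}]}
addresses = {(38.895, -77.036): "Resident A (from public records)"}
print(reidentify(pings, addresses))  # {'ad-id-7f3a': 'Resident A (from public records)'}
```

Nothing here is exotic: the only ingredients are a grouping key and a public dataset to join against.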

5) DoD’s Own Rulebook: Intelligence Oversight and DoDD 5240.01

Much of what DoD intelligence components can do is shaped by internal directives and manuals implementing executive orders, statutes, and Attorney General-approved procedures.

Two documents show up repeatedly in oversight discussions:

  • DoD Manual 5240.01 (intelligence procedures and oversight guardrails)
  • DoD Directive 5240.01 (policy and direction for DoD intelligence and intelligence-related activities and intelligence assistance to law enforcement and civil authorities; updated September 27, 2024)

DoD’s Privacy, Civil Liberties, and Transparency office points to EO 12333, as implemented by DoD Manual 5240.01, as a core part of its intelligence oversight obligations.

These documents matter because they define terms, set approval requirements, impose reporting duties, and describe the “how” of intelligence assistance. They also highlight something subtle but important: the DoD’s authority and constraints often hinge on whether an activity is categorized as “intelligence,” “intelligence-related,” “training,” “force protection,” or support to civil authorities. Those labels influence what approvals apply, and which legal regimes are triggered.

The policy fight: transparency, scope, and public trust

DoDD 5240.01 has attracted criticism from some advocacy groups and commentators who argue it expands potential domestic roles or blurs lines with law enforcement support. Others argue the directive is primarily about clarifying assistance rules and governance rather than creating new spying powers. Public-facing debate tends to be heated partly because many operational details are classified, and partly because the plain-language summaries are… let’s say “dense.”

As a journalist, I’d add: if your privacy policy needs three cross-referenced PDFs and a legal glossary to sound non-creepy, you’re already losing the PR war.

6) Where AI Changes the Game: From Collection to Inference

When people ask “Is the Pentagon allowed to surveil Americans with AI?” they often imagine AI as the collection tool. In many real-world scenarios, AI is more likely to be the analysis tool: it ingests data collected under existing authorities and extracts conclusions.

That distinction matters. Law often focuses on collection. But privacy harm often comes from inference.

AI can turn low-sensitivity inputs into high-sensitivity outputs

Consider a pile of “not that sensitive” inputs:

  • Public voter registration data
  • Public property records
  • Commercial ad-tech device identifiers
  • Public social posts
  • License-plate reader hits owned by a contractor

Individually, some of this might be easy to access without judicial process. AI can combine it into something that feels like a personal dossier: associations, routines, likely workplace, likely home, trips, relationships, and vulnerabilities.
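For a sense of how little machinery that takes, here is a minimal sketch, using made-up records and field names, that joins those individually mundane datasets on shared identifiers. Real systems use probabilistic entity resolution rather than exact name matches, but the aggregation effect is the same.

```python
# Minimal sketch of the "dossier" effect: each input is individually
# low-sensitivity, but a simple join on shared identifiers produces a
# profile that reads as highly sensitive. All records are hypothetical.
from collections import defaultdict

def build_dossiers(voter_rolls, property_records, adtech_links, public_posts):
    """Merge independently collected records into per-person profiles."""
    dossiers = defaultdict(lambda: {"addresses": [], "devices": [], "posts": []})
    for rec in voter_rolls:                  # name + registered address
        dossiers[rec["name"]]["addresses"].append(rec["address"])
    for rec in property_records:             # name + owned property
        dossiers[rec["owner"]]["addresses"].append(rec["address"])
    for rec in adtech_links:                 # broker-supplied name-to-device link
        dossiers[rec["name"]]["devices"].append(rec["device_id"])
    for post in public_posts:                # public social content
        dossiers[post["author"]]["posts"].append(post["text"])
    return dict(dossiers)

profile = build_dossiers(
    voter_rolls=[{"name": "J. Doe", "address": "12 Elm St"}],
    property_records=[{"owner": "J. Doe", "address": "4 Lake Rd"}],
    adtech_links=[{"name": "J. Doe", "device_id": "ad-id-7f3a"}],
    public_posts=[{"author": "J. Doe", "text": "At the clinic again today."}],
)
print(profile["J. Doe"])
```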

US law is still wrestling with how to treat those AI-generated derived facts. That’s not unique to national security; it’s a broader AI governance problem. But in the DoD context, the stakes are higher and the oversight is harder to verify publicly.

Scale is the superpower (and the threat)

Even when oversight rules exist, they’re often built around assumptions that humans are the bottleneck. AI removes that bottleneck. The marginal cost of analysis drops. What used to require a team now requires a workflow and enough compute.

That’s why the debate is as much about economics as law: AI changes what’s feasible, not just what’s authorized.

7) “But It’s for National Security”: The Legitimate Uses Are Real

To avoid drifting into cartoonish dystopia, it’s important to say plainly: the DoD can have legitimate national security reasons to collect and analyze information involving US persons—particularly in counterintelligence contexts (e.g., suspected espionage or terrorism-related activity).

Section 702 defenders, for example, argue that it produces foreign intelligence reporting that can help protect the US and that it has oversight built in. DoD officials have publicly credited Section 702 in testimony as a key tool against foreign adversaries.

And DoD isn’t alone in facing modern data dilemmas. The broader federal government has been moving toward standardized AI and data governance (with varying degrees of success), especially as agencies adopt AI for vetting, cyber defense, and analytic triage.

The problem isn’t that there are no legitimate use cases. The problem is that legitimate use cases can coexist with weak boundaries—and AI exploits ambiguity like it’s a feature request.

8) Practical Scenarios: What “Pentagon AI Surveillance” Could Look Like

Let’s translate legal theory into plausible operational patterns. None of these require a sci-fi domestic spying mandate; they’re variations on “use what you already have, but faster.”

Scenario A: OSINT + AI for threat monitoring

DoD components monitor public platforms for foreign influence operations, extremist propaganda, or threats to military personnel. AI helps cluster accounts, translate content, summarize narratives, and flag emerging events. If Americans participate in those public spaces, their content gets included. Whether that’s “surveillance” depends on your definition, but it absolutely can feel like it.
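To show how simple the clustering step can be, here is a toy sketch that groups near-duplicate public posts by word overlap. Production systems use embeddings and far richer signals, but the consequence is the same: anything public, including Americans’ posts, flows into the clusters. The posts below are invented.

```python
# Toy sketch of narrative clustering over public posts: greedily group
# posts whose word overlap (Jaccard similarity) exceeds a threshold.
# All posts are invented.

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_posts(posts, threshold=0.5):
    """Greedy single-pass clustering by word overlap."""
    clusters = []  # each cluster: {"words": set, "posts": [...]}
    for post in posts:
        words = set(post.lower().split())
        for cluster in clusters:
            if jaccard(words, cluster["words"]) >= threshold:
                cluster["posts"].append(post)
                cluster["words"] |= words
                break
        else:
            clusters.append({"words": words, "posts": [post]})
    return clusters

posts = [
    "the base gate closes at 9 tonight",
    "heads up the base gate closes at 9 tonight",
    "great weather for the parade this weekend",
]
for c in cluster_posts(posts):
    print(len(c["posts"]), "post(s):", c["posts"])
```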

Scenario B: Purchased location data + pattern analysis

A DoD component buys commercially available mobility data “for force protection research,” then uses AI to identify movement patterns near bases or sensitive facilities. That analysis might surface civilians’ routines. You might never be “targeted,” but you might still be “found.”
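Here is a minimal sketch of that kind of pattern analysis, assuming a hypothetical feed of broker-supplied pings: flag any device that appears near a facility on several distinct days. The radius, threshold, and field names are invented.

```python
# Minimal sketch of Scenario B: flag devices repeatedly seen within a
# radius of a sensitive facility across different days. Coordinates,
# thresholds, and field names are hypothetical.
import math
from collections import defaultdict

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def repeat_visitors(pings, facility, radius_m=500, min_days=3):
    """Return device IDs seen near the facility on at least min_days distinct days."""
    days_seen = defaultdict(set)
    for p in pings:  # p: {"device_id", "lat", "lon", "date"}
        if haversine_m(p["lat"], p["lon"], *facility) <= radius_m:
            days_seen[p["device_id"]].add(p["date"])
    return [d for d, days in days_seen.items() if len(days) >= min_days]
```

The output is not a “target list” in any legal sense, but it is a list of people whose routines intersect a military interest, produced entirely from purchased data.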

Scenario C: Incidental collection + AI summarization

Foreign intelligence collection captures communications involving a US person. AI summarizes the thread, extracts entities, and recommends follow-up queries. The US person wasn’t the target, but their data becomes salient—and potentially discoverable—because AI made it cheap to process.
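A deliberately tiny sketch of the flagging step, using nothing more than a hypothetical US phone-number pattern, illustrates why surfacing US-person indicators for minimization review is now a trivial, automatable operation.

```python
# Tiny sketch of Scenario C: flag US-person indicators (here, just
# US-format phone numbers) in incidentally collected text so they can be
# routed to minimization review. Pattern and text are hypothetical.
import re

US_PHONE = re.compile(r"\+1[\s.-]?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}")

def flag_us_person_indicators(message: str):
    """Return indicators that should trigger minimization review."""
    return US_PHONE.findall(message)

print(flag_us_person_indicators("call me at +1 (202) 555-0134 about the shipment"))
# ['+1 (202) 555-0134']
```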

Scenario D: Intelligence assistance to civil authorities

Under established rules for assistance to law enforcement/civil authorities, DoD intelligence components may provide certain support. AI could be used to filter or enrich data shared across agencies. This is exactly where clarity matters: what is shared, under what predicate, and with what retention and minimization controls? DoDD 5240.01 is relevant here because it governs intelligence assistance relationships at a policy level.

9) Oversight in the AI Era: The Old Controls Don’t Always Fit

Oversight for intelligence activities includes internal compliance offices, inspectors general, reporting to Congress, and (in certain contexts) court oversight. DoD’s intelligence oversight program includes training and compliance expectations across military, civilian, and contractor personnel involved in intelligence work.

But AI stresses these systems in at least four ways:

  • Opacity: Model-driven workflows can be hard to audit, especially if they’re proprietary or deployed via contractors.
  • Speed: Compliance reviews struggle when analytic outputs update continuously.
  • Scope creep: Once data exists, new “secondary uses” emerge (and AI makes them attractive).
  • Inference risk: Rules about what you can collect don’t automatically handle what you can infer.

Researchers inside the defense ecosystem have been pushing “assurance” frameworks for AI-enabled systems, essentially arguing that trust requires disciplined claims, testing, and lifecycle governance—not vibes.

10) What’s Happening Politically: Public Concern, Guardrails, and the Trust Gap

Public opinion is not a legal authority, but it shapes what lawmakers are willing to touch—and surveillance laws are notoriously “hands-off” until they explode in the headlines.

Recent polling and policy commentary in the AI era shows significant concern about AI in surveillance contexts. For example, an ITIF survey released in February 2026 found that a majority of Americans consider government use of AI for mass surveillance concerning, and many believe tech companies should be allowed (or even required) to set limits on how their AI is used.

At the same time, debate is complicated by the fact that AI is also widely seen as necessary for defense modernization—especially given the scale of cyber threats and foreign influence operations.

Translation: the public wants security and privacy, and it wants them both yesterday.

11) So… Is the Pentagon Allowed to Surveil Americans With AI?

Here’s the most precise answer that doesn’t collapse into a 900-page treatise:

  • The Pentagon is not supposed to conduct unlawful domestic surveillance, and DoD policy frameworks emphasize compliance with law and oversight.
  • However, US law leaves meaningful gaps where large amounts of Americans’ information can be collected incidentally (through foreign intelligence) or acquired indirectly (through commercial data markets) without the kind of warrant requirement most people assume applies.
  • AI does not need new legal authority to increase surveillance capability. It magnifies the power of existing access by enabling mass-scale aggregation, pattern-finding, and inference.

If you’re hoping for a clean “yes/no,” the uncomfortable reality is that the legal framework was designed around collection methods and technical constraints that no longer exist. That’s why this question keeps resurfacing across agencies, not just DoD.

12) What Should Change: Concrete Policy Options (Not Just Hand-Wringing)

There’s no single fix, but there are a few reforms that come up repeatedly in credible policy proposals:

1) Close the data broker loophole for government buyers

If an agency would need a warrant to compel sensitive location data from a phone company, it shouldn’t be able to buy an equivalent dataset from a broker and call it a day. The ODNI CAI framework is an internal standard; critics argue stronger statutory limits may be needed.

2) Strengthen rules for US-person queries across authorities

Debates around Section 702 frequently revolve around when and how US-person queries should require a warrant or higher-level approval. Advocacy groups argue warrants should be required for querying Fourth-Amendment-protected data.

3) Demand AI-specific auditability

If AI is used to surface or prioritize US-person information, oversight needs logs that record what the model did, what it saw, and why it produced the output. “We used AI” is not a compliance story; it’s a risk story.
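As a minimal sketch of what that could look like (not any real DoD system): wrap every model call so the operator, the stated purpose, and hashes of the input and output land in an append-only log an overseer can reconstruct later.

```python
# Minimal sketch of an AI audit trail: every model call is logged with the
# operator, the stated purpose, and hashes of input and output so an
# auditor can reconstruct what the system saw and why. Names are hypothetical.
import hashlib
import json
import time

AUDIT_LOG = []  # in practice: an append-only, access-controlled store

def audited_query(model_fn, query: str, purpose: str, operator: str):
    """Run a model query and append an audit record before returning."""
    output = model_fn(query)
    AUDIT_LOG.append({
        "timestamp": time.time(),
        "operator": operator,
        "purpose": purpose,  # the stated legal/mission predicate
        "query_sha256": hashlib.sha256(query.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(
            json.dumps(output, default=str).encode()
        ).hexdigest(),
    })
    return output

# Usage with a placeholder model:
result = audited_query(lambda q: {"summary": "..."},
                       query="summarize thread 123",
                       purpose="counterintelligence lead follow-up",
                       operator="analyst-42")
```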

4) Limit retention and secondary use

Even if collection is lawful, retention duration and secondary use policies determine whether “incidental” becomes “permanent.” Critics of bulk collection emphasize that minimization after the fact is not enough.
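One way to make retention and purpose limits mechanical rather than aspirational, sketched here with hypothetical policy values: records carry a collection date and an authorized purpose, anything past its window is purged before a query runs, and purpose mismatches return nothing.

```python
# Minimal sketch of retention and purpose-limitation enforcement.
# The retention window, purposes, and records are all hypothetical.
from datetime import datetime, timedelta

RETENTION = timedelta(days=90)  # hypothetical retention window

def purge_expired(records, now):
    """Drop anything older than the retention window."""
    return [r for r in records if now - r["collected_at"] <= RETENTION]

def query(records, purpose, now):
    """Only return unexpired records whose authorized purpose matches."""
    return [r for r in purge_expired(records, now) if r["purpose"] == purpose]

records = [
    {"id": 1, "collected_at": datetime(2026, 1, 5), "purpose": "foreign intelligence"},
    {"id": 2, "collected_at": datetime(2024, 1, 5), "purpose": "foreign intelligence"},
]
now = datetime(2026, 3, 1)
print(query(records, "domestic law enforcement", now))  # [] -- purpose mismatch
print(len(query(records, "foreign intelligence", now)))  # 1 -- only the unexpired record
```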

5) Make transparency normal, not exceptional

Some oversight reporting exists (and some is required), but transparency often arrives late and heavily redacted. Without public visibility, trust erodes, and people default to the worst assumption—which, historically, the surveillance world has occasionally earned.

13) The Bottom Line: AI Didn’t Invent Surveillance, It Industrialized It

In 1978, surveillance law could assume that collecting and analyzing communications was expensive, technically constrained, and limited by human attention. In 2026, AI turns attention into an on-demand resource. That changes the balance of power—even if the statutory text hasn’t changed a comma.

If you want a single takeaway, make it this: the core risk isn’t that the Pentagon (or any agency) will “start surveilling Americans with AI.” The risk is that it can do far more with the data it already legally touches, buys, or incidentally collects—and the law doesn’t clearly regulate the inferences.

That’s why the MIT Technology Review question is the right one. Not because the answer is simple, but because the ambiguity is the story.

Bas Dorland, Technology Journalist & Founder of dorland.org