
On February 21, 2026, The New York Times published an article titled “People Loved the Dot-Com Boom. The A.I. Boom, Not So Much”. It’s a deceptively simple headline that captures a weird truth about this moment: we’re watching another technology cycle inflate into the clouds, but the crowd down on the ground isn’t cheering. The dot-com boom felt like a party; the A.I. boom often feels like a performance review where the manager brought a robot to take notes.
I can’t directly quote or fully reproduce the New York Times story here, but the title and framing match a broader set of measurable signals: public anxiety about A.I. has risen, corporate productivity gains remain hard to pin down, and the infrastructure buildout—data centers, GPUs, power contracts—is colliding with environmental, labor, and cultural concerns.
Below is a deeply reported look at why the dot-com era inspired more public optimism than today’s generative A.I. wave, what the data says about adoption and productivity, how investor sentiment is shifting, and what all of this implies for regulation, jobs, and the tech industry’s social license to operate.
Dot-com nostalgia: why the internet boom felt like a net-positive (at least at first)
Let’s start with the vibe check. In the late 1990s, the dot-com boom was widely seen as a grand opening for a new world. “The internet” arrived with obvious consumer-facing magic: email, search engines, cheap information, online shopping, and eventually social networks. Even people who didn’t own stocks could understand the pitch. The product was tangible: more access, more convenience, more connection.
Also, the dot-com boom—especially for everyday consumers—didn’t lead with “we’re going to automate your job.” It led with “we’re going to put a bookstore on your desktop.” Many jobs did change, plenty of companies imploded, and the crash was brutal for investors. But the early narrative was expansion: new markets, new services, new categories of work.
Today’s A.I. boom is different because the average person is encountering it first as disruption, not delight. Even when it’s delightful (hello, instant image generation), it often comes bundled with immediate social downsides: confusion about what’s real, fear of surveillance, and concerns that creative work is being scraped, remixed, and monetized without consent.
The A.I. boom’s core PR problem: it sounds like a cost-cutting plan
A large share of the A.I. marketing narrative—especially around generative models—centers on replacing or compressing human labor. That is a fundamentally different emotional proposition than “the web helps you find things.” It’s not surprising that people react differently.
There’s evidence that U.S. attitudes have grown increasingly cautious. A Pew Research Center report based on a June 9–15, 2025 survey of 5,023 U.S. adults found Americans are far more likely to say they’re concerned than excited about the increased use of A.I. in daily life, and most want more control over how it is used. Pew also found majorities believe A.I. will erode (rather than improve) creativity and relationships, while still being open to A.I. assistance in some everyday tasks.
That mix—practical curiosity plus deep social worry—is the public mood in a nutshell. People will use A.I. to summarize a meeting, but they don’t want it deciding whether they get a mortgage, a job interview, or health coverage.
“Are we even getting productivity?” The NBER survey that poured cold water on A.I. hype
One reason the A.I. boom is facing a backlash is that many organizations can’t point to dramatic outcomes—yet. A widely covered study from the National Bureau of Economic Research (NBER), surveying nearly 6,000 executives across the U.S., U.K., Germany, and Australia, reported that most firms saw no measurable productivity impact from A.I. adoption over the past three years. Coverage noted that 69% of firms report using A.I., but executives’ personal usage is limited, and impacts on output and employment have been minimal so far.
This lands at an awkward time. The infrastructure spending is real. The valuations are real. The carbon and water questions are real. But the “we’re changing everything tomorrow” story isn’t aligning with measured gains at most companies—yet.
Solow’s paradox returns (now with more GPUs)
Economists have a name for this dynamic: the productivity paradox, popularized by Robert Solow’s famous line that you can see the computer age everywhere but in the productivity statistics. NBER-linked reporting has revived that comparison for A.I.
The dot-com era eventually produced measurable productivity gains—but not instantly, and not evenly. The internet required complementary investments: new processes, new skills, and a re-architecture of how businesses operate. A.I. appears to be following a similar adoption curve: lots of pilots, lots of demos, and a much slower grind to real operational transformation.
From party to protest: the rise of A.I. backlash as a civic movement
Unlike the dot-com buildout (which mostly hid in offices and ISP closets), the A.I. buildout is physically loud: data centers, transmission lines, power deals, and land-use fights. This makes the boom easier to protest because there’s something concrete to protest against. A data center is not a metaphor.
Time recently described a growing movement opposing the unchecked expansion of A.I. infrastructure—particularly data centers—citing local concerns about electricity and water usage as well as broader worries about jobs, misinformation, and social harm.
This matters because public sentiment isn’t just a vibes issue; it becomes a permitting issue. If local governments slow approvals for data centers or demand stricter environmental mitigation, that directly changes the economics of the “scale at all costs” phase of the A.I. boom.
Investors are also getting twitchy (and that’s new)
There’s another striking difference between the dot-com era and today: Big Tech incumbents are the main spenders this time. In the late 1990s, the boom was fueled by a swarm of startups and newly public companies with “.com” stapled to their names. Now, the biggest checks are being written by hyperscalers—companies that already dominate cloud computing.
That concentration has two effects:
- The boom is more financially resilient in the short term because Big Tech has cash flow and balance sheets that many dot-com startups didn’t.
- The boom is more politically vulnerable because the same companies are already under scrutiny for market power, privacy, labor practices, and content harms.
Axios reported that investor enthusiasm has started to cool, pointing to a Bank of America fund manager survey in which a record share of investors believed companies are overinvesting in A.I., even as major firms keep pushing capital expenditures upward.
Meanwhile, commentary about “circularity” and bubble-like dynamics has intensified. The Washington Post opinion section has highlighted concerns that financial loops—companies investing in each other’s ecosystems—can mimic dot-com-era “round-tripping,” where money circulates to manufacture the appearance of demand.
Bubble debates: “It’s a bubble” vs “it’s industrial infrastructure”
Just like in 1999, people disagree on whether we’re watching irrational exuberance or early-stage infrastructure investment. Forbes has published arguments on both sides, with some writers warning about bubble risks and others arguing that contracted demand and compute scarcity make this cycle fundamentally different.
My take, as a journalist who has watched tech markets swing from “this changes everything” to “this changes nothing” and back again: it can be both. There can be real value creation and real mispricing at the same time. The internet proved useful even though plenty of dot-com stocks were nonsense. A.I. can be transformative even if some A.I.-branded balance sheets are, politely speaking, adventurous.
Why people trusted dot-com more than A.I.: the “who benefits?” question
The internet boom’s benefits were easier to distribute. Even if you didn’t own Amazon stock in 1998, you benefited from price competition, easier access to information, and later, entire new social and economic behaviors online.
The A.I. boom’s benefits—so far—are perceived as more concentrated. Many of the gains flow to:
- cloud providers selling compute
- chipmakers selling accelerators
- software firms bundling A.I. features into subscriptions
- executives pursuing headcount reduction narratives
Meanwhile, the costs are also distributed—but in a more immediately painful way:
- workers fear automation and job compression
- creators see unauthorized training-data usage and market dilution
- students and educators deal with assessment chaos
- everyone deals with increased misinformation and authenticity problems
That asymmetry fuels backlash even among people who think A.I. is “cool.” A.I. can be cool and still feel like it’s being done to you rather than for you.
The labor reality: job fears, job churn, and “augmentation” talking points
In boom cycles, executives tend to say the reassuring thing: “We’ll augment, not replace.” Sometimes they mean it. Sometimes it’s just a smoother way of saying, “We’re not sure yet, but please don’t panic.”
Bank of America CEO Brian Moynihan recently invoked historical precedent: technological change shifts work rather than eliminating it wholesale, and employment can grow alongside automation over the long run.
That’s the optimistic historical frame—and it is often true at the macro level. But it can be cold comfort if you’re in the specific job category being automated right now. The dot-com era created new jobs, but it also killed old ones; the difference is that the “new” felt more visible and accessible to many middle-class workers. With A.I., there’s a widespread fear that entry-level and mid-level white-collar “apprenticeship” jobs are precisely the ones being compressed first, making it harder for people to climb career ladders.
Creators, copyright, and the “why is my work in your model?” backlash
The dot-com boom had plenty of copyright fights (Napster, anyone?), but generative A.I. moves the conflict closer to the core business model. Many leading models were trained on vast datasets scraped from the open web, including creative works. Even when companies claim fair use or licensing, the average creator often experiences the situation as: “A machine learned my style and is now competing with me.”
This isn’t just a legal issue; it’s a legitimacy issue. When the public sees artists, writers, and performers expressing anger or fear, it changes the emotional story of the technology. The internet was “information wants to be free.” A.I. sometimes reads as “your portfolio wants to be a dataset.”
Misinformation and authenticity: A.I. didn’t invent lying, it industrialized plausibility
Another reason the A.I. boom feels less lovable is that it coincides with an erosion of shared reality online. Generative tools make it cheap to create text, audio, and imagery that looks credible enough to spread before it can be checked.
Pew found that Americans strongly value the ability to tell whether content was made by a human or by A.I., yet many don’t feel confident they can spot A.I.-generated content.
In the dot-com era, misinformation existed, but the dominant story was access: the internet let you find things. In the A.I. era, the dominant fear is confusion: the internet might now generate things.
Energy, water, and the physical footprint of “virtual” intelligence
The A.I. boom is powered by a very un-virtual backbone: electricity, cooling, land, water, and supply chains. If you live near a proposed data center, the abstract promise of “AI transformation” quickly turns into concrete questions like:
- Will my utility bills rise?
- Will this strain the grid?
- How much water will cooling require?
- What happens to local noise and traffic?
Federal Reserve Governor Michael Barr has also noted that A.I. could eventually raise long-run productivity, but cautioned that its near-term effects are modest and highlighted that energy demands from A.I. infrastructure could add inflationary pressure.
That is a rare moment when a central bank voice and a local activist might accidentally end up on the same side of a sentence: compute isn’t free, and somebody pays.
Why dot-com optimism doesn’t automatically translate: the trust gap
There’s a broader cultural reason the A.I. boom is receiving less affection: trust in major institutions is lower now than in the late 1990s, and Big Tech has spent the past decade torching its own reputation through privacy scandals, algorithmic harms, and “sorry, our bad” press releases that sound like they were written by an HR chatbot trained on legal disclaimers.
So when the same companies say “this next thing will be great,” many people respond with: “Sure, but will it be great for me, or great for your quarterly earnings?”
Academic work on public opinion suggests that perceived risk and trust strongly shape support for A.I. regulation. For example, research based on the 2023 AIMS survey reports broad support for regulation, with risk perception and trust in institutions predicting preferences for slowing or restricting advanced A.I.
Adoption vs integration: many companies “use A.I.” the way people “use the gym”
One of the more underappreciated findings in the executive surveys is that “we use A.I.” can mean “we bought licenses” or “we ran a pilot,” not “we rewired how we work.” Reported executive usage averaging around 90 minutes per week is the corporate equivalent of owning a treadmill that now holds laundry.
This matters because the dot-com boom’s consumer apps created behavior changes quickly: people started searching, emailing, shopping online. The A.I. boom’s enterprise promise requires process change, data readiness, training, governance, and risk management. It’s slower, and it’s harder to celebrate.
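A quick back-of-envelope calculation shows why that shallow usage barely registers in the statistics. The sketch below takes the roughly 90 minutes per week of reported executive usage and the 69% adoption figure quoted above; the 40-hour workweek and the 25% speedup on A.I.-assisted time are my own illustrative assumptions, not survey findings.

```python
# Back-of-envelope sketch: why light A.I. usage barely moves aggregate productivity.
# The adoption rate and minutes-per-week come from the survey coverage cited above;
# the 40-hour workweek and the 25% speedup are illustrative assumptions, not data.

adoption_rate = 0.69           # share of firms reporting A.I. use
ai_minutes_per_week = 90       # reported executive usage with A.I. tools
workweek_minutes = 40 * 60     # assumed 40-hour workweek
speedup_on_ai_time = 0.25      # assumed productivity boost on A.I.-assisted time

share_of_time_on_ai = ai_minutes_per_week / workweek_minutes
# Only the A.I.-assisted slice of the week gets faster.
gain_per_adopting_firm = share_of_time_on_ai * speedup_on_ai_time
# Weight by the share of firms that have adopted at all.
aggregate_gain = adoption_rate * gain_per_adopting_firm

print(f"Time spent with A.I.: {share_of_time_on_ai:.1%} of the workweek")
print(f"Gain per adopting firm: {gain_per_adopting_firm:.2%}")
print(f"Economy-wide gain under these assumptions: {aggregate_gain:.2%}")
```

Under those assumptions the aggregate effect lands well below one percent, small enough to disappear into ordinary measurement noise. That is Solow’s paradox in miniature: real usage, real spending, and still nothing visible in the productivity statistics yet.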
So… is the backlash “anti-technology,” or is it a demand for proof?
Backlash is often framed as fear of change. Sometimes that’s true. But in 2026, a lot of A.I. backlash looks less like Luddism and more like an audit request:
- Show the productivity gains.
- Show the safety work.
- Show the emissions and water plans.
- Show the labor strategy beyond “reskill yourselves.”
- Show the content provenance and copyright respect.
That’s not rejection of technology; that’s rejection of tech exceptionalism—the idea that A.I. should be deployed at planetary scale first, and only then should we negotiate the terms.
What policymakers are likely to do next (because voters have opinions)
When the public mood turns skeptical, regulation tends to follow—sometimes thoughtfully, sometimes clumsily. The likely pressure points over the next year include:
- Transparency requirements for A.I.-generated content and political advertising
- Limits on automated decision-making in high-stakes contexts (employment, housing, credit, insurance)
- Data center and energy regulation at state and local levels
- Copyright and licensing frameworks for training data and synthetic content markets
The biggest policy risk for the industry is not a single “A.I. law.” It’s death by a thousand local rules: permitting battles, procurement bans, sector-specific compliance, and litigation outcomes that make certain deployments too expensive or too risky to justify.
What companies should learn from dot-com (if they don’t want the A.I. era to become “dot-bomb 2.0”)
1) Stop selling the apocalypse as a feature
If your product roadmap implicitly promises to eliminate whole categories of work, don’t be shocked when society pushes back. The dot-com era sold empowerment. A.I. needs to re-learn that language—without lying.
2) Build governance like you mean it
Companies deploying A.I. at scale need robust internal policies: human oversight, auditability, red-team testing, monitoring for bias, and a clear incident-response playbook. “We have principles” is not governance; it’s a poster.
3) Treat energy as a first-class product constraint
Efficiency isn’t just about cost; it’s about social permission. If the public perceives A.I. as a grid-hogging luxury feature for ad targeting, expect backlash. If A.I. meaningfully improves drug discovery, fraud detection, or accessibility, that narrative is easier to defend—and Pew data suggests the public is more open to A.I. for heavy analytical tasks in science and security than for intimate personal decisions.
4) Don’t confuse pilots with transformation
Executives need to stop pretending an A.I. chatbot prototype equals “reinventing the enterprise.” The NBER results suggest much of A.I. use remains shallow. Real gains require training, process change, and a realistic understanding of what models can and cannot do.
A grounded conclusion: the A.I. boom can still win hearts, but it has to earn them
The dot-com boom was loved because it arrived as a broadly legible improvement in daily life. The A.I. boom is controversial because it arrives as a broadly legible threat: to jobs, truth, creativity, and local resources. That doesn’t mean A.I. is doomed. It means the industry’s default approach—move fast, scale faster, apologize later—has finally hit a technology where “later” is politically unacceptable.
If A.I. leaders want dot-com-style public enthusiasm, they have to make the benefits as obvious as the costs. Right now, many people can clearly see the GPU farms and the layoffs talk. They’re still waiting to see the part where their lives get better in a way that doesn’t require them to become prompt engineers on nights and weekends.
Sources
- The New York Times — “People Loved the Dot-Com Boom. The A.I. Boom, Not So Much” (Feb. 21, 2026).
- Pew Research Center — “How Americans View AI and Its Impact on People and Society” (Sept. 17, 2025).
- TechRadar — coverage of the NBER executive survey findings (Feb. 2026).
- ITPro — coverage of the NBER survey and executive expectations (Feb. 2026).
- Tom’s Hardware — coverage of the NBER survey and A.I. funding context (Feb. 2026).
- Time — “The People vs. AI” (Feb. 2026).
- Barron’s — Fed Gov. Michael Barr’s comments on A.I., productivity, and energy/inflation implications (Feb. 2026).
- Axios — investor sentiment and A.I. capex concerns (Feb. 18, 2026).
- The Washington Post — opinion/analysis on “circularity” and dot-com parallels (Dec. 8, 2025).
- arXiv — Baumann et al., “Reduced AI Acceptance After the Generative AI Boom: Evidence From a Two-Wave Survey Study” (Oct. 27, 2025).
- arXiv — Bullock et al., “Public Opinion and The Rise of Digital Minds: Perceived Risk, Trust, and Regulation Support” (Apr. 30, 2025).
- Times of India — report on Brian Moynihan’s remarks on A.I. and jobs (Feb. 2026).
- Forbes — commentary on bubble concerns and dot-com parallels (Dec. 8, 2025).
- Forbes — argument against the “AI bubble” framing (Nov. 17, 2025).
Bas Dorland, Technology Journalist & Founder of dorland.org