The One Missing Data Point in the AI Jobs Panic: Price Elasticity (and Why O*NET Isn’t Enough)


On April 6, 2026, MIT Technology Review ran a deceptively simple headline: “The one piece of data that could actually shed light on your job and AI.” The piece (by James O’Donnell) argues that we’re obsessing over the wrong numbers when we ask whether AI will “take jobs.” The data we don’t have—systematically, across the economy—is the kind that would let us estimate how demand changes when AI makes a service cheaper. In economist-speak: price elasticity of demand.

That framing is refreshingly unfashionable. It doesn’t fit on a conference stage next to a demo of an agent clicking buttons at 300 mph. But it’s the kind of thing policymakers and businesses need if they’re trying to forecast outcomes rather than vibes.

This article uses O’Donnell’s argument as a launch pad, adds context from labor-market research, and looks at what it would take to build the “AI Manhattan Project” of data collection the article references—without turning every worker into an unwilling sensor in the name of progress.

Original RSS source: MIT Technology Review (James O’Donnell).

AI and your job: why the internet keeps asking the wrong question

The popular question is binary: will AI replace my job? The real world is messier. Jobs are bundles of tasks, and automation tends to show up as:

  • Automation of tasks (some work disappears, or becomes cheaper and faster)
  • Augmentation (humans do “more” with the same time)
  • Recomposition (the job changes shape; new tasks appear; old ones shrink)
  • Industry-level expansion or contraction (demand changes, shifting headcount)

Most “AI jobs” debates get stuck on the first bullet point. And even then, they often replace the messy details of adoption with a simpler proxy: exposure.

Exposure is useful—just not the way people think

Governments and researchers have built genuinely helpful catalogs of what people do at work. In the US, the best-known is the Department of Labor’s O*NET, launched in 1998 and updated continuously. It breaks occupations into detailed task statements and related descriptors. O’Donnell notes that this kind of task catalog has been used by AI labs and researchers to estimate how “exposed” an occupation is to AI-enabled work.

Pew Research Center, for example, published a methodology explaining how it used O*NET work activities to rank occupations by AI exposure, then matched those occupations with labor-market data (employment, earnings) using Current Population Survey (CPS) crosswalks.

So far, so solid: if a job contains many language-heavy, structured tasks, it’s probably more exposed to today’s large language models than a job dominated by physical manipulation, real-time situational awareness, or interpersonal trust.

But exposure is not destiny. It is an input to a much larger equation.

The “one piece of data”: price elasticity (a.k.a. the demand rebound problem)

O’Donnell’s core point, drawing on work by economist Daron Acemoglu and coauthors, and in particular on the economist he quotes, David Imas, is that you cannot predict occupational outcomes from exposure alone. The missing link is what happens to demand when AI reduces the cost of delivering a product or service.

Put plainly: if AI lets a web developer build in one day what used to take three, the cost of building software could fall. But then what? Does the market buy three times as much software? Or does it buy roughly the same amount, just cheaper—leading to fewer developers needed? O’Donnell argues that the answer varies by industry and hinges on elasticity, and that we don’t have the needed data at scale.

For some goods, economists have excellent data. O’Donnell points to grocery “scanner data” collaborations (famously, University of Chicago’s work with supermarket data) that let researchers estimate how consumption shifts when prices change. But for many services—tutoring, web development, nutrition counseling—comparable, broadly accessible datasets are thin or fragmented across private firms and consultancies.

That’s why O’Donnell’s article says we may need something like a “Manhattan Project” effort to collect it.

Why this matters more than any single “AI can do X% of tasks” headline

Task capability measures are improving and getting more concrete. One recent preprint, “Crashing Waves vs. Rising Tides,” analyzes worker evaluations across thousands of text-based O*NET-derived tasks and argues that AI improvements resemble a broad “rising tide” more than sudden “crashing waves.” It also projects that if recent capability trends persist, LLMs could complete most text-related tasks at “minimally sufficient” quality with very high success rates by around 2029—though adoption may lag.

Even if those projections are directionally right, they still don’t answer the labor-market question people actually care about: “Will there be fewer jobs?” For that, you need to know whether cheaper output causes the market to expand.

Elasticity is the mechanism that turns a productivity shock into either:

  • a substitution story (AI replaces labor, demand doesn’t rise enough, headcount falls), or
  • an expansion story (AI makes output cheaper, demand rises significantly, headcount holds or grows—even if the work changes).
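The two stories can be made concrete with a toy calculation. This sketch is illustrative only, not from the article: it assumes a constant-elasticity demand curve and a competitive market that passes the full productivity gain through to prices, and `headcount_ratio` is a hypothetical helper. The tripling of developer productivity echoes the web-developer example above.

```python
def headcount_ratio(productivity_gain: float, elasticity: float) -> float:
    """New headcount / old headcount under constant-elasticity demand.

    Toy model: each worker produces `productivity_gain` times more output,
    and a competitive market passes the savings through in full, so price
    falls to 1/productivity_gain of its old level.
    """
    price_ratio = 1.0 / productivity_gain          # P_new / P_old
    quantity_ratio = price_ratio ** (-elasticity)  # Q_new / Q_old
    return quantity_ratio / productivity_gain      # workers = output / productivity

# One productivity shock (3x), three demand responses:
for eps in (0.4, 1.0, 1.5):
    print(f"elasticity {eps}: headcount x{headcount_ratio(3.0, eps):.2f}")
```

Under these assumptions the break-even point is an elasticity of exactly 1: below it, cheaper output means fewer workers (the substitution story); above it, demand grows faster than productivity and headcount rises (the expansion story). Real markets pass savings through only partially, which shifts that threshold—another reason the actual elasticities matter so much.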

What “elasticity for services” looks like in real life

Elasticity sounds abstract until you map it to everyday professional life.

Case study: software development (elasticity can be high, but not infinite)

Software is a good example because it’s both “highly exposed” and surprisingly sticky. AI coding tools can speed up implementation, documentation, test generation, and debugging—especially for well-understood components. In a competitive market, some of those savings should flow to customers as lower prices (or more features for the same price). If demand is elastic, companies might build more internal tools, more customer-facing features, and more custom software—potentially requiring more developers, not fewer.

But elasticity is constrained by reality:

  • Budgets: even cheaper software competes with other spending priorities.
  • Attention: customers can only adopt so many new tools at once.
  • Integration costs: the hardest part is often data, workflow redesign, and change management, not code typing.

That last point aligns with MIT CSAIL’s argument that even when automation is technically feasible, economic feasibility (implementation and maintenance costs, compute, data availability, skilled labor to deploy) can limit adoption.

Case study: tutoring (elasticity might be huge, but quality and trust matter)

Tutoring is frequently cited as “exposed,” because it’s language-heavy and interactive. If AI makes basic tutoring extremely cheap, you might expect demand to explode: more students can afford more hours, more adults upskill, more personalized learning happens.

But again, the elasticity depends on details we rarely measure well:

  • Do parents and schools trust AI for high-stakes learning outcomes?
  • Does tutoring time substitute for other interventions (smaller classes, human coaching)?
  • Are there regulatory or procurement barriers in public education?

If AI tutoring becomes “good enough” for remedial practice but not for motivation, accountability, or special needs support, then the market could expand while still leaving a large role for human tutors—just in a different tier of service.

Case study: nutrition counseling (elasticity meets regulation and liability)

Nutrition advice is another service where AI can generate meal plans, explain macros, and personalize recommendations. But professional nutrition counseling is entangled with medical conditions, liability, and regulated practice. Even if AI lowers cost, demand might not scale unless payers (insurers, employers) reimburse or unless consumers believe the advice is credible enough to act on.

That “belief” isn’t fluff—it translates into adoption, and adoption translates into labor demand.

O*NET: the backbone we have—and the backbone we still need

It’s worth pausing to appreciate O*NET, because it’s easy to take for granted. It’s one of the most valuable public datasets for thinking clearly about work. It provides task statements and occupational descriptors that researchers and companies use to:

  • map skills to training programs,
  • build workforce planning tools,
  • analyze automation exposure, and
  • standardize occupational language across studies.

And it’s specifically the “task view” of O*NET that has become the shared vocabulary for modern AI-labor analysis.

But O*NET is not a price-and-quantity dataset. It doesn’t tell you: if accounting prep gets 30% faster, how many more small businesses will buy bookkeeping services at a lower price?

The missing join: tasks ↔ prices ↔ quantities

To estimate elasticity, you need some combination of:

  • Prices actually paid (not list prices)
  • Quantities purchased (hours, projects, subscriptions, visits)
  • Quality tiers (basic vs premium service)
  • Time and geography (local markets, seasonality)
  • Supply constraints (capacity limits, licensing)

For services, these data exist—but often in silos: marketplaces, payment processors, SaaS billing platforms, staffing firms, and enterprise procurement systems. Researchers can sometimes access slices; policymakers rarely see a coherent picture.
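If those silos were ever joined, the textbook starting point would be a log-log regression: the slope of log quantity on log price estimates the elasticity. The sketch below runs that regression on synthetic records (all numbers invented for illustration), using only the standard library.

```python
import math
import random

random.seed(0)

# Hypothetical joined records: price actually paid and quantity purchased
# for one service in one local market, generated with a true elasticity of -1.2.
true_elasticity = -1.2
observations = []
for _ in range(500):
    price = random.uniform(20, 200)
    noise = random.gauss(0, 0.1)
    quantity = 1000 * price ** true_elasticity * math.exp(noise)
    observations.append((price, quantity))

# Log-log OLS: the slope of ln(quantity) on ln(price) is the elasticity estimate.
xs = [math.log(p) for p, _ in observations]
ys = [math.log(q) for _, q in observations]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
    (x - mx) ** 2 for x in xs
)
print(f"estimated elasticity: {slope:.2f}")  # should print a value near -1.2
```

In real transaction data, price and quantity are jointly determined by supply and demand, so a naive regression like this is biased; credible elasticity estimates need instruments or natural experiments—for example, an AI-driven cost shock that moves prices independently of demand. That is exactly the variation the article says we are not yet measuring at scale.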

So what would an “AI Manhattan Project” for labor data involve?

O’Donnell quotes Imas arguing that collecting the missing data would require a large-scale, Manhattan-Project-like effort across the economy, because even sectors not “exposed today” may become exposed later.

That metaphor is effective—and slightly alarming. Because Manhattan Projects don’t have a great track record of being gentle with the concept of consent.

A realistic blueprint (without building a surveillance state)

If we take the “collect better elasticity data” idea seriously, a sensible approach might look like this:

  • Public-private data trusts where companies contribute anonymized pricing and quantity data under strict governance.
  • Standardized service taxonomies so “web development” means something consistent across platforms.
  • Privacy-preserving aggregation (differential privacy, secure enclaves) to minimize re-identification risks.
  • Open methodology, limited raw access: publish the methods and outputs, restrict microdata to vetted researchers.
  • Audits and red-teaming for privacy, bias, and representativeness.
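As one concrete instance of the privacy-preserving aggregation mentioned above, the Laplace mechanism from differential privacy adds calibrated noise to released aggregates. This is a minimal sketch under simplifying assumptions; the function name and the figures are hypothetical, and a production system would add privacy-budget accounting, clamping, and auditing.

```python
import random

def dp_release(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release an aggregate with Laplace noise (epsilon-differential privacy).

    `sensitivity` is the most one contributor's record can change the
    aggregate; a larger epsilon means less noise and weaker privacy.
    """
    scale = sensitivity / epsilon
    # A Laplace(scale) draw is the difference of two iid exponentials.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_value + noise

# Hypothetical example: number of firms in a data-trust cell that cut
# prices after adopting AI tools. One firm changes the count by at most 1.
noisy_count = dp_release(true_value=1240, sensitivity=1, epsilon=0.5)
```

A data trust could publish such noisy cell counts (or noisy average prices, with a clamped sensitivity) while keeping any single firm’s contribution plausibly deniable.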

We already see glimpses of what “direct observation of AI use in the economy” can look like. Anthropic’s Economic Index uses privacy-preserving analysis to classify Claude usage by task (mapped to O*NET) and study patterns of automation vs augmentation.

That’s not elasticity data, but it’s a sign of a broader trend: instead of guessing, some organizations are beginning to measure what people actually do with AI.

Why measuring “AI usage” still isn’t enough

Even if you know that many people use AI for software debugging or drafting copy, you still don’t know if overall industry demand expands. Usage tells you where productivity might rise; elasticity tells you whether that productivity translates to fewer workers or more output (or both).

In other words: task catalogs plus AI-usage telemetry still need to be joined to market behavior.

Industry context: the three numbers everyone cites (and what they miss)

In the AI labor debate, three kinds of numbers show up repeatedly:

  • Exposure estimates (how much of a job overlaps with AI-capable tasks)
  • Capability benchmarks (how well models do on tasks)
  • Adoption measures (who is using AI, where, and how)

They’re all valuable. They’re also all incomplete.

Exposure estimates: powerful, but frequently misinterpreted

Pew’s methodology illustrates both the power and the limits of exposure rankings. It carefully distinguishes low/medium/high exposure work activities and emphasizes that multiple exposure levels can be important within the same occupation.

But when exposure numbers escape into the wild, they often become a headline like: “X% of your job can be automated.” That’s a category error. Jobs don’t vanish because a task can be automated; they change because firms reorganize production and markets respond.

Capability benchmarks: “minimally sufficient” isn’t “ship it”

Fortune summarized MIT FutureTech research that tested many LLMs on thousands of workplace tasks and found that models increasingly clear a “minimally sufficient” bar, but struggle to reliably produce “superior” quality across complex tasks.

That distinction matters because in many industries, “minimally sufficient” is still not acceptable once you add compliance, brand risk, liability, or customer trust. A model that drafts a decent email is not the same as a model that can safely run an HR workflow end-to-end.

Adoption measures: the bottleneck is often organizational, not technical

MIT CSAIL’s economic-feasibility framing is a useful antidote to pure capability talk: even a human-level system may be too expensive or too hard to integrate widely.

That’s why the “AI takes all jobs in five years” narrative regularly collides with enterprise reality: procurement cycles, systems integration, data access, security reviews, and the awkward fact that your company still has a critical workflow running on an Excel spreadsheet last edited in 2013.

Expert perspective: why elasticity is politically and ethically explosive

If we had high-quality elasticity estimates for services, we’d be able to model scenarios more honestly. But we’d also trigger uncomfortable conversations:

  • If demand is inelastic, productivity gains may mean fewer jobs in that niche—so how do we support transitions?
  • If demand is elastic, markets might expand—so how do we ensure gains translate to higher wages or better working conditions rather than just “more work”?
  • If demand expands massively, do we face quality dilution (a flood of cheap content, cheap legal drafts, cheap everything)?

Elasticity data would also clarify something policymakers struggle with: not just “which jobs,” but “which markets” will scale. That determines where training pipelines, credentialing reform, and labor protections should focus.

Implications for workers: how to read AI risk signals without doomscrolling

Let’s make this actionable. You can’t personally compute price elasticity for your occupation. But you can ask better questions than “am I exposed?”

Five practical questions that map to the elasticity reality

  • Is my work sold into a competitive market? Competitive markets pass savings to customers faster, making demand response more relevant.
  • Is there unmet demand today? If customers already want more than the industry can supply (backlogs, waitlists), lower costs often unlock growth.
  • Is quality measurable and regulated? If mistakes are costly (health, finance, law), adoption may be slower and augmentation may dominate longer.
  • Are there natural capacity constraints? Physical constraints (clinic capacity, classroom time) can cap expansion even if AI makes some tasks cheaper.
  • What’s the bottleneck in my workflow? If the bottleneck is relationships, approvals, or data access—not drafting text—AI may not reduce prices much.

The “intern problem”: why AI might change the ladder, not just the jobs

Even when AI doesn’t eliminate a profession, it can reshape entry-level pathways. If AI handles the first drafts, the routine documentation, or the basic analysis, then the work that used to train juniors might shrink. Companies may hire fewer juniors, or redefine what “junior” means.

Elasticity interacts here too: if demand expands, firms may still hire juniors—but they may expect them to operate at a higher level faster, with AI as a baseline tool.

Implications for companies: productivity gains don’t automatically become headcount cuts

For business leaders, elasticity is the difference between two very different AI strategies:

  • Cost-takeout strategy: reduce labor input, keep output constant
  • Growth strategy: hold labor roughly constant, expand output and capture market share

Most real companies will do some of both, but the balance depends on how demand responds—and on competitive pressure. If your competitors use AI to cut prices, you may be forced into a demand-expansion race whether you like it or not.

One underappreciated risk: AI can make it easier to produce mediocre output at scale. If “cheap and okay” floods the market, customers may pay a premium for verified expertise, compliance guarantees, and accountability. That can create a two-tier market: high-volume AI-assisted service and high-trust human-led service.

Implications for policymakers: stop funding only skills training—fund measurement

Governments are good at two things in the labor market:

  • publishing lagging indicators (employment, wages), and
  • funding training programs after disruption becomes visible.

In the AI era, we also need a third competency: leading indicators that can forecast where disruption is likely, early enough to respond.

O’Donnell’s argument implies a concrete policy shift: invest in service-market measurement—prices, quantities, quality tiers—so we can estimate elasticity across sectors. That would let governments model how an AI-driven productivity shock could translate into labor demand changes, sector by sector.

That doesn’t mean predicting the future perfectly. It means replacing vibes with bounded uncertainty.

The bottom line: “exposure” tells you what AI can touch; elasticity tells you what happens next

O’Donnell’s MIT Technology Review piece is valuable because it redirects attention from the flashiest question (“Can AI do my tasks?”) to the most economically decisive one: what happens to demand when AI makes services cheaper?

Without that data, we’ll keep recycling the same arguments: optimistic productivity utopias versus apocalyptic layoff fantasies. With it, we can at least identify which sectors are likely to shrink, which are likely to expand, and which will just mutate into new job shapes that make today’s career advice look like it was printed for a different species.

And yes, it’s mildly ironic that in a world where AI systems can generate a 40-page market report in 12 seconds, the missing ingredient for understanding AI’s impact on jobs is… a very old-fashioned dataset about prices and quantities.

Sources

  • MIT Technology Review — “The one piece of data that could actually shed light on your job and AI” (James O’Donnell, April 6, 2026)
  • MIT Technology Review Japan — Japanese edition of the same story (accessible mirror used for quoting/verification)
  • O*NET OnLine — US Department of Labor occupational and task database
  • Pew Research Center — “Methodology for O*NET analysis” (AI and jobs exposure method)
  • arXiv — “Crashing Waves vs. Rising Tides: Preliminary Findings on AI Automation from Thousands of Worker Evaluations of Labor Market Tasks” (arXiv:2604.01363)
  • MIT CSAIL — “Rethinking AI’s impact… economic limits to job automation”
  • Fortune — coverage of MIT FutureTech research on LLM task performance (April 3, 2026)
  • Anthropic — “Anthropic Economic Index report: Learning curves” (March 2026)
  • Anthropic — “Uneven geographic and enterprise AI adoption” (September 2025)

Bas Dorland, Technology Journalist & Founder of dorland.org