The Rest Is Politics

What If the AI Revolution Isn’t Real?

January 25, 2026 • 19m

Summary


Overview

Arvind Narayanan, director of Princeton's Center for Information Technology Policy, challenges both AI doomsayers and AI boosters by arguing that AI is fundamentally a normal technology that will bring gradual rather than revolutionary change. He contends that attempting to assign probabilities to existential AI risk is fundamentally flawed, that stopping frontier AI development is both impossible and misguided, and that focusing on transparency, defensive AI capabilities, and current harms is more productive than speculating about catastrophe. The conversation explores the tension between security-focused executives and safety-focused legislators, the futility of the 'pause AI' movement, and growing regulatory conflicts between Silicon Valley and European policymakers.

The Futility of Probability-Based AI Risk Assessment

Narayanan fundamentally challenges the practice of assigning probabilities to AI existential risk, arguing that even sophisticated forecasting efforts produce unreliable predictions indistinguishable from uninformed speculation. He criticizes a 753-page report by expert forecasters, noting their methods involve speculative reasoning about AI potentially colonizing space or deciding to kill humans to cool the planet. Rather than debating whether the risk is 20%, 1%, or 0.1%, he advocates abandoning probabilistic thinking entirely in favor of practical interventions, though he acknowledges the risks could be real.

  • The Forecasting Research Institute produced a 753-page report with expert forecasters debating AI extinction probabilities
  • Arguments for these probabilities rest on speculation, such as AI choosing to colonize space instead of Earth or killing humans to cool the planet for better chip performance
  • These probability estimates have no empirical basis and are fundamentally bogus
  • We should not think in terms of probabilities but rather focus on practical responses to real risks
" You can have a room with these so-called super forecasters debating and you can have a room with a bunch of people who are high and debating what the future of AI is going to be, and you can't tell the difference. "
" These probabilities are all bogus, and that's my strongly held view. We should not think in terms of probabilities. "

Why Stopping AI Development Is Impossible

Narayanan argues that halting AI development would require an authoritarian world government and is in any case already too late. Models only slightly less powerful than frontier models can run on consumer hardware, and the cost of running them is dropping 10-100x annually thanks to hardware improvements and algorithmic efficiency. OpenAI's GPT-2 was once considered too dangerous to release but can now be built by graduate students in days, showing both how rapidly capabilities democratize and how poor we are at predicting danger thresholds.

  • Stopping AI development would require an authoritarian world government controlling every AI developer everywhere
  • Models one step below frontier models can run on consumer grade hardware
  • The cost of running AI models is dropping by 10-100x every year due to hardware and algorithmic improvements
  • OpenAI's GPT-2, once considered too dangerous to release, can now be built by graduate students in 1-2 days
  • There is no clear relationship between a model's computational power and its dangerous capabilities
" The only way it could work is if you have an authoritarian world government that can control every AI developer everywhere "
" Historically, when we look back, our ideas around what constitutes the threshold level of danger have kind of been comically off. "
