⚡ Quick Summary
This RAND working paper introduces a practical framework for identifying fundamental limits on Artificial General Intelligence (AGI): limits grounded not in the capabilities we imagine today, but in hard physical and theoretical boundaries. Edward Geist and Alvin Moon challenge the popular perception that superintelligent AI will be capable of anything. They argue that constraints like the laws of thermodynamics, information theory, and computational complexity will continue to apply even to the most advanced machines. Using post-quantum cryptography (PQC) as a case study, the authors show how certain capabilities, such as breaking well-designed post-quantum encryption, may remain out of reach no matter how intelligent an AI becomes, because the barriers are physical and mathematical rather than merely technological. This is a pragmatic, forward-looking guide for policymakers trying to distinguish hype from hard limits.
🧩 What’s Covered
The paper offers a preliminary but structured method for assessing whether a specific task or capability is feasible in principle, regardless of how capable AGI may eventually become. It introduces a spectrum of technological possibility (Figure 1, p. 2), ranging from “definitely possible” (existing technology) to “definitely impossible” (e.g., perpetual motion machines). AGI is positioned as “probably possible,” while other technologies, like faster-than-light travel, remain speculative.
The framework identifies three primary constraints that bound what AGI (or any physical system) can achieve; a back-of-envelope sketch of the first and third follows below:
- Thermodynamics (you can’t do work without energy)
- Information theory (you can’t recover information that wasn’t encoded)
- Computational complexity (some problems are intractable even for powerful machines)
A Venn diagram (Figure 2, p. 4) shows how these constraints intersect to define the space of “possible skills/tasks.”
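To make these bounds tangible, here is a back-of-envelope sketch (my own illustration, not a calculation from the paper) combining Landauer’s principle, the minimum thermodynamic cost of erasing one bit, with the size of a 128-bit keyspace:

```python
# Back-of-envelope numbers for two of the constraints above -- my own
# illustration, not a calculation from the paper.
import math

k_B = 1.380649e-23                  # Boltzmann constant, J/K
T = 300.0                           # room temperature, K
landauer = k_B * T * math.log(2)    # Landauer limit: min energy to erase 1 bit
print(f"Landauer limit per bit: {landauer:.2e} J")   # ~2.9e-21 J

# Complexity meets thermodynamics: even if each guess of a 128-bit key cost
# only one bit-erasure at the physical minimum, brute force would need:
energy = (2 ** 128) * landauer
print(f"Energy to enumerate 2**128 keys: {energy:.2e} J")   # ~1e18 J
# That is roughly a day of humanity's total energy use, for one key, at the
# theoretical floor. Intelligence does not waive the bill.
```

Even at the theoretical floor, exhaustive search of a modern keyspace burns energy on a planetary scale; this is the kind of limit the paper argues no intelligence can think its way around.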
The second half of the paper applies the framework to a case study in cryptography.
- RSA encryption resists decryption by classical computers because recovering the private key requires factoring the product of two large primes, a problem with no known efficient classical algorithm (see the first sketch after this list).
- Quantum computing (via Shor’s algorithm, which factors integers in polynomial time) would break RSA in theory, but only if large, fault-tolerant quantum computers can actually be built.
- Post-quantum cryptography (PQC) aims to create schemes that even powerful quantum computers can’t break, not for lack of compute, but because the underlying mathematical problems are believed to be hard even for quantum algorithms (see the second sketch below).
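To ground the first two bullets, here is toy RSA with textbook-sized numbers (a hypothetical miniature, not from the paper), showing that recovering the private key reduces to factoring the public modulus:

```python
# Toy RSA with textbook-sized primes (hypothetical numbers, purely
# illustrative; real RSA moduli are 2048+ bits).
from math import gcd

p, q = 61, 53               # two small primes (kept secret)
n = p * q                   # public modulus: 3233
phi = (p - 1) * (q - 1)     # totient; computing it requires knowing p and q
e = 17                      # public exponent, coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)         # private exponent: inverse of e mod phi (d = 2753)

msg = 42
cipher = pow(msg, e, n)     # encrypt: c = m^e mod n
assert pow(cipher, d, n) == msg   # decrypt: m = c^d mod n

# An attacker who factors n recovers p and q, recomputes phi and d, and reads
# everything. Trivial at toy scale; infeasible classically at real key sizes.
def factor(n):
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1

assert factor(n) == (53, 61)
```

Trial division cracks n = 3233 instantly; at 2048 bits no known classical algorithm finishes in any realistic timeframe, and that is precisely the gap Shor’s algorithm would close if scalable quantum hardware arrives.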
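And for the third bullet, a minimal sketch of learning-with-errors (LWE) encryption, the lattice-style assumption behind NIST-selected PQC schemes such as ML-KEM (Kyber). Parameters and helper names here are my own toy choices, insecure by design:

```python
# Minimal Regev-style learning-with-errors (LWE) encryption of a single bit.
# Toy, insecure parameters chosen only to show the mechanics; the names and
# sizes are my own, not the paper's or any standard's.
import random

n, m, Q = 8, 16, 97                       # dimension, samples, modulus

def keygen():
    s = [random.randrange(Q) for _ in range(n)]             # secret vector
    A = [[random.randrange(Q) for _ in range(n)] for _ in range(m)]
    e = [random.choice([-1, 0, 1]) for _ in range(m)]       # small noise
    b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % Q for i in range(m)]
    return s, (A, b)            # public key (A, b) hides s behind the noise

def encrypt(pub, bit):
    A, b = pub
    r = [random.randrange(2) for _ in range(m)]             # random 0/1 mask
    u = [sum(r[i] * A[i][j] for i in range(m)) % Q for j in range(n)]
    v = (sum(r[i] * b[i] for i in range(m)) + bit * (Q // 2)) % Q
    return u, v

def decrypt(s, ct):
    u, v = ct
    d = (v - sum(u[j] * s[j] for j in range(n))) % Q  # = r.e + bit*(Q//2) mod Q
    return 1 if min(d, Q - d) > Q // 4 else 0         # noise is small: round

s, pub = keygen()
for bit in (0, 1):
    assert decrypt(s, encrypt(pub, bit)) == bit
```

The secret hides behind small random noise rather than factoring, and no efficient quantum algorithm for this problem is known, which is exactly why such schemes are PQC candidates.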
The authors use this case to illustrate how policymakers can identify tasks likely to remain infeasible in principle, guiding strategic decisions in national security, cryptography, and technology R&D.
💡 Why It Matters
As AI governance debates heat up, this paper offers a grounded antidote to both doomerism and techno-utopianism. By separating the limits of engineering from the limits of reality, it empowers decision-makers to focus on risks that are real, not just dramatic. If AGI can’t break the laws of physics or mathematics, then anchoring cryptographic systems in problems believed to be hard for any computer is not just wise; it is future-proof. This also reframes global competition: nations should invest in understanding the limits of AGI, not just chasing its capabilities.
❓ What’s Missing
While the framework is conceptually strong, its application is narrow, focused mostly on cryptography. Other high-stakes domains (e.g., synthetic biology, automated warfare, climate engineering) are not examined, though the authors suggest the framework could apply there. Moreover, while the constraints are well established theoretically, the paper would benefit from more empirical examples of real-world policy shaped by recognizing such constraints. Finally, the framework is not yet operationalized: what specific steps should policymakers take to classify emerging technologies through this lens?
👥 Best For
- AI governance professionals seeking rigorous frameworks to assess AGI feasibility
- National security analysts working on AI-risk mitigation, especially in cryptography
- Policy advisors and foresight teams evaluating AGI scenarios
- AI researchers interested in aligning computational theory with future capabilities
📄 Source Details
Title: What Even Superintelligent Computers Can’t Do: A Preliminary Framework for Identifying Fundamental Limits Constraining Artificial General Intelligence
Authors: Edward Geist (Senior Policy Researcher, RAND), Alvin Moon (Mathematician, RAND)
Publisher: RAND Corporation
Date: June 2025
Report No.: WR-A3990-1
Length: 19 pages
Availability: https://www.rand.org/t/WRA3990-1
Support: Independent research via philanthropic funding (Open Philanthropy, Longview, etc.)