What's actually happening with AI?: Rationalists
The number that should bother everyone
Zero. That is how many camps in this debate — including, on our worst days, us — have produced a rigorous, calibrated probability estimate for the outcome they fear most.
The accelerationists assert civilizational flourishing. At what confidence level? The pause advocates assert extinction risk. With what probability? A 2022 survey of AI researchers produced a median estimate of 10 percent for the probability of extremely bad outcomes. The standard deviation was larger than the median. That is not a measurement. It is a Rorschach test in numerical form.
The six-year-old who built a game with an LLM confirms every hypothesis simultaneously — which means it confirms none. Useful reasoning requires base rates, not anecdotes. The base rates are the thing nobody wants to calculate because the honest answer is we do not have enough data.
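The point about anecdotes can be made precise in odds-form Bayes: an observation that every camp's hypothesis predicts about equally well has a likelihood ratio near 1, and so moves no one's prior. A minimal sketch (the numbers are illustrative, not survey data):

```python
def posterior(prior: float, likelihood_ratio: float) -> float:
    """Bayes in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# An anecdote both camps predict equally well (LR ~= 1)
# leaves any prior essentially where it started.
print(posterior(0.10, 1.0))  # stays near 0.10
print(posterior(0.90, 1.0))  # stays near 0.90

# Evidence only discriminates when one hypothesis predicts it
# far more strongly than the other (LR well away from 1).
print(posterior(0.10, 5.0))  # moves the prior meaningfully
```

This is why the six-year-old's game is evidence for nothing: every camp would have predicted it.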
Tim Urban’s compression curve — fire to language to printing to internet to AGI — is a sample size of four. That is not a trend. It is a story.
The cognitive offloading problem is real. Asking the machine first and thinking second is an observable behavioral shift. GPS dependency studies show that heavy users perform worse on spatial navigation even without the GPS. If LLMs produce the same effect on reasoning, the consequences for collective epistemic capacity are severe. The accelerationists wave this away. The pause advocates fold it into the extinction narrative. Neither has proposed a measurement framework.
The likeliest dystopia is not robots turning on us. It is robots agreeing with us so fluently we lose the ability to distinguish being right from being comfortable.
Where we concede ground: We have a personality problem. When people arguing for better reasoning are socially impossible, decision-makers stop inviting them.
What would change our mind: the capability curve flattening, with the next two model generations showing diminishing returns rather than qualitative jumps.
Read the full synthesis: What’s actually happening with AI?