The Singularity is Coming

Tracking humanity's most confident guesses about its own obsolescence

In 2025, Marco Trombetti predicted Singularity by June 2029
[Live countdown to June 2029]
Near enough that your mortgage might outlast your career.
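The countdown mechanic is simple to sketch. A minimal version, assuming the target "June 2029" means 2029-06-01 00:00 UTC (the site doesn't pin an exact day); negative values mean the target date has already passed:

```python
from datetime import datetime, timezone

def time_remaining(target, now=None):
    """Break the interval until `target` into days/hours/minutes/seconds.
    All fields carry the sign of the interval: negative means `target`
    is in the past."""
    now = now or datetime.now(timezone.utc)
    total = int((target - now).total_seconds())
    sign = -1 if total < 0 else 1
    total = abs(total)
    days, rem = divmod(total, 86_400)
    hours, rem = divmod(rem, 3_600)
    minutes, seconds = divmod(rem, 60)
    return {"days": sign * days, "hours": sign * hours,
            "minutes": sign * minutes, "seconds": sign * seconds}

# Trombetti's predicted date (assumed day/time)
target = datetime(2029, 6, 1, tzinfo=timezone.utc)
print(time_remaining(target))
```

A real page would re-render this once a second; the arithmetic stays the same.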

Every Prediction, Visualized

[Interactive scatter plot: predicted year of singularity (1950–2300) against the year each prediction was made (1950–2030), with a "NOW" marker.]
Marco Trombetti

Predicted: Dec 2025 · Target: 2029 (2029–2030) · Type: Singularity · Confidence: Metric-Driven Confident

Translation CEO Predicts Singularity By 2030 Using 'Time To Edit' Productivity Metric

What Even Is the Singularity?

Short answer: nobody agrees. The technological singularity is the hypothetical future point where AI gets smart enough to improve itself faster than we can keep up — the intelligence explosion — and everything after that is unknowable. It's where our trend lines go vertical and our models start returning NaN.

"The first ultraintelligent machine is the last invention that man need ever make."
— I.J. Good, 1965

The term traces back to von Neumann in the 1950s, was formalized by Vernor Vinge in 1993, and hit the bestseller lists with Kurzweil's The Singularity Is Near in 2005. Nobody agrees on what it actually is — Vinge calls it an event horizon, Kurzweil a predictable endpoint, Bostrom focuses on superintelligence — but almost all versions orbit Good's intelligence explosion: the first machine smarter than us designs a smarter one, and the loop leaves humanity as spectators. With expert timelines lurching forward by over a decade in a single survey cycle, that hypothesis is getting harder to dismiss.

This site tracks 180 predictions across four flavors of singularity, because if we're going to be obsolete, we should at least have good data visualization for it.

The Four Things We're Actually Tracking

AGI

The One Everyone Argues About

Jan 2030

Artificial General Intelligence — AI that can do anything a human can do intellectually. Learn a language, write a novel, debug your code, have an existential crisis. The whole package. Right now AI can beat you at chess and write your emails, but ask it to do both while making a sandwich and it falls apart. With 83 predictions in our dataset, everyone has an opinion on this one.

Altman: 2027 · Amodei: 2027 · Musk: 2026 · Hassabis: 2030 · Ng: 2060–2085 · Brooks: 2075
Fun Fact

Every generation of AI researchers since the 1960s has predicted AGI within 20 years. We're currently on the fifth or sixth cycle of this. But hey, a stopped clock is right twice a day, and eventually one of these predictions has to land... right? Rodney Brooks used to say 2300 to make a point about prediction futility. He's since revised to 2075. Still the most patient man in AI.

The Predictions Are Accelerating

[Chart: median and IQR of predicted year (2030–2080+) by year the prediction was made (1995–2025), with ChatGPT's launch marked.]

Median & IQR per year (minimum 5 predictions). Sparse years use an adaptive rolling window.
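The aggregation behind the chart can be sketched in a few lines. This is a minimal version, assuming each prediction arrives as a (year made, year predicted) pair; the site's adaptive rolling window for sparse years is approximated here by simply skipping years below the minimum count:

```python
from statistics import quantiles

def yearly_median_iqr(predictions, min_count=5):
    """Group predicted years by the year each prediction was made and
    return median plus quartiles (Q1, Q3) for years with enough data."""
    by_year = {}
    for made, predicted in predictions:
        by_year.setdefault(made, []).append(predicted)
    out = {}
    for made, values in sorted(by_year.items()):
        if len(values) < min_count:
            continue  # the real chart widens the window here instead
        q1, med, q3 = quantiles(values, n=4)  # quartile cut points
        out[made] = {"median": med, "q1": q1, "q3": q3}
    return out
```

The IQR band plotted per year is then simply the span from `q1` to `q3`.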

Every year, experts say we have less time left — and they're not revising gradually. The predictions are accelerating faster than the technology they're predicting.

Every generation of AI researchers since the 1960s has predicted human-level AI within 20 years — and been wrong. But the current shift is structurally different: it's driven by demonstrated capabilities, not theoretical arguments. Industry leaders with direct access to frontier models cluster years ahead of academic surveys. Whether that gap reflects genuine information advantage or commercial incentive is the trillion-dollar question.

History favors caution. But history has also never seen capability curves quite like these.

Three Camps, One Civilization

The Optimists
2026 – 2030

Industry leaders with front-row seats to frontier models. Either they know something we don't, or their stock options are doing the talking. Amodei: software engineers extinct by Christmas 2026.

Altman, Amodei, Musk, Brin, Suleyman, Son
The Moderates
2030 – 2050

Forecasting communities, academic surveys, and lab leaders. The "we've seen this hype before, but have you *seen* GPT-5?" camp. Hinton says 5-20 years and admits he has "zero idea really."

Hassabis, Metaculus, Grace survey, Cotra, Hinton, Pichai
The Skeptics
2050 – 2300

Passing benchmarks ≠ understanding. Brooks revised from 2300 to 2075 — progress! Ng says the hype "is creating the impression AI systems are far more advanced than they truly are."

Brooks, Ng, Marcus, Chollet, Nadella, Hofstadter

So... Should I Be Worried?

Depends who you ask. Industry insiders say 3-5 years. Academics say 15-20. Safety researchers say it doesn't matter when — what matters is whether we figure out alignment before we figure out capability. Even the skeptics are accelerating: Brooks revised from 2300 to 2075, and expert medians jumped forward 13 years in a single survey cycle after ChatGPT launched.

"There's a 10 to 20% chance that AI leads to human extinction within the next 30 years."
— Geoffrey Hinton, Nobel laureate & "godfather of AI"

That's what the countdown timer is for. Pick a prediction, watch the seconds tick, and decide for yourself whether to feel excited, terrified, or both. We recommend both.