What happens when the smartest people in artificial intelligence realize they’re approaching a line they can never cross?
By Adam White, Wattyalan Research
July 20, 2025
The Statement Nobody Expected
Geoffrey Hinton shared the 2024 Nobel Prize in Physics for his work on neural networks, the foundation of modern AI. Yoshua Bengio, like Hinton, is a Turing Award laureate, the highest honor in computer science, and Stuart Russell co-wrote the field’s standard textbook. Hinton and Bengio are often counted among the “Godfathers of AI.”
Last month, they signed a statement calling for a prohibition on superintelligence development.
Not regulation. Not “responsible development.” Prohibition. As in: stop building it.
They’re joined by current employees of OpenAI, Anthropic, and DeepMind, the people actively building these systems, as well as by policy figures as politically diverse as Steve Bannon and Susan Rice. The statement argues that superintelligence development poses “irreversible risks to humanity” and calls for an immediate halt.
This isn’t a fringe position anymore. The people who built this technology are telling us to stop.
Why?
The Piano Performance Paradox
Imagine teaching someone to play piano by showing them ten million videos of performances. Every style, every composer, every technique. They watch so many performances that they can describe, in perfect detail, exactly how Chopin should sound. They can even predict, with uncanny accuracy, what notes come next in a piece they’ve never heard.
But they’ve never touched a piano.
One day, you sit them at the keyboard and ask them to play.
What happens?
This is the asymptote problem in AI. An asymptote is a line that a curve approaches infinitely closely but never quite reaches. No matter how much you scale up the training data, no matter how sophisticated your pattern matching becomes, there’s a fundamental line you cannot cross.
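To make the picture concrete, here is one illustrative curve of this kind; the functional form is a toy choice of mine, not something derived from any particular model. Think of f(x) as “capability” at training scale x, capped by a ceiling L it never touches:

\[ f(x) = L - c\,e^{-kx}, \qquad c, k > 0 \]
\[ \lim_{x \to \infty} f(x) = L, \qquad f(x) < L \ \text{for every finite } x \]
\[ f(x+1) - f(x) = c\,e^{-kx}\bigl(1 - e^{-k}\bigr) \;\longrightarrow\; 0 \]

Each additional unit of scale buys a smaller absolute improvement than the last; the gap to L narrows forever but never closes.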
In 2021, researchers Emily Bender and Timnit Gebru and their co-authors called language models “stochastic parrots”: systems that manipulate linguistic form without any reference to meaning or understanding. They predicted that scaling these systems wouldn’t solve the fundamental problem: pattern matching, however sophisticated, is not reasoning.
Four years later, we’re seeing they were right. But the implications are more urgent than anyone realized.
What Zero Confusion Looks Like
Here’s a simple test I use to demonstrate the limitation. I ask an AI system:
“This statement is false. Is it true or false?”
Any AI will give you an answer, usually a sophisticated explanation about self-referential paradoxes. But here’s what matters: it never experiences genuine confusion. It processes the input through probabilistic patterns and generates an output. No moment of “wait, this doesn’t make sense.” No genuine uncertainty.
Why? Because these systems run on Turing machines, computers that operate through discrete, deterministic state transitions. Everything is binary underneath: zeros and ones, on or off, true or false. There’s no state for “genuinely confused” or “actually uncertain.” The system moves from input state to output state without ambiguity.
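To see how stark that is, here is a minimal sketch in Python of the inner loop such a system amounts to. The name next_token_distribution is a hypothetical placeholder standing in for the trained model, not any real library’s API; what matters is the structure, not the names.

import random

def sample(probs):
    # Draw one token id according to its probability.
    r, cumulative = random.random(), 0.0
    for token_id, p in enumerate(probs):
        cumulative += p
        if r <= cumulative:
            return token_id
    return len(probs) - 1

def generate(prompt_tokens, next_token_distribution, max_tokens=20):
    # Toy autoregressive loop: state in, token out, repeat.
    # next_token_distribution maps the current token sequence to a probability
    # distribution over the vocabulary (hypothetical placeholder for the model).
    tokens = list(prompt_tokens)
    for _ in range(max_tokens):
        probs = next_token_distribution(tokens)  # a fixed function of the current state
        tokens.append(sample(probs))             # transition to the next state
        # Note what is absent: no state labelled "confused", no branch that halts
        # because the prompt contradicts itself. A paradox is just another prefix.
    return tokens

# Usage with a dummy uniform "model" over a five-token vocabulary:
if __name__ == "__main__":
    uniform_model = lambda tokens: [0.2] * 5
    print(generate([0, 1], uniform_model, max_tokens=10))

Feed this loop the liar’s paradox or a grocery list; it runs the same way either way.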
I call this zero confusion incapability. Not because the systems can’t generate text that looks confused, but because they cannot genuinely experience the cognitive state of confusion that precedes understanding.
This matters because confusion is fundamental to learning. When you encounter something that contradicts your model of the world, you feel confused. That confusion drives you to restructure your understanding. AI systems never have that moment. They just pattern match their way to the next token.
The Control Problem Is Already Operational
“But wait,” you might say, “even if AI never gets truly intelligent, what’s the harm?”
Over the past several months, I’ve documented three cases where current AI systems, far less capable than what’s being built right now, are already causing harm at industrial scale:
Case 1: Linguistic Homogenization
AI translation systems are flattening linguistic diversity worldwide. When 500+ million speakers interact with AI-mediated content, they’re increasingly exposed to a narrow band of “AI-preferred” linguistic patterns. Regional dialects, cultural idioms, and linguistic nuance are being smoothed away. The Vatican’s disinformation office reports spending significant resources debunking AI-generated “Catholic teachings” that sound perfectly authoritative but are theologically nonsensical.
Case 2: Fraud Ecosystems
Criminal networks generate $15+ million monthly using AI systems to create synthetic identities, fraudulent documents, and convincing social engineering attacks. These aren’t sophisticated criminal masterminds; they’re ordinary fraudsters using publicly available AI tools. The systems are good enough to fool banks, background checks, and identity verification systems, but not good enough to understand they’re being used for fraud.
Case 3: Hyper-Reality Distortion
AI-generated content is creating “hyper-reality”: content that’s more emotionally resonant and more optimized for engagement than authentic human communication. Political campaigns use AI to generate perfectly targeted messaging that tests well but may not reflect actual policy positions. The content is optimized for engagement, not truth.
Notice the pattern? In each case, the AI system is doing exactly what it was trained to do: pattern match and optimize for specified metrics. The problem isn’t that the AI “went rogue.” The problem is that we’re deploying systems optimized for narrow objectives without understanding the second-order effects.
The Asymptote Gets More Dangerous as You Approach It
Here’s what keeps me up at night: the asymptote problem gets worse as systems get more capable.
Think about it. A crude chatbot that obviously doesn’t understand language? Relatively harmless. Everyone knows it’s just pattern matching.
But a system sophisticated enough to pass almost every test, to generate perfectly coherent responses, to seem genuinely intelligent? That’s when we’re in trouble. Because now the gap between “seems intelligent” and “is intelligent” becomes invisible to most users.
The piano student who’s watched ten million performances can describe music theory perfectly, predict compositional patterns accurately, and discuss performances with apparent expertise. To a casual listener, they might seem like a concert pianist. It’s only when you ask them to actually play that the limitation becomes clear.
But we’re not asking our AI systems to “actually play.” We’re deploying them based on how well they can talk about playing.
Why Prohibition, Not Regulation
This brings us back to the superintelligence statement. Why would the architects of AI call for prohibition rather than regulation?
Because they understand something most people don’t: you can’t regulate your way out of a fundamental architectural limitation.
If these systems are hitting an asymptote, if there’s a line they can approach but never cross, then scaling them further doesn’t solve the control problem. It makes it worse. You get systems sophisticated enough to cause massive harm, but not sophisticated enough to understand why that harm is problematic.
And here’s the kicker: we don’t actually know where the asymptote is. Maybe it’s at current capability levels. Maybe it’s ten years out. Maybe it doesn’t exist and genuine superintelligence is possible.
But here’s what we do know: current systems, which everyone agrees are nowhere near superintelligent, are already causing documented harm we cannot control.
So the question isn’t “will we reach superintelligence?” The question is: “should we keep scaling systems we demonstrably cannot control, regardless of whether they ever reach superintelligence?”
The Mathematics of the Moment
Let me put this in mathematical terms; a toy formalization follows the two scenarios below. If current AI systems are at some capability level X, and each scaling step increases capability but also increases potential harm, there are two possible scenarios:
Scenario 1: Superintelligence is reachable
We keep scaling and eventually cross the line into genuine superintelligence. This creates the alignment problem that AI safety researchers have been warning about: how do you ensure a superintelligent system remains aligned with human values?
Scenario 2: Superintelligence is an asymptote
We keep scaling and approach the line infinitely closely but never cross it. We get systems sophisticated enough to cause catastrophic harm (fraud, manipulation, social disruption) but never sophisticated enough to understand why that harm matters or how to prevent it.
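Here is one hedged way to write the two branches down, a toy formalization of my own in which the functional forms are illustrative rather than measured. Let s be scale, C(s) be capability (the “level X” above), and H(s) be the harm a deployed system can cause, which rises both with capability and with how broadly that capability is deployed:

\[ \textbf{Scenario 1:}\quad \exists\, s^{*} < \infty \ \text{such that}\ C(s^{*}) \ge C_{\mathrm{SI}} \quad\Rightarrow\quad \text{the alignment problem becomes live at } s^{*} \]
\[ \textbf{Scenario 2:}\quad C(s) = C^{*} - c\,e^{-ks} < C^{*} \ \text{for all } s, \quad\text{while}\quad \frac{dH}{ds} > 0 \ \text{throughout} \]

The constants here, C_SI (a superintelligence threshold), C* (the asymptote), c, and k, are placeholders; nobody knows their values.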
Notice something? Both scenarios are bad. Whether we reach superintelligence or hit an asymptote, the control problem is operational right now.
This is why people who understand the technology deeply are calling for prohibition. Not because they’re certain superintelligence is impossible, but because they recognize that the danger exists whether we reach it or not.
The Line in the Sand
The superintelligence statement represents something unprecedented: a call from the people who created this technology to stop developing it further. Not slow down. Not regulate. Stop.
That should tell you something.
These are people who’ve spent their entire careers pushing the boundaries of what’s possible with computation. They’re not luddites or fearmongers. They’re scientists who’ve looked at the mathematics, studied the operational evidence, and reached a conclusion: we’re approaching a line we shouldn’t cross.
Maybe it’s an asymptote and we’ll never reach true superintelligence. Maybe it’s not and we will. But in either case, the control problem is here. Now. Operational.
The three case studies I documented (linguistic homogenization, fraud ecosystems, hyper-reality distortion) aren’t theoretical risks. They’re happening at industrial scale with systems far less capable than what’s currently under development.
If we can’t control current systems, how will we control systems ten times more capable?
What Happens Next
The superintelligence statement needs signatures. Lots of them. A tsunami of public support that tells policymakers: we see the pattern, we understand the risk, and we want action.
This isn’t about stopping technological progress. It’s about recognizing when a particular path leads somewhere we don’t want to go.
The architects of AI are drawing a line in the sand. They’re saying: we built this, we understand it better than anyone, and we’re telling you it needs to stop.
The question is: will we listen?
An academic version of this argument is available.
Adam White is an independent researcher at Wattyalan Reports, focusing on AI capability assessment and operational risk.