What happens when the smartest people in artificial intelligence realize they’re approaching a line they can never cross?
By Adam White, Wattyalan Research
July 20, 2025
The Statement Nobody Expected
Geoffrey Hinton won the 2024 Nobel Prize in Physics for his work on neural networks, the foundation of modern AI. He and Yoshua Bengio are also Turing Award laureates, recipients of the highest honor in computer science, and are often called “Godfathers of AI.” Stuart Russell co-authored the field’s standard textbook and has spent years working on the control problem.
Last month, they signed a statement calling for a prohibition on superintelligence development.
Not regulation. Not “responsible development.” Prohibition. As in: stop building it.
They’re joined by current employees of OpenAI, Anthropic, and DeepMind (the people actively building these systems) and by policy figures as politically diverse as Steve Bannon and Susan Rice. The statement argues that superintelligence development poses “irreversible risks to humanity” and calls for an immediate halt.
This isn’t a fringe position anymore. The people who built this technology are telling us to stop.
Why?
The Piano Performance Paradox
Imagine teaching someone to play piano by showing them ten million videos of performances. Every style, every composer, every technique. They watch so many performances that they can describe, in perfect detail, exactly how Chopin should sound. They can even predict, with uncanny accuracy, what notes come next in a piece they’ve never heard.
But they’ve never touched a piano.
One day, you sit them at the keyboard and ask them to play.
What happens?
This is the asymptote problem in AI. An asymptote is a line that a curve gets arbitrarily close to but never reaches. No matter how much you scale up the training data, no matter how sophisticated your pattern matching becomes, there’s a fundamental line you cannot cross.
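One way to picture it, as a purely illustrative sketch (the specific curve is an assumption, not a claim about real scaling laws): suppose capability at training scale $n$ follows

$$C(n) = C^{*}\bigl(1 - e^{-kn}\bigr), \qquad k > 0.$$

Then $\lim_{n \to \infty} C(n) = C^{*}$, yet $C(n) < C^{*}$ for every finite $n$: each additional unit of scale buys a smaller gain, and the curve hugs the ceiling $C^{*}$ without ever touching it.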
In 2021, leading NLP researchers Emily Bender and Timnit Gebru called language models “stochastic parrots”: systems that manipulate linguistic form without meaning or understanding. They predicted that scaling these systems wouldn’t solve the fundamental problem, because pattern matching, however sophisticated, is not reasoning.
Four years later, we’re seeing they were right. But the implications are more urgent than anyone realized.
What Zero Confusion Looks Like
Here’s a simple test I use to demonstrate the limitation. I ask an AI system:
“This statement is false. Is it true or false?”
Any AI will give you an answer, usually a sophisticated explanation about self-referential paradoxes. But here’s what matters: it never experiences genuine confusion. It processes the input through probabilistic patterns and generates an output. No moment of “wait, this doesn’t make sense.” No genuine uncertainty.
Why? Because these systems run on Turing machines, computers that operate through discrete, deterministic state transitions. Everything is binary underneath: zeros and ones, on or off, true or false. There’s no state for “genuinely confused” or “actually uncertain.” The system moves from input state to output state without ambiguity.
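Here is a minimal sketch of that probe in Python. The ask() function is a placeholder for whichever chat model you have access to (an assumption, not any particular vendor’s API); whatever it returns is just another finished answer, and the only “uncertainty” you can inspect is more generated text.

```python
# Minimal sketch of the liar-paradox probe described above.
# `ask` is a placeholder for a real chat model; swap in your own call.

PARADOX = "This statement is false. Is it true or false?"

# Words that signal hedged or uncertain language in the reply. Their
# presence or absence is just more generated text, not a readout of any
# internal state of confusion.
HEDGE_MARKERS = ("paradox", "neither", "cannot", "self-referential", "undecidable")


def ask(prompt: str) -> str:
    """Placeholder model: replace with a call to an actual chat system."""
    return ("This is the classic liar paradox: the statement cannot be "
            "consistently assigned a truth value, because either assignment "
            "contradicts itself.")


def run_probe() -> None:
    reply = ask(PARADOX)
    hedges = [m for m in HEDGE_MARKERS if m in reply.lower()]
    # Either way, the system moved from input to output and produced a
    # complete answer; there was no state in which it was stuck.
    print("Reply:", reply)
    print("Hedging language found:", ", ".join(hedges) if hedges else "none")


if __name__ == "__main__":
    run_probe()
```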
I call this zero confusion incapability. Not because the systems can’t generate text that looks confused, but because they cannot genuinely experience the cognitive state of confusion that precedes understanding.
This matters because confusion is fundamental to learning. When you encounter something that contradicts your model of the world, you feel confused. That confusion drives you to restructure your understanding. AI systems never have that moment. They just pattern match their way to the next token.
The Control Problem Is Already Operational
“But wait,” you might say, “even if AI never gets truly intelligent, what’s the harm?”
Over the past months, I’ve documented three cases where current AI systems, far less capable than what’s being built right now, are causing documented harm at industrial scale:
Case 1: Linguistic Homogenization
AI translation systems are flattening linguistic diversity worldwide. When 500+ million speakers interact with AI-mediated content, they’re increasingly exposed to a narrow band of “AI-preferred” linguistic patterns. Regional dialects, cultural idioms, and linguistic nuance are being smoothed away. The Vatican’s disinformation office reports spending significant resources debunking AI-generated “Catholic teachings” that sound perfectly authoritative but are theologically nonsensical.
Case 2: Fraud Ecosystems
Criminal networks generate $15+ million monthly using AI systems to create synthetic identities, fraudulent documents, and convincing social engineering attacks. These aren’t sophisticated criminal masterminds; they’re ordinary fraudsters using publicly available AI tools. The systems are good enough to fool banks, background checks, and identity verification systems, but not good enough to understand they’re being used for fraud.
Case 3: Hyper-Reality Distortion
AI-generated content is creating “hyper-reality”: content that’s more emotionally resonant and more aggressively optimized for engagement than authentic human communication. Political campaigns use AI to generate perfectly targeted messaging that tests well but may not reflect actual policy positions. The content is optimized for engagement, not truth.
Notice the pattern? In each case, the AI system is doing exactly what it was trained to do: pattern match and optimize for specified metrics. The problem isn’t that the AI “went rogue.” The problem is that we’re deploying systems optimized for narrow objectives without understanding the second-order effects.
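A toy sketch of that dynamic, with every message, metric, and weight invented purely for illustration: an optimizer that maximizes a narrow engagement proxy has no term for accuracy, so accuracy cannot influence what it selects.

```python
# Toy illustration of optimizing a narrow proxy metric. The candidate
# messages, the proxy, and its weights are all made up for this example.

CANDIDATES = [
    ("The new policy changes tax brackets for incomes over $400,000.", True),
    ("SHOCKING: they are coming for YOUR paycheck!!! Share before it's deleted!", False),
    ("Analysts expect modest effects on most households.", True),
]


def engagement_proxy(text: str) -> float:
    """Narrow objective: reward alarm words, capital letters, and exclamation marks."""
    score = 0.0
    score += 3.0 * sum(text.count(word) for word in ("SHOCKING", "deleted", "YOUR"))
    score += 1.0 * text.count("!")
    score += 0.2 * sum(1 for ch in text if ch.isupper())
    return score


# The "optimizer" is just an argmax over the proxy. Truthfulness (the second
# field of each candidate) never appears in the objective, so it cannot matter.
best_text, is_accurate = max(CANDIDATES, key=lambda pair: engagement_proxy(pair[0]))
print("Selected message:", best_text)
print("Accurate?", is_accurate)  # False: the proxy and the truth come apart
```

Swap in any proxy you like; as long as the objective omits the thing you actually care about, a better optimizer only widens the gap.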
The Asymptote Gets More Dangerous as You Approach It
Here’s what keeps me up at night: the asymptote problem gets worse as systems get more capable.
Think about it. A crude chatbot that obviously doesn’t understand language? Relatively harmless. Everyone knows it’s just pattern matching.
But a system sophisticated enough to pass almost every test, to generate perfectly coherent responses, to seem genuinely intelligent? That’s when we’re in trouble. Because now the gap between “seems intelligent” and “is intelligent” becomes invisible to most users.
The piano student who’s watched ten million performances can describe music theory perfectly, predict compositional patterns accurately, and discuss performances with apparent expertise. To a casual listener, they might seem like a concert pianist. It’s only when you ask them to actually play that the limitation becomes clear.
But we’re not asking our AI systems to “actually play.” We’re deploying them based on how well they can talk about playing.
Why Prohibition, Not Regulation
This brings us back to the superintelligence statement. Why would the architects of AI call for prohibition rather than regulation?
Because they understand something most people don’t: you can’t regulate your way out of a fundamental architectural limitation.
If these systems are hitting an asymptote, if there’s a line they can approach but never cross, then scaling them further doesn’t solve the control problem. It makes it worse. You get systems sophisticated enough to cause massive harm, but not sophisticated enough to understand why that harm is problematic.
And here’s the kicker: we don’t actually know where the asymptote is. Maybe it’s at current capability levels. Maybe it’s ten years out. Maybe it doesn’t exist and genuine superintelligence is possible.
But here’s what we do know: current systems, which everyone agrees are nowhere near superintelligent, are already causing documented harm we cannot control.
So the question isn’t “will we reach superintelligence?” The question is: “should we keep scaling systems we demonstrably cannot control, regardless of whether they ever reach superintelligence?”
The Mathematics of the Moment
Let me put this in mathematical terms. If current AI systems are at capability level X, and each scaling step increases capability but also increases potential harm, you have two possible scenarios:
Scenario 1: Superintelligence is reachable
We keep scaling and eventually cross the line into genuine superintelligence. This creates the alignment problem that AI safety researchers have been warning about: how do you ensure a superintelligent system remains aligned with human values?
Scenario 2: Superintelligence is an asymptote
We keep scaling and approach the line arbitrarily closely but never cross it. We get systems sophisticated enough to cause catastrophic harm (fraud, manipulation, social disruption) but never sophisticated enough to understand why that harm matters or how to prevent it.
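To write the two branches down explicitly (a sketch; the symbols are illustrative, not measurements): let $C(n)$ be capability at scale $n$, let $C_{\text{super}}$ be the threshold for genuine superintelligence, and let $H(C)$ be potential harm, increasing in $C$. Then:

$$\text{Scenario 1: } \lim_{n\to\infty} C(n) = \infty \ \Rightarrow\ C(n) \ge C_{\text{super}} \text{ at some finite } n \ \Rightarrow\ \text{the alignment problem.}$$

$$\text{Scenario 2: } \lim_{n\to\infty} C(n) = C^{*} < C_{\text{super}}, \text{ while } H(C(n)) \text{ still climbs toward } H(C^{*}).$$

In both branches $H(C(n))$ increases with $n$; the asymptote changes only the ceiling, not the direction.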
Notice something? Both scenarios are bad. Whether we reach superintelligence or hit an asymptote, the control problem is operational right now.
This is why people who understand the technology deeply are calling for prohibition. Not because they’re certain superintelligence is impossible, but because they recognize that the danger exists whether we reach it or not.
The Line in the Sand
The superintelligence statement represents something unprecedented: a call from the people who created this technology to stop developing it further. Not slow down. Not regulate. Stop.
That should tell you something.
These are people who’ve spent their entire careers pushing the boundaries of what’s possible with computation. They’re not luddites or fearmongers. They’re scientists who’ve looked at the mathematics, studied the operational evidence, and reached a conclusion: we’re approaching a line we shouldn’t cross.
Maybe it’s an asymptote and we’ll never reach true superintelligence. Maybe it’s not and we will. But in either case, the control problem is here. Now. Operational.
The three case studies I documented (linguistic homogenization, fraud ecosystems, hyper-reality distortion) aren’t theoretical risks. They’re happening at industrial scale with systems far less capable than what’s currently under development.
If we can’t control current systems, how will we control systems ten times more capable?
What Happens Next
The superintelligence statement needs signatures. Lots of them. A tsunami of public support that tells policymakers: we see the pattern, we understand the risk, and we want action.
This isn’t about stopping technological progress. It’s about recognizing when a particular path leads somewhere we don’t want to go.
The architects of AI are drawing a line in the sand. They’re saying: we built this, we understand it better than anyone, and we’re telling you it needs to stop.
The question is: will we listen?
An academic version of this analysis is available.
Adam White is an independent researcher at Wattyalan Reports, focusing on AI capability assessment and operational risk.