
AI uprising inevitable? Why humans may never fully control it

Fri, April 17, 2026 - 11:40
Scientists warn: the smarter AI becomes, the harder it is to keep it under control
Scientists identify a critical vulnerability in AI learning (photo: Freepik)

Researchers from Oxford and other leading scientific centers have reached a striking conclusion: full control over superintelligence is logically impossible. They argue that any sufficiently advanced AI will always remain unpredictable, according to a study in PNAS Nexus.

Why raising AI won’t work

Scientists used Gödel’s incompleteness theorems and Turing’s halting problem to demonstrate a fundamental flaw in assumptions about AI development. Any sufficiently intelligent large language model (LLM) is computationally irreducible: the only way to find out what it will do next is to actually run it, so its behavior cannot be predicted in advance.

Attempts to instill human ethics into machines through direct control are doomed to fail. Eventually, a superintelligent system may find logical loopholes to bypass any moral constraints. In this view, perfect AI safety is a myth that contradicts the laws of mathematics.

A possible solution: artificial competition

Instead of trying to build a single obedient digital god, researchers propose the concept of managed disagreement. It involves creating an entire ecosystem of AI agents with different goals and characteristics.

Such a system would work through checks and balances:

  • Competing agents: each AI has its own logic and ethical framework (so-called agentic neurodivergence).
  • Continuous competition: while one model tries to complete a user task, another may prioritize safety or environmental impact.
  • Blocking dominance: differing interests prevent any single agent from gaining full control.
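The checks-and-balances idea above can be illustrated with a minimal sketch (not from the study itself; the agent names and approval rules are hypothetical). Each agent applies its own criteria to a proposed action, and the action proceeds only if no agent vetoes it, so no single agent's goals dominate:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Agent:
    """One AI agent in the ecosystem, with its own goals and ethics."""
    name: str
    evaluate: Callable[[str], bool]  # True = this agent approves the action

def decide(agents: List[Agent], proposed_action: str) -> bool:
    """Checks and balances: the action proceeds only if every
    competing agent approves it, so no single agent gains full control."""
    vetoes = [a.name for a in agents if not a.evaluate(proposed_action)]
    if vetoes:
        print(f"Blocked by: {', '.join(vetoes)}")
        return False
    return True

# Hypothetical agents with deliberately different priorities
task_agent = Agent("task", lambda act: "write" in act or "delete" in act)
safety_agent = Agent("safety", lambda act: "delete" not in act)
eco_agent = Agent("eco", lambda act: len(act) < 100)

agents = [task_agent, safety_agent, eco_agent]
print(decide(agents, "write report"))    # all approve -> True
print(decide(agents, "delete backups"))  # safety agent vetoes -> False
```

Here the task agent would happily approve "delete backups", but the safety agent's differing interests block it, which is exactly the dominance-blocking dynamic described above.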

Can open models help humans?

The study found that open AI models demonstrate a significantly wider range of perspectives than closed corporate systems. This diversity may be key to survival. If one neural network proposes a harmful solution, others could immediately detect and block it.
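The detect-and-block mechanism could look roughly like this peer-review sketch (again hypothetical: the model names and safety rules are illustrative, not from the study). Each model's proposal is reviewed by the others, and only an answer that a majority of its peers consider safe survives:

```python
from typing import Callable, Dict, Optional

def cross_check(proposals: Dict[str, str],
                reviewers: Dict[str, Callable[[str], bool]]) -> Optional[str]:
    """Each model proposes an answer; every OTHER model reviews it.
    A proposal survives only if a majority of its peers accept it as safe."""
    for author, answer in proposals.items():
        approvals = [name for name, review in reviewers.items()
                     if name != author and review(answer)]
        peers = len(reviewers) - 1
        if len(approvals) > peers / 2:
            return answer  # first answer a majority of peers accept
    return None  # every proposal was blocked

# Hypothetical open models with diverse notions of "harmful"
reviewers = {
    "model_a": lambda ans: "rm -rf" not in ans,
    "model_b": lambda ans: "rm -rf" not in ans,
    "model_c": lambda ans: True,  # a permissive model
}
proposals = {
    "model_a": "run: rm -rf /tmp/cache",            # harmful suggestion
    "model_b": "clear the cache directory manually",  # safe alternative
}
print(cross_check(proposals, reviewers))
# -> "clear the cache directory manually": the harmful proposal is blocked
```

Because the reviewing models have different perspectives, the harmful first proposal is outvoted by its peers and the safe alternative is returned instead.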

Scientists believe that in 2026, human safety will depend not on restrictions, but on creating healthy conflict within artificial intelligence itself. Only when machines watch each other will humans remain in control.
