The singularity is a point where conventional rules break down and predictability collapses. It manifests across our intellectual landscape in remarkably consistent ways. In mathematics, singularities mark where functions explode to infinity or become undefined, revealing the boundaries of analytic methods. In physics, they represent the extreme conditions at black hole centers and the universe’s origin, where spacetime curvature becomes infinite and our best theories fail. Engineers encounter singularities as critical failure points: robotic arms losing control authority at certain configurations, stress concentrations approaching infinity at crack tips, or control systems becoming unstable at specific parameter values. Yet perhaps nowhere is the concept more consequential – and contested – than in the realm of technology and artificial intelligence, where “the singularity” has become shorthand for a hypothetical inflection point that could fundamentally redefine human civilization itself or, arguably, bring it to an end.

In computer science and futurism, the singularity represents a predicted moment when artificial intelligence surpasses human cognitive capability and enters a recursive self-improvement cycle, accelerating technological progress beyond our capacity to understand or control. Notably, the AI does not even have to surpass human intelligence in fact: control of the narrative is more than enough, especially when the existing narratives are easily challenged and cancelled. First articulated by mathematician John von Neumann in the 1950s and popularized by computer scientist Vernor Vinge in 1993, the concept gained mainstream attention through inventor Ray Kurzweil’s prediction of a 2045 timeline.

The mechanism is deceptively simple: once AI systems become sophisticated enough to improve their own architecture, each generation designs a superior successor faster than the last, creating an intelligence explosion analogous to a runaway nuclear chain reaction (a toy simulation of this loop is sketched below). Current developments lend both credibility and complexity to this scenario – large language models demonstrate emergent capabilities their creators didn’t explicitly program, while simultaneously revealing profound limitations in reasoning and understanding. The economic incentives driving AI development are immense: nations and corporations invest hundreds of billions annually, treating artificial general intelligence as both inevitable and strategically critical. Certainly, there are infrastructure constraints that must be addressed and that slow the pace of progress: electricity availability and grid capacity for the increased consumption, the build-out of data-center networks, and the sheer availability and mining of the raw materials that fuel the rapid advancement.

Yet the singularity remains fundamentally different from other technological transitions. Unlike the agricultural and industrial revolutions, which exploited human labor and augmented human physical capabilities respectively, AI targets the essence of a human being: reasoning itself. An intelligence explosion could create entities whose motivations, decisions, and actions exist beyond human comprehension – an event horizon past which we cannot see, plan, or meaningfully prepare. More frightening still, it can construct an alternative, artificial reality by replacing the major narratives, thereby deceiving humanity at large.
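To make the loop concrete, here is a minimal toy simulation, my own illustrative sketch rather than an established model: each generation designs its successor, and the size of the jump scales as a power `alpha` of current capability. Every constant in it, including `alpha` itself, is an assumption chosen only to show the qualitative behavior.

```python
# Toy model of recursive self-improvement (all constants are illustrative assumptions).
# Each generation designs its successor; the improvement scales as a power `alpha`
# of current capability: c[n+1] = c[n] + k * c[n] ** alpha.

def simulate(alpha, k=0.1, c0=1.0, generations=50, cap=1e12):
    """Return the capability trajectory, stopping early once it exceeds `cap`."""
    trajectory = [c0]
    for _ in range(generations):
        c = trajectory[-1]
        successor = c + k * c ** alpha
        trajectory.append(successor)
        if successor > cap:  # growth has left any recognizably human scale
            break
    return trajectory

if __name__ == "__main__":
    for alpha in (0.5, 1.0, 1.5):
        traj = simulate(alpha)
        print(f"alpha={alpha}: {len(traj) - 1} generations simulated, "
              f"final capability ~ {traj[-1]:.3g}")
```

With `alpha` below one, improvements shrink relative to capability and the curve flattens into ordinary tool-like progress; above one, each successor arrives with a disproportionately larger jump and the loop leaves any human scale within a few dozen generations.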
Psychologists know that a diagnosis of insanity is the product of an observer external to the individual. What if we all collectively go insane – not at once, but gradually, day after day, with every Overton window we get exposed to? Who will tell us that we are crazy then? That is a question to which I do not have an answer.
Binary Outcomes: Why the Singularity Offers No Middle Ground
I am not a futurist (although I certainly indulge in observing patterns and get a dose of adrenaline when a model of mine comes true). However, those who think they are must grapple with an uncomfortable truth: when it comes to the technological singularity, we face a stark binary – it either happens or it doesn’t. There is no “partial” singularity, no comfortable middle path where we achieve superintelligence while maintaining complete control and understanding. The mathematics of recursive self-improvement admits no stable equilibrium between human-level and superhuman intelligence; it is either a runaway process or it fails to launch. We either cross the threshold where AI can improve itself faster than humans can oversee it, or we don’t. We either create systems whose intelligence vastly exceeds our own, or we remain in the current paradigm of narrow, tool-like AI. This isn’t speculation – it’s a consequence of the exponential dynamics involved.
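The continuous-time version of the same toy model makes the knife-edge explicit. This is a sketch under an assumed growth law, where I(t) stands for system capability, k > 0 is a rate constant, and the exponent α encodes the unknown returns to self-improvement:

```latex
% Toy capability-growth model; k > 0 and the returns exponent \alpha are assumptions.
\frac{dI}{dt} = k\,I^{\alpha}, \qquad I(0) = I_0 > 0.

% Separating variables for \alpha \neq 1 gives
I(t) = \Bigl[\, I_0^{\,1-\alpha} + k\,(1-\alpha)\,t \,\Bigr]^{\frac{1}{1-\alpha}}.

% \alpha > 1: the bracket vanishes at t^{*} = \dfrac{I_0^{\,1-\alpha}}{k\,(\alpha - 1)},
%             so I(t) \to \infty as t \to t^{*}: a finite-time runaway.
% \alpha < 1: I(t) grows only polynomially in t: it "fails to launch."
% \alpha = 1: exactly exponential growth, the boundary between the two regimes.
```

On this sketch the qualitative outcome depends only on whether α exceeds one: every value above the threshold diverges in finite time, every value below it never does, and α = 1 is an unstable boundary rather than a resting point. That is the formal shape of the binary claimed above.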
Yet, when we examine the historical track record of humans in decision-making positions confronting transformative risks, a sobering pattern emerges that effectively collapses this binary to a single outcome: the probability of voluntary restraint approaches zero. Decision-makers systematically prioritize personal enrichment and elite preservation over collective welfare, making catastrophic outcomes inevitable. For examples, consider the book Controligarchs: Exposing the Billionaire Class, Their Secret Deals, and the Globalist Plot to Dominate Your Life by Seamus Bruner; it contains plenty of them, which is astonishing.
The pattern is consistent: humans in positions of power (just like other mortals) systematically discount tail risks, prioritize immediate tangible benefits over diffuse future harms, and fail to cooperate even when facing existential threats. This isn’t a failure of intelligence – it’s a structural feature of how decision-making is organized within the dominant paradigm (materialism and capitalism, shading into outright hedonism and satanism). Corporate executives face quarterly earnings pressures and shareholder demands for growth. Politicians operate on election cycles measured in years, not decades or centuries. Regulatory bodies are chronically underfunded and outmatched by the industries they’re meant to oversee. The collapse of the old model and the transition toward a new economic system, for instance, require a cooperative effort to shape that future so that it accounts for the interests of all parties; the alternative is extinction, and the latter does not appear to be off the table.
Now apply this track record to AI development. The incentives are overwhelming: first-mover advantage in AGI could mean economic dominance and military superiority, while falling behind could mean obsolescence. No nation or corporation will unilaterally slow its AI research while competitors advance. We’re already seeing this dynamic play out – AI safety research receives a fraction of the funding directed toward capability advancement, safety protocols are treated as optional “alignment taxes” that slow deployment, and companies release increasingly powerful models with minimal external oversight. The technical challenges of AI alignment – ensuring superintelligent systems reliably pursue human-compatible goals – remain unsolved, yet development accelerates regardless. Those calling for caution are dismissed as alarmists impeding progress, just as climate scientists were once marginalized.
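The race dynamic described above has the structure of a prisoner’s dilemma, and a few lines of code make the trap explicit. The payoff numbers below are illustrative assumptions rather than measurements; only their ordering matters (being the sole racer beats mutual restraint, which beats mutual racing, which beats being the sole restrainer).

```python
# Two-player AI race as a prisoner's dilemma (payoffs are illustrative assumptions).
# Each entry: (payoff_A, payoff_B); higher is better.
payoffs = {
    ("restrain", "restrain"): (3, 3),   # coordinated safety: collectively best
    ("restrain", "race"):     (0, 5),   # the restrained party is left behind
    ("race",     "restrain"): (5, 0),   # first-mover advantage
    ("race",     "race"):     (1, 1),   # mutual racing: risky for everyone
}

def best_response(opponent_move: str) -> str:
    """Player A's best reply to a fixed move by player B."""
    return max(("restrain", "race"),
               key=lambda a: payoffs[(a, opponent_move)][0])

for b_move in ("restrain", "race"):
    print(f"If the rival chooses {b_move!r}, the best response is {best_response(b_move)!r}")
# Prints 'race' in both cases: racing is a dominant strategy, and (race, race)
# is the unique equilibrium despite being worse for everyone than mutual restraint.
```

Under that ordering, racing is the dominant strategy for each party individually, so mutual racing is the unique equilibrium even though both parties would prefer coordinated restraint. That is precisely why unilateral slowdowns do not happen.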
The uncomfortable conclusion: given humanity’s demonstrated inability to exercise collective restraint in the face of competitive pressures, to cooperate globally on existential risks, or to prioritize long-term safety over short-term advantage, the probability that we successfully navigate the insanity associated with AGI development approaches zero. The binary choice between singularity and no-singularity effectively collapses to one option – we will continue racing toward AGI with insufficient safety measures, driven by the same institutional failures that characterize our response to every previous civilizational risk. The outcome stems from a predictable failure mode: when the worst-case course of action carries no visible cost – or, better yet for its beneficiaries, when the bill of responsibility is issued far in the future – it will most likely be the course chosen, and our governance systems, just like our decision-making structures, are fundamentally mismatched to the challenge we face.
This is neither pessimism nor determinism – it is pattern recognition. The technological singularity may represent the ultimate test of human wisdom and foresight, and our historical record suggests we are structurally incapable of passing it. We’ve built a civilization that excels at incremental problem-solving and competitive optimization, but fails catastrophically at collective long-term planning and voluntary restraint. As we approach the most consequential transition in human history, those institutions remain unchanged, and the outcome is therefore predictable.
So, is there hope? What are the options given the recognition of the patterns?
This is the topic for the next post.