How I Learned to Stop Worrying and Love AI
Forget the Singularity, Embrace the Crash
Forget the singularity. The current AI paradigm is a bloated, hallucinating mess. A systemic architecture of extractive failure. And I, for one, can’t wait for the whole thing to blow up.
The Tedium of Extraction
We are currently living through the most boring phase of the Intelligent Age. It’s not boring because the technology isn’t clever; it is dazzlingly clever. It’s boring because the entire paradigm is a lazy rerun of every extractive industrial system that came before it. It’s the same old logic wearing a shiny new neural network, running on protocols as rigid and flawed as General Ripper’s infamous “Plan R”: the ultimate systemic failure built on self-imposed rules.
The core business model of today’s AI industry is AI Colonialism. These models are built to concentrate wealth and power by externalizing immense human and planetary costs. Think about the entire supply chain:
Planetary Resource Drain: These models require data centers that consume dizzying amounts of electricity and water, creating a heavy ecological footprint. The “scale at all costs” mindset, which insists that bigger models are always better, is fundamentally unsustainable, pushing us closer to environmental tipping points.
Extractive Labor: The glittering outputs of this paradigm are built on the backs of often-invisible human labor: the low-wage work, frequently in the Global South, required for data labeling, moderation, and cleanup.
The Hoarding Mechanism: The industry spins a “seductive lie” that AI productivity gains will naturally flow to the collective good. The brutal economic reality is that this productivity is systematically hoarded by concentrated firms, amplifying existing global economic inequality.
This isn’t innovation; it’s just efficient resource appropriation. It’s a systemic design flaw that is boring because it’s so predictable. The system is structurally designed to pull value upward, making the already wealthy even richer, and leaving the rest of us with a new suite of expensive tools that are excellent at generating mediocre content.
The Philosophical Limits of the Status Quo
Beyond the financial and political rot, the current AI paradigm has a profound, undeniable flaw at its core: it cannot be trusted.
This isn’t a bug. It’s a feature. Hallucination, the model’s inability to consistently guarantee truth, is an innate limitation of its architecture. This AI paradigm is optimized for coherence and fluency, not for semantic truth or verifiable integrity. This isn’t just a stochastic parrot. It’s the voice on the other end of the nuclear telephone, insisting on its own rigid, hallucinatory truth while the adults in the war room scramble for the override code that doesn’t exist.
This inability to guarantee the truth is why the current system is so corrosive to our collective well-being. It is actively eroding what I call our cognitive agency. When we outsource the difficult labor of critical analysis and deep information gathering to a machine optimized for surface-level smoothness, our own capacity for independent problem solving diminishes. We are trading long-term intellectual resilience for short-term automated convenience.
The worst outcome isn’t the explosion. The worst outcome is the tedious whimper of this system continuing for another decade, slowly flattening our minds and poisoning our information streams until we no longer have the capacity to resist or even notice the decline.
Please enjoy this brief message from our sponsors…
Well. Not really. But this article is inspired by a talk and workshop I’m hosting at this year’s Online Facilitation Unconference, November 17th–21st. Would love to have you there with us!
The Case for Celebrating the Crash
The current AI paradigm is a live-action version of the climax of Dr. Strangelove. The only way to avert disaster isn’t finding the bomber; it’s destroying the entire chain of command and control that launched it. The film’s dark genius is its celebration of agency: the ability to choose the terms of failure, rather than passively accepting the continuation of madness set by the system’s own rigid, flawless protocol.
That is the energy we need now. We need the current hyper-leveraged AI speculative bubble to burst.
Why? Because systemic correction is the only path to the Intelligent Age we actually want.
A crash isn’t just a market event. It’s a discontinuous innovation: a structural reset that makes space for a new paradigm. When the hyper-concentrated, over-scaled, extractive architecture collapses under its own weight and financial fragility, the capital and focus will be redirected.
This creates the radical necessity for something better: The Regenerative Intelligent Age.
This successor system must be built on the inverse of the current failed model. It must be care-centric and power-aware. It must prioritize:
Semantic Integrity over Computational Scale: Investing in smaller, verifiable, domain-specific AI models that prioritize localized user control and truth, rather than gargantuan, hallucination-prone general models.
Digital Commons over Private Oligopoly: Mandating that essential public knowledge and data be managed as a collectively created resource to democratize access and insulate essential systems from speculative collapse.
Cognitive Resilience over Automated Convenience: Rebuilding our critical-thinking skills and digital literacy to resist algorithmic flattening.
The current paradigm is a structural dead end. It’s the Cold War bomb shelter of the digital world: an expensive illusion of security and progress. The crash, the sudden, glorious failure of this extractive model, is not a defeat. It is the beginning of the Reversal Engineering process.
The only way out is through. Let’s not fear the volatility; let’s embrace the magnificent, necessary destruction required to finally usher in something truly intelligent, relational, and regenerative.
Let the bubbles pop. Let the chips fall where they may.
Rachel Malek’s talk and workshop, “How I Learned to Stop Worrying and Love AI,” explores the pragmatic case for embracing systemic failure as the path to the Intelligent Age. Join her at the Online Facilitation Unconference 2025 Intelligent Age Version for a participatory workshop to reverse-engineer the absurd acts of sabotage required to usher in a better future.


