The Dawn of the Cognitive Dark Age

Around 1200 BCE, advanced civilisations across the Mediterranean crumbled in a cataclysm historians call the Bronze Age Collapse. Mighty empires toppled, trade networks shattered, and entire writing systems vanished—the Greeks abandoned Linear B, plunging into centuries of illiteracy. Many scholars point to the enigmatic "Sea Peoples" as culprits, but the deeper lesson lies in how swiftly sophisticated societies can unravel. Today, as generative AI evolves at an exponential pace before our eyes, we face a subtler yet equally profound threat: not a physical collapse, but a cognitive one. By outsourcing our thinking, creativity, and problem-solving to algorithms, we risk forging a society incapable of functioning without technological crutches. Our dystopia might feature not the comically bloated humans of WALL-E but intellectually atrophied minds, reliant on AI for tasks we once mastered ourselves. The generative AI industrial complex has emerged with staggering speed in just two years, leaving us little time to reflect on the mental faculties we might be allowing to waste away, and whether, like the post-Bronze Age world, we're unwittingly steering towards a cognitive dark age of our own design.

I propose a new term, "ensloppification," distinct from the widely recognised "enshittification." The latter describes the gradual degradation of platforms or services due to misaligned incentives, typically the prioritisation of profit over quality. Enshittification is an economic phenomenon.

Ensloppification, however, is a different creature. It’s not primarily market-driven but rooted in human nature. Ensloppification stems from the deadly sin of sloth, where individuals and organisations choose the path of least resistance, inviting a slow, creeping decline.

This descent, I contend, is most explosively accelerated by what I term the generative AI industrial complex—a system reshaping our world at breakneck speed. The story of ensloppification begins here.

The Generative AI Industrial Complex

The commercialisation and integration of generative AI into everyday tools have unfolded at an unprecedented clip; past transformative technologies like the internet took decades to reach comparable ubiquity. From sloppy content creation to subpar code generation, we've witnessed a slop-explosion rivalling the Permian-Triassic extinction in scale. Yet beneath this sheen of progress lurks a troubling truth: we're fast becoming LLM junkies, craving another hit from our Silicon Valley pushers.

Tech giants—think OpenAI, Nvidia, Meta, Microsoft, X, Anthropic, Mistral, Alibaba, DeepSeek, and others—are investing billions to build ever-more-powerful AI models, each iteration outstripping its predecessor. Their business model is straightforward: craft tools so indispensable that users can’t imagine life without them once hooked. But in embracing these efficiencies, we may be priming ourselves for a cognitive outsourcing that erodes our innate abilities.

Several unseen forces tug at this dynamic:

  • We must recognise the incentives baked into modern venture capital and tech startup culture. Burning through the cash these companies command is a Herculean feat. The San Francisco ethos is to aim beyond the known universe, hope there’s more out there, and conquer it.
  • A logistic curve, viewed up close, is indistinguishable from an exponential one. To put it bluntly, endlessly scaling parameters and compute, occasionally switching your graph axes to logarithmic, won't stave off the plateau forever.
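
The second bullet's claim is easy to check numerically. The snippet below is a minimal sketch in plain Python with arbitrary parameter choices (carrying capacity, growth rate, and midpoint are illustrative, not drawn from any real scaling data): on the early part of a logistic curve, successive values grow by a nearly constant factor, exactly as an exponential would, while near saturation the growth ratio collapses towards 1.

```python
import math

def logistic(t, cap=1.0, rate=1.0, midpoint=10.0):
    """Logistic curve with carrying capacity `cap`, growth `rate`, and inflection at `midpoint`."""
    return cap / (1.0 + math.exp(-rate * (t - midpoint)))

# Well before the midpoint, the curve behaves like cap * exp(rate * (t - midpoint)).
early = [logistic(t) for t in range(0, 5)]    # t = 0..4, far left of the inflection
late = [logistic(t) for t in range(10, 15)]   # t = 10..14, at and past the inflection

early_ratios = [b / a for a, b in zip(early, early[1:])]
late_ratios = [b / a for a, b in zip(late, late[1:])]

print(early_ratios)  # each ratio is ~e = 2.718..., i.e. indistinguishable from exponential growth
print(late_ratios)   # ratios shrink towards 1 as the curve saturates
```

The point of the sketch: an observer sampling only the early region has no way to tell the two curves apart, which is precisely why extrapolating today's scaling gains indefinitely is a gamble.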

How does the generative AI industrial complex tie into the cognitive decay of LLM slop? Simply put, these tech economies can only balance their books through mass market saturation before the infinite money glitch dries up. I’ll grant that incremental innovations—like those from Nvidia or DeepSeek—might partially staunch the bleeding. Still, the big players are pushing hard to flood the market with their models and compute, yielding half-baked products like Microsoft Copilot, which few find consistently useful. Big tech, ambitious startups, and smaller firms building on their foundations are all shoving generative AI into your hands—like a street dealer peddling wares.

The Knowledge Worker’s Dilemma

Picture John Doe, a worker at a large non-technical firm. Leadership has decreed that operational efficiency must rise, courtesy of shiny new AI tools. As AI grows adept at complex cognitive tasks, and as these tools land in workers' hands, professionals in fields from law to medicine to software development confront a stark choice: adopt AI to stay competitive, risking skill atrophy, or resist and face irrelevance. Non-technical knowledge workers feel this squeeze most acutely. This is cognitive outsourcing in action.

The threat isn’t hypothetical. Cognitive psychology has long shown that skills fade without practice—our brains operate on a "use it or lose it" basis. By offloading demanding tasks to AI, we may be undermining our own intellectual growth and upkeep, much like handing your legs to a treadmill and forgetting how to walk.

I’ve seen this firsthand in my work, where I’m a lone technical spearhead in a non-technical setting, driving an internal SaaS project that leverages LLMs to automate a core process. From conversations and deployment logs, I’ve observed senior colleagues doing two things:

  • Using chatbots to handle the critical thinking needed to generate inputs for the LLM system.
  • Pushing garbage through the system, ignoring warnings, and failing to review outputs before using them in client meetings.

These are seasoned experts in their fields, yet they’re on a slippery slope. The efficiency gains are real and market-driven—we must adapt our processes, or the market will do it for us. But the cost is steep.

The Plight of Junior Professionals

While seasoned pros wrestle with this tension, I’m especially worried about junior professionals across knowledge domains. The message is stark: more AI tools mean less demand for entry-level staff. Young people already struggle to land jobs—why make it worse? And for those who do break in, heavy reliance on AI robs them of chances to master their craft’s fundamentals and sharpen their critical thinking. This isn’t alarmism—it’s a corporate trend I see looming.

For technical fields like coding, the picture is clearer. Many practitioners view LLMs as a friends-with-benefits deal. Core skills and critical thinking form a bedrock that AI should enhance, not replace, by:

  • Saving time on grunt work.
  • Breaking through coder’s block faster.

Straying from this balance dulls those vital skills.

Paradise Lost

The Bronze Age Collapse offers a striking parallel. Those ancient societies didn’t fall because they regressed technologically—they’d hit new peaks of sophistication. Rather, their complexity and interdependence bred vulnerabilities that, once breached, sparked cascading failures.

Our growing cognitive reliance on AI doesn’t make us less advanced—it exposes new weaknesses. We risk becoming a society adept at navigating AI-augmented worlds but floundering when stripped to our raw abilities.

The Greeks who lost writing didn’t grasp their loss until centuries later. Will future generations see our era as one where we traded cognitive autonomy for convenience? Will they marvel at how eagerly we surrendered the mental strengths that once defined our resilience?

Paradise Regained

Even if this Bronze Age parallel holds, what follows isn’t eternal darkness but a new dawn. From that ancient collapse rose the seeds of Western civilisation: Homer, the Iliad, the Odyssey, Greek city-states, Rome, the Renaissance, the Enlightenment, the Industrial Revolution, the internet, and now the generative AI industrial complex. Rise and fall are timeless.

Nurturing our innate human abilities will matter more than ever. Perhaps we revive viva voce exams at all educational levels—you can’t fake your way through a live clash of minds; a fool’s ignorance shines bright. More broadly, we must recall that cognitive outsourcing isn’t novel. Past generations deferred decisions to family patriarchs or morality to priests, easing their mental load at a cost. Outsourcing intelligence and critical thinking follows the same pattern—it’s better avoided. Yet those who barely think now will likely persist, and in that, we might still see a rising tide lift all boats. Those who cling to their native faculties today could shape tomorrow’s renaissance. The question remains: will we be awake, resolute, and human enough to seize the dawn after ensloppification’s dusk?

Disclaimer: This article was edited by Grok 3.