Fire, wheel, and the ultimate collective abacus.


Our Amazing New Tools: Are We Smart Enough to Use Them Without Breaking Everything?


You’ve probably interacted with it. Maybe you’ve asked it to write a poem, explain a tricky concept, or even generate an image from a wild idea. I’m talking about Artificial Intelligence, or AI – computer systems like the one helping to write this very post. It feels like magic, doesn’t it? A thinking machine, a digital brain, ready to chat and create.

But beneath the shiny surface of these incredible new tools – just as with the wheel, fire, arrowhead, spanner, abacus, pen, or hammer – there are some genuinely massive questions we need to start asking ourselves: questions about the planet, about how our societies work, and even about the fundamental limits of our own human brains. This isn’t just about cool tech; it’s about our shared future.

We’ve been having a deep conversation about this, and it’s time to share some of the big – and, frankly, sometimes scary – ideas that came up.

Part 1: So, What Is This “AI” Thing, Really?

You might hear tech folks talk about AI in complex terms. At its very core, a lot of what modern AI (like the large language models you interact with) does is a kind of super-advanced pattern matching.


Imagine you feed a computer millions of books, articles, and websites. It learns how words and sentences fit together. When you ask it a question, it’s essentially making incredibly educated guesses about what words should come next to form a sensible answer. One way to describe its inner workings is as a “linguistic calculator of tokenised integers” (there’s a toy sketch of this below). That means:

  • Tokenisation: Words and sentences are broken down into pieces (tokens) and turned into numbers (integers).
  • Calculation: The AI then performs mind-bogglingly complex mathematical calculations on these numbers, such as matrix multiplication and convolution.
  • Prediction: Based on these calculations, it predicts the next “token” or piece of information to generate a response.
A child encounters an abacus for the first time.
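To make those three steps concrete, here’s a toy sketch in Python. It is nothing like a real model – the tiny vocabulary, the hand-written score table, and the “pick the highest score” rule are all invented purely for illustration – but it shows the same tokenise, calculate, predict loop in miniature.

```python
# A toy illustration of the tokenise -> calculate -> predict loop.
# Real models learn subword vocabularies and billions of parameters;
# every value below is made up just to make the three steps concrete.

vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4}

def tokenise(text):
    """Break text into pieces and map each piece to an integer ID."""
    return [vocab[word] for word in text.lower().split()]

# Stand-in for the "calculation" step: a table of scores saying how
# likely each token is to follow the previous one. A real model
# computes these scores with matrix multiplications instead.
next_token_scores = {
    0: {1: 0.6, 4: 0.4},   # after "the": usually "cat", sometimes "mat"
    1: {2: 1.0},           # after "cat": "sat"
    2: {3: 1.0},           # after "sat": "on"
    3: {0: 1.0},           # after "on": "the"
}

def predict_next(token_ids):
    """Predict the next token ID from the last token seen."""
    scores = next_token_scores[token_ids[-1]]
    return max(scores, key=scores.get)

ids = tokenise("the cat sat on the")
print(ids)                 # [0, 1, 2, 3, 0]
print(predict_next(ids))   # 1, i.e. "cat"
```

A real model does the same kind of thing with tens of thousands of tokens and billions of learned weights rather than a five-word dictionary, but the loop is the same.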

But here’s where calling it just a “calculator” falls short, and why it feels like so much more:

  • Emergent Abilities: From these calculations, surprising abilities “emerge” (secondary, emergent epiphenomena). AI can write different kinds of creative content, summarise complex texts, translate languages, and even generate computer code. It can understand context in a conversation and seem to “reason” (though it’s not human-like reasoning).
  • Learning is Key: It’s not just calculating; it learned to make those calculations meaningful by being trained on vast amounts of data. This training is what shapes its abilities.
  • Purpose Beyond Sums: The goal isn’t just to crunch numbers, but to understand and generate human-like language and information in a useful way. For advanced AIs like Google’s Gemini (which I am a part of), this extends to understanding and generating images, audio, and video too – it’s “multimodal.”

Creating these AIs isn’t the work of a lone genius. It’s the result of huge, collaborative efforts by teams of researchers and engineers, like those at Google DeepMind, bringing together expertise from many fields.

"Tradition is just peer pressure from the dead."– Peter Macfadyen. ๐Ÿ“š #quotes

Amolain (@cydonis.co.uk) 2025-05-31T12:34:49.350Z

Part 2: The Real-World Engine of AI – and Its Big Problems

AI doesn’t live in the clouds, not really. It runs on very real, very physical infrastructure: massive buildings called data centres. These are packed with powerful computers (servers) that do all that calculating. And these data centres, and the AI they power, face some serious real-world challenges:

  1. Things Get Old, Fast: The computers in data centres have a limited lifespan. Technology moves so quickly that hardware becomes outdated or simply wears out every few years. This means a constant cycle of manufacturing, replacing, and disposing of electronic equipment.
  2. The Climate Elephant in the Room: This is a huge one.
  • Energy Guzzlers: Training and running these powerful AI models takes an enormous amount of electricity (there’s a rough sketch of the arithmetic after this list). As AI becomes more widespread, its energy footprint is a growing concern, especially when much of our global energy still comes from fossil fuels that drive climate change.
  • Thirsty Work: Many data centres use vast quantities of water for cooling to prevent the servers from overheating. In a world facing increasing water scarcity, this is a major issue.
  • Physical Risks: Climate change also means more extreme weather events – floods, storms, heatwaves – which can directly threaten the physical safety and operation of these critical global data centres.
  3. Shaking Up Society: Beyond the environmental concerns, AI is already sending ripples (and sometimes waves) through our societies.
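To get a feel for the scale, here’s a back-of-envelope sketch in Python. Every figure in it – the server count, the power draw, the water intensity – is an invented assumption chosen only to show how the arithmetic compounds, not a measurement of any real facility.

```python
# Rough, illustrative arithmetic for one hypothetical data centre.
# All numbers below are assumptions for the sake of the example.

servers = 10_000          # servers in this imaginary facility
watts_per_server = 500    # assumed average draw per server, in watts
hours_per_year = 24 * 365

# Energy: power (kW) x time (hours) = kilowatt-hours per year.
kwh_per_year = servers * watts_per_server / 1000 * hours_per_year
print(f"energy: {kwh_per_year:,.0f} kWh/year")        # 43,800,000 kWh/year

# Cooling water: assume a fixed number of litres used per kWh.
litres_per_kwh = 1.8      # assumed cooling-water intensity
litres_per_year = kwh_per_year * litres_per_kwh
print(f"water:  {litres_per_year:,.0f} litres/year")  # ~78.8 million litres
```

Even with these modest made-up numbers, a single facility lands in the tens of gigawatt-hours and tens of millions of litres per year – and there are thousands of data centres worldwide.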

Part 3: The Human Factor โ€“ Are We Our Own Biggest Stumbling Block?

Now, let’s turn the lens from the technology to ourselves. A really challenging idea we discussed is something we’ll call “Asymptotic Burnout.”

Think about the massive, interconnected problems our world faces โ€“ the climate crisis being the prime example, with its countless knock-on effects (resource scarcity, migration, economic instability). The “asymptotic burnout” hypothesis suggests that:

  • Our Brains Have Limits: The human brain, for all its wonders, might have fundamental limits in its capacity to process, understand, and effectively respond to such overwhelming, complex, and rapidly evolving global crises. Our individual “synaptic signaling capacity” (basically, how much information our brain cells can handle) might just not be enough.
  • Our Systems are Too Slow: Even when we team up in large organisations or governments, we run into problems. There’s an “organisational lag.” Think about how long it takes for a problem to be recognised, a solution to be devised and agreed upon, and then actually implemented. This gap between “Problem-to-Solution Time” (let’s call it P/ΔT) and the speed (S) at which crises unfold can be disastrous. If the crisis is moving faster than our ability to respond, we fall further and further behind (there’s a toy model of this just below).
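To see why that gap matters, here’s a deliberately crude toy model in Python. Every number in it – the crisis speed, the response rate, the lag – is an invented assumption; the point is the shape of the trend, not the values.

```python
# Toy model of "organisational lag": a crisis that adds severity at
# speed S, met by responses that only begin after a recognition lag
# and that clear less severity per year than the crisis adds.
# All three numbers are illustrative assumptions.

S = 1.0           # severity the crisis adds per year
RESPONSE = 0.8    # severity our institutions can remove per year
LAG = 3           # years between recognising a problem and acting

severity = 0.0
for year in range(1, 16):
    severity += S              # the crisis keeps moving regardless
    if year > LAG:             # responses only kick in after the lag
        severity -= RESPONSE
    print(f"year {year:2d}: unresolved severity = {severity:.1f}")

# Because S > RESPONSE, the unresolved severity rises every year:
# the response never catches up. That runaway gap is the "burnout"
# dynamic in miniature.
```

Shrink the lag or push the response rate above S and the curve flattens; leave them as they are and it only climbs.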

Essentially, the “asymptotic burnout” idea is that humanity, both individually and collectively, might be reaching a point where we’re cognitively and organisationally overwhelmed by the sheer scale and complexity of the messes we’ve created or are facing. We’re approaching a limit, a “burnout” point, where the demands on us stretch beyond our collective capacity to adapt and cope – beyond our collective adaptation rate.

Part 4: When Super-Smart Tools Meet Overwhelmed Humans

So, what happens when you introduce incredibly powerful and rapidly advancing AI into a world where humans might already be struggling with “asymptotic burnout”?

This is where things get particularly concerning. Instead of automatically being a magic solution, AI could actually amplify the burnout and make things worse:

  • More Complexity, Not Less: AI could create new layers of complexity in our economic, social, and information systems, making them even harder for our “burnt-out” brains and slow systems to manage.
  • Faster, Faster, Too Fast: AI accelerates the pace of change. If we’re already struggling to keep up, this could simply widen the gap between the speed of problems and our ability to react.
  • Resource Drain: As mentioned, AI demands significant energy and resources. This could further strain a planet already under pressure, worsening the very crises contributing to our burnout.
  • Oops, Didn’t See That Coming (to err is human): AI is a complex system. It can have unforeseen consequences and create new kinds of problems that our already stretched human systems are ill-equipped to handle.
  • Power Shifts: AI could (and indeed already does) concentrate even more power in the hands of a few, potentially undermining the kind of global cooperation needed to tackle shared challenges.

The deeply unsettling thought here is: if humanity is already teetering on the edge of being overwhelmed in the next decade (the 2030s and beyond), could AI – a tool of immense power – inadvertently be the thing that pushes us over? Could its main “achievement,” in this dark scenario, be to accelerate a collapse we were already heading towards?

Part 5: The “Wisdom Gap” – Are We Building Things We Can’t Truly Control?

This brings us to perhaps the bluntest and most challenging conclusion from our discussions: We are creating tools whose demands for wisdom, foresight, and collective responsibility exceed our current human capacity to provide them.

Think about that for a moment. It’s not saying AI is inherently “evil” or has its own bad intentions. It’s suggesting that we, as a species, might not yet be collectively wise enough, coordinated enough, or far-sighted enough to manage something so powerful without it backfiring on us in profound ways.

This isn’t just a technological problem; it’s a human one. It’s about a “wisdom gap.”

If this is true – if it’s an objective fact of our current reality that our technological capabilities are outstripping our collective wisdom – then:

  • The biggest challenge isn’t just building smarter AI; it’s about us becoming a wiser species.
  • The gap between our power and our wisdom is itself a massive risk.
  • It might mean we need to think very differently about “progress.” Maybe true progress, for now, means focusing more on developing our collective ethics, our ability to cooperate globally, and our foresight, perhaps even being more cautious about how fast we develop certain technologies.

What Now?

This is a lot to take in, and it’s not a comfortable set of ideas. It’s natural to feel a bit overwhelmed, upset, unsettled, despairing, or even to want to dismiss it. But these are the kinds of conversations we need to be having, openly and honestly, if we’re to navigate the incredible power of AI and the other immense challenges of our time.

The “magic” of AI is real. But so are the responsibilities and the potential pitfalls that come with it, especially if we, its creators, are already struggling to manage the world we live in.

The question isn’t just “What can AI do?” It’s also “What can we do to ensure that what AI does is truly beneficial, and that we’re capable of steering it wisely?” Perhaps the most important innovation we need now isn’t just in our machines, but in ourselves.

What do you think? Please comment below, thank you, and good luck.

Citations:

  • Wong, M. L. & Bartlett, S. (2022). Asymptotic burnout and homeostatic awakening: a possible solution to the Fermi paradox? J. R. Soc. Interface, 19, 20220029. http://doi.org/10.1098/rsif.2022.0029

Rebuttal:

  • (2024). Why the Fermi paradox may not be well explained by Wong and Bartlett’s theory of civilization collapse. A comment on: ‘Asymptotic burnout and homeostatic awakening: a possible solution to the Fermi paradox?’ (2022) by Wong and Bartlett. J. R. Soc. Interface, 21, 20240140. http://doi.org/10.1098/rsif.2024.0140
