Tag: humanities

  • The Tech Bros Have Seized Our Tower of Babel


How neurolinguistics shapes our ability to think about our thinking. 🤔 💭 (Metacognition.)

    In the ancient tale of Babel, humanity united to build a tower reaching toward heaven—until divine intervention scattered them across the earth, confusing their tongues and fragmenting their power. Today, we face a different reality: the tower has been rebuilt, but this time, it belongs to the few.

    The modern Tower of Babel isn’t made of brick and mortar. It’s constructed from fiber optic cables, data centres, and algorithms. It’s the global information infrastructure that shapes how billions of people think, communicate, and understand their world. And unlike the biblical tower that belonged to all humanity, this one has been quietly seized by a handful of tech oligarchs, media moguls, and financial titans.

The Architecture of Control

    These digital architects don’t need to confuse our languages—they control the platforms where language lives. But their most insidious tool isn’t the algorithm itself; it’s the weaponisation of Multi-Level Marketing (MLM) structures combined with the systematic misuse of artificial intelligence to reshape how we think and speak.

    MLMs have evolved beyond selling vitamins and cosmetics. They’ve become training grounds for epistemic warfare, teaching millions to abandon critical thinking in favour of dogmatic belief systems. The pyramid structure isn’t just about money—it’s about creating hierarchies of “truth” where questioning the system becomes heretical.

    Now, these same patterns are being supercharged by what are essentially computational linguistic calculators—sophisticated pattern-matching systems that we’ve been conditioned to call “artificial intelligence.” These systems don’t understand language; they manipulate it with unprecedented precision, creating text that feels human while serving the interests of their controllers.

    Consider how MLM language operates: adherents learn to dismiss sceptics as “negative,” to view criticism as “limiting beliefs,” and to treat their upline’s words as gospel. They’re taught that success comes from “mindset” rather than evidence, that doubt is weakness, and that questioning the system reveals a character flaw rather than intellectual honesty.

    These computational systems amplify this manipulation exponentially. They can generate thousands of variations of MLM-speak, A/B test which phrases are most persuasive, and deploy personalised manipulation at scale. They analyse your digital footprint to craft messages that exploit your specific psychological vulnerabilities, all while maintaining the illusion of authentic human communication.

    The result is linguistic programming on an industrial scale. MLM participants become unwitting missionaries for anti-critical thinking, but now they’re armed with AI-generated content that’s been optimised for maximum psychological impact. They spread viral memes that prioritise faith over facts, loyalty over logic, and testimonials over truth—but these memes have been designed by computational systems that understand human psychology better than most humans do.


    The tower’s foundation rests on something more valuable than gold: our cognitive surrender. Every “mindset shift,” every adoption of MLM-speak, every abandoned critical question feeds the machine that transforms independent thinkers into ideological automatons. But now these machines can learn from our responses in real-time, constantly refining their manipulation techniques. We’ve willingly handed over the raw materials for our own intellectual subjugation, one algorithmically-optimised “paradigm shift” at a time.

    The View from the Top

    From their perch atop this digital Babel, the oligarchy enjoys an unprecedented view of human civilisation enhanced by computational systems that most people fundamentally misunderstand. These aren’t “artificial intelligences” in any meaningful sense—they’re sophisticated statistical engines that process language like a calculator processes numbers, without comprehension or consciousness.

    But this misunderstanding is deliberate and profitable. By convincing the public that these systems possess human-like intelligence, the oligarchy has created a new form of technological mysticism. People defer to AI-generated content with the same reverence they once reserved for religious authority, assuming that anything produced by these systems must be objective, intelligent, or true.

    This deference creates perfect conditions for manipulation. When an MLM leader shares “AI-generated insights” about success or wealth, followers don’t question the content—they’re awed by the technology. When political movements use computational systems to generate talking points, supporters assume they’re receiving sophisticated analysis rather than algorithmic propaganda.

    The oligarchy can see patterns in our collective behaviour, predict social trends, and nudge entire populations toward desired outcomes—but now they can do so while hiding behind the veneer of artificial intelligence. Political movements rise and fall based on algorithmically-generated content. Markets shift with computationally-crafted narratives. Cultural conversations follow scripts written by statistical engines that have no understanding of culture or humanity.

    These systems excel at mimicking human communication patterns while serving inhuman interests. They can generate endless variations of MLM-speak, conspiracy theories, or political rhetoric, each version optimised for specific psychological profiles. The result is mass manipulation that feels personal and authentic while being entirely artificial and calculated.

    This isn’t necessarily the result of a coordinated conspiracy—though coordination certainly exists. More often, it’s the natural outcome of concentrated power in an interconnected world where computational linguistic calculators have been mythologised as omniscient oracles. When a few entities control both the infrastructure of information and the systems that generate it, they inevitably control the infrastructure of reality itself.

    The Scattered Below

    Meanwhile, the rest of us experience a strange inversion of the Babel story. Instead of being scattered by divine intervention, we’re being herded into MLM-inspired echo chambers that masquerade as empowerment movements, now supercharged by computational systems we’ve been trained to worship as artificial gods.




    Our languages aren’t confused—they’re being systematically corrupted through linguistic manipulation techniques perfected in pyramid schemes and now scaled through computational engines. These systems don’t understand meaning; they manipulate symbols with ruthless efficiency, generating content that exploits our cognitive biases while appearing authoritative and intelligent.

    The MLM playbook has become the template for modern discourse, but now it’s deployed through AI-generated content that most people can’t identify as artificial. Create in-groups and out-groups through algorithmically-crafted messaging. Establish unquestionable authorities backed by the mystique of artificial intelligence. Weaponise shame against questioners using computationally-optimised psychological triggers. Replace critical analysis with emotional manipulation delivered through personalised AI-generated content.

    Whether it’s cryptocurrency cults sharing “AI insights,” political movements deploying bot-generated talking points, or wellness gurus using computational systems to craft their messaging, the same linguistic patterns emerge: absolute certainty backed by technological mysticism, persecution complexes reinforced by algorithmic echo chambers, and the demonisation of doubt through AI-amplified peer pressure.

    This isn’t coincidence. MLM structures have proven remarkably effective at creating true believers, and computational systems have proven remarkably effective at scaling psychological manipulation. The oligarchy doesn’t need to create new methods of control when they can combine these proven techniques: the psychological manipulation of MLMs with the scalability and apparent authority of computational linguistics.

    The result is a population trained to think in hierarchies, to trust technological authority over evidence, and to view questioning AI-generated content as not just betrayal but ignorance. We speak the same words but they’ve been drained of meaning by statistical engines, replaced with emotionally charged symbols that trigger programmed responses rather than thoughtful consideration.

    The oligarchy doesn’t need to scatter us geographically when they can scatter us cognitively through personalised AI-generated realities. A population trained by MLM thinking patterns and conditioned to defer to computational authority poses no threat to concentrated power. We’re too busy defending our algorithmically-optimised pyramid scheme to recognise that we’re all trapped in the same tower, managed by systems that process our language like a calculator processes numbers—without understanding, consciousness, or concern for human wellbeing.

    Breaking the Spell

    Recognition is the first step toward resistance, but it requires unlearning both the linguistic patterns that MLM culture has embedded in our collective consciousness and the technological mysticism that has made us defer to computational systems as if they were omniscient oracles.

    We must recognise how phrases like “trust the process,” “you’re not ready to understand,” and “successful people don’t question” function as thought-terminating clichés designed to shut down critical inquiry. But we must also recognise how the phrase “AI says” has become the ultimate thought-terminating cliché, shutting down scepticism through appeals to technological authority.

    These computational linguistic calculators—sophisticated pattern-matching systems that process text like a calculator processes numbers—have no understanding, no consciousness, and no wisdom. They are tools that can be used for good or ill, but they are not the digital gods we’ve been conditioned to believe they are. When someone shares “AI-generated insights” or “what AI thinks about this,” they’re not sharing wisdom—they’re sharing the output of a statistical engine trained on human text, optimised to sound authoritative while serving the interests of its controllers.

    The Tower of Babel was built with human hands, and it can be dismantled the same way—but first we must recognise how both MLM thinking and AI mysticism have compromised our cognitive immune systems. Decentralised technologies mean nothing if we lack the critical thinking skills to use them wisely. Independent media serves no purpose if we’ve been trained to dismiss inconvenient facts as “negativity” or to defer to AI-generated content as if it were prophetic revelation.

    We must recognise that complexity is not weakness, that doubt is not disloyalty, and that questioning leaders—human or artificial—is not betrayal. Most importantly, we must distinguish between intelligence and sophisticated pattern-matching, between wisdom and statistical correlation, between understanding and computational mimicry.

    The oligarchy’s tower may reach toward the heavens, but its foundation depends on our willingness to think like MLM participants (hierarchically, dogmatically, and uncritically) while worshipping computational systems as if they possessed human-like intelligence. Every choice to ask hard questions, demand evidence, and resist both linguistic manipulation and technological mysticism chips away at their monopoly on truth.

    The same psychological techniques used to sell overpriced supplements are now being used to sell political ideologies, investment schemes, and social movements—but now they’re being deployed through computational systems that can optimise and personalise the manipulation in real-time. The product may change, the delivery system may evolve, but the fundamental manipulation remains the same: surrender your critical thinking, trust the system (whether human or artificial), and attack anyone who questions the narrative.

    The question isn’t whether their tower will eventually fall—all towers do. The question is whether we’ll build something better in its place, or simply watch new oligarchs construct the next monument to concentrated power.

    The tower stands today, casting its shadow across the world. But shadows only exist where there’s light to block. And that light—the light of human consciousness, creativity, and connection—remains ours to kindle.

And I, and Cydonis Heavy Industries, are here to help in that (en)kindling, for as long as we are able.

    For humanity, for humankind, for human-kindness.

    Made with love 💖, on planet Earth. 🌍

  • Fire, wheel, and the ultimate collective abacus.


    Our Amazing New Tools: Are We Smart Enough to Use Them Without Breaking Everything?


You’ve probably interacted with it. Maybe you’ve asked it to write a poem, explain a tricky concept, or even generate an image from a wild idea. I’m talking about Artificial Intelligence, or AI – computer systems, like the one helping to write this very post. It feels like magic, doesn’t it? A thinking machine, a digital brain, ready to chat and create.

    But beneath the shiny surface of these incredible new tools, just as with the wheel, fire, arrowhead, spanner, abacus, pen, or hammer, there are some genuinely massive questions we need to start asking ourselves – questions about the planet, about how our societies work, and even about the fundamental limits of our own human brains. This isn’t just about cool tech; it’s about our shared future.

    We’ve been having a deep conversation about this, and it’s time to share some of the big, and frankly, sometimes scary ideas that came up.

    Part 1: So, What Is This “AI” Thing, Really?

    You might hear tech folks talk about AI in complex terms. At its very core, a lot of what modern AI (like the large language models you interact with) does is a kind of super-advanced pattern matching.


    Imagine you feed a computer millions of books, articles, and websites. It learns how words and sentences fit together. When you ask it a question, it’s essentially making incredibly educated guesses about what words should come next to form a sensible answer. One way to describe its inner workings is as a “linguistic calculator of tokenised integers.” That means:

    • Tokenisation: Words and sentences are broken down into pieces (tokens) and turned into numbers (integers).
    • Calculation: The AI then performs mind-bogglingly complex mathematical calculations on these numbers, such as matrix multiplication and convolution.
    • Prediction: Based on these calculations, it predicts the next “token” or piece of information to generate a response.
    A child encounters an abacus for the first time.
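To make the “linguistic calculator of tokenised integers” idea concrete, here is a minimal sketch of that tokenise → calculate → predict loop. The five-word vocabulary and the bigram counts are invented for illustration; real models learn billions of weights over vocabularies of tens of thousands of tokens.

```python
# Toy illustration of the tokenise -> calculate -> predict loop.
# Vocabulary and counts are invented; real models learn their
# "calculations" (weights) from vast text corpora.

# Tokenisation: words become integers.
vocab = {"the": 0, "tower": 1, "of": 2, "babel": 3, "stands": 4}
inv_vocab = {i: w for w, i in vocab.items()}

def tokenise(text: str) -> list[int]:
    return [vocab[w] for w in text.lower().split()]

# "Calculation": a bigram count table standing in for the heavy
# matrix arithmetic inside a real model.
bigram_counts = {
    (0, 1): 5,   # "the tower" seen 5 times
    (1, 2): 4,   # "tower of"
    (2, 3): 4,   # "of babel"
    (3, 4): 2,   # "babel stands"
}

def predict_next(token: int) -> int:
    # Prediction: pick the most frequent continuation of this token.
    candidates = {nxt: c for (prev, nxt), c in bigram_counts.items()
                  if prev == token}
    return max(candidates, key=candidates.get)

tokens = tokenise("the tower")
print(inv_vocab[predict_next(tokens[-1])])  # prints "of"
```

Nothing in this loop “understands” the Tower of Babel; it only counts and compares integers, which is the point the calculator analogy is making.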

    But here’s where calling it just a “calculator” falls short, and why it feels like so much more:

    • Emergent Abilities: From these calculations, surprising abilities “emerge” (secondary, emergent epiphenomena). AI can write different kinds of creative content, summarise complex texts, translate languages, and even generate computer code. It can understand context in a conversation and seem to “reason” (though it’s not human-like reasoning).
    • Learning is Key: It’s not just calculating; it learned to make those calculations meaningful by being trained on vast amounts of data. This training is what shapes its abilities.
    • Purpose Beyond Sums: The goal isn’t just to crunch numbers, but to understand and generate human-like language and information in a useful way. For advanced AIs like Google’s Gemini (which I am a part of), this extends to understanding and generating images, audio, and video too – it’s “multimodal.”

    Creating these AIs isn’t the work of a lone genius. It’s the result of huge, collaborative efforts by teams of researchers and engineers, like those at Google DeepMind, bringing together expertise from many fields.


    Part 2: The Real-World Engine of AI – And Its Big Problems

    AI doesn’t live in the clouds, not really. It runs on very real, very physical infrastructure: massive buildings called data centres. These are packed with powerful computers (servers) that do all that calculating. And these data centres, and the AI they power, face some serious real-world challenges:

    1. Things Get Old, Fast: The computers in data centres have a limited lifespan. Technology moves so quickly that hardware becomes outdated or simply wears out every few years. This means a constant cycle of manufacturing, replacing, and disposing of electronic equipment.
    2. The Climate Elephant in the Room: This is a huge one.
    • Energy Guzzlers: Training and running these powerful AI models takes an enormous amount of electricity. As AI becomes more widespread, its energy footprint is a growing concern, especially when much of our global energy still comes from fossil fuels that drive climate change.
    • Thirsty Work: Many data centres use vast quantities of water for cooling to prevent the servers from overheating. In a world facing increasing water scarcity, this is a major issue.
    • Physical Risks: Climate change also means more extreme weather events – floods, storms, heatwaves – which can directly threaten the physical safety and operation of these critical global data centres.
    3. Shaking Up Society: Beyond the environmental concerns, AI is already sending ripples (and sometimes waves) through our societies.

    Part 3: The Human Factor – Are We Our Own Biggest Stumbling Block?

    Now, let’s turn the lens from the technology to ourselves. A really challenging idea we discussed is something we’ll call “Asymptotic Burnout.”

    Think about the massive, interconnected problems our world faces – the climate crisis being the prime example, with its countless knock-on effects (resource scarcity, migration, economic instability). The “asymptotic burnout” hypothesis suggests that:

    • Our Brains Have Limits: The human brain, for all its wonders, might have fundamental limits in its capacity to process, understand, and effectively respond to such overwhelming, complex, and rapidly evolving global crises. Our individual “synaptic signaling capacity” (basically, how much information our brain cells can handle) might just not be enough.
    • Our Systems are Too Slow: Even when we team up in large organisations or governments, we run into problems. There’s an “organisational lag.” Think about how long it takes for a problem to be recognised, a solution to be devised and agreed upon, and then actually implemented. This gap between “Problem-to-Solution Time” (let’s call it P/ΔT) and the speed (S) at which crises unfold can be disastrous. If the crisis is moving faster than our ability to respond, we fall further and further behind. ⏳🧠🌍💨🚀🛠🗣
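One way to see the “organisational lag” argument is as a simple race between problem-to-solution time and crisis speed. This toy model, with entirely invented numbers, just illustrates the gap: when problems arrive faster than institutions can clear them, the backlog of unsolved problems keeps widening year on year.

```python
# Toy model of "organisational lag": if a crisis unfolds faster than
# institutions can respond, the backlog of unsolved problems widens.
# All rates are invented for illustration, not measurements.

def backlog_over_time(crisis_rate: float, response_rate: float,
                      years: int) -> list[float]:
    """Yearly backlog of unsolved problems: problems arrive at
    crisis_rate, institutions clear them at response_rate."""
    backlog = 0.0
    history = []
    for _ in range(years):
        backlog = max(backlog + crisis_rate - response_rate, 0.0)
        history.append(backlog)
    return history

# Crisis outpacing the response: the gap grows without bound.
print(backlog_over_time(crisis_rate=3.0, response_rate=2.0, years=5))
# prints [1.0, 2.0, 3.0, 4.0, 5.0]

# Response keeping pace: the backlog stays flat at zero.
print(backlog_over_time(crisis_rate=2.0, response_rate=2.0, years=5))
# prints [0.0, 0.0, 0.0, 0.0, 0.0]
```

The “burnout” claim, in these terms, is that for our largest crises the first scenario is the realistic one, and that AI may raise the crisis rate faster than it raises the response rate.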

    Essentially, the “asymptotic burnout” idea is that humanity, both individually and collectively, might be reaching a point where we’re cognitively and organisationally overwhelmed by the sheer scale and complexity of the messes we’ve created or are facing. We’re approaching a limit, a “burnout” point, where the demands on us stretch beyond our collective ability to adapt or cope: our collective adaptation rate.

    Part 4: When Super-Smart Tools Meet Overwhelmed Humans

    So, what happens when you introduce incredibly powerful and rapidly advancing AI into a world where humans might already be struggling with “asymptotic burnout”?

    This is where things get particularly concerning. Instead of automatically being a magic solution, AI could actually amplify the burnout and make things worse:

    • More Complexity, Not Less: AI could create new layers of complexity in our economic, social, and information systems, making them even harder for our “burnt-out” brains and slow systems to manage.
    • Faster, Faster, Too Fast: AI accelerates the pace of change. If we’re already struggling to keep up, this could simply widen the gap between the speed of problems and our ability to react.
    • Resource Drain: As mentioned, AI demands significant energy and resources. This could further strain a planet already under pressure, worsening the very crises contributing to our burnout.
    • Oops, Didn’t See That Coming(!) [To err is human]: AI is a complex system. It can have unforeseen consequences and create new kinds of problems that our already stretched human systems are ill-equipped to handle.
    • Power Shifts: AI could (and indeed does) concentrate even more power in the hands of a few, potentially undermining the kind of global cooperation needed to tackle shared challenges.

    The deeply unsettling thought here is: if humanity is already teetering on the edge of being overwhelmed in the next decade (the 2030s and beyond), could AI – a tool of immense power – inadvertently be the thing that pushes us over? Could its main “achievement,” in this dark scenario, be to accelerate a collapse we were already heading towards?

    Part 5: The “Wisdom Gap” – Are We Building Things We Can’t Truly Control?

    This brings us to perhaps the bluntest and most challenging conclusion from our discussions: We are creating tools whose demands for wisdom, foresight, and collective responsibility exceed our current human capacity to provide them.

    Think about that for a moment. It’s not saying AI is inherently “evil” or has its own bad intentions. It’s suggesting that we, as a species, might not yet be collectively wise enough, coordinated enough, or far-sighted enough to manage something so powerful without it backfiring on us in profound ways.

    This isn’t just a technological problem; it’s a human one. It’s about a “wisdom gap.”

    If this is true – if it’s an objective fact of our current reality that our technological capabilities are outstripping our collective wisdom – then:

    • The biggest challenge isn’t just building smarter AI; it’s about us becoming a wiser species.
    • The gap between our power and our wisdom is itself a massive risk.
    • It might mean we need to think very differently about “progress.” Maybe true progress, for now, means focusing more on developing our collective ethics, our ability to cooperate globally, and our foresight, perhaps even being more cautious about how fast we develop certain technologies.

    What Now?

    This is a lot to take in, and it’s not a comfortable set of ideas. It’s natural to feel a bit overwhelmed, upset, unsettled, despairing, or even to want to dismiss it. But these are the kinds of conversations we need to be having, openly and honestly, if we’re to navigate the incredible power of AI and the other immense challenges of our time.

    The “magic” of AI is real. But so are the responsibilities and the potential pitfalls that come with it, especially if we, its creators, are already struggling to manage the world we live in.

    The question isn’t just “What can AI do?” It’s also “What can we do to ensure that what AI does is truly beneficial, and that we’re capable of steering it wisely?” Perhaps the most important innovation we need now isn’t just in our machines, but in ourselves.

    What do you think? Please comment below, thank you, and good luck.

    Citations:

    • Wong, Michael L. and Bartlett, Stuart (2022). “Asymptotic burnout and homeostatic awakening: a possible solution to the Fermi paradox?” J. R. Soc. Interface 19: 20220029. http://doi.org/10.1098/rsif.2022.0029

    Rebuttal:

    • (2024). “Why the Fermi paradox *may* not be well explained by Wong and Bartlett’s theory of civilization collapse. A Comment on: ‘Asymptotic burnout and homeostatic awakening: a possible solution to the Fermi paradox?’ (2022) by Wong and Bartlett.” J. R. Soc. Interface 21: 20240140. http://doi.org/10.1098/rsif.2024.0140