Sometimes, a book lands so perfectly in the zeitgeist that it feels less like a prediction and more like a self-fulfilling script the universe is obligated to follow. The Coming Wave by Mustafa Suleyman and Michael Bhaskar was one of those books. They saw the rise of GenAI before most of us had even considered that our chatbots might one day start writing poetry, faking research papers, or casually upending entire industries overnight.

But here we are. AI isn’t just a tool anymore—it’s a force, a co-worker, a problem-solver, and, depending on how you look at it, a beautifully efficient chaos engine. And Suleyman and Bhaskar called it all, from the breakthroughs to the breakdowns. Given Suleyman’s background as the CEO of Microsoft AI and co-founder of DeepMind, and Bhaskar’s expertise in digital publishing and technological foresight, their insights weren’t just educated guesses—they were warnings grounded in deep experience and real-world application.

One of the eeriest things about The Coming Wave is how well it nailed the paradox of AI-driven innovation: its immense potential for good, balanced precariously against its capacity for disruption… and destruction. On one hand, GenAI is revolutionizing fields like healthcare, education, and creative work. It’s diagnosing diseases, crafting compelling content, and even coding entire applications in minutes. On the other hand, it’s also making it easier than ever to generate deepfakes, flood the internet with misinformation, and automate bias at a scale humanity has never seen before. And it hungrily washes its reasoning down with oceans of energy and mountains of water. I know, the book refers to a tsunami; a tsunami is a mountain of water, ask Gilgamesh.

It’s not that we didn’t see this coming. The book practically shouted it at us. But, like kids in a candy store, we were too busy playing with the shiny new AI toys to stop and ask if we were also manufacturing a monster.

I remember reading their take on automation and thinking, this is going to get messy. They argued that AI doesn’t just take over the mindless, repetitive tasks—it’s creeping into the creative, the intellectual, the very things we once believed were uniquely human. And now? It’s happening in real time. Artists, writers, and even programmers are watching algorithms encroach on their domains, not with malice, but with a cold, mechanical efficiency that’s both impressive and a little terrifying.

It’s the same pattern we’ve seen before—industrial revolutions always bring displacement before equilibrium—but The Coming Wave warned us that this one would be different. Because this time, it’s not just hands being replaced. It’s minds.

The ethical quagmire we ignored

If you’ve spent any time lately watching AI-generated content spiral out of control (and being weaponized in democracy-threatening ways), you know exactly why The Coming Wave hit so hard. Suleyman and Bhaskar saw that once AI could generate text, images, voices, and even entire identities, we’d be in trouble. And yet, here we are, struggling to tell real from fake, fact from fiction. We thought AI would be our most powerful assistant. Turns out, it’s also our most convincing liar.

Now, the questions are getting harder. Who owns AI-generated content? Who’s responsible when it goes wrong? Can we even begin to fathom the effects on elections, demagoguery, democracy, voting behavior, perceptions of safety, and violence?

Suleyman and Bhaskar didn’t just diagnose the AI revolution; they also explored potential solutions and containment strategies. And if we’re smart, we’ll pay attention. Or should I say: if we were smart, we would have listened? The rise of decentralized AI, the push for stricter regulations, the ethical minefield of AI consciousness—all of these are simmering just beneath the surface.

I’m keeping a sharp eye on how governments respond, because regulation (or the lack of it) will dictate how AI evolves. Will we create frameworks to guide its development responsibly, or will we keep playing catch-up, trying to fix problems after they’ve already spiraled out of control?

One person doing incredible work in ensuring AI stays on the right side of humanity is John C. Havens. As the founding Executive Director of The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, and the author of Heartificial Intelligence, Havens has been tirelessly pushing for responsible AI that aligns with human values and well-being. His work in AI ethics and policy is a crucial counterbalance to the rapid, often reckless, pace of innovation. If we’re looking for a guiding voice in the ethical minefield of AI, we’d do well to pay attention to him.

The reality is, we’re in too deep to turn back. GenAI is here, and its impact is undeniable. But if there’s one lesson The Coming Wave tried to hammer home, it’s this: technology doesn’t shape the future—we do. AI will be as good or as bad as the humans guiding it. If we treat it as a shortcut, a tool for profit-maximization above all else, we’ll reap the consequences. But if we actually take responsibility, if we build with foresight instead of just excitement, we might just steer this thing toward something better.

That, or we buckle up for whatever comes next. Because if The Coming Wave was right about GenAI, you can bet it was right about what’s coming after it, too.
