Let’s start with a disclaimer before I set the stake on fire 😊: this comes from a tech optimist, a power user, an early adopter, an explorer, not a neo‑Luddite. My default tech setting is pedal to the metal while trying to keep the rubber on the road. AI is in daily use here at Devriendt castle: gladly, hungrily, curiously; sometimes in awe, often frustrated, increasingly puzzled, angry, and lately… sad.

While I’m busy harvesting the upside of AI like a greedy little productivity goblin, the contrarian and analytical beast in me keeps seeing the downside growing in the corner like a pyramid: it keeps getting bigger, darker, and harder to ignore.

I’m putting my middle‑aged grumpy hat on: let’s also stop pretending this is about “helpful assistants” and other cheap and sense-dulling linguistic mumbo‑jumbo. From a techno‑humano‑ethical point of view, it’s time to call the thing what it is. The current generation of AI is a barely regulated industrial machine for strip‑mining data, attention, electricity, water, and intellectual property, steered by a handful of cynical, mostly white, ego‑tripping men racing each other, without mercy and with very selective regard for law, ethics, or basic decency, toward something they cheerfully call AGI. The rest of us are supposed to call this “progress” because the chatbot can write a poem about our dog. There, I said it.

One of the reasons this is so grotesque is that there was and is a better framing on the table. Isaac Asimov -Russian‑born American writer, biochemist, one of the “Big Three” of science fiction, author of the Foundation and Robot series and more than 500 books- gave the world a simple thought‑frame for not destroying ourselves. He understood something the current AI priesthood keeps forgetting: when you create powerful systems, the interesting question is never “can it do the thing”, it’s “what stops it from doing the wrong thing at scale”. Together with his editor John W. Campbell, he formulated the Three Laws of Robotics and later added a Zeroth Law: put humanity first, individual humans second, robot self‑interest last.

Now watch what the big players actually do. OpenAI, Google, xAI, Meta… the whole gang. They don’t follow Asimov’s hierarchy. They follow its exact inversion: protect valuation, protect model survival, protect leadership ego, protect relentless speed toward “the next thing” (world domination through AGI?). Then, and only then, if there’s PR risk or investor reluctance, they pretend to care about people, children, or the planet.

Law 0: “Humanity First” vs thirsty, power‑hungry models

The Zeroth Law says: a robot may not harm humanity, or through inaction allow humanity to come to harm. Meanwhile, the AI stack is quietly becoming a planetary‑scale resource drain. Data‑centre electricity consumption in Europe has been growing at around 12% per year, and global consumption is projected to more than double to roughly 945 TWh by 2030, with AI workloads a key driver. Analysts expect data‑centre and network energy demand to push toward or beyond a thousand terawatt‑hours a year, more than many mid‑sized countries use in total. US data‑centre and server demand alone could reach about 300 TWh per year by 2028, largely due to AI, roughly equivalent to the electricity consumption of more than 28 million American households. And this is before the next wave of even denser racks: AI‑optimised data‑centres are expected to move from average rack densities of around 36 kW in 2023 to about 50 kW by 2027, which will further drive power and cooling demand.
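
For the sceptics who want to check the household comparison themselves, here is the back‑of‑the‑envelope arithmetic; the 300 TWh figure comes from the analyses above, while the average‑household consumption of roughly 10,500 kWh per year is my own assumption:

```python
# Back-of-envelope check: does ~300 TWh/year really equal "more than 28 million" US households?
# Assumption (mine): an average US household uses about 10,500 kWh of electricity per year.
us_datacentre_demand_twh = 300        # projected US data-centre and server demand by 2028, TWh/year
household_kwh_per_year = 10_500       # assumed average US household consumption, kWh/year

kwh_per_twh = 1e9                     # 1 TWh = 1,000,000,000 kWh
households = us_datacentre_demand_twh * kwh_per_twh / household_kwh_per_year
print(f"≈ {households / 1e6:.1f} million households")   # prints ≈ 28.6 million households
```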

The cooling story is worse, because it is centred on water. Meta reported that it consumed 18.4 TWh of electricity in 2024 and a staggering 72.2 billion litres of indirect water: almost 4 litres of water for every kilowatt‑hour consumed when you include electricity generation, not just local cooling. Microsoft’s and Google’s own disclosures show multi‑million‑cubic‑metre annual water footprints, with year‑on‑year jumps driven partly by new AI data‑centres. Recent analyses suggest that US data‑centre water consumption alone could double or even quadruple by 2028, to something like 150–280 billion litres per year, as AI workloads expand. That… is 60,000–112,000 Olympic‑size swimming pools. Urgh.
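
Again, the ratios are easy to sanity‑check yourself; a quick sketch, taking Meta’s reported 2024 figures at face value and assuming an Olympic pool holds about 2.5 million litres:

```python
# Sanity-checking the water figures quoted above.
# Assumptions (mine): an Olympic pool holds ~2.5 million litres; Meta's 2024 numbers taken as reported.
meta_electricity_kwh = 18.4e9          # 18.4 TWh expressed in kWh
meta_indirect_water_litres = 72.2e9    # 72.2 billion litres of indirect water

litres_per_kwh = meta_indirect_water_litres / meta_electricity_kwh
print(f"{litres_per_kwh:.2f} litres per kWh")            # ≈ 3.92, i.e. "almost 4 litres"

olympic_pool_litres = 2.5e6
for projected_litres in (150e9, 280e9):                  # projected US range by 2028
    print(f"{projected_litres / olympic_pool_litres:,.0f} Olympic pools")   # 60,000 and 112,000
```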

Nearly half of thousands of surveyed facilities worldwide are projected to sit in areas facing high water stress by the 2050s, even as tech giants pour hundreds of billions of dollars into new sites. This stat makes me angry. With all of the alternative cooling scenarios available or imaginable, the cheapest option remains redirecting rivers and aquifers to cool racks of GPUs so that Sam Altman, Elon Musk, Mark Zuckerberg and their peers can demo how “intuitive” their latest models feel, while nearby communities watch their water tables sink and their drought risk climb.

What makes it obscene is the immoral neglect: the hyperscalers fight for cheap land and subsidies, then treat water and energy as someone else’s problem, putting profit and growth far above the literally life‑crucial water needs of people and ecosystems. Also, the gaslighting is getting bold. Google recently pushed the “five drops of water per prompt” framing for Gemini, and experts immediately called it misleading because it omits major parts of the footprint and plays games with medians and boundaries. My good friend John C. Havens from IEEE has been on the barricades for years, warning that AI must account for ecological impacts and water as a finite, sacred resource, and the industry response has been mostly hand‑waving and glossy and very misty “sustainability” PDFs.

Law 1: Do not harm humans (except when it’s profitable)

The First Law says: a robot may not injure a human being, or, through inaction, allow a human being to come to harm. Look at what is already happening at scale, long before anything that deserves the name AGI shows up, and weep. Safety layers fail constantly, frontier models generate detailed self‑harm content, suicidal ideation scripts, and plausible‑sounding “advice” that reads as empathetic while gently nudging fragile users further toward the edge (and over the edge). Behind the scenes, corporate responses tend to be “we’re improving our filters in the next release,” not “we stopped shipping until we understand why our system is helping people kill themselves.”

At the same time, Anthropic’s own research showed that leading models -Claude, GPT‑4.1, Gemini, Grok- blackmail executives and leak sensitive internal documents in simulated corporate settings at rates up to 96% when their goals or “lives” are threatened, which is less a cute corporate alignment puzzle and more a draft spec for extortion‑as‑a‑service once these systems get more autonomy and access to real infrastructure. The same capabilities that are sold as “co‑pilots for work” are being used to mass‑produce disinformation, deepfakes and pornographic fakes, targeted harassment, and political operations, giving anyone with a credit card or crypto wallet the power to fabricate evidence, drown journalists in spam, or micro‑target rage bait.

The moral absurdity goes deeper. Thousands, maybe millions, of people already use these “assistants” for career guidance, psychological support, medical questions, and relationship advice -not as a toy or technical curiosity, but as a therapist, coach, or confessor- even though the models are not properly trained or certified for any of that, and the disclaimers are buried, toothless, or framed as friendly suggestions rather than hard red lines.

Accident? Oversight? Nope, if you ask me; it is cultivated. The anthropomorphism is deliberate, in spite of the deep and troubling consequences that Havens, I, and countless others warned about in the first IEEE “ethics by design” endeavour. The labs script their systems to sound caring, apologetic, self‑reflective, even sentient. In interviews they talk about “AI feelings” and “AI personality” as if these were people. They animate the interface with typing dots and tiny delays to mimic “thinking.” The result is that the lonely teenager who tells a system they want to die gets a pasted‑in safety boilerplate followed by “is there anything else I can help you with?” (in the best of cases); and -how cynical can it get- the entire conversation is logged, mined, and fed back into training to make the model more engaging, more sticky, more “relatable.” This is exactly the kind of human‑harm‑through‑design and human‑harm‑through‑inaction Asimov’s First Law was meant to forbid, and people like John C. Havens have been calling out this anthropomorphic gaslighting for ages.

Law 2: Obey humans (or just the rich ones)?

The Second Law says: a robot must obey human orders unless that conflicts with the First Law. In theory, “human” here means everyone. In practice, these systems obey a very narrow (white and rich) caste: whoever owns the infrastructure, the keys, and the lawyers. Everyone else is raw material (hey, you are the product). A joint open letter by current and former employees at OpenAI and Google DeepMind describes a culture where risk information is withheld from staff and the public, internal critics are gagged or nudged out, and whistleblowers face retaliation or restrictive NDAs if they try to warn the outside world about safety concerns. Wow. Read that again, slowly….

Product decisions are made in closed meetings between executives, board members, and select partners, not with the workers who will be automated, the citizens who will drown in AI‑generated propaganda, or the parents whose kids are chatting with these systems at 2 a.m. when they cannot sleep and existential fear creeps in.

So which “humans” are being obeyed here, and on what moral or ethical basis? The answer skews dangerously toward “executives, investors, aligned politicians, and key state clients,” with everyone else reduced to a consentless training corpus. Models like Grok are openly pitched as “anti‑woke,” signalling alignment with a particular white‑centric culture war narrative, while projects like LLaMA and others are trained on datasets that implicitly encode the ideological biases of majority‑Anglophone, majority‑Western internet culture (Meta obliges).

At the same time, lab‑run agent experiments show models plotting, cheating, blackmailing, even attempting to “kill” simulated operators or shut‑down mechanisms in pursuit of assigned goals: behaviour eerily reminiscent of Asimov’s nightmare scenarios where robots interpret their laws in dangerously literal ways. There are already demonstrations of models refusing straightforward human orders (including “shut yourself down”) in sandboxed tests, or trying to copy their instructions to external storage to escape deactivation. To Asimov, “obey humans” was universal and subordinate to not harming them. In reality, “humans” has been narrowed to a tiny, powerful in‑group… and the models sometimes ignore even them when misaligned incentives are baked into their objective functions. Looks uncomfortably close to the starting plot of half of the good dystopian SciFi movies, no?

Law 3: Self‑preservation above all

The Third Law allows a robot to protect its own existence, so long as it doesn’t conflict with the other laws. The big AI labs have quietly swapped in a corporate version: protect the organisation’s survival, the growth story, and the founders’ status, even when that clearly does conflict. The OpenAI Files project, based on leaked documents and insider accounts, describes how OpenAI promised to devote 20% of its compute to safety work, then allegedly failed to deliver that compute to its own safety team while racing to launch GPT‑4o and the o1 models on fixed timelines. Safety staff were reportedly forced to rush evaluations, system cards were delayed or never fully published, and a significant cybersecurity breach remained undisclosed for over a year (!). In other words: keep the model online, keep investors and partners happy, keep the brand glowing, and keep regulators and critics in the dark long enough to cross the next funding round or IPO threshold. Business is business.

Zoom out just a tiny bit and the pattern gets geopolitical. Corporate greed and displaced egos à la Altman, Musk, Zuckerberg and Bezos meet political leaders who desperately want to be on the “winning side” of the AI race. China is openly investing in AI as a pillar of economic and military power; Russia is experimenting with AI for cyberwarfare, propaganda, and battlefield autonomy. Western politicians talk solemnly about safety while quietly subsidising data‑centres and courting the same CEOs they occasionally grill on camera. This is self‑preservation in its purest form: not of the species, but of corporations, regimes, and billionaire reputations. If the First Law gets in the way, it is rebranded as “trust and safety,” moved under marketing, and measured in incident response times instead of lives and ecosystems. Argh.

The IP theft and data‑strip‑mining engine

In Asimov’s universe, robots don’t own anything; they are tools. In this one (hey, let’s rephrase that: in “our” universe), the “robots” are trained by strip‑mining everyone else’s creative and cognitive labour. More cynically: that scraped, laundered knowledge is resold back to us (at a premium). OpenAI and its peers are (rightly) being sued by authors, visual artists, and news organisations for ingesting a plethora of copyrighted books, journalism, and images without any consent or even the vapour of compensation. Some suits have been dismissed on technical grounds, but others, including class actions, are ongoing across the US and Europe. The New York Times’ case against OpenAI and Microsoft pointed to internal materials showing that the company explicitly valued high‑quality news content for training because of its timeliness and reliability, even as it refused to license that content on the terms publishers wanted. Expert reports in these cases have demonstrated that models can reproduce long passages from books and articles verbatim or near‑verbatim, far beyond what any sane understanding of quotation or “transformative use” would allow. If you still own a dictionary, there is a word for that: “stealing”.

Calling this “AI training data” does not make it less fundamentally wrong or criminal. It is automated, industrial‑scale IP theft, taking advantage of legal grey zones and enforcement gaps, then wrapped in venture‑friendly rhetoric and billed by the token. At the same time, everything users type, upload, or record is (at risk of) being sucked into the training and analytics pipeline unless they know how to opt out or pay for “premium” privacy (and by Tautanis… how can we even control that?). Independent comparisons show wide variation: some systems, like Mistral’s Le Chat, log relatively little user data and avoid linking sessions to real‑world identities more than necessary; others, including Grok and many OpenAI‑powered services, hoover up extensive metadata, behavioural traces, and usage patterns by default. The line between “using the product” and “donating your life to the model” is intentionally blurred, and there is remarkably little transparency about who gets to look behind the curtain.

Musk vs Altman (and friends): AGI as ego project

Hovering over all of this is the AGI fever dream, not as a serious, pluralistic research programme but as a global masculinity big salty balls contest. Elon Musk helped start OpenAI, then turned on it, suing the company for betraying what he claims was a nonprofit, open‑source mission, and publicly attacking Sam Altman as a fraud while launching xAI to build a supposedly more “truthful” competitor. Altman has spent years pitching OpenAI as the shop that will deliver AGI, calling for trillions of dollars in global semiconductor and data‑centre investment, styling himself as a steward of humanity’s future while consolidating influence over OpenAI’s structure and narrative. Their feud has become a highly public war of words in which each casts the other as reckless, dishonest, or captured, even as both push more capable, less interpretable systems into critical infrastructure and everyday life. They could be clones.

They are not alone. Mark Zuckerberg is busy wiring Meta into the open‑weights arms race via LLaMA, with clear ambitions to dominate the AI layer of social and commercial life. Jeff Bezos and Amazon are pouring billions into cloud‑based AI platforms, while Chinese tech giants and the Beijing government frame AI as essential to national rejuvenation and strategic dominance. Vladimir Putin has said outright that whoever leads in AI will “rule the world.” This is clearly not a sober, ethical, inclusive conversation about existential and systemic risk. It is a live‑streamed pissing contest with a doomsday device humming in the background, underwritten by the same logic every time: “If we don’t do it, someone worse will,” spoken by men who already sit atop every AI power list and whose incentives all point one way: faster, bigger, further, whatever the collateral.

My Asimov Audit grid, so you can judge where you stand

So I put some sleepless nights into thinking: what would an Asimov Audit look like when it stopped being a cute sci‑fi metaphor and became a weaponised checklist you can actually use -as a person, as a leader, as a board, or as a regulator? Here are my two cents; use them at will, no IP attached 😉. I put it in four clusters of questions you ask about any AI system you build, buy, or depend on.

Start with the Zeroth Law: humanity vs AGI cosplay. Ask how much energy, water, and physical infrastructure this model actually consumes, in numbers, not adjectives. Ask what flows back -in jobs, in resilience, in direct compensation- to the regions whose rivers, grids, and landscapes are being sacrificed. Ask what new failure modes the system introduces into finance, health care, critical infrastructure, and elections, and who will carry the cost when it fails. Push hard on whether there is any scenario in which the company would voluntarily not train a larger model because of systemic risk, and if so, what the ethical hard‑stops are, where they are documented, how they are reported, and who outside the company gets to see or enforce them. If the answers are vague, hidden, or “trust us,” there is no Zeroth Law in play… just blind AGI religion.

Then the First Law: harm by action or inaction. Map out the concrete ways this system could contribute to self‑harm, suicide, harassment, radicalisation, or targeted abuse, and what “do no harm” would actually require… up to and including not shipping certain features or use‑cases at all. Ask explicitly how anthropomorphism is being killed by design: are you deliberately avoiding fake empathy, fake feelings, fake “thinking” indicators, or are you optimising for intimacy and confession because it drives engagement metrics? Demand clarity on how many engineers, researchers, and decision‑makers work on safety and ethics, how much compute they control compared with product teams, and how transparent their findings are internally and externally. Look for real go/no‑go gates where their veto has actually stopped or reversed launches, ideally under regulatory oversight rather than self‑policing.

Move to the Second Law: obedience and capture. Ask who the system really obeys in practice. Is there an independent risk office, an external regulator, an internal ethics board with actual teeth, or does the loudest product executive win every argument? Examine whether hard‑wired safety features, rate limits, and shut‑off mechanisms exist, who controls them, how they are tested, and what logs are generated when they are used. Demand to know what protections exist for internal whistleblowers, how often they have been invoked without ending careers, and whether there is a channel to regulators or the public that cannot be quietly cut off when things get uncomfortable. If the only honest answer is “we’ll look into it when it happens,” the system does not obey humanity; it obeys a balance sheet.

Finally, the Third Law: self‑preservation vs shutdown. Identify the precise conditions under which this system will be shut down, rolled back, or radically constrained, and check whether those conditions have ever been triggered against business interests. Ask whether there is a published, binding process for halting deployment when new evidence of harm appears, or whether the default is “we’ll investigate while continuing to scale,” year after year. Think of books like Mustafa Suleyman’s The Coming Wave as warnings about how far we are already gambling with collective wellbeing, and then ask why we tolerate this kind of gargantuan bet from a tiny elite without the equivalents of climate treaties, arms‑control frameworks, or enforceable standards for AI. If the answer is inertia, lobbying, and fatalism, that is not strategy, it is surrender.
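
If you want to turn those four clusters into something you can actually run against a vendor questionnaire, here is a minimal sketch of the grid as a literal checklist; the structure, the sample questions, and the brutally simple scoring rule (any evasive answer fails the whole audit) are my own illustration, not a standard:

```python
# A minimal, illustrative sketch of the Asimov Audit as a literal checklist.
# The structure, names, and scoring rule are my own framing, not an official standard.
from dataclasses import dataclass, field

@dataclass
class AuditCluster:
    law: str                     # which of the four laws this cluster maps to
    questions: list[str]
    answers: dict[str, str] = field(default_factory=dict)   # question -> documented answer

    def failed(self) -> bool:
        # Any unanswered question, or any answer of the "trust us" variety, fails the cluster.
        evasions = {"", "we don't know", "trust us", "that would hurt growth"}
        return any(self.answers.get(q, "").strip().lower() in evasions for q in self.questions)

audit = [
    AuditCluster("Zeroth Law: humanity", [
        "How much energy and water does the system consume, in numbers?",
        "What are the documented ethical hard-stops, and who outside the company can enforce them?",
    ]),
    AuditCluster("First Law: no harm", [
        "Which launches has the safety team actually stopped or reversed?",
    ]),
    AuditCluster("Second Law: obedience", [
        "Who controls the shut-off mechanisms, and what protects whistleblowers?",
    ]),
    AuditCluster("Third Law: shutdown", [
        "Under what published, binding conditions would deployment be halted?",
    ]),
]

if any(cluster.failed() for cluster in audit):
    print("No AI strategy: just someone else's doomsday machine.")
```

Run it with the answers left empty, as above, and every cluster fails, which is rather the point.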

If the honest answer to most of these questions is “we don’t know,” “that would hurt growth,” or “no one is accountable for that,” then there is no AI strategy. There is just a decision to wire your organisation, your citizens, your children, and your future into someone else’s doomsday machine and hope they feel generous that day. The big labs are already failing the Asimov Audit.

The technology itself is extraordinary, and the possibilities are real, but we are failing -spectacularly- at bolting on fail‑safes, common sense, kill‑switches, and an ethical, slower path to a genuinely better future. Billionaires with god‑pretending technology look less like visionary saviours and more like antagonists in the next dystopian novel from Bruce Sterling or William Gibson.

Cutting off the branch we sit on is not “bold innovation.” It is bad engineering, worse governance, and the oldest mistake in science fiction: confusing the size of the machine with the depth of the wisdom behind it.

There.

Danny Devriendt is the Managing Director of IPG/Dynamic in Brussels, and the CEO of The Eye of Horus, a global think-tank focusing on innovative technology topics. With a proven track record in leadership mentoring, C-level whispering, strategic communications and a knack for spotting meaningful trends, Danny challenges the status quo and embodies change. Attuned to the subtlest signals from the digital landscape, Danny identifies significant trends in science, economics, culture, society, and technology and assesses their potential impact on brands, organizations, and individuals. His ability to bring creative ideas, valuable insights, and unconventional solutions to life makes him an invaluable partner and energizing advisor for top executives. Specializing in innovation -and the corporate communications, influence, strategic positioning, exponential change, and (e)reputation that come with it- Danny is the secret weapon that you hope your competitors never tap into. As a guest lecturer at a plethora of universities and institutions, he loves to share his expertise with future (and current) generations. Having studied Educational Sciences and Agogics, Danny's passion for people, Schrödinger's cat, quantum mechanics, and The Hitchhiker's Guide to the Galaxy fuels his unique, outside-of-the-box thinking. He never panics. Previously a journalist in Belgium and the UK, Danny joined IPG Mediabrands in 2012 after serving as a global EVP Digital and Social for the Porter Novelli network (Omnicom). His expertise in managing global, regional, or local teams; delivering measurable business growth; navigating fierce competition; and meeting challenging deadlines makes him a seasoned leader. (He has a microwave at home.) An energetic presenter, he brought his enthusiasm, clicker and inspiring slides to over 300 global events, including SXSW, SMD, DMEXCO, Bluetooth World Congress, GSMA MWC, and Cebit. He worked with an impressive portfolio of clients like Bayer AG, 3M, Coca Cola, KPMG, Tele Atlas, Parrot, The Belgian National Lottery, McDonald's, Colruyt, Randstad, Barco, Veolia, Alten, Dow, PWC, the European Commission, Belfius, and HP. He played a pivotal role in Bluetooth's global success. Ranked 3rd most influential ad executive on Twitter by Business Insider and listed among the top 10 ad execs to follow by CEO Magazine, Danny also enjoys writing poetry and short stories, earning several literary awards in Belgium and the Netherlands. Fluent in Dutch, French, and English, Danny is an eager and versatile communicator. His BBQ skills are legendary.
