Tara, my nine-year-old weekend breakfast boss, playfully slides a Lego robot across the table. “It is deciding about the chocolate chips on the pancakes,” she says. Her confidence is absolute. For her, tiny robots thinking is as ordinary as rain in August (we live in Belgium). She talks to smart speakers, ChatGPT, and Alexa with the same ease as she addresses her favorite doll. Isaac Asimov imagined that kind of absurdly relaxed normality long before it arrived. He was not fascinated by shiny gadgets for their own sake. He cared about regulations, projections, rules, because without rules, shiny things turn dangerous with alarming ease. A scientist by trade who wrote like a street-smart philosopher (and sounded like an angry bartender), he built his worlds on logic strong enough to outlive the frivolous fashions of his time.
I have lived (I still do) with Asimov in my luggage for decades. Sun-bleached paperbacks in the Arizona-desert heat. Hardcovers dented by turbulence above India. E-books glowing at three in the morning while jet lag liquefied my neurons on my way to Texas. Books ruined by steaming bathtubs in hotel rooms. The attraction was never nostalgia or cheap prophecy. It was the method. A trained biochemist, he brought laboratory discipline into fiction. Form a hypothesis. Isolate the variable. Push the system until it creaks. See what breaks. That process is why his worlds still feel solid while flimsier sci-fi crumbles on re-reading.
He even named the field without realizing it. In the 1941 story Liar!, he used the word “robotics”, thinking it was already in use. It was not. The Oxford English Dictionary credits him with its first appearance. From there, he spent his career working through what those robots might mean for us.
At the center are the Three Laws of Robotics. One: do not harm a human, nor allow harm through inaction. Two: obey human orders, unless that causes harm. Three: protect yourself, unless that conflicts with the first two. They fit on a Post-it note but flip a machine’s priority stack so that ethics comes before efficiency. The Laws first appeared in Runaround in 1942 and ran through the Robot stories like structural beams.
Then came the Zeroth Law: protect humanity as a whole, even if that means sacrificing individual humans. A household helper could, in theory, push you out of the airlock to save the species. Uncomfortable, yes, but essential if you are serious about designing machines that work for more than one person’s immediate comfort. These laws are not code snippets. They are a moral framework dressed as captivating storytelling, and they drag uncomfortable questions to the surface. Who is included in “human”? Who decides? At what point does protection become control? Modern AI discussions get stuck in bias audits, compliance reports, and data provenance. Asimov started at the foundation. Decide what matters. Then build.
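Still, the priority flip itself is mechanical enough to sketch. Below is a minimal Python cartoon, every name in it hypothetical, capturing only the strict ordering of the stack, not the Laws’ conditional subtleties, with the Zeroth Law prepended above the classic three:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Action:
    description: str
    harms_human: bool = False
    harms_humanity: bool = False
    disobeys_order: bool = False
    endangers_self: bool = False

@dataclass
class Law:
    rank: int                              # lower rank = higher priority
    name: str
    violated_by: Callable[[Action], bool]  # does this action break the law?

# Hypothetical encoding: the Zeroth Law sits above the original three.
LAWS = [
    Law(0, "protect humanity",    lambda a: a.harms_humanity),
    Law(1, "do not harm a human", lambda a: a.harms_human),
    Law(2, "obey human orders",   lambda a: a.disobeys_order),
    Law(3, "protect yourself",    lambda a: a.endangers_self),
]

def first_violation(action: Action) -> Optional[Law]:
    """Walk the stack top-down; the first violated law vetoes the action."""
    for law in sorted(LAWS, key=lambda law: law.rank):
        if law.violated_by(action):
            return law
    return None

verdict = first_violation(Action("add chocolate chips to the pancakes"))
print("permitted" if verdict is None
      else f"blocked by law {verdict.rank}: {verdict.name}")
```

The design point is the ordering: a lower law never gets a vote until every higher law is satisfied, which is what “ethics before efficiency” means in machine terms.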
My good friend and brother from another mother, John C. Havens at IEEE, carried that principle into real-world engineering. He drove Ethically Aligned Design, a global, multi-year effort to put human rights and well-being at the heart of autonomous and intelligent systems. Not as a patch. As the default. That work seeded the IEEE 7000 series of standards, including IEEE 7000-2021, which integrates ethical values into the systems engineering process, and IEEE 7010-2020, a recommended practice for measuring how A/IS affect human well-being throughout their lifecycle.
If you need receipts, they exist. Ethically Aligned Design lays out general principles like human rights, well-being, data agency, transparency, and accountability, then translates them into methods for research, design, and policy. Havens also co-authored peer-reviewed work around IEEE 7010, and has been a public engine for getting companies to shift from “move fast and duct-tape compliance later” to “design with purpose from day one”. He did the unglamorous thing. He turned ideals into process. I was proud to work alongside John on that journey. It was values first, then functions. Exactly the Asimov way.
“Outdated slogans”, some will smirk. Fine. Try shipping a safety-critical system without a compass and see where you beach the boat. The Laws are direction, not destination. Give an autonomous drone a first-line rule that prioritizes civilian safety over target velocity and you change the risk surface in the field. Give a trading algorithm a top-level mandate to avoid destabilizing markets and you dampen the likelihood of cascading failures that erase pensions before lunch. You still need testing, guardrails, audits, and humility. But you start with the right north star.
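To make that first-line rule concrete, here is a hedged sketch, again in Python with hypothetical names and numbers: a guardrail that filters candidate plans on a hard safety constraint before the optimizer’s objective is consulted at all.

```python
from typing import Optional

# Hypothetical plan representation: a dict of metrics the planner produces.
Plan = dict

def safe(plan: Plan, risk_budget: float = 0.01) -> bool:
    """First-line rule, checked before any objective: civilian risk under budget."""
    return plan["civilian_risk"] <= risk_budget

def score(plan: Plan) -> float:
    """The optimizer's objective (here: raw speed) only ranks what survives the rule."""
    return plan["target_speed"]

def choose(candidates: list[Plan]) -> Optional[Plan]:
    # Filter on the safety rule BEFORE maximizing the objective.
    admissible = [p for p in candidates if safe(p)]
    return max(admissible, key=score, default=None)

plans = [
    {"target_speed": 90, "civilian_risk": 0.20},   # fast, inadmissible
    {"target_speed": 45, "civilian_risk": 0.005},  # slower, admissible
]
print(choose(plans))  # -> the slower, safer plan; None if nothing passes
```

The structural point: the constraint is not a penalty term the objective can trade away. It is a filter applied first. A compass before a throttle.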
Alignment is not a luxury. It is heat in winter, medicine to the right patient, traffic systems that do not collapse because of a software glitch, and markets that do not vaporize savings overnight. Asimov understood that before the term “AI governance” existed. The hardest challenge has not changed: building moral judgement into systems that act faster and reach further than we can.
The Zeroth Law was ambitious. It assumed machines could be designed to weigh the collective good, a problem humans still fumble. Whether we can pull that off is still up for debate. But if we insist on giving systems speed, reach, and agency without a moral scaffold, we are writing the headline we do not want our children to read.
The Lego robot has made its choice. Chocolate chips are in Tara’s best long-term interest. No argument. There are laws, and then there are higher laws.