I got a lot of reactions to my blog post Connected Brains, artificial intelligence… and you. Predictably, most people had questions about ethics. Won't all kinds of bad people take advantage of this human-enhancing technology? Shouldn't we stop inventing and evolving before the bad guys take over the show? Or… worse… the intelligent machines go rogue and do all kinds of unpleasant things with humanity before warping the leftovers to an unidentified planet… Wouldn't it be more ethical to stop thinking about enhancing our capabilities and stop moving slowly toward Teilhard de Chardin's famous “Omega Point” (Kurzweil's singularity)?
Hm, I honestly think ethics are technology- and platform-agnostic. People do not need technology, networks, weapons and fiber-connected supercomputers to be ethically correct or horribly wrong. Networks, robots and datacenters will not hurt, maim, humiliate, starve or kill people. People will.
At every stage of our evolution, there have been good people and really bad people. Some bizarre part of our core human programming gives us the ability to choose: good or bad. Isaac Asimov gave his beloved robots a very severe basic programming: hardcoded in their positronic brains were three non-negotiable laws, preventing any harm to human beings. Any attempt to tamper with or violate these laws would destroy the robot by frying its brain. We humans do not have a hardcoded safety valve. We’re free to harm whomever we choose…
Fire, written language, science, medicine, aviation, chemistry… everything ever invented by humankind has been used and abused for ethically very questionable purposes. Inventing more, better, quicker… will not stop this process; nor will it accelerate it.
Bio-machine technology, connected networks, thought-controlled computing, cyber enhancements etc. will eventually make us smarter, quicker, more enduring, longer-lasting and Star Trek-ready.
It will not make us better humans. There will always be a Dark Side of the Force. Let’s take it on with better tech ;-).