A Proposal to ban AI research as “A Sound of Thunder”
In Response to Erik Hoel’s “We Need a Butlerian Jihad against AI”
On June 30, Erik Hoel posted an article “We need a Butlerian Jihad against AI—a proposal to ban AI research by treating it like human-animal hybrids”. Please read it before reading my reply below.
I disagree in general, for a variety of reasons (explained forthwith). I do agree with the humanitarian argument that we shouldn’t create abominations, but I think it’s superseded by another argument. Quod erat demonstrandum:
First, there is the problem of defining, and then detecting, when an AI can be considered (self-)conscious. Right now, we don’t know exactly how human consciousness works, and we are totally in the dark about any other types of consciousness. And while Erik Hoel proposes:
When a research company goes to make an AI, they should have to show that it can’t do certain things, that it can’t pass certain general tests, that it is specialised in some fundamental way, and absolutely not conscious in the way humans are.
There are several problems with this:
The ‘certain things’ that the developing AI is forbidden to show may very well be intricately linked to the ‘certain things’ it is developed for;
The only way to ascertain that this developing AI can’t do certain things or can’t pass certain tests is to actually test it. And then there is the non-zero chance that it passes such a test. What then? Euthanise it? Keep it nevertheless? This proposed solution doesn’t seem workable;
What if the evolving AI develops a kind of consciousness that is utterly different from humans? Euthanise it? Keep it nevertheless?
In short, right now we have no reliable way to test—let alone predict—whether an AI will become conscious. The only way to be absolutely sure is to ban all AI research. But even Erik Hoel will realise that the chances of that happening are so close to zero as to be indistinguishable from it.
It’s even worse than that: we barely understand the inner workings of many current AIs. As Ryota Kanai notes in his article “We Need Conscious Robots” (April 27, 2017) in the “Consciousness” issue of Nautilus:
[…] their decisions are emergent properties of the learning algorithms and the data set they are trained on. Their inscrutability has created concerns about unfair and arbitrary decisions.
Deep Neural Networks (DNNs) are a prime example. Deep inside their many-layered networks, they’re free to develop solutions to the problems they’re fed in whatever way they see fit. They do sometimes come up with ingenious solutions, even if nobody—including the DNNs themselves—knows how they reached them. One might call them ‘idiot savants in a black box’. In such circumstances it’s not unimaginable that a DNN AI could develop sentience, or even a form of (self-)consciousness, by accident.
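To make that ‘black box’ point concrete, here is a minimal, purely illustrative sketch of my own (not taken from Hoel’s or Kanai’s articles), assuming Python with NumPy and scikit-learn: a tiny network learns a simple toy rule almost perfectly, yet the only ‘explanation’ it can offer us is a pile of numeric weight matrices.

```python
# Purely illustrative sketch: a small neural network solves a toy task,
# but its learned parameters explain nothing to a human reader.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(1000, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)  # toy rule: both coordinates share a sign

net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
net.fit(X, y)
print("accuracy:", net.score(X, y))  # typically close to 1.0

# The network's full 'reasoning' is nothing more than these weight matrices:
for i, w in enumerate(net.coefs_):
    print(f"layer {i} weights, shape {w.shape}")
```

Scale those matrices up to millions or billions of parameters and you have exactly the inscrutability Kanai worries about.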
Meanwhile, the legal wheels are already grinding:
Beginning next year, the European Union will give its residents a legal “right to explanation.” People will be able to demand an accounting of why an AI system made the decision it did. This new requirement is technologically demanding. At the moment, given the complexity of contemporary neural networks, we have trouble discerning how AIs produce decisions, much less translating the process into a language humans can make sense of.
Here comes the crunch: in order to be able to question AIs, Ryota Kanai proposes to ‘endow them with metacognition—an introspective ability to report their internal mental states’. Coincidentally (or not), this is one of the main functions of consciousness.
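What might a first, very crude step towards such metacognition look like in practice? The sketch below is my own illustration (an assumption, not Kanai’s actual proposal), reusing the toy setup from the previous example: alongside each decision, the system reports how confident it is and flags cases where it should defer to a human. Real metacognition, as Kanai intends it, goes far beyond this.

```python
# Purely illustrative sketch: a classifier that 'introspects' by reporting
# its own confidence and flagging decisions it is unsure about.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(1000, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)  # same toy rule as before

net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=1)
net.fit(X, y)

def decide_with_report(x, threshold=0.9):
    """Return (decision, self-reported confidence, defer-to-human flag).

    The 0.9 threshold is an arbitrary, illustrative choice.
    """
    proba = net.predict_proba(x.reshape(1, -1))[0]
    decision = int(np.argmax(proba))
    confidence = float(np.max(proba))
    return decision, confidence, confidence < threshold

print(decide_with_report(np.array([0.8, 0.9])))     # clear-cut case
print(decide_with_report(np.array([0.01, -0.01])))  # near the boundary: likely defers
```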
TL;DR:
Erik Hoel: “To prevent the creation of abominations, we must ban AI research.”
Ryota Kanai: “To make AIs accountable, we have to imbue them with consciousness.”
(Obviously, I paraphrase. But it gets to the heart of the matter.)
On top of that, discontinuing DNNs and other AIs would have many consequences, such as:
No self-driving cars. Once AIs have perfected driving, they will drive better than humans (no wavering attention, impervious to distractions, always fully concentrated), and in the process save many, many human lives (at least 135,000 per year, according to my very conservative estimate[1]);
In short, the possible good that AIs—weak or strong—can deliver (and are already delivering) is so huge that one may—and should—seriously consider whether the (accidental or otherwise) creation of a number of AI abominations is a price worth paying in order to save and/or improve millions upon millions of human lives.
Also, should possible negative consequences stop us from doing things that are, on average, good in the long term? For example, in the realm of genetics, it’s perfectly possible that two healthy parents produce a neurodivergent child (which does indeed happen quite often). Or miscarriages. Or terminally ill children. Does this mean that humans should stop procreating? Or do we accept this as the price for maintaining humanity? No sane human would argue the former, so a certain number of neurodivergent children are born, and in response we try very hard to develop treatments for them.
Furthermore, by not creating strong (conscious) AI, we are not creating a future in which all the mistakes and faults have been corrected, and in which thousands, millions, perhaps billions of healthy future AIs would come into existence and live meaningful lives: a floating robot cleaning the oceans of plastic, a space probe exploring the solar system, the crew of an interstellar vessel, and countless more examples. Will the monsters and misfits of the very first generation of strong AI be a worthy sacrifice for the happiness of the many in the future?
This reminds me of Ray Bradbury’s classic SF story “A Sound of Thunder”, in which a wealthy hunter (Eckels) inadvertently wreaks havoc by stepping off the designated path during a carefully planned time-travel hunting trip, crushing a butterfly. As a result, history is changed:
English words are now spelled and spoken strangely, people behave differently, and Eckels discovers that Deutscher has won the election instead of Keith.
Change Deutscher to Trump and Keith to Biden, and the story becomes ominous and timely. In the same way, we (humanity), in our hunt for what we think is morally superior research, may kill the butterfly (the abominations) that would eventually lead to a better future in which countless healthy, conscious AIs live in peaceful coexistence with humanity. Strong AI will be developed, if not this century then in another: once we know it’s possible, it will happen, one way or another. And should those future AI superminds invent a time machine, I don’t want to face them as they confront the people who wanted to smother the very research that brought them into being, with the future equivalent of ‘A Sound of Thunder’.
Therefore, my proposal is that we don’t need to ban (certain branches of) AI research, as there are already several institutions dedicated to the ethical development and treatment of AI, like:
The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems;
The EU has produced its Ethics guidelines for trustworthy AI;
The EU has also produced a White Paper on Artificial Intelligence—A European approach to excellence and trust;
The Future of Humanity Institute (directed by Nick Bostrom) (and two other institutes from the University of Oxford: The Institute for Ethics in AI and the Oxford Internet Institute);
The AI Now Institute at NYU;
The Centre for Advancing Responsible & Ethical Artificial Intelligence in Canada;
And I’m certain I’ve overlooked quite a few more. Why not make these an integral part of the development of strong AI?
Since this is terra incognita, we will make mistakes and create a number of freaks. But we can do our very best to minimise the damage, so the record will show future AIs that we acted with the very best of intentions and proceeded with caution. So while we should try very hard to avoid creating ‘abominations’, if the benefits outweigh the costs by such multitudes, I’m certain future AIs will forgive us.
Finally, I suspect the cat’s already out of the bag; that is, while we do not have strong AI at the moment, everybody knows it might become possible in the future. Even if we successfully implement a worldwide moratorium on the production of strong AI—and again, how we verify compliance is another matter, fraught with essential difficulties—there will always be countries that do not sign such a treaty. North Korea, for one, which would then see this as an opportunity to develop strong AI and get ahead of its enemies. So do we really want to leave the development of strong AI to bad actors like North Korea, the religious extremist government of Iran, the Putin puppet government, and others? Is it not better to let the more enlightened civilisations of the world take the lead in developing strong AI, led by the many organisations that valiantly try to promote the ethical development and treatment of AI?
Make no mistake: a strong, conscious AI will happen in the future. Either we let it happen by accident—or, even worse, at the hands of bad actors—or we firmly take control of the process, accepting the possible failures while simultaneously trying to minimise them, and keeping a firm ethical leash on it throughout.
We owe it not only to ourselves, but to future generations of both humans and AI.
Addendum one:
While we’re at it, is there a way to create an AI that’s both strong—self-conscious—and ethical? In my upcoming duology “The Replicant, the Mole & the Impostor” (part 1: “The Replicant in the Refugee Camp”, slated for publication on November 1, 2021), I portray a ‘reality event’ in which there is a replicant—disguised as a human—among the ten candidates. In order to win that contest, the replicant—an AI running a pre-grown human body—must appear as human as possible.
As the event runs and runs and runs—its full ten months—the replicant becomes, almost inevitably, more and more human. And if a disguised AI becomes indistinguishable from a human, has it then truly become human, or not? Answers in my ‘replicant’ duology.
(On top of that, most of the reality event takes place in a refugee camp on a Greek island, where eventually the refugees—assisted by the candidates and an uncontrolled influx of bleeding-edge tech—take their fate into their own hands and create a blueprint for a better world.)
Addendum two:
My current Work in Progress (WiP) depicts how humans and AIs complement each other as a mixed crew on an interstellar vessel on its way to Alpha Centauri, needing each other to overcome several crises and the fateful misunderstandings at First Contact (while researching the origins of consciousness along the way).
Footnote:
[1] According to Wikipedia, the total estimated number of traffic deaths worldwide in 2016 was 1,350,000. If self-driving AI prevents only 10% of these (a very conservative estimate), then some 135,000 souls are saved per year.