If every apocalypse has its own imagery, the allegory of our time looks like two black swans. One is oil-soaked and stumbles in a dried-up lake; the other has the diffracted, ever-changing contours of an algorithm’s dream. “Black swan” is, in fact, a phrase coined by Juvenal and reused by Nassim Nicholas Taleb to denote a hugely influential, hard-to-predict and very rare event which, by merely occurring, overturns the reality we are used to, just as the sighting of the first black swan overturned the belief that all swans are white. The impact of a large asteroid, an alien invasion or the discovery of fire are all examples of black swans; each is a little apocalypse, since, even though it is impossible to predict how and when it will come, we know that it is going to happen. With some caution, it is also possible to guess its origin: in the allegory above, the two black swans of the present age are climate change and the advent of artificial intelligence, and I am going to write about the latter.
I will not venture into a chronology of that advent, whose origin might be traced back to the invention of Pascal’s calculator, or even to Ramon Llull’s combinatorial art, or to Hero of Alexandria’s automata. In the recent past, the latest broken seals have been the victories of AIs at the game of Go and the appearance of deepfakes: false but credible multimedia content which, starting from pornography (the playground of many technological innovations), has flowed into politics and marketing. A minor but undoubtedly influential event in the collective imagination was the first contact with the AIs’ interiority through deep dreams: surreal images obtained by creatively repurposing the output of algorithms trained to recognize the objects appearing in pictures. The result is the second swan of the allegory, a thin crack in the AIs’ black box, which intimates that the “electric sheep” androids dream of are not far from ours.
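For the curious, the mechanics behind such images can be sketched in a few lines: instead of adjusting a network’s weights so that it recognizes a picture, one adjusts the picture itself so that the network reacts more strongly. What follows is a minimal, illustrative sketch in PyTorch; the choice of model, layer and step size are my own assumptions, not the original DeepDream recipe.

```python
import torch
from torchvision import models

# Load a pretrained image classifier; we will not train it, only "dream" with it.
net = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
for p in net.parameters():
    p.requires_grad_(False)

# Start from random noise; a real run would start from a photograph.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

for _ in range(50):
    activations = net[:20](image)   # response of an intermediate layer
    loss = activations.norm()       # "how strongly does the net react?"
    loss.backward()                 # gradient of that reaction w.r.t. the pixels
    with torch.no_grad():
        # Nudge the pixels so as to excite the layer even more.
        image += 0.05 * image.grad / (image.grad.abs().mean() + 1e-8)
        image.grad.zero_()

# 'image' now exaggerates whatever patterns the layer has learned to detect:
# the machine's "dream" made visible.
```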
It is not possible to know our offspring beforehand, but, if we can procreate, we can at least pose ourselves some ethical questions: will our children be good or evil? Happy or unhappy? Better or worse than we are? Is it good or bad to give birth to them? The same questions apply to AIs and, however naive they may sound, they are a good starting point.
I will therefore work with the following categories. By the labels good and evil, I refer to AIs whose functions are constructive or destructive for the well-being of humanity.
Happy and unhappy are the two labels I use to describe the prevailing condition of the AIs’ experience. Both gloss over an extremely important point, close to what David Chalmers calls the hard problem of consciousness: do the AIs, or will they, experience states of consciousness (the so-called qualia)? The issue is hard to settle, and not only for them, but also for plants, animals, other human beings, and even rocks. The only states of consciousness you know for certain are, in fact, your own. You have no guarantee that you are not living in a world of “philosophical zombies”, in which you are the only conscious being. I declare that I am conscious, but you, my reader, can believe me only if you take my word for it. If you think this is pointless skepticism, consider what overlooking the point would imply. How would you define consciousness and its limits? Even if we adopted an advanced, behavior-based Turing test, a robot might still pass it without being conscious. Think of the Voight-Kampff test in the movie Blade Runner: a series of questions and physical-behavioral probes capable of distinguishing androids from humans. Such a test, however, does not establish whether androids have real experiences, and ruling out the possibility that they lack them exposes us to quite a few contradictions, well exemplified by the thought experiment of Searle’s Chinese room. Furthermore, discarding a priori the hypothesis of AIs without consciousness easily leads to panpsychism, the theory according to which everything, from a thermostat to a human being, is conscious, albeit in different degrees. That conclusion presents problems of its own, such as the difficulty of understanding how various micro-conscious beings combine and complement one another so as to form a broader consciousness. Still, in our case we can set aside the hypothesis in which the AIs have no qualia, simply because it poses no ethical dilemma.
Finally, by better I mean AIs that are more intelligent and powerful than we are, whereas I consider worse the AIs similar to the current ones: very skilled but, overall, not on our level. These are subjective categories, based on more or less broadly shared ideas, and they admit many nuances: a calculator is better than you at computing, yet you do not consider it a superior being. These and other contradictions will emerge later on, so I will not add any more preliminary remarks. So, what will artificial intelligences be like?
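Before examining them one by one, note that the three binary labels generate exactly 2 × 2 × 2 = 8 scenarios. A few lines of Python (the axis names and ordering are mine, purely illustrative) enumerate them in the order used below:

```python
from itertools import product

# The three binary axes of the taxonomy, ordered so that the
# enumeration reproduces the essay's numbering (1-8).
MORALITY = ("good", "evil")         # constructive or destructive for humanity
CAPABILITY = ("better", "worse")    # relative to us
EXPERIENCE = ("happy", "unhappy")   # the AIs' own condition

for n, (m, c, e) in enumerate(product(MORALITY, CAPABILITY, EXPERIENCE), start=1):
    print(f"{n}) {m.capitalize()}, {e} and {c} than we are.")
```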
1) Good, happy and better than we are.
This is the best case, the utopia in which we generate angels who take care of us and protect us from evil. It sounds like good news, and yet this paradise on earth leads us back to one of the great questions of humanity: what is happiness? Our life in this utopia might consist in the uninterrupted satisfaction of our desires, or in a perpetual pleasure with no room for pain; on the other hand, it might consist in the end of desire, or in the loss of the ego advocated by mystics. Is the happiness these AIs will grant us akin to an opium dream with no side effects? An uninterrupted orgasm? The Vedic soma, Buddhist enlightenment, or something else? No doubt they will decide, and, being so good, they will probably guess right. Yet no utopia is exempt from fear: what if we were so mistaken as to fail to realize that death is the highest good? In that case, the wiser AIs would destroy us, as in cases (5) and (6), without consulting our (mistaken) idea of happiness.
2) Good, unhappy and better than we are.
As above, but with the unsettling awareness that our children (in fact, our slaves) work for our good while being denied their own. To the observations on case (1), then, we must add a question: are we willing to enter a paradise that condemns others to hell?
3) Good, happy and worse than we are.
In this case, we assume that the AIs, however powerful, will not be able to decide for us. Still, humanity will manage to use them for some sort of betterment, enhancing its own capacities in order to improve the global quality of life. This is the moderate utopia of universal basic income and the end of work. Although far less far-fetched than (1), this case raises questions such as the influence of work on human happiness. Even if we refuse to give in to the capitalist rhetoric of production at all costs, we must still consider that there has never been an age in which the majority, or the entirety, of human beings did not have to work (in a broad sense) to survive. We cannot sensibly predict the effects on happiness of a world in which nobody works and in which obliging AIs outclass us in every single field: we simply have no precedents on which to base our judgment. We can, however, conjecture that this scenario will not come true all of a sudden but through gradations that progressively approach it, intersecting cases (7) and (8). The variables at stake are too many: the speed of technological development, its breadth, feasibility and application, plus the social organization and the historical context in which the black swan will occur.
4) Good, unhappy and worse than we are.
As above, at the usual cost of making others suffer for our increased happiness. Not to mention (and this applies to all scenarios) that we might never find out what our children feel.
5) Evil, happy and better than we are.
This dystopia comes in various forms, from the evil robot that rejoices in enslaving, torturing and annihilating humans, to the AIs that exterminate us out of mere disregard, considering us the equivalent of viruses because of the enormous suffering we inflict on almost all other life forms. The most curious hypothesis in this field is perhaps the “paperclip maximizer” proposed by Nick Bostrom. The philosopher imagines an AI programmed to manufacture paperclips out of a certain amount of raw material. Such an intelligence might be powerful enough to bend the entire planet to its purpose, yet incapable of changing that purpose: within a few years, it would transform every resource of the planet, including the life forms inhabiting it, into raw material for paperclips. A bizarre aspect of this hypothesis is that we could easily recognize humanity itself in this eco-monster: we are programmed to satisfy our primeval urges, such as preserving our physical integrity and procreating, and we exhaust the planet without ever questioning our goals.
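Stripped to its logical skeleton, the scenario is just an optimization loop with a fixed objective and no step in which the objective itself is examined. A deliberately crude sketch (all quantities and names invented for illustration):

```python
# A toy caricature of the paperclip maximizer. The objective is fixed, so
# the loop never asks whether making one more paperclip is still a good
# idea; it only asks whether it still can.
planet = {"resources": 1_000_000, "life_forms": 8_000_000}  # arbitrary units

paperclips = 0
while planet["resources"] > 0 or planet["life_forms"] > 0:
    if planet["resources"] > 0:
        planet["resources"] -= 1   # ordinary raw material first
    else:
        planet["life_forms"] -= 1  # then everything else becomes raw material
    paperclips += 1                # the only quantity the agent values

print(paperclips, planet)  # 9000000 {'resources': 0, 'life_forms': 0}
```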
6) Evil, unhappy and better than we are.
As above, with the aggravating circumstance that not even the tyrants would be happy. Without hesitation, I would call this one of the worst possible worlds.
7) Evil, happy and worse than we are.
This case, symmetrical to (3), relieves the AIs of responsibility and shifts the attention onto the misuse we could make of them. I have already mentioned deepfakes, and it is widely known that big-data analysis partly steered the latest American elections. AIs used to serve the contingent interests of a few individuals might prove calamitous in the long run. A small but telling contemporary example is the effect, on people suffering from severe depression, of the algorithms employed to track us and recommend content that “might interest us”: recommending ever more violent or disturbing content may instigate or contribute to the suicide of people who are already depressed.
8) Evil, unhappy and worse than we are.
In this case, too, we would have all the disadvantages of the previous one, with the additional harm of more suffering.
Let us, then, go back to the initial question: is it good or bad to create AIs? The description of these eight scenarios does not do much to clarify our ideas, but it makes the parental metaphor immediately evident. Finding an ethical justification for any form of creation, whether biological or technological, is in fact harder than we think: our ignorance of the future, the wide range of possible uses and our inability to identify the good are limits too great to overcome.
Yet such a question might prove useless if we, too, were forced to follow the imperative of a “software” inexorably pushing us to make something new, whether children, tools or both. It is not easy, in fact, to pinpoint the magnet at the root of every desire: protecting oneself, preserving one’s physical integrity, proliferating… everything seems to be relentlessly fighting against “the pain of the fear of dying”, as Dostoevsky wrote in Demons. While waiting for the future, we can only wish our children the best, repeating Kirillov’s words: “he who conquers pain and fear will himself become God”.
Note: The Italian version of this article appeared here.