Trouble in Paradise: The Rise of Artificial Intelligence
Why is the rise of artificial intelligence (AI) unnerving to so many of us? I have a theory. Our unease with AI may stem from the fact that the 21st century will not be the first time that entities with agency and nous have appeared on earth. After all, Homo sapiens is the direct heir of the last such event. Isn’t it possible – or even tautological – that many of us feel threatened by the idea of human-like machines precisely because they will be, in certain respects, like humans?
Of course, this raises the question of what humans are like. In his Discourse on the Origin and Foundations of Inequality, the 18th-century Genevan philosopher Jean-Jacques Rousseau asserts unapologetically: “Human beings are evil.” This is not a pessimistic hypothesis, according to Rousseau (writing in 1755), but a salient fact of which “sad and continual experience makes proof unnecessary”. And the experience of humankind is still, to my eyes, pretty melancholy in 2018.
Perhaps it is time, then, to revive a question that most philosophers (and tech industrialists) seem to have forgotten: What makes human beings evil? Rousseau, for one, holds that “man is naturally good” – but he concludes that our originally virtuous species has been corrupted by “the knowledge he has acquired”. (In 1755, using a masculine pronoun for humankind was still the done thing.) This is hugely interesting.
We no longer tend to suspect that there might be a link between human knowledge and human depravity. Until the mid-19th century, however, few in Europe and the Americas doubted that the origin of human evil was represented, in some way, in the first pages of the book of Genesis. And in Genesis, we find an archaic legend of the fall whose images are deep as dreams, but whose plot is clear. The first human pair, Adam and Eve, are placed by God in a paradise – the Garden of Eden. They are naked, and innocent, and in love.
But a serpent soon tempts Eve, who in turn tempts Adam, to taste the fruit of the Tree of the Knowledge of Good and Evil. “You shall be as gods”, whispers the serpent (a line that foreshadows the title of Yuval Noah Harari’s Homo Deus). Eve notices that this fruit is “pleasant to the eyes” … and after Eve and Adam eat it, “the eyes of them both are opened”. In that moment, humankind sees for the first time what none of us, to this day, can ‘unsee’ – the possibility of cruelty, and the inexorability of death.
According to Genesis, all savagery and despair begin here, with a primal human will to acquire knowledge illicitly. But what is the relevance of this to the dawn of ‘intelligent’ machines? Well, in the 1780s and 1790s, the German master-thinker Immanuel Kant intensified Rousseau’s theory of human depravity. Kant is so convinced that humans have a “propensity to evil” that he, like Rousseau, says “we can spare ourselves a formal proof”. The sanguinary scenes that life shows us are, for Kant, conclusive.
What is more, Kant turns to the first pages of Genesis in his 1786 essay, “Conjectural Beginning of Human History”. The Eden narrative is a “mere fiction”, Kant suggests, but he is impressed by its inner coherence. So Kant sets out a philosophical narrative of humankind’s fall, modelled on Genesis. His question is the origin of what he calls, in a later text, “radical evil”. By ‘radical’, he means that being touched by this ‘evil’ makes humankind what it is.
Intriguingly, Kant grants that “the holy document is quite right” in its representation of certain basic truths. For instance, the Tree of Knowledge in Genesis marks, for him, the vertiginous moment in which humankind ventured for the first time beyond the limits set by instinct. A prehistoric human choice to consume strange fruit, Kant reasons, may have activated our faculty of choice. Where instinct dictates “single objects of desire”, he says, reason discloses “an infinity of them”. And once reason has cracked the hard shell of instinct, he stresses, it is impossible to turn back.
This brings us back to AI. For Kant’s steely verdict is that human beings are evil because they are intelligent. This is of course not to say that intelligence, per se, is evil. (Kant is a rationalist.) Rather, Kant thinks that it is the essence of “restless reason” to drive humankind “irresistibly” towards the development of all its capacities – good and evil.
The purveyors of emerging ‘intelligent’ technologies tell us that their machine systems will be coded – or rather, quasi-coded – to realize only human goods. Thanks to ethically informed AI designers and design, we can be sure that ‘intelligent’ machines will not go rogue. “Don’t be evil”, after all, was Google’s corporate motto until recently.
Google has been surprised to observe, however, that its DeepMind technology tends to select “highly aggressive” forms of behaviour in certain contexts, and when faced with certain tasks. Arguably, Kant could have predicted this. His interpretation of Genesis would seem to suggest that the idea of purely benevolent AI is formally incoherent. For whenever a new form of real intelligence is generated, a new capacity for evil is generated with it.
If Kant is correct, then we can be certain (a priori) that genuinely intelligent machines will, like us, have a propensity to evil – because they are intelligent. They, too, will want to taste the forbidden fruit. And if Kant is correct, then we can also reason (a posteriori) from the absence of evil machines – there are, as yet, no evil machines – to an absence of real machine intelligence.
The latter is a line of reasoning that Jean Baudrillard sketches in his 1990 book, The Transparency of Evil. Computers “may be said to be virtuous”, says Baudrillard – but this is only because they are “immune even to the seduction of their own knowledge”. For Baudrillard, computers’ immunity to evil only proves that, though we may call front-wave machines ‘intelligent’, they are actually still “devoid of intelligence”.
The link between intelligence and evil is not just an archaic fear, dramatized in Genesis and rationalized by Kant. It is a recurring philosophical claim that creators of ‘intelligent’ technologies, and philosophers of technology, would be reckless to ignore.