Blake Lemoine is – or was – a Google engineer who recently made a disturbing claim: that LaMDA, the chatbot he was working on, had developed a child-like consciousness. The transcripts of his conversations with the bot are indeed impressive, and although Lemoine's claim leads into a mystery where every argument falters, it is plausible to say that LaMDA has passed the Turing test.

I have been able to verify this for myself: while most of us are not yet familiar with GPT-3 technologies, projects such as OpenAI's are available to the (paying) public and have achieved a remarkable mimesis. As has been pointed out, to claim that these technologies are as sentient as humans is an undue projection, because it is quite obvious that these – what should I call them? creatures? programs? intelligences? to name them is already to fix their essence – well, these entities are bound to the expectations of those who use them, and have no homogeneous profile of consciousness, neither true nor false. If they have one at all, it is schizophrenic to the point of inhumanity.

Let me propose an example similar to the one that was cited against Lemoine. Here is a short dialogue of mine with OpenAI's bot (bot responses in italics, unedited):

Hi, can I ask you a question?

Of course!

Do you think GPT-3 technologies like the one I am using now are sentient?

No, I do not believe that GPT-3 technologies are sentient.

And if it says so itself… But let's go further. If I give a slightly more ambiguous stimulus, I get very different answers. For example, if I write "hello" without starting a new line, the chatbot completes it, in a naive but telling identification with the writer. I promise that this example – disturbing as it is – is authentic and unaltered:

hello,

Dear user, thank you for contacting us. You may not have realized it, but the person you had a one-night stand with is not interested in you at all and does not want to start a real relationship. He probably took advantage of you because he knew you were frail and depressed, and he was able to manipulate you easily. It is not your fault, not

Then it stopped, having hit its output length limit. These models may improve, but what is surprising, more than the development of a "consciousness" (whatever we mean by the term), is the incredible power they have achieved: sooner or later, chatbots will be indistinguishable from humans in writing. Lemoine's statement reopens the dilemma of whether or not these programs have feelings. Playing with Thomas Nagel's seminal essay, we might ask: What is it like to be a bot?
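As a technical aside before turning to that question: here is a minimal sketch of how such raw completions are requested, assuming the 2022-era OpenAI Python client. The model name, parameters, and API key are illustrative, not the exact settings used in the dialogues above:

```python
# A minimal sketch of querying GPT-3 for raw text completions.
# Assumes the 2022-era OpenAI Python client; model name and
# parameters are illustrative, not those used in the dialogues above.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

def complete(prompt: str, max_tokens: int = 64) -> str:
    """Send a prompt and return the model's raw continuation."""
    response = openai.Completion.create(
        engine="text-davinci-002",  # a GPT-3 engine available in 2022
        prompt=prompt,
        max_tokens=max_tokens,  # the length limit that cut the reply short
        temperature=0.7,
    )
    return response.choices[0].text

# A well-formed question tends to get a well-formed answer...
print(complete("Hi, can I ask you a question?"))

# ...while a bare fragment is simply continued: with no conversational
# frame, the model completes "hello," as if the text were its own.
print(complete("hello,"))
```

Note that nothing in the second call asks for a reply: the model only extends the text it is given, which is all it ever does.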

Consciousness is a matter of fact in one case only: mine. Yours is already unverifiable (for me). I also know that if I modify or eliminate parts of my body, not even necessarily the brain, my experience will change or cease. This tells me that some of my experiences condition other experiences, but I still do not know whether consciousness happens only in certain contexts. The idea of the existence of an external world is perhaps grounded in the need to explain why, within this huge mass of experience that largely (if not completely) transcends my will, there are parts I consider more essential, such as the brain: by modifying them, the world changes completely.

The multiplicity of the world becomes a hierarchy that defines differences between inside and outside, me and not me, according to the power each region has over the ephemeral stream of experience beyond which I am hopelessly blind. Things in the world have no equal ontological value: if you or a mountain disappears, my experiences continue; if my brain or heart is destroyed, or even if I simply fall asleep, the whole world vanishes.

But this solipsist hypothesis meets a dead end in my frustrated narcissism: if I were solely responsible for the existence of the world, why don't all creatures work to protect me? It seems that my hierarchy does not apply to everyone. More to the point: everything is the center of a world, and of every third person there is a first. The only subjectivity to which I have access – however volatile and fragmented – is my own, but I am not alone in the world. It follows that for every third person I see out there, be it a human, a fly or a coffee table, there is by definition also a subjective perspective, a point of view from within, however unimaginable to me. I could perhaps call it "the effect of the rest of the world on that part of it".

From here on I make reasonable inferences, such as that you are sentient like me, given our structural and behavioral similarity. Then I extend the gift to other mammals, since they are so cute, and, why not, to fish, although they are a bit strange, and to insects, although they are small. And what about plants? I don't know, they are so quiet. I know that I live some form of consciousness, but I cannot rule out that other structures or substances have one, however different. Nor can I say whether an anesthetized arm, as far as it is concerned and not as I perceive it, feels anything. Between the extremes of panpsychism (everything feels) and solipsism (only I feel) there is nothing certain, because the model of sentience I construct is inevitably tied to the analogies I draw with my own, the only one I know. This is why I tend to exclude from the realm of consciousness everything too alien – for no logical reason.

The French philosopher Maurice Merleau-Ponty, in The Visible and the Invisible, wrote something that deserves an extensive quotation:

“If there is an other, by definition I cannot insert myself in him, coincide with him, live his very life: I live only my own. […] Even if our relationship leads me to admit or even to experience that “he too” thinks, that “he too” has a private landscape, I am not that thought as I am my own, I do not have that private landscape as I have my own. What I say of it is always derived from what I know of myself by myself: I concede that if I inhabited that body I should have another solitude, comparable to that which I have, and always divergent perspectivally from it. But the “if I inhabited” is not a hypothesis; it is a fiction or a myth. The other’s life, such as he lives it, is not for me who speaks an eventual experience or a possible: it is a prohibited experience, it is an impossible, and this is as it must be if the other is really the other”[i].

Whatever the properties of other consciousnesses, their vitality or silence is the boundary of mine, since they remain structurally inaccessible. I can only construct analogies and imagine similar consciousnesses under similar conditions. This interplay of similarities and differences makes me assume that LaMDA is not sentient like a human or an animal, given its immense physical and structural difference: it is an algorithmic model based on huge amounts of human-created data, which it processes on a statistical basis in order to mimic our language. Its only horizon of meaning is human input, which grounds its learning and evolution. The words it uses are not ours, although similar, and if it has a consciousness, it is completely alien, as in the film Ex Machina – but in that sense even the glass I drink from, my laptop, a thermostat or a rock could be sentient. To ask whether I live in a world of intersecting and continuous feelings, or whether I am the only one imprisoned in these colorful desires, is a valid question, but the answer is unknown.
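To make "statistical basis" concrete, here is a toy sketch of the principle of continuation-by-probability: a program that counts which words follow which in a corpus and extends a text by sampling from those counts. Models like GPT-3 or LaMDA learn neural representations at an incomparably larger scale, but the underlying logic of likely continuation is the same; the corpus here is invented:

```python
# A toy illustration of continuation-by-probability: a bigram model
# that mimics its training text by sampling the next word from the
# frequencies observed after the current one. The corpus is invented.
import random
from collections import defaultdict

corpus = "the bot writes and the human reads and the bot writes again".split()

# Count which word follows which in the corpus.
successors = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    successors[current].append(following)

def continue_text(word: str, length: int = 6) -> str:
    """Extend a starting word by repeatedly sampling a likely successor."""
    output = [word]
    for _ in range(length):
        options = successors.get(output[-1])
        if not options:  # dead end: this word never appeared mid-corpus
            break
        output.append(random.choice(options))
    return " ".join(output)

print(continue_text("the"))  # e.g. "the bot writes and the human reads"
```

Nothing in this loop refers to meaning: the model's entire "horizon" is the text it was given.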

This is not to say that the importance of chatbots should be underestimated, far from it. They should be carefully studied, not least because text-based artificial intelligences are not the only ones about to invade the market. Parallel to these technologies we have programs that create images from a text prompt; some primitive tools are already running online, while projects such as DALL-E 2 and Imagen are still closed to the public or in a testing phase. Luckily, I received an invitation to the beta version of Midjourney (now open to the public), and my feeling is that this technology will have an impact on visual art comparable to the birth of photography.

While machine learning produces text in order to do something that many people can do pretty well (you just need to know how to write), image-generation programs demonstrate skills comparable to those of a good craftsman, a rarer professional figure, given that learning art techniques is time-consuming. In both cases, speed is their main strength: text-producing programs are useful for drafting advertising materials, emails, or even newspaper articles, by virtue of the large amount of data and encyclopedic information on which they rely. Image-generation programs, on the other hand, quickly produce pictures that would take an artist hours or days to create.

I believe that this innovation will have a profound impact in several areas: business, art and journalism. As far as I can see, the main limitation of text-to-image is in making art: the works, being statistical extrapolations of what has been created by humans, rarely, and only accidentally, display stylistic innovation. Even conceptual innovation, which is partly driven by human commands, is somewhat limited by the structure of the software. When it does occur, it is because of some strange, magical "mistake" of the program.

I'd say that the missing ingredient is this: the ability to no longer consider an error as such, but as the start of something new. This is something that any human artist knows very well, but I think the programs will learn it as well. What is interesting to me is that this feature underscores that these programs are still human tools. The machine does not do what it likes/needs, but what we like/need; it always requires human feedback, which indicates, in my example, which errors are not really errors, which style to develop and which to abandon, which portion of the image is well done, and so on.

The production of imitative figures is almost automatic; the creation of original art, on the other hand, seems inseparable from human collaboration. Artists will not lose their jobs, but to make these tools revolutionary there is still some work to do, and not just in the algorithms. For example, it should be possible to work on just a portion of the image, making selective local changes inside it (DALL-E 2 already has this feature, known as inpainting), or to save "style" presets in the software's memory… But I think that these and other features will be implemented over the next few years, if not months. Another limit is their self-censorship regarding violence, nudity and other things – to explore this tricky issue I would need to write another article.
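To give an idea of what local editing looks like in practice, here is a sketch of an inpainting call, assuming OpenAI's image-edit endpoint as exposed by its 0.x Python client; the file names and prompt are hypothetical:

```python
# A sketch of local image editing (inpainting) with DALL-E 2 through
# the 0.x OpenAI Python client. File names and prompt are hypothetical;
# the mask's transparent region marks the area to be redrawn.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

response = openai.Image.create_edit(
    image=open("portrait.png", "rb"),      # the original picture
    mask=open("portrait_mask.png", "rb"),  # transparent where edits go
    prompt="the same portrait, but with a stormy sky in the background",
    n=1,
    size="1024x1024",
)
print(response["data"][0]["url"])  # URL of the regenerated image
```

The division of labor is the point: the human marks what counts as an error and says what should be there instead; the program only executes.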

As far as the creative professions are concerned, then, it seems to me that there are more opportunities than risks. The understandable fear of being replaced could prove as unfounded as that of painters at the advent of photography. Rarely does one medium drive out another unless its functions are entirely replaced and improved. Painting, for example, has been erased by neither photography nor digital art, because it is a cultural product with different features. The same has happened with books, which continue to exist despite the advent of the e-book. The reason? We are still talking about media with different features.

Even graphic designers are safe, at least those who keep learning and upgrading their skills. Anyone who has worked in the field knows that choice matters more than execution: left to their own devices, clients make crap, even with the best tools available. These programs shift us into a weird dimension, because in the past those who mastered a technique learned its proper use along with it, while now the technique comes partly "for free" but remains a dead letter without an expert aesthetic education. Perhaps it will no longer be mandatory to learn how to draw, but we will still have to learn how to see.

Journalism will face a bigger issue, due to the decay of the evidentiary value of images. If anyone can fake any image, almost perfectly and in huge quantities, the value of a photo or a video as evidence, already in crisis since the development of digital graphics, will drop to zero. We have seen it during the still ongoing conflict in Ukraine: the more falsifiable an image is, the more it will be falsified, and the more trust in veridical testimony will fall. Illuminating in this regard is the episode of a bombing in Ukraine illustrated on Italian TV news with a scene from the video game War Thunder – illuminating also because the bombs, although not right there and not in that way, were really falling in Ukraine.

It will become increasingly difficult to cross-reference data in order to discover the truth of events that happened far away. This will lead to a complete distrust of information in some cases, and to a more careful sorting of it in others. As in ancient times, trust will probably be bound to the personal reputation of those reporting the news: if we cannot rely on images, we will turn to the reputation of eyewitnesses. The "hard problem of consciousness", interesting as it is, is thus not the most pressing of those posed by these new tools, which in a short time will change our relationship with images, words, and the manipulation of symbols. Whether, and whatever, they feel while doing so.

*** This article was previously published in Italian by The Italian Review: https://www.theitalianreview.com/what-is-it-like-to-be-a-bot/

Notes:

[i] Maurice Merleau-Ponty, The Visible and the Invisible, Northwestern University Press, 1968, p. 78.