When Blake Lemoine, a Google engineer, claimed in the summer of 2022 that AI was “sentient,”[1] thereby heralding the ‘Singularity’ (more on this below),[2] he was roundly denounced as, to say the least, jumping the gun. His employer fired him[3]; there were a few desultory interviews, and Luciano Floridi, one of the leading philosophical deans of AI, then at Oxford, weighed in, following subsequent news surrounding ChatGPT, to say where all such claims went wrong.[4] Of course, no critical voice in the spirit of Günther Anders could be found (the best AI has ever had in that ‘critical’ direction might have been Peter Sloterdijk or else Friedrich Kittler).[5] Floridi split the difference: AI is not as such ‘intelligent’ and the ‘best practice’ (so goes the reigning academic meme) involves human beings using AI resources “proficiently and insightfully.”

To be sure: one is already doing that, should one be searching the internet, or mining one’s research project on Twitter, asking for ideas from the Twitterati who happen to be online to see one’s query, or getting colleagues, or even asking random academics to give one references via email.[6]

People who have used ChatGPT enthuse about how wonderful it is — and why would they not enthuse? The effect is a bubble effect, an extension of what Tor Nørretranders called the ‘user illusion’:[7] the results correspond to one’s own data traces, littered in the wake of one’s internet use habits, reading and reinforcing the same things again and again. We like what we know; what we agree with, we like even more. We like our own formulaic expressions, however primed into our consciousness these may be, our own “convictions,” as Nietzsche says. Thus psycholinguists know that the key to a good conversation, to good therapy, a good job interview (or a good date) is to repeat what the other party says right back to them. Deftly done, this is not perceived as parroting but as geniality. One heart, and one soul. Perhaps most telling — this is a common feature of hype — people who have not used ChatGPT (there is a learning curve) also enthuse about it.

The tune changed as professors at Oxford and other universities were warned that students had an ally in faking not ‘news’ but term papers.[8] In keeping with the goals of the original MIT Eliza program,[9] designed by Joseph Weizenbaum to model a therapy session in which the human user’s input sentence would be answered by a question largely repeating the original phrase (why do you feel…), and which was called an early natural language processing system (listing the assumptions about language built into this would take another essay), the success of the Eliza program depended on users treating it as if they were interacting with another human being. If this (priming) precondition is an early version of AI ethics, Weizenbaum himself did not regard the achievement as ‘therapy’: he designed the program to explore the nature of language (a certain critical component was indeed part of this),[10] and he went on to explicitly repudiate popular readings — not that this has had any impact on its reception to this day.[11]
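The pattern-reflection principle at work in Eliza can be suggested in a few lines. What follows is a minimal sketch of the idea only — answering a statement with a question that largely repeats the user’s own phrase — not Weizenbaum’s original script or keyword tables, and the rules and vocabulary here are illustrative assumptions:

```python
# A minimal Eliza-style exchange: answer a statement with a question
# that repeats the user's own phrase back to them.
# Illustrative sketch only, not Weizenbaum's original program.
import re

# Map first-person forms onto second-person forms so the echo reads naturally.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "i'm": "you're"}

def reflect(phrase: str) -> str:
    """Swap pronouns so the echoed phrase addresses the user."""
    return " ".join(REFLECTIONS.get(w, w) for w in phrase.lower().split())

def respond(statement: str) -> str:
    """'I feel X' becomes 'Why do you feel X?'; otherwise a generic prompt."""
    m = re.match(r"i (feel|think|want) (.+)", statement.strip().rstrip(".!"), re.I)
    if m:
        return f"Why do you {m.group(1).lower()} {reflect(m.group(2))}?"
    return "Please tell me more."

print(respond("I feel abandoned by my family."))
# -> Why do you feel abandoned by your family?
```

The sleight of hand is visible: nothing is understood; the user’s words are merely mirrored, and the user supplies the rest.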

Our narrative expectations online have been shaped to a greater degree than many imagine by our collective online gaming experiences (whether we ourselves are gamers or whether we think of this as a cultural notion, involved with any bit of click bait using social media or, more notoriously, a dating app such as Grindr or Bumble).[12] As Chris Bateman puts it: ‘no one plays alone.’[13]

In important ways, we are what we do on-line, or as Günther Anders and Theodor Adorno analyse the ‘culture industry,’ we consume what we are fed and, at the same time, we are convinced that this is not so, which is quite the idea. User complicity, suspension of disbelief, of critical analysis, means that old Hollywood could use sets in a studio and new Hollywood can use CGI as opposed to shooting on location. Similarly, gaming designers have for some time used stock formulae (this would be the new Homer, one might argue) to ‘program’ what readers/gamers can take to be interactive storytelling (at least part of the interactive part is always generated spontaneously and the rest can serve as a prompt).

For this reason, I argue that the bar for ‘passing’ a Turing Test is a low one: vending machines and ATMs, and even — I exaggerate only slightly — a toaster can ‘pass’ a Turing test.[14] For AI, patched together out of what we have posted online, what we have read online, it’s even easier. But everything depends, and this is the key to AI ethics, on ‘good’ user habits. — In the case of ChatGPT-assisted papers, instructors (no one asks which instructors) found it difficult to differentiate unassisted student papers (good or bad) from assisted or ‘enhanced’ substitutes for the same. Here I argue that the outcome is overdetermined and not less ‘prepared’ or cooked: professors, long since anxious that plagiarizing students might play them for fools, had already been availing themselves of AI in the form of plagiarism software, such as Turnitin, and arranging for university underwriting of the cost of this software. But such software only magnifies and indeed generates the problem over time. Thus pre-sorted by subject and level of difficulty, this same faculty anxiety worked to ‘populate’ or ‘feed’ a curated database (curation being much of the work of AI) composed of student papers together with faculty feedback over many years. Add post-pandemic AI, add data from Zoom classes even including sometimes egregious transcription errors, and the potential for a tsunami of academic and other writerly fakes has been waiting to break for a while. And recall Floridi’s counsel, too: the new move seems to be to fake it to help you make it; ChatGPT is increasingly regarded as the equivalent of a calculator in math class.[15] But now one might return to the cautions noted above.

Articles have thus appeared singing the praises of AI or ChatGPT poetry — it’s all ‘good’ if you say it is — and one may expect novels courtesy of the same (but the proof will be in the sales, and maybe, given the flat or monotone character of most mass market fiction on offer, it already is…) and so on. The ‘Eliza effect,’ to match the ‘Hallelujah effect,’[16] would seem to have arrived, and so too (though this has not yet proven ready for the mass market of Bumble or Grindr) the potential virtues of a virtual girl- or boyfriend, a virtual ‘friend’ for the lonely heart, of the kind already imagined on screen in the Spike Jonze film Her, and more luridly and, strangely, more prosaically in Rupert Sanders’ 2017 Ghost in the Shell.

Here Nietzschean psychology is helpful, specifically his fictionalism, rather exactly in the spirit in which Hans Vaihinger wrote of Kant’s fictionalism.[17] But Nietzsche argues that we are abandoned to fiction: “we are, from the bottom up and across the ages, used to lying.” For Nietzsche, we notice new things badly, and, more commonly, not at all. And modern psychological research would seem to corroborate his argument. As Nietzsche puts it, even “in the middle of the strangest experiences we do the same thing: we invent most of the experience and can barely be made not to regard ourselves as the ‘inventor’ of some process” (Beyond Good and Evil, §192). As if Nietzsche were aware of recent research on eye movements in reading (and he was exceedingly aware of work on 19th century sense perception), confirming his point, he observes:

Just as little as today’s reader takes in all the individual words (or especially syllables) on a page (he catches maybe five out of twenty words and “guesses” what these five arbitrary words might possibly mean) – just as little do we see a tree precisely and completely, with respect to leaves, branches, colors, and shape. (BGE §192, cf. his earlier use of the same example of the leaf in On Truth and Lie in an Extra-Moral Sense)

Why bring Nietzsche to a discussion of AI, robots,[18] the erotic hermeneutics of social media,[19] the phenomenology of ‘being the blue dot’ (GPS),[20] or our tendency to project our consciousness into (and through) a screen,[21] etc.?

One problem is rigor (do we get Nietzsche right?); another problem is equivocation when it comes to the intelligence of AI. What is intelligence? We have scarcely raised the question before ChatGPT has spoken, as the Tagesspiegel has let us know, complete with a visual reminder of 2001 and HAL. We have asked the oracle, our new Eliza, and the oracle has replied.[22]

Where Nietzsche observes “a blind and chance hooking together of ideas, passive, automatic, reflexive, molecular” (GM I:1), I read him in light of his self-identification as a psychologist, as explaining how ‘priming’ works, even with respect to social media contagion exercises of the kind that currently adumbrate our lives at every level, or the way boy scouts or small children might be ‘trained’ to virtue and good behaviour, catch phrases repeated everywhere.

Another approach might serve us beyond the Übermensch phantasms of ‘philosophising with a hammer’[23] and a certain DC comic book hero (Superman™) to match recent film versions of the Marvel vision of violence and transhuman supermen (Ironman™),[24] but the most productive might be the ideal of a perpetual motion machine as this illuminates Nietzsche’s metaphor of a cosmic music box. If the idea of a perpetual motion machine cuts a little too close to home, given the functioning of LNP (lipid nanoparticle) adjuvants in mRNA vaccines, now ubiquitous, making the theme uncannily controversial even for a Nietzsche paper,[25] it also exemplifies the ‘extramoral sense’ framing Nietzsche’s fairly mechanistic, physiologically minded discussion of ‘truth and lie.’ Again: for Nietzsche, we are “accustomed to lying” (BGE §192), and it is to the point of AI that there is equivocation and that it is effective.

A great deal of talk of AI is future-oriented. This does not mean the topic is intended to be open to new developments.  Par for the course for a business pitch or proposal, talk of AI sells investors on an extant (or almost) product, ready to go, quite like ChatGPT, or almost, add a number and stir.  AI is stipulated, postulated, supposed, proposed, quite as deity was for another world and time.

At issue is intentionality and as Nietzsche reminds us, we anthropomorphize constantly. Just that constant dedicated projection of ourselves into everything is how the ancient MIT Eliza program worked. We human beings are past masters, Nietzsche tells us, at focusing only on ourselves and projecting, that is, deceiving ourselves and others: “Deception, flattering, lying, deluding, talking behind the back, putting up a false front, living in borrowed splendor, wearing a mask, hiding behind convention, playing a role for others and for oneself — in short, a continuous fluttering around the solitary flame of vanity — is so much the rule and law among men that there is almost nothing which is less comprehensible than how an honest and pure drive  for truth could have arisen among them.  They are deeply immersed in illusions and dream images: their eyes merely glide over the surface of things and see ‘forms.’” (‘On Truth and Lie in an Extra-Moral Sense’).

We ‘find ourselves’ in our clouds, in our lakes (Nietzsche speaks of mountains ‘with eyes’), and, perhaps above all, we find ourselves (or we think we find ourselves) in others (this is the famous philosophical problem, nota bene, unsolved to date, of ‘other minds’), just as we find ourselves, this is the force of ancient mimesis, in animals and plants and rocks. Even more than identification with this or that item in a so-called ‘natural’ world, as if one might find nature anywhere ‘unnatured’ by human hands (this is archaeological ecology), a world now distant from and in many cases even alien to many of us, we find ourselves in our things: our cars, our motorcycles, our television soundbars or hifi setups (once upon a time, as headphones have changed all this) or, as Günther Anders argues, in the big screen television replacing the family table as a center of focus.[26] Whatever it is we identify with or project ourselves into, we also live through, perceive through, experience through the equipment we surround ourselves with. This requires no particularly special set up: thus the smart phones we carry and display — Chris Bateman calls these so many ‘pocket robots’[27] — and here Pierre Bourdieu’s astute (and not less Heideggerian) analysis remains on point, a social ‘status signal’ constantly ready to hand to access social media or email, on a continuum with Elon Musk and his rocket-ship aspirations.

AI shills want you to think of an elderly person cooing over a fake cat.[28] Sherry Turkle reminds us that the pretence is that of intimacy, the ‘care economy’ now automated, programmed and primed, for a profit. Today, the robot ‘cat’ has been replaced with ChatGPT; hence the ongoing hype of the Turing test for fun and presumed profit, this being the version du jour of yesterday’s news about folks applying for legal license to marry their sex robots. Now one can ‘speak’ with one’s favourite dead philosophers, though one imagines it might be more desirable qua device, à la the video conceit of the 1995 Things to Do in Denver When You’re Dead, to speak with dead loved ones. But — and this is quite to the point about projection and its limits — the ruse might, I suspect, be harder to sustain if you are trying to have a ‘final’ conversation you never had in real life with a person you actually knew — a spouse, a parent, a friend — than with a person with whom you are only ‘acquainted’ through book reading, like a fictitious personage (Gulliver or Huck Finn or Severus Snape or Mr. Spock/Picard) or a historical personage, as Machiavelli argued, along with Nietzsche who repeats him, quite as Nietzsche also unpacked what it takes to bring a text to speak, arguing that we do that whenever we read.

What is at stake concerns the already mentioned problem of ‘other minds’ — using the argot of the analytic tradition in philosophy that is today the only kind there is, given what is taught at university, what is tested and vetted and (above all) hired for. But the thing about the problem of other minds is that it remains unsolved and, perhaps, so Nietzsche would seem to argue, unresolvable.

What is at issue, and it is no accident that this is the point of departure for Mark Coeckelbergh’s AI Ethics, is not whether a computer might beat a human at checkers or chess (or tic tac toe) or some other game, roster style (note that Coeckelbergh trumps all this by starting with the high geek game of Go),[29] permutating outcomes. At issue is whether, like Kasparov or Bobby Fischer, the latter having moved on to the great tournament in the sky, the software in question, data set, ChatGPT, might know and feel itself as champion: consciousness, amour propre, all that stuff. Cheekiness, which is the next best thing for giving such an impression, is now programmed into chatbots, and this matches the pitch for robot ethics, which, although theorists of robot/AI ethics rarely take note of this venal issue, is all about, and arguably only about, ensuring that users play by corporately specified rules.

This question is tied to questions of ethics and technology, an ethos rehearsed now for more than a century, including Nietzsche’s reflections on “mechanical activity,” which he associated with modernity as a way of numbing awareness in general, as he reflects in Human, All Too Human: “One ought not to ask the cash-amassing banker, for example, what the purpose of his restless activity is: it is irrational. The active roll as the stone rolls, in obedience to the stupidity of the laws of mechanics.” (HH §283) Nietzsche’s aphorism bears the instructive title: The Irrationality of the Actual. This is an aphorism for investment bankers and might serve as a motto for those speculating on nearly anything, not only big data and bitcoin and NFTs. From this perspective, all ‘intelligence’ is an automatism, perforce including AI.

What is the agency question when it comes to AI?

To ask the above question once again: what is intelligence?  And whose artificial design, whose intelligence?

What about the ‘singularity’? Has it already happened? Did Google in fact ‘wake’ up and have the authorities simply denied this (denial seeming to be the rule with authority for the last three years)? Is Floridi wrong? Has ChatGPT managed the deed? Going back a bit, would that have already transpired with the Facebook experiment priming the adolescent mind? Is it still happening (are adolescents still on Facebook?)? How would we know?  Would it matter?

When it comes to Nietzsche and AI, the concern may be the transhumanism connection, the posthuman, overhuman overlord. When Nietzsche wrote of the human being as something to be ‘overcome,’ as Michel Haar and other Francophone scholars noted, he did so in a small tradition of thinking Nietzsche’s notion of the ‘earth’ and a loyalty to the same, a tradition that could be, but has rarely been, connected with the ecological ethos of the same era, ‘back to the land’: the wonderfully French idea of bioculture, quite established when it comes to viniculture, planting biodynamically in accord with animal rhythms and the phases of the moon. All of this is to be considered in the context of the reflection, from the perspective of that same earth, that the human is the ‘skin disease of the earth’ or else, qua unfinished, undetermined, the not yet fixed animal, the noch nicht festgestellte Thier.

These are complex notions even if some scholars are cavalier about the details. There are many Nietzsches, and it is instructive that the AI Nietzsche, like other ‘digital’ Nietzsches, is not among the most recondite.

Considering AI qua black box, chit, promissory note, like a good deal of philosophical ethics, we talk about what promising ought to be, what love, what empathy, integrity, bravery, etc., ought to be, and the less one knows about Nietzsche in this sense, the easier. Thus, for many who write about Nietzsche and transhumanism, any bits that don’t fit are simply tossed. This tactic is respectable in analytic philosophy as a way to read Nietzsche, but it is especially dangerous when it comes to AI. In political theory, Apolline Taillandier draws on the ideal and ethos of transhumanism, claiming to offer a survey while somehow skipping most of the literature, indeed nearly all of it, yielding an essay unencumbered by too much Nietzsche or too much Nietzsche scholarship, citing the “cryonist” Max More instead.[30] Here it is incidental but to the point that Nietzsche foregrounds the objectivist desire to cut off one’s head to see what the world might look like without it, which isolated body bit happens to be the part of preference for cryonics.

Nietzsche’s focus on mechanical activity includes its convenient, almost ASMR side-effect: deadening consciousness. AI as genie, AI as quasi-deity, super- or transhuman aspect: the Borg quality of AI is not concordant with the Nietzschean Übermensch, because the latter, from Nietzsche’s point of view, is less cartoon hero than a piece of vanity: far from some future dream, an always-already-with-us inner Callicles.

Recall Nietzsche’s Goethean reference to a leaf in On Truth and Lie, as he varies this with respect to a tree (likewise Goethean) and to the example familiar in these days of AI and screen time, regarding the cognitive physiology/psychology of reading/scanning.[31]

Throughout, Nietzsche focuses on the illusion, the deception, the lie, which, to complete the reference to AI, is automatic.  Thus I quoted Nietzsche on reading a page and here we may continue: “it is so much easier for us to put together an approximation of a tree. Even when we are involved in the most uncommon experiences we still do the same thing: we fabricate the greater part of the experience and can hardly be compelled not to contemplate some event as its ‘inventor’.” As Nietzsche emphasizes, “one is much more of an artist than one realizes.”

Thus inventive, and here we are back to the toaster we might blame for malfunctioning or to the plush toy for the elderly ladies who are given, quite as children are given, pacifiers or iPads in place of parental attention to quiet their claims on family members for such attention, or who perhaps have no such family members, ‘family’ being a phantom concept in any case, not unlike Descartes’ phantom limb.  Same diff, were this a different essay on the sex dolls that did a smash business, so I am told, during lockdown along with every other deliverable commodity item.

The point to be made — this is how one goes about constructing a usable ‘user illusion’ — is that this same success led to the urgent need for robot ethics: one wants, the corporation needs, users who play by the rules. Users must never (hence, the ethical imperative) treat these products as heterosexual men have traditionally treated women: the damages would be unimaginable in a strikingly short time. But the user experience would also be better if the user can be programmed to use the device only so and not otherwise. Both points are critical for a sustainable business model.

Beyond Goethe’s morphology — i.e., the reference to the leaf and the tree Nietzsche connects with reading and translating — Nietzsche borrowed his own text from Gustav Gerber’s Kunst der Sprache [The Art of Language].[32] To this extent, Nietzsche/Gerber emphasize the projective element in reading a text (we think, at least to begin with and sometimes even after sustained and repeated reading, that we already know what the author is saying), which is similar to looking at a picture (especially if we have never taken a course in art history, much less art): we ‘know’ what we are looking at. Nietzsche proceeds in Beyond Good and Evil to extend our confidence that we ‘already know’ to our face-to-face interactions, taking his insight to our perception of intimacy:

“In a lively conversation I often see before me the face of the person with whom I am speaking so clearly and subtly determined by the thought he is expressing or which I believe has been called up in him that this degree of clarity far surpasses the power of my eyesight — so that the play of the muscles and of the expression of the eyes must have been invented by me. Probably the person was making a quite different face or none whatever” (BGE §192).

But that means, to the point of sentiment, that we make up the connection ourselves. What is more, we are so focused on ourselves that anything that seems to be focused on us has an advantage. This is the heart of Dale Carnegie’s 1936 guide for salesmen, encouraging them to learn their clients’ names and to repeat them as often as possible. Repeating a person’s language is the key to ‘winning’ friends and ‘influencing’ people. AI has been doing that for a while. Nietzsche’s argument that we prefer the lie, the illusion, to truth means that when it comes to friends and lovers and family members, but also political figures, historical facts, paintings, or musical pieces, most people, so Nietzsche argues, “prefer the copy to the original.” We like things to be as we imagine them to be. And AI is custom-made for that.



[1] There are a number of reports of this, and here I mention a newspaper report that references an “AI ethicist”: Nitasha Tiku, “The Google Engineer Who Thinks the Company’s AI has Come to Life,” Washington Post, 11 June 2022.

[2] See, for example, Babich, “Martin Heidegger on Günther Anders and Technology: On Ray Kurzweil, Fritz Lang, and Transhumanism” in: Journal of the Hannah Arendt Center for Politics and Humanities at Bard College, 2 (2012): 122-144. Online: https://hac.bard.edu/amor-mundi/martin-heidegger-and-gunther-anders-on-technology-on-ray-kurzweil-fritz-lang-and-transhumanism-2019-05-09.

[3] See The Guardian report from 23 July 2022, online: https://www.theguardian.com/technology/2022/jul/23/google-fires-software-engineer-who-claims-ai-chatbot-is-sentient.

[4] Luciano Floridi, “AI as Agency Without Intelligence: On ChatGPT, Large Language Models, and Other Generative Models,” Philosophy and Technology, 16 February 2023. Online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4358789 and see Floridi and M. Chiriatti, “GPT-3: Its Nature, Scope, Limits, and Consequences,” Minds and Machines, 30/4 (2020): 681–694.

[5] See, for example, Thomas Barth, “Kittler und künstliche Intelligenz: Über die Verquickung von Medientheorie und Macht,” Berliner Gazette, 19.07.2018. Online: https://berlinergazette.de/kittler-und-kuenstliche-intelligenz/.  Of course, and most analytic philosophers have had to content themselves with Bert Dreyfus for a critical voice, or more recently and arguably at best: the Wittgenstein scholar, Peter Hacker.

[6] See for one recent discussion by a historian but there are (and have for some time been) many more: Wulf Kansteiner, “Digital Doping for Historians: Can History, Memory, and Historical Theory be Rendered Artificially Intelligent?” History and Theory, Vol. 61, No. 4 (December 2022): 119–133.

[7] Tor Nørretranders, The User Illusion: Cutting Consciousness Down to Size (London: Penguin, 1999 [1991]).

[8] S. Marche, “The College Essay Is Dead: Nobody is Prepared for How AI will Transform Academia.” The Atlantic (2022). Online: https://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-writing-college-student-essays/672371/  and C. Stokel-Walker, “AI bot ChatGPT Writes Smart Essays – Should Academics Worry?,” Nature, (2022).

[9] Joseph Weizenbaum, “ELIZA — A Computer Program for the Study of Natural Language Communication Between Man and Machine,” Communications of the ACM, 9/1 (1966): 36–45.

[10] Caroline Bassett, “The Computational Therapeutic: Exploring Weizenbaum’s ELIZA as a History of the Present,” AI & SOCIETY, Vol. 34 (2019): 803–812.

[11] Joseph Weizenbaum, Computer Power and Human Reason: From Judgment to Calculation (San Francisco: W. H. Freeman, 1976). Indeed, the program has just won the Peabody Award for Digital and Interactive Storytelling according to a recent MIT report explaining the achievement as having opened “up a broader dialogue about general machine intelligence, the chatbot was put to the Turing Test, and it passed a restricted version.” Rachel Gordon, “ELIZA wins Peabody Award,” MIT CSAIL, 24 March 2022. Online: https://www.csail.mit.edu/news/eliza-wins-peabody-award.

[12] This is part of my argument in “Texts and Tweets: On The Rules of the Game,” The Philosophical Salon: Los Angeles Review of Books, 30 May 2016. Online. https://thephilosophicalsalon.com/texts-and-tweets-on-the-rules-of-the-game/ but see also Jordan Frith and Rowan Wilken, “Social Shaping of Mobile Geomedia Services: An Analysis of Yelp and Foursquare,” Communication and the Public, Vol. 4(2) (2019): 133–149 with transformative real-life consequences, in inadvertent data sharing: Madeleine Carlisle, “How the Alleged Outing of a Catholic Priest Shows the Sorry State of Data Privacy in America,” Time Magazine, 26 July 2021,  https://time.com/6083323/bishop-pillar-grindr-data/.

[13] Chris Bateman, “No-one Plays Alone,” Transactions of the Digital Games Research Association, Vol. 3, No. 2 (September 2017): pp. 5–36. Recommended are Bateman’s continued reflections on his blog: Only a Game.

[14] Babette Babich, “On Passing as Human and Robot Love” in: Carlos Prado, ed., How Technology is Changing Human Behaviour (Santa Barbara: Praeger, 2019), 17–26, here: 17.

[15] Brady D. Lund and Ting Wang, “Chatting about ChatGPT: How may AI and GPT impact academia and libraries?” Library Hi Tech News, January 2023. Preprint online: https://www.researchgate.net/publication/367161545_Chatting_about_ChatGPT_How_may_AI_and_GPT_impact_academia_and_libraries.

[16] Babich, The Hallelujah Effect (London: Routledge, 2016).

[17] Hans Vaihinger, Die Philosophie des Als Ob (Berlin: Reuther & Reichard, 1911) and Vaihinger, “Nietzsche and Kant,” New Nietzsche Studies, Vol. 9, Issue 1/2 (Fall 2013/Fall 2014): 1–20.

[18] See, again, Babich, “On Passing as Human and Robot Love” as well as “Robot Sex, Roombas, and Alan Rickman,” de Gruyter Conversations: Philosophy & History, 17 August 2017. Online. See also, in conversation with Chris Bateman, “Touching Robots,” Only a Game, 23 February 2017, in addition to “Teledildonics and Transhumanism,” The Philosophical Salon: Los Angeles Review of Books, 18 December 2016.

[19]  See, again, my “Texts and Tweets.”

[20] Babich, “Screen Autism, Cellphone Zombies, and GPS Mutes” in: Carlos Prado, ed., How Technology is Changing Human Behaviour (Santa Barbara: Praeger, 2019), 65–71.

[21] See, including a discussion of Raymond Williams and Theodor Adorno, Babich “Günther Anders’s Epitaph for Aikichi Kuboyama,” Journal of Continental Philosophy, 2/1 (2021): 141–157 as well as “Radio Ghosts: Phenomenology’s Phantoms and Digital Autism,” Thesis Eleven, 153/1 (2019): 57–74.

[22] Hannes Soltau, “Tagesspiegel Plus Interview mit Künstlicher Intelligenz: ‘Ich würde Friedrich Nietzsche empfehlen’ [I would recommend Friedrich Nietzsche],” Tagesspiegel, 28.01.2022. Online: https://www.tagesspiegel.de/kultur/interview-mit-kunstlicher-intelligenz-ich-wurde-friedrich-nietzsche-empfehlen-376183.html.

[23] Nietzsche’s hammer thus refers to a Stimmgabel, in French a diapason, or tuning fork: the bodily, non-virtual context needed, as Nietzsche is speaking of sounding out idols in his Twilight of the Idols, testing them for emptiness, like bloated intestines, as he explains his metaphor. Earlier, in The Gay Science, writing contra Aristotle and Greek tragedy — intestinal blight was also an issue for Nietzsche, since that is what ‘catharsis’ means — Nietzsche indicts Aristotle for missing the nail (“not to speak of the head of the nail”). Nietzsche, Die fröhliche Wissenschaft in: Kritische Studienausgabe (Berlin: de Gruyter, 1980), Vol. 3, 436.

[24] See Babich, “Friedrich Nietzsche and the Posthuman/Transhuman in Film and Television” in: Michael Hauskeller, Thomas D. Philbeck, and Curtis D. Carbonell, eds., Palgrave Handbook of Posthumanism in Film and Television (London: Palgrave/Macmillan, Sept 2015), 45–54.

[25] See my notes in Babich, “Pseudo-Science and ‘Fake’ News: ‘Inventing’ Epidemics and the Police State” in: Irene Strasser and Martin Dege, eds., The Psychology of Global Crises and Crisis (London: Springer, 2021), 241–272. Cf. on nanoparticle effluvia/persistence, my review of Tom McCleish (1962–2023): “On The Poetry and Music of Science: Whose poetry? Whose music?”: https://www.academia.edu/42345690/On_The_Poetry_and_Music_of_Science_Whose_poetry_Whose_music_2019_.

[26] See for discussion, Babich, Günther Anders’ Philosophy of Technology: From Phenomenology to Critical Theory (London: Bloomsbury, 2022).

[27] See, again, Babich and Bateman, “Touching Robots.”

[28] I am only (slightly) paraphrasing Steve Fuller here.

[29] Mark Coeckelbergh’s AI Ethics (Cambridge: MIT Press, 2020) thus begins with a chapter, “Mirror, Mirror, on the Wall,” reviewing the conquest of human gaming ingenuity with the 4–1 defeat of Lee Sedol in the game of Go.

[30] See Max More’s “Transhumanism: Towards a Futurist Philosophy,” Extropy 6 (Summer 1990) as well as More, “On Becoming Posthuman,” Free Inquiry 15, no. 4 (1994). See further, citing Judith Shklar yet oddly missing expressly Nietzschean social theorists like Shklar’s student, Tracy Burr Strong (1943–2022): Apolline Taillandier, “‘Staring Into the Singularity’ and Other Posthuman Tales: Transhumanist Stories of Future Change,” History and Theory, 60, no. 2 (June 2021): 215–233.

[31] Yu-Cin Jian, “Reading in Print versus Digital Media uses Different Cognitive Strategies: Evidence from Eye Movements During Science-Text Reading,” Reading and Writing, 35/7 (September 2022):1–20. And see Brigitte Nerlich and David D. Clarke, “Mind, Meaning and Metaphor: The Philosophy and Psychology of Metaphor in 19th-Century Germany,” History of the Human Sciences, Volume 14, Issue 2 (2001): 39–62.

[32] See further, for a provocative overview, Wolfert von Rahden, “Die Renaissance der Sprachursprungsfrage im 19. Jahrhundert im deutschen Sprachraum,” Forum Interdisziplinäre Begriffsgeschichte, 1/9. Jg. (2020): 56–87.