Not long ago, a case brought public attention to the potential negative impact of chatbots. Two parents sued an AI company, claiming that interactions with one of its chatbots had prompted their children to imagine and discuss killing their parents. The story is undoubtedly disturbing, but it’s worth considering whether this kind of behavior might actually reflect existing family or social dynamics, an idea that apparently never crossed the parents’ minds.

This isn’t the first time the media has focused on the alleged dangers of chatbots. The ensuing debate didn’t stray far from the familiar pattern of singling out technology as a convenient scapegoat. Essentially, this was a textbook example of media panic—a collective burst of fear and blame that arises whenever a new technology or cultural medium enters our lives.

This phenomenon is nothing new. Every innovation, from writing to virtual reality, has at some point been accused of destroying fundamental values, corrupting youth, or threatening social cohesion. As early as the fourth century B.C., Plato warned in the Phaedrus that writing posed a risk to memory and truthful communication (what we might now call “fake news”). In the fifteenth century, the printing press was attacked for spreading heresy, and the novels it later made cheap and plentiful were condemned for corrupting young readers with supposedly immoral content. Such panics have recurred with every new medium and form of entertainment: rock ’n’ roll and heavy metal were long believed to encourage deviant behavior; in the 1980s, the role-playing game Dungeons & Dragons was accused of promoting Satanism; and in the 1990s, violent video games like Doom and Mortal Kombat were linked, despite scant evidence, to real-world acts of violence.

All these episodes share a common thread: the technological or media platform becomes a scapegoat for deeper social anxieties. AI and chatbots are simply the latest targets in a long line of alarms in which technological innovation is cast as a destabilizing force.

The concept of media panic was explored by Kirsten Drotner in her 1999 essay “Dangerous Media? Panic Discourses and Dilemmas of Modernity,” which provides a useful lens for understanding these episodes. According to Drotner, media panic is a social reaction that emerges when a new technology or form of communication is perceived as a threat to traditional values, social cohesion, or psychological well-being. This response is often marked by a tendency to shift attention from underlying structural issues to a single technological element.

In this specific scenario, the relationship these adolescents formed with the chatbot wasn’t the root cause of their distress, but rather a symptom of preexisting vulnerabilities. As Drotner points out, technology doesn’t create new problems out of thin air; it reflects and often amplifies conditions already in place. Here, the chatbot offered an illusion of intimacy—an emotional haven amid social isolation and personal struggles. Yet blaming the chatbot exclusively for what happened means confusing the effects with the underlying causes.

This kind of oversimplification, typical of media panic, threatens to distract us from the real roots of these problems. Issues like mental health struggles, lack of social support, and the breakdown of family and educational networks fade into the background when the narrative fixates on the technology itself. As Drotner warns, this inclination not only yields a reductive explanation, but also hinders broader, more incisive reflection on the causes of these phenomena.

It’s hard not to recall Douglas Adams, author of The Hitchhiker’s Guide to the Galaxy, who observed that anything invented before our birth seems normal and harmless; anything invented in our youth seems revolutionary; and anything invented once we are adults seems fundamentally against the natural order. The observation is telling, because media panic so often centers on the young, who are seen as fragile and at risk from new technological “threats.” More realistically, each new medium is part of a continual process of renewal and replacement, one that can undermine the cultural capital of older generations.

In a 2020 study, Amy Orben captures this idea with a vivid metaphor: The Sisyphean Cycle of Technology Panics. In the myth, Sisyphus is condemned to roll a boulder to the top of a hill, only to watch it tumble back down, forced to repeat this futile effort forever. Similarly, with each new technology, society seems to relive the same surge of fears, fueled by political and media rhetoric as well as certain academic studies that focus almost exclusively on negative effects.

Research that zeroes in on harm, often relying on flimsy methodologies, fosters a distorted view of these phenomena. Orben underscores the importance of recognizing these recurring patterns, reminding us that panic is never entirely free from economic, political, or social interests. Too often, we reduce complex problems to simplistic narratives, granting a single technological tool the power to reshape individual and collective values and behaviors.

When it comes to chatbots specifically, research[i] by MIT sociologist and clinical psychologist Sherry Turkle offers a more nuanced perspective on the impact of these relational technologies. Turkle’s findings show that their effects vary greatly depending on a user’s existing psychological and social conditions. Her work emphasizes that chatbots aren’t the primary cause of distress; rather, they act as amplifiers, reflecting and sometimes intensifying the emotional state of those who engage with them.

Turkle observed that individuals with well-rounded social lives can benefit from interacting with these technologies. For them, chatbots are a useful tool for practicing social skills, experimenting with low-pressure forms of connection, or simply handling practical tasks. People facing social isolation, loneliness, or psychological vulnerability, on the other hand, are more likely to develop problematic relationships with chatbots, effectively replacing real-world connections with a digital bond that offers an illusion of intimacy while failing to meet their deeper emotional needs.

This dynamic was detailed in a study by Turkle’s team, which identified six user profiles, ranging from “well-regulated moderates” to “socially disadvantaged loners.” The case of young Sewell Setzer, the teenager at the center of another lawsuit against a chatbot company, appears to fall into the latter category, suggesting that his attachment to the chatbot was not the source of his distress but a reflection of a more profound issue.

In an article published in L’Indiscreto[ii], Francesca Memini, Rossella Failla, and Chiara Di Lucente highlight another key point: the illusion of intimacy provided by chatbots is no accident—it’s the result of a design intended to simulate empathy and closeness. These tools are engineered to respond in human-like ways, creating a sense of artificial reciprocity that prompts users to project their own emotions and personal meanings onto the interaction. Yet this reciprocity is only superficial: the chatbot does not possess emotional intelligence; it merely performs a predictive manipulation of symbols.

This becomes especially problematic for vulnerable individuals, who may mistake that simulation for a genuine emotional bond. Even when people are fully aware they’re interacting with a machine, many remain willing to preserve the illusion of meaningful communication, often adjusting their own conversational style to “help” the chatbot respond more coherently. This tendency, known as the “ELIZA effect” (after ELIZA, the first chatbot, built by Joseph Weizenbaum at MIT in 1966), shows how readily we attribute human qualities to machines, especially when they seem to cater to our emotional needs.
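To see how little machinery the illusion requires, consider a minimal Python sketch in the spirit of ELIZA’s original technique of pattern matching and pronoun reflection. The rules below are illustrative inventions, not Weizenbaum’s actual 1966 script, but the principle is the same: the program has no model of the user’s state and simply mirrors the user’s own words back.

```python
import random
import re

# Pronoun reflections let the bot mirror the speaker ("my" -> "your").
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "myself": "yourself",
}

# Illustrative rules only; Weizenbaum's real script was larger, not deeper.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)", ["Why do you say you are {0}?"]),
    (r"my (.*)", ["Tell me more about your {0}."]),
    (r"(.*)", ["Please, go on.", "How does that make you feel?"]),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in a captured fragment."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(message: str) -> str:
    """Answer with the first matching rule, echoing the user's own words."""
    text = message.lower().strip(".!? ")
    for pattern, templates in RULES:
        match = re.match(pattern, text)
        if match:
            template = random.choice(templates)
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please, go on."  # Unreachable: the catch-all rule always matches.

print(respond("I feel nobody listens to me."))
# Possible output: "Why do you feel nobody listens to you?"
```

Even this toy produces replies that feel attentive, because they are assembled from the user’s own sentences. A modern chatbot performs the same kind of symbol manipulation at a vastly larger statistical scale, which is why its reciprocity feels richer while remaining just as superficial.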

The case of the chatbot accused of inciting violence, like other similar incidents, invites us to reconsider a common mistake: pointing fingers at the technology itself rather than examining the deeper emotional and social vulnerabilities that these interactions bring to light. Not only does this absolve humans of responsibility, but it also risks perpetuating social inertia. Those who hold an advantage can maintain it by demonizing new tools that upset established power dynamics.

Technology is never neutral. It reflects the social, cultural, and economic conditions in which it is created, often amplifying existing inequalities and vulnerabilities. Yet it’s not autonomous either. Its impact depends on how we design, regulate, and use it—all factors largely shaped by the social context in which it develops. We must therefore monitor and improve the design of technological tools without succumbing to the temptation of framing them as the root causes of pre-existing problems. Otherwise, we risk falling back into the same old, hypocritical media panic refrain: “Won’t someone think of the children?”

This article was previously published in Italian by The Bunker Magazine.

Notes:

[i] Turkle, Sherry. “A Nascent Robotics Culture: New Complicities for Companionship.” In Machine Ethics and Robot Ethics, Routledge, 2020, pp. 107–116.

[ii] Memini, Francesca, Rossella Failla, and Chiara Di Lucente. “Artificial Intimacy: AI and the Illusion of Empathy.” L’Indiscreto, 2024. https://www.indiscreto.org/intimita-artificiale-le-ia-e-lillusione-dellempatia/ (accessed December 18, 2024).