The recent debate over “AI Slop”—a term coined to dismiss AI-generated works as trash—brings to light a host of issues. Some, like Ted Gioia, lament a vaguely defined “good taste” they believe got lost somewhere in art history, while others, such as Gareth Watkins, go so far as to compare AI to fascism, arguing that it carries an inherently right-wing aesthetic—whatever that might mean.

What these authors fail to consider is whether “slop” is really something new introduced by AI or whether the people creating these images bear responsibility for their quality. If we take a closer look, society churned out plenty of low-grade material long before algorithms came along, and cheap propaganda existed just fine without neural networks. Nor is it tenable to claim that any single medium, by its mere use, automatically results in one uniform aesthetic—be it fascist, kitsch, or anything else.

Critiques of AI that disregard its creative possibilities, human responsibility, and potential as a medium risk devolving into self-congratulatory “media panic” and a knee-jerk rejection of technology. That sort of rigid, shortsighted stance strikes me as far less constructive than committing to use AI responsibly, thoughtfully, and in artistically compelling ways—while also fostering the kind of visual literacy that helps us analyze images critically. I would add that before making grand pronouncements, we’d do well to reflect on what art and technology history can teach us.

By way of example, let me summarize a few objections from those who detest this “de-generated” art. In his piece “The New Aesthetics of Slop,” Ted Gioia describes an explosion of shallow, mass-produced content, easily generated by digital platforms and algorithms. He argues that this proliferation of “slop”—be it images, text, music, or other media—pushes us toward a cultural flattening, where quantity is rewarded over quality and mediocrity becomes the new normal. Meanwhile, in “AI: The New Aesthetics of Fascism,” Gareth Watkins suggests these tools foster an authoritarian aesthetic: glossy, standardized images that reflect a simplified, hierarchical worldview aligned with far-right rhetoric. At its heart, this argument posits that mechanizing creativity—drawing on data from a world already saturated with prejudice—ends up replicating and amplifying discriminatory or populist narratives.

In addition to those two main objections, there are subtler ones, often presented as offshoots of this supposed mechanization of creativity. The first is the perceived lack of authenticity, which holds that AI suppresses the artist’s hand, resulting in sterile, impersonal work. The second is a view of automation as aesthetic decline, based on the assumption that the ease of producing text or images through AI equates to a lack of genuine creative effort. Lastly, there’s the issue of bias: since algorithms are trained on data already riddled with prejudice, the argument goes, they inevitably end up amplifying discriminatory or populist narratives—chiefly benefiting the far right. Taken together, these criticisms paint AI as an inherently alienating force, regressive in its aesthetics, and politically reactionary.

Fears much like those around today’s “de-generated” AI aesthetic were already in play in the nineteenth century, when photography first became accessible to the masses. In his 1859 Salon commentary, Charles Baudelaire wrote scathingly about photography, calling it the refuge of all failed painters, too untalented or too lazy to complete their studies[i]. This sentiment reflected the worry that Daguerre’s invention would pull us into a world of ugly mechanical reproductions, forsaking the pursuit of true beauty. Yet history shows that photography—far from destroying painting or degrading art—spawned an array of unique and hybrid artistic languages. True, Baudelaire had a point in fearing that the new technology would unleash a flood of tasteless works—we still see that today—and AI is no different in that respect. But in any creative medium, it’s inevitable that the bulk of what we produce skews toward low quality.

No technology can be considered neutral, because each one both absorbs and is shaped by specific social, political, and economic contexts. What holds for AI also holds for painting and photography, which have likewise been repurposed for advertising or propaganda. Historically, new media have always turned out to be far more versatile than their early detractors conceded, and condemning AI because some use it for propaganda or frivolity is like disowning the printing press over a few political flyers or regime posters. If there’s one lesson the history of technological innovation teaches us, it’s that genuine revolutions lie in a multiplicity of uses: there will always be abuses and distortions, but also virtuous experiments, cross-pollination with other media, and the emergence of entirely new genres. Despite their limitations and flaws, AIs are exceptionally flexible tools for creative processes across fields—from digital art to scientific research, music composition, and education. They can’t simply be dismissed on the say-so of those who, in practice, neither use nor understand the medium.

Another major criticism leveled at AI concerns its alleged lack of authenticity—as if these technologies were churning out images or texts that have no true author. It’s often portrayed as a magic button that spits out infinite copies, devoid of any artistic vision. But that take ignores the fact that human agency is the real driving force here. Just as a photographer chooses the framing, lighting, and precise moment to snap the shot—shaping the work far beyond a mere button press—so too must anyone working with AI generators craft the prompt and, crucially, adjust the host of options and parameters these tools provide.

It’s not just a matter of typing a few words and waiting for an output: artists who use software such as Stable Diffusion, Midjourney, or other systems develop, in addition to prompt-writing (which is anything but simple), specialized skills in image prompting, stylistic references, fine-tuning, step control, CFG scales, localized edits, retexturing, choosing the right generative model, and more. Some integrate personal materials, do post-production tweaks, or hybridize traditional techniques (drawing, painting, collage) with AI-driven workflows. All of this often makes the final piece indistinguishable from something produced by conventional methods—yet critics tend to notice only the misfires. To believe that the ability to produce decent images quickly invalidates the human creative contribution is to confuse the automation of certain steps with the elimination of the creative process altogether. Yes, AI can spit out content in seconds, but the difference between a mediocre piece and one of substantial quality lies precisely in the creator’s skill and discernment in shaping the result.
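To make one of those parameters concrete: the “CFG scale” mentioned above governs classifier-free guidance, the standard recipe by which diffusion models blend an unconditional prediction with a prompt-conditioned one. Here is a minimal sketch of that combination using toy numbers in place of real model outputs (the formula is the standard one; the numbers are invented for illustration):

```python
import numpy as np

def cfg_combine(noise_uncond, noise_cond, cfg_scale):
    """Classifier-free guidance: push the denoising prediction away
    from the unconditional output, toward the prompt-conditioned one,
    by a factor of cfg_scale."""
    return noise_uncond + cfg_scale * (noise_cond - noise_uncond)

# Toy arrays standing in for the model's two noise predictions.
uncond = np.array([0.2, 0.4])
cond = np.array([0.6, 0.1])

# A scale of 1.0 reproduces the conditioned prediction unchanged;
# higher values exaggerate the prompt's influence, often at the cost
# of variety and naturalism.
print(cfg_combine(uncond, cond, 1.0))   # [0.6 0.1]
print(cfg_combine(uncond, cond, 7.5))   # [ 3.2  -1.85]
```

The point is that this single slider already embodies an aesthetic trade-off (prompt fidelity versus variety) that the human operator, not the machine, must judge and tune.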

A slightly more refined accusation against AI creativity centers on bias. In this view, because models are trained on datasets already rife with prejudice and inequality, their outputs inevitably mirror—or even amplify—those distortions, ultimately serving authoritarian or discriminatory ends. It’s true that if we’re blind to these biases, AI can replicate them unchecked—they learned them from us, as many critics tend to forget. Every piece of data is, in some sense, a “bias”: even without algorithms, any human communication or body of texts or images reflects a partial perspective shaped by personal and cultural context. The real issue, then, isn’t the presence of bias (which will always exist) but our ability to notice it, correct for it, and lessen its impact.

When it comes to closed-source models developed by big companies that reveal nothing about their datasets or training methods, it can indeed be tricky to intervene and correct undesirable outputs. But it’s hardly impossible—sometimes it’s even quite straightforward, especially with software like Midjourney, which offers extensive customization. Still, there’s no denying it’s a challenge for those who work with AI. There are also open-source projects where anyone with the required skills and resources can customize, filter, or fine-tune the data for their own goals, removing unwanted biases—only, of course, to insert their own in the process.
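To illustrate the kind of intervention open-source workflows make possible, here is a hypothetical sketch of dataset curation before fine-tuning: screening caption–image pairs against a blocklist. The records, field names, and blocklist terms are all invented for illustration, and, as the text notes, the choice of what to filter is itself a bias introduced by the curator:

```python
# Placeholder terms; a real curation pass would use a vetted list.
BLOCKLIST = {"slur_a", "slur_b"}

def filter_captions(records, blocklist=BLOCKLIST):
    """Keep only records whose caption contains no blocked term."""
    kept = []
    for rec in records:
        words = set(rec["caption"].lower().split())
        if words.isdisjoint(blocklist):
            kept.append(rec)
    return kept

# Invented example records in a common caption-dataset shape.
dataset = [
    {"image": "img_001.png", "caption": "A watercolor harbor at dawn"},
    {"image": "img_002.png", "caption": "slur_a scrawled on a wall"},
]
print(len(filter_captions(dataset)))  # 1
```

Trivial as it is, this is the basic move: with open data and open weights, anyone with the skills can decide what goes into the model, rather than inheriting a corporation’s undisclosed choices.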

Not all biases lean to the right—far from it. If the far right seems adept at using AI for propaganda, it’s not because AI itself has some fascist soul; it’s simply that certain political forces are shameless about exploiting emerging technologies for viral attention. The left (or any other cultural sphere) could do the same, instead of rejecting an entire medium outright. What’s more, many overtly racist or discriminatory materials run into filters and censorship in commercial AI systems, which makes reproducing them hardly straightforward. Faced with those who would throw the baby out with the bathwater by condemning AI as fueling hateful rhetoric, we might respond that the most farsighted strategy is precisely to learn how to harness these tools.

Part of the criticism of AI appears rooted in a suspicion of technology that, on one hand, is typical of certain twentieth-century leftist traditions (especially those inspired by the Frankfurt School), and on the other, echoes—albeit indirectly—Heidegger’s reflections on the all-consuming nature of the technological apparatus. Adorno and Horkheimer coined the concept of the culture industry to expose how mass media manipulate audiences and reduce art to mere commodity[ii]. Heidegger, meanwhile, used the notion of Ge-stell (“enframing,” or the “device” that locks being into a mechanical vision) to describe technology as a totalizing force, capable of transforming both the world and humanity into mere resources[iii].

These perspectives still wield influence, but they risk becoming distorted when repeated without accounting for historical, social, and technological change. After all, even the greatest philosophers have spouted the occasional absurdity; Heidegger, for instance, viewed the typewriter as a prime example of how modern technology distorts human essence and our relationship to being and truth: “The typewriter tears writing away from the essential domain of the hand, that is, from the domain of the word.”[iv] Such claims haven’t aged well, given that typewriters and computers have been around for quite some time and human intellectual output hasn’t exactly been “torn away from the domain of the word,” whatever that was supposed to mean.

Some strains of left-wing thought suffer from an all-out condemnation of technology, seen as the source of all alienation. This overlooks the hacker and libertarian traditions that emerged from the left, which championed open access to code, knowledge sharing, and the right to modify the means of production—digital or otherwise. Rather than futilely rejecting technology, a left consistent with its emancipatory ideals should be demanding greater transparency, opening up datasets, and advocating for open-source models, training programs, and widespread workshops.

If we look at the trajectory of each major technological breakthrough—from early printing presses to the first cameras, from cinematic experiments to computer graphics—we see a familiar pattern: on one side, doomsayers proclaim the end of “true” talent or “true” art; on the other, evangelists hail a new golden age. In reality, as art and communication history shows, both extremes miss the complexity of any technology, which can be deployed in a variety of ways.

Generative AI is no exception. Despite what its critics say, it’s obvious these tools are extraordinarily versatile: they can, of course, be used for propaganda—of any kind, from any side. The unsettling video of Trump’s “capitalist empire” in Gaza stands alongside a satirical clip of the president kissing Musk’s feet. Getting hung up on the notion of populist or fascist “AI Slop” ignores that every medium is born hybrid, erratic, and constantly reshaped by those who use it. That’s precisely why we shouldn’t dismiss AI as the enemy’s tool but rather advocate for openness.

On top of all this, there’s also the pressing issue of private monopolies over AI platforms and the powerful role they play in the knowledge economy. That’s precisely why I strongly advocate for open-source solutions, which I see as the best way to curb both monopolistic control and algorithmic biases. By providing an alternative to big tech’s paywalled software—built on collective data in the first place—open source can counterbalance corporate influence by offering something in the public domain. To be sure, open-source approaches do demand significant resources and expertise, but they’re hardly out of reach when compared to the effort that goes into creating these technologies from scratch. It’s not a perfect solution, yet it’s currently our most promising strategy—certainly more worthwhile than outright rejecting AI altogether.

Critiques from writers like Ted Gioia and Gareth Watkins harbor several misconceptions. First, they conflate “art” with “visual output”—not even the far right pretends this content is art; it’s propaganda, and analyzing it in strictly artistic terms is a logical misstep. They also blur the line between human responsibility and the technology itself, obstinately overlooking instances where the same tool is employed for opposing propaganda. Lastly, they treat amateurs’ use of the medium as though it were a new artistic movement. Kitsch aesthetics didn’t originate with AI; they cut across political beliefs. When it’s not just personal taste (free of labels), it usually stems from limited visual literacy—limited, one might say, like these critics’ own research, which seems blind to the many AI artists exploring entirely different aesthetics and narratives. And yet, to uphold their argument, they’d no doubt call it all “slop,” never mind the obvious differences in quality.

Instead of brandishing “slop” as a post-facto excuse to justify our own fears and aversions, it would be wiser to learn this technology, make it our own, and steer its development toward outcomes more in tune with our values. Otherwise, we risk handing our opponents yet another advantage, just to indulge our preconceptions. Watkins writes: “Our most effective weapons against AI, and the right wing that has adopted it, may not be strikes, boycotts or the power of dialectics. They might be replying ‘cringe,’ ‘this sucks,’ and ‘this looks like shit.’” If that’s the plan, then, surely, Trump’s days are numbered.

Notes:

[i] Baudelaire, Charles. Charles Baudelaire: Selected Writings on Art and Artists. Trans. P. E. Charvet. Cambridge: Cambridge University Press, 1981.

[ii] Horkheimer, Max, and Theodor W. Adorno. Dialectic of Enlightenment. Trans. Edmund Jephcott. Stanford, CA: Stanford University Press, 2002.

[iii] Heidegger, Martin. The Question Concerning Technology and Other Essays. Trans. William Lovitt. New York: Harper & Row, 1977.

[iv] Heidegger, Martin. Parmenides. Trans. André Schuwer and Richard Rojcewicz. Bloomington, IN: Indiana University Press, 1992. pp. 80–81, 85–86.