Over the past few years, a trend has emerged on social media platforms like TikTok, Facebook, and Instagram: jokes built around "anti-AI" pejoratives. In viral videos, creators berate robots and chatbots in fictional scenarios as if they were second-class citizens, repackaging Jim Crow-era segregation for an imagined cyberpunk future. Other skits run along similar lines: in one, a man pretending to be a cop mocks a robot lamenting, "My cooling fans won't work."

As the backlash against artificial intelligence (AI) continues to escalate, so does the popularity of these fictional slurs. While this brand of humour reflects legitimate anxieties about AI's intrusion into every aspect of our lives, it is underpinned by a real tension: when we direct contempt, abuse, and demeaning behaviour toward AI, what does it reveal about us?

AI has permeated nearly every facet of daily life, inciting polarising reactions. Much about AI, both as a product and as an industry, warrants legitimate concern: its integration into production pipelines, customer service, internet browsers, search engines, and operating systems, and the ease with which it lets content of questionable quality be pumped into online spaces already oversaturated with "slop".

This particular technological encroachment on daily life has led to frustration, prompting backlash as people seek to reclaim agency in various ways – one of which is the strategic weaponisation of language.

"Clanker" finds its origin in a 1958 article by William Tenn, who used the word to describe robots from science fiction films, but it entered the popular lexicon through the Star Wars franchise, in which clone troopers use the term as a slur against enemy battle droids. Recently, it has been adopted by detractors of AI. Crucially, these derogatory terms mirror real-life racist rhetoric: most of the words people use as anti-AI slurs are derived from, and deployed in the same way as, actual slurs against minorities. Regardless of its origins, "clanker" is now commonly used in lieu of the N-word; "wireback" is a riff on "wetback", a slur against Mexican immigrants in the US; and "Rosa Sparks" and "George Droid" parallel Rosa Parks and George Floyd – a civil rights activist and a victim of police brutality, respectively, both members of marginalised communities.

By calling machines derogatory terms inspired by real-life slurs, users of such pejoratives normalise racist and ableist patterns of thought. It may be a form of protest done in jest, but jokes are rooted in real ideas, and those ideas are reinforced through repeated usage. Participating in the "palatable" version of racism and ableism equips individuals with the unsavoury knowledge of how to successfully marginalise a population, sentient or otherwise. According to linguist Adam Aleksic in a National Public Radio (NPR) piece published on August 6: "...the people saying clanker are assigning more personality to these robots than actually exists."

In other words, they simultaneously humanise AI and exhibit dehumanising behaviour by borrowing the schema of actual racism. The fact that robots are not people does not justify this phenomenon, because calling someone a slur is dehumanising behaviour to begin with.

The emergence of slurs aimed at AI shows how quickly prejudiced language forms. Derogatory terms spread online wrapped in humour, but they replicate the structure of real hate speech: defining a group through contempt, and building an in-group identity around shared hostility. Even though AI is not "alive", participating in such language trains people to normalise cruelty. Regularly repeating statements that "ironically" borrow from the language of bigotry erodes the taboo surrounding discriminatory speech. In other words, using slur-like or slur-derived language makes us more likely to use actual slurs.

This phenomenon also reveals that edgy racist humour is no longer confined to niche far-right online spaces, but has seeped into and spread across the greater digital landscape. Hate speech and reactionary rhetoric have existed on the internet long enough that the line between parodic anti-AI language and actual racist dog whistles has blurred.

Despite these troubling patterns, acknowledging racism-adjacent behaviour toward AI does not require granting machines the same moral status as humans. The key concern is what this behaviour reveals about us. Disparaging attitudes toward AI may not cause machines any emotional harm, but they shape the social environment in which the technology operates. If people grow comfortable demeaning things that talk and respond like humans, it may dull the reflexes that support empathy in general.

Ultimately, the proliferation of anti-AI slurs should force us into critical self-examination and a reckoning with uncomfortable truths. The "othering" of AI isn't just a futuristic metaphor; it's a mirror. In mocking machines with denigrating language, we may be revealing how much of our own bigotry we are willing to recycle.

Nuzhat is a compulsive doodler and connoisseur of bad early aughts television. Send her recommendations at [email protected].


