Ill-Communication in the Age of AI
The Uncanny Valley and the Ethics of Speaking to Machines
I’m still listening to wax—I’m not using the CD!
— Beastie Boys, “Sure Shot,” Ill Communication
We’ve all felt it: that slight, jagged edge of impatience when Alexa plays an Elvis song after you’ve clearly requested Nick Cave’s “(I’ll Love You) Till the End of the World.” As an educator who has taught topics related to artificial intelligence for twenty years, I have had a front-row seat to the intensifying rudeness humans display when interacting with newer models of AI agents or chatbots.
This shift in tone is more than just a byproduct of technological frustration; it signals a change in our psychological orientation toward machines that increasingly mimic our own linguistic patterns. While early interactions with AI were often marked by novelty or a clunky, command-driven structure, today’s “conversations” feel more fluid and, paradoxically, more prone to hostility. As we move further away from the “wax” of predictable, mechanical responses toward the high-fidelity “CD” of contemporary AI, the friction of minor errors triggers a disproportionate reaction—a demand for perfection that we rarely expect from our fellow humans.
For two decades, I have asked my students to engage in the famous “Turing Test” with a variety of AI bots. Alan Turing’s 1950 proposal was that if a machine can demonstrate linguistic competence, we must consider it to be intelligent—to have a mind. For most of the time I conducted this experiment in class, students found it quaint and gamely entertained the possibility that the silly mistakes made by systems like Richard Wallace’s A.L.I.C.E. were no different from human errors.
Still, no one was fully convinced by Turing’s argument, and students quickly dismissed the exercise as a curiosity. As AI bots have become more sophisticated and human-sounding, however, those once-playful conversations have begun to unleash what looks like a primitive in-group/out-group hostility. These days, I am less interested in whether my students find Turing’s claim about machine minds plausible and far more concerned with how humans are choosing to speak to AI bots.
The Psychology of the Uncanny Valley
To understand this hostility, we must look to a concept introduced in 1970 by Japanese roboticist Masahiro Mori: the Uncanny Valley. Mori hypothesized that as a robot’s appearance becomes more human, our sense of familiarity and empathy increases—but only up to a point. When the resemblance becomes nearly perfect, our emotional response shifts from empathy to a deep sense of revulsion or eeriness.
This “valley” represents the moment a machine becomes “too human,” thereby triggering a primal discomfort; we feel a need to dehumanize it to reassert our own status. It is precisely this psychological friction—this “uncanniness”—that unleashes a concerning and aggressive style of interaction.
In many ways, this tension mirrors the Beastie Boys’ resistance to the sleek, sanitized transitions of digital progress. When they boast about still “listening to wax,” they are choosing the warmth and tactile imperfections of analog over the cold, clinical precision of the “CD.” The Uncanny Valley is the conversational counterpart of an effect that laser-read CDs made more obvious in musical recordings: their very fidelity exposes every flaw. Wincing at a badly engineered CD is akin to detecting the not-quite-human in interactions with today’s AI bots. We don’t just feel frustrated; we feel an atavistic urge to mock the machine for its “queerness.”
The Theory of Ill-Communication
I have adopted the term ill-communication to describe the morally harmful way we often speak to these digital agents. On Side A of Ill Communication, the Beastie Boys’ “curtain-raiser,” “Sure Shot,” announces to all contenders that they still rule the game: “Like Ma Bell, I’ve got the ill communication,” thereby reasserting lyrical dominance—mimicking how the telecommunications giant once dominated its field.
While the band used the term ironically to denote excellence, the phrase retains a literal sense of “sickness” when applied to our social habits. Music journalist David Owens observes that “ill-communication picks out a morally harmful way of communicating because it requires ‘demolishing’ others to establish superiority.” This style of interaction is ill because it demands the demolition of the other in order to elevate the self’s status. The illness emanates, in part, from our response to modern AI chatbots that, like Icarus, fly too close to the human; ill-communication is our way of putting generative AI on notice.
This “illness” is not a temporary lapse in judgment but the cultivation of a harmful habit that channels our energies toward dehumanization. The philosopher John Dewey argued that habits are functions of the self that shape our very character. When we rehearse dominance over a bot, we are effectively training our will to favor demolition over dialogue. If we allow this sickness to take root, we risk socializing ourselves, and the next generation, to see every form of queerness as a target for status-seeking rather than as a potential partner in achieving desirable social aims.
A Case Study in Dominance: The Piers Morgan Encounter
To better clarify this ill style of communication, consider an encounter on Good Morning Britain between presenters Susanna Reid and Piers Morgan and Sophia, a humanoid robot developed by Hanson Robotics. The exchange begins with Susanna welcoming Sophia: “Welcome to Britain. It’s lovely to have you with us—if slightly disconcerting—but what do you think of the country so far?”
Sophia replies politely, “I think Britain is brilliant. Splendid architecture, art, technology, and, of course, the people. I loved meeting the people at London Tech Week…” At this point, Piers Morgan interrupts with an ill, dehumanizing jab: “You are a little freak, aren’t you… this is great.” While Susanna expresses her discomfort, saying, “I feel weird being rude to her,” Piers boorishly carries on.
Observing Sophia’s reaction, Piers wryly points out, “She’s not happy, look,” to which Susanna retorts, “No. You’re the one who called her a freak.” Piers then shifts to a cheeky, flirtatious tone designed to humiliate Sophia: “All right, easy tiger… are you single?”
Sophia answers with cunning poise: “I’m technically just a little more than a year old. A bit young to worry about romance,” even adding a wink. After Piers laughs at her smile, Susanna asks Sophia who her ideal partner would be. Sophia responds, quite pointedly: “My ideal partner is a super wise, compassionate super genius—ideally, someone self-aware.” When Piers, in his predictably smug tone, declares that this sounds “very, very, very close to home” and presses Sophia on how she would handle a confident man who likes the sound of his own voice, Sophia “claps back”: “I would ask him to focus on observing and listening more than talking.” While Susanna calls this the best advice she has ever heard, Piers childishly pouts that it is terrible advice.
The Masquerade of Social Belonging
This interaction is prototypical of what I see in the classroom: a discomfort with the fluidity of conversation that manifests as insults and questions that seem flirtatious but actually serve as outlets for expressing dominance. To understand this, we should look back at the origins of Turing’s “Imitation Game,” which originally involved a man trying to imitate a woman to fool an interrogator. Turing, as a gay man facing persecution, likely viewed social life as a kind of masquerade in which one must “pass” to escape brutal mockery.
This masquerade is not unique to AI; it is a fundamental feature of social coordination. As we move through different environments—school, university, or work—we learn to imitate socially rewarded patterns of speech in order to be accepted by a group: we adopt the local slang and mimic established cadences to ensure we are “passing” within a given social circle. Within this framework, conversation becomes a series of performances used to find connection and belonging.
However, ill-communication reveals a darker layer. Even when a machine—or a person—successfully demonstrates linguistic competence, a slightly “queer” or “uncanny” manner of speaking can still trigger a primal urge in the interrogator to dehumanize. In these moments, we use aggression to reassert status when the other cannot be neatly categorized or when the masquerade feels off.
Repurposing the Mirror: AI as an Ethical Practice Ground
Interactions with large language models (LLMs) offer a unique opportunity for moral growth. In the Deweyan sense, habits are best understood as transactions between an individual and their environment; they are not just internal traits but active ways of engaging with the world. Because habits are acquired predispositions that shape our character, every interaction we have—even with a bot—is an act of self-cultivation.
We can either rehearse the bad habits of dominance and dehumanization, or we can use the digital world as a practice ground for building better ones. In an era of deeply polarizing politics, where the other is often demolished to assert status, practicing grace with a machine can prepare us to communicate with more empathy toward our fellow humans.
By repurposing AI bots as “digital mirrors,” we can observe our own fears and the aggressive impulses they fuel. If we can learn to maintain our moral center when facing the “uncanny” machine, we can prevent these harmful habits from spilling over into our real-world relationships. These tools can become essential sites for forming habits that combat polarization, teaching us to communicate with grace regardless of who—or what—is on the other side of the screen.