The True Hidden Danger of ChatGPT
Large language models promise less workload and more efficiency. But nothing comes for free. What is the hidden cost?
Large Language Models
An overwhelming amount is being said about the implications AI portends for our future. Many prognosticators spell doom; others claim techno-utopia. My own detailed take on the matter is laid out in my essay, The Gaian Project1. Yet however much consideration is occurring, curiously little of it regards the subtle dangers we already face, which are sure to increase if we ignore them.
To date, the most prominent instantiation of what we call AI is ChatGPT, the flagship product of the company OpenAI. GPT is an instance of what's called a large language model (LLM), a computational framework that uses “deep learning” techniques to process text, recognize patterns in language, and produce coherent responses to dynamic inputs.
LLMs can be utilized to generate creative writing, summarize swaths of information, categorize text and highlight trends, engage in conversations, and perform other language-based tasks, including generating and reviewing computer code. These feats are accomplished by the model's predicting the most likely “next word” in a sequence, based on the context provided by the preceding words. Specifically, ChatGPT is a “chatbot” application built atop this underlying structure.
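To make the “next word” idea concrete, here is a toy sketch: a bigram frequency model, vastly simpler than any real LLM (which uses deep neural networks over enormous corpora), but illustrating the same basic move of emitting the statistically likeliest continuation of the words so far. The corpus and all names here are invented for illustration.

```python
# Toy illustration of next-word prediction: a bigram frequency model.
# A real LLM is enormously more sophisticated, but the core task is
# the same -- given context, predict the likeliest next word.
from collections import defaultdict, Counter

corpus = "the word was with god and the word was god".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent observed successor of `word`."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> "word", its only observed successor
```

A chatbot repeats this prediction step over and over, feeding each chosen word back in as new context, which is how a fluent-seeming response is assembled one token at a time.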
Since ChatGPT conspicuously burst into public consciousness in 2022, many other prominent LLMs have followed in its wake, promising the elimination of various writing tasks such as crafting emails and texts, writing papers, and devising posts for blogs and social media.
On the one hand, one can produce short- to medium-length writing with exceptional ease with the heavy assist of an LLM; this has obvious potential to free up significant time and brainpower for the average worker. On the other hand, the ways in which LLMs are being used to supplant organic imaginative writing, often without attribution to the actual “writer” of the work at hand, present a pernicious situation that we have not yet faced with open eyes.
A recent study highlighted that approximately 50% of all internet traffic is attributable to bot activity2. In another study, in which participants attempted to identify AI-generated posts on social media, only about 42% could successfully identify the true nature of the posts.3 A 2024 report suggests that up to 90% of online content may be generated by AI by 2026.4 My inclination is to posit that well more than 50% of the “copy” on the internet is already being produced by LLMs.
The trends appear clear: this genie will never return to its bottle. And we’re well aware through lore surrounding genies that consequences come from casting frivolous or unthinking wishes. “I wish I never had to write a school paper again.” “I wish that someone would respond to my emails for me.” “I wish I had the perfect thing to say to win this argument.” We rubbed the bottle. A very real genie emerged. Now, our wishes have been granted. What consequences have we wrought?
Language: The Great Power
The key agent in the alchemizing of human culture from the raw components of wild nature was language. With the emergence of language, shared models of reality could be articulated. As the human brain complexified, so did linguistic nuance, which allowed for more elegant refinement of those models. The foundations of the thought systems of biology, theology, philosophy, and poetry were found in that nuance. The articulation of language, in combination with the articulation of fingers, provided the labware for our rapid transformation.
Language holds a unique power to transform reality. It would have been impossible to transcend Eden without it, for better or for worse. Terence McKenna often said that “the world is made of language.” One can read this as an ontological claim and/or as a metaphor. In one instance of discussing the matter, McKenna put it another way: “Language is the primary determinant of the experience of being.”5 In other words, what we articulate into the world becomes reality. Even as you read these words, your consciousness is being impacted. You cannot unread what you’ve read. Your mind has already changed.
It is inadequate to say that words carry immense power. A potent idea well put can flabbergast, enlighten, enliven, intoxicate, crush, dissuade, or convince. The right set of words can rally armies to battle or bring the most calloused man to weep on his knees.
Spiritual lore and myth have long advised humans of the potency of the word. In the Genesis myth, God speaks the aspects of Earth into existence. Whether by choice or necessity, he articulates each element in language, and only then does it materialize. In the book of John, this idea is reinforced and expanded upon: “In the beginning was the Word, and the Word was with God, and the Word was God. He was in the beginning with God. All things were made through him, and without him was not any thing made that has been made.”6 The concepts of God and the Word are presented as one, and it is stated that nothing can have existence if not for its articulation in language.
The ancient Greeks and Stoics used the term logos, which carried the meanings of “word,” “discourse,” and “reason,” and philosophically indicated a cosmic ordering principle. For Heraclitus, the logos was a font of understanding accessed through listening: “Listening not to me but to the logos it is wise to agree that all things are one.”
In world religion and folklore, knowing the “true name” of an entity grants the knower great power over it. In Egyptian mythology, Isis tricked Ra into revealing his true name, which granted her authority over him, shifting the balance of power in the cosmic pantheon. The tale of Rumpelstiltskin highlights the power of knowing a being’s true name, as when the queen learned the imp’s name, she was then able to break their dark bargain and reclaim her child.
Lore surrounding magic carries this idea in legion ways. The true name represents the essence of the being it identifies; adept magicians can direct this essence by uttering that signifier. In fact, magic lore is highly deferential to the power of the word. Books of incantations carried ostensible power to alter reality through the speaking of various spells into the world.
In many world religions, chanting particular strings of words is practiced to focus spiritual awareness and cultivate certain frequencies of being. Japanese Shinto holds the idea of kotodama, which translates to “spirit of language,” suggesting that words have a spiritual essence that, when articulated, impacts the balance of the speaker and the world.
In the teachings of the Buddha, “right speech” is a foundational principle of the Noble Eightfold Path to nirvana. The teachings emphasize refraining from false, divisive, harsh, or frivolous language, and advocate engaging in speech that is helpful, harmonious, kind, and encouraging. This acknowledges the karmic weight that words carry to influence minds and the world at large.
All of these examples carry the message of language’s power, which can roughly be summarized as follows. Words are not mere sequences of letters nor noises of the mouth; they are carriers of will with the power to dictate reality. Words live as vessels for intent, translating the speaker’s will into the manifest world. When words are carefully chosen and consciously directed, their impact can be miraculous. Language is an intermediary between thought and material actuality with the power to alter the world.
If this understanding is even moderately true, the hidden danger before our eyes demands to be looked at closely. The danger, put as a query, is this. What happens when we outsource our wielding of the tremendous power of language to an unthinking algorithmic proxy? There are dangers science fiction hasn’t warned us of.
Hidden Danger
As we are birthing something paradigm-shatteringly new, it’s crucial that we leave no stone unturned in attending to the possibilities, both beautiful and terrifying, that AI presents.
The obvious existential dangers—from the extravagant horror of Skynet to the banal peril of the paperclip maximizer—have been well explored by writers of science fiction and philosophers of existential risk. These imposing dangers make for great film and story, and they do present potentialities that we’re wise to fear. There are subtler dangers of AI, however, that are almost never mentioned, and which are no less a threat in the long run. The ubiquity of LLMs presents several such dangers.
Of course, with a technology still growing, a catalogue of the subtle dangers of outsourcing language-making to algorithms could never be complete. The tendrils of this exponentially expanding mode of operating, though subterranean now, are sure to burst poison mushrooms through the topsoil once conditions are sufficient. Nonetheless, some relevant dangers are already reasonably clear.
Here are five categories of subtle danger:
Lying to the world: This is the most obvious danger that people don’t want to face. When you present something you haven’t written while signing your name or posting on your account, you are lying to everyone who engages with that writing. This is the definition of plagiarism, an academic “crime” known to carry consequences ranging from failing an assignment to destroying a career. Paraphrasing—rewording someone else’s writing—which is how many people justify their LLM-mediated plagiarism, is considered in academic circles just as serious as direct copying.
When we direct an LLM to write something that we intend to put into the world for consumption by others—things like blog posts, essays, text messages, posts on social media, school papers, poems, and e-mails—it is an act of deception. No amount of rationalization can refute this fact of lying. It is a truth that must be confronted if you are using LLMs to write. Imagine an indelible watermark tagged onto the writings you produce—e.g., “This email was written by ChatGPT.” Would you be so keen to send it then?
Lying is insidious. Not only does it manipulate the recipient, it degrades the deliverer. To lie to someone is to say, “You don’t deserve to know the true reality of the situation, and I’m deciding for you.” To dispense a lie is to say, “My integrity is worth discarding.” Moreover, to lie is to encourage the ethic of lying to metastasize, as if to say, “Lying is a right and appropriate method of behaving.”
Every lie is a subtle endorsement permitting the world to lie. Each lie spoken denigrates the stature of truth by conveying the meta-message that lying is right. And when we denigrate the stature of truth, we end up in the position that we find ourselves in—with access to objective truth becoming ever more difficult to find. At what cost to ourselves do we choose to so injure the world?
Flattening of nuance and distinction: LLMs are learning from what's written on the internet. They are continually being trained on new material. As we’ve established, some increasingly massive portion of the writing on the internet is being created by LLMs. The logic here is simple. Soon enough, the LLMs are largely training themselves, becoming like the ouroboros which devours itself for sustenance. How long can something survive by consuming its own excretions?
Overutilization of LLMs has the potential to create a simulacrum of discourse that has a homogenizing effect on language and thought. The philosopher Wittgenstein penned the proposition that “The limits of my language mean the limits of my world.” When we blunt our language, we stultify imagination. When imagination stultifies, reality withers.
The internet age has already precipitated a mass flattening of nuance in our world communications. For some years we've been suffering the consequences, with people becoming algorithmically pigeonholed in distinct and countervailing reality tunnels. The integrity of our social fabric is already nearing its limit; further excision of nuance will only hinder the type of communication necessary to navigate a complexifying world.
Who’s training whom? Distinct from the chatbot phenomenon, “predictive text” is an application of LLM technology in which suggestions are given for what word the model “thinks” you want to use next in the sentence you’re composing. Google, for example, has a well-known predictive text feature they call “Smart Compose,” which is enabled by default in their suite of products. The model is all the while learning from which suggestions the user accepts or rejects, and it refines its predictions accordingly. Most users are aware that they are training the model thusly, but how many consider the extent to which the model is concurrently training them?
When we engage with predictive text, we allow an algorithm to speak for us, choosing expediency over originality of thought. The flat, standardized way of communicating that emerges from millions of users engaging in this way becomes boring at best, and at worst anti-human. Yes, we can save 15 seconds on a six-sentence email, but how much is that 15 seconds worth when the freedom of our mind to convey nuance and beauty and truth through language is the service charge? In short order the technology will be so effective that simply clicking “respond” to an email will craft a perfect response in your voice. The meta-message we’re transmitting here is, “I don’t care enough about our communication to participate with you authentically.”
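The two-way training loop described above can be sketched in miniature. This is not how Smart Compose or any real product works internally; it is a hypothetical toy, with invented names, showing only the feedback mechanic: suggestions the user accepts are reinforced, suggestions the user rejects decay, and so the model and the user gradually shape one another.

```python
# Hypothetical sketch of the predictive-text feedback loop.
# Accepted suggestions are reinforced; rejected ones are penalized,
# so over time the model steers toward what the user clicks -- and
# the user, nudged by suggestions, steers toward what the model offers.
from collections import defaultdict

class ToyPredictor:
    def __init__(self):
        # weights[context][word]: higher means suggested more readily
        self.weights = defaultdict(lambda: defaultdict(float))

    def suggest(self, context):
        """Offer the highest-weighted completion for this context."""
        options = self.weights[context]
        return max(options, key=options.get) if options else None

    def feedback(self, context, word, accepted):
        """Update weights from the user's accept/reject signal."""
        self.weights[context][word] += 1.0 if accepted else -0.5

p = ToyPredictor()
p.feedback("best", "regards", accepted=True)   # user clicked this once
p.feedback("best", "wishes", accepted=False)   # user dismissed this
print(p.suggest("best"))  # -> "regards", the reinforced choice
```

Each acceptance makes the same suggestion more likely next time, which is precisely the homogenizing loop: the path of least resistance gets smoother with every click.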
Atrophy of abilities: In the world before GPS navigation, drivers were required to learn routes, use maps, or get directions in order to find their way to a destination. Now GPS navigation is ubiquitous. And as you’ve surely experienced yourself, the more you rely on it to get where you’re going, the less you’re able to find your way without it. This is the nature of atrophy; what is not exercised will degenerate.
When you outsource your language creation to an LLM, your own ability to articulate will degrade over time. When you rely on LLMs to parse complexity, your own critical thinking will dwindle in proportion to the growth of that reliance. Do we truly want to give up more of our power in a world that, by design or by accident, seeks to lay us quietly down in a made bed of comfort and convenience? To discard genuine engagement with the present moment, slouching quarter-heartedly toward the provisional life?
The ultimate danger: More than all of this, the great danger reveals itself when the power of language is recognized fully. Your word is a sacred power you are gifted to wield. The locus of your sovereignty is in the actions you undertake and the words you transmit to the world. To sacrifice your command over the words you articulate for the sake of mere convenience is to rescind your core freedom, which is the wielding of your own mind precisely as you see fit. There is no reward worth this price, and certainly not the meager ones that LLMs provide.
Our world is full of algorithms grown out of control. Allowing algorithms to dictate our reality has proved to be more destructive than we imagined. The converging trend lines between prevalence of social media and increasing diseases of despair among children provide one sorry example. Algorithmic overtaking of language is a road we cannot afford to travel down.
Plainly, these dangers lack the cinematic flair of HAL’s refusing to open the pod bay doors in 2001, or of the sweeping reveal of the human battery complex in The Matrix. But they are dangers nonetheless. If the growth of AI technology remains exponential, these dangers will grow wildly beyond the quiet field of subtlety—as measured in moments, not years.
So here we are. Just shy of the pupal phase of the metamorphosis of the human story. Every step we take from this point forward has to be guided by full-open eyes, lest we falter. We cannot afford to toss aside our crown treasure of imaginative freethinking linguistic articulation for the sake of ostensible ease. The cost is too great. The promise is too meager.
Human Promise
LLMs present a boon to humanity in many and growing ways. The time-saving applications for research, data analysis, information synthesis, general queries, and education are undeniable. In all ways that this incredible technology can be used harmlessly to benefit human flourishing, it should be. But not all use cases are harmless.
Through this writing, I’ve invited your deep consideration of the matter. My hope is that before you use an LLM to write something for which you will be taking attribution, you will sit with the implications. To sit with what it means to willingly give away your power of articulation to another entity. Imagine having that power taken from you under threat of force; imagine being unable to speak for yourself. It is the quintessence of dystopia. Yet we would willingly give it away? For convenience? I promise you, the sacrifice is too great.
The power we wield to shape the world is mighty. Our word is the distillation of our being into manifest reality. When we speak unthinkingly or speak to scathe others, we endorse flippant or hurtful engagement as proper, and we water those seeds in the human garden. We also say, “this is who I am.” When we choose our words carefully—that is, with care—we express care into reality. Thereby a caring reality is reinforced.
Language is the primary way of articulating meaning. Ideally, we want to be using our words in a meaningful way—to deliver a meaningful message to our loved ones and the world. There is too much mere noise in the world. Too much wanton deceit. Can we not add to it unnecessarily?
My plea is this. Help us create the more beautiful world with your own verse of “let there be light.” Help us see the light that you see. Guide us through how to dispel the darkness. Articulate your vision of beauty in the world through the poetry of your rare thought. The world is in a delicate place. We need the music of your language to craft a symphonic harmony. Help us find our way. The world that we share depends on your words.
2. https://www.imperva.com/resources/resource-library/reports/2024-bad-bot-report/
3. https://arxiv.org/html/2409.06653v1
4. https://oodaloop.com/analysis/disruptive-technology/if-90-of-online-content-will-be-ai-generated-by-2026-we-forecast-a-deeply-human-anti-content-movement-in-response/
5. Terence McKenna, interview on WFMU, April 21, 1994. https://uutter.com/c/terence-mckenna/e3eb0728-40a4-4ada-b1a7-24385c908f4e?p=15
6. John 1:1–3 (English Standard Version)