I avoid using LLMs as a publisher and writer
Now for my more detailed arguments.
Reason 1: I don’t want to become cognitively lazy
A recent study by MIT researchers (Your Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant for Essay Writing Task) demonstrated that using LLMs when writing essays reduces the originality of the resulting work. More notably, when measured with an EEG, LLM use also diminished brain connectivity compared to participants who were allowed to use only their brains or a search engine. People who used LLMs for the first three tasks and then had to write an essay without one, using only their brains, had the worst results. “In contrast, the LLM-to-Brain group, which had previously been exposed to LLM use, showed less coordinated neural effort in most bands and also a bias toward LLM-specific vocabulary,” the study reports. Using LLMs only after completing the work, on the other hand, can enhance both the quality of the result and connectivity, but starting with an LLM seems like a tricky choice.
It was intriguing that participants using LLMs were unable to accurately cite from their work and were also the least likely to consider it “their own.” Participants using their brains and search engines, on the other hand, reliably quoted in most cases and did not have a weakened sense of ownership. This is consistent with my experience.
A study by British researchers (Beware of metacognitive laziness: Effects of generative artificial intelligence on learning motivation, processes, and performance) came to similar conclusions: LLMs can help individuals achieve faster and better results, but they weaken the ability to learn independently by making people less accustomed to thinking for themselves.
Both papers are worth reading. But I warn you: if you employ AI summarization on the first study, you will be asking for the revenge of the authors, as described in Time magazine :)
Incidentally, automatic summarization is one of the things I also avoid. In the flood of information, the offer of a condensed summary of a book or essay might seem like the greatest invention since sliced bread. The problem, however, concerns both the practical value and the joy of reading: in my opinion, the most rewarding thing about reading is that you encounter (and learn) things by occasionally triggering thematically distant associations while reading the original text. But these only emerge thanks to your comprehensive involvement and your personal neural network, which LLMs don’t even know the first thing about. You read a book about business in which the author mentions that he refused to move when his dog was ill, and you realize a fundamental emotional connection with your plans to move your company, and so a story begins that you start to tell (yourself). Summarization would obliterate all of this: your flash of insight, potentially genuine insight, and other random associations would be replaced by a totally generic and unemotional narrative.
I was also deeply touched by a real-life illustration about the end of critical thinking in the OSINT community. This type of volunteer work relies heavily on analytical reasoning as its main tool. The author explains how the gradual delegation of tasks to ML tools has insidiously undermined key processes of source validation, consideration of multiple perspectives, hypothesis formation, and independent thinking. He states that it has led to a decline in the quality of detection and relationships within the community as a whole. Incidentally, it is paradoxical that the acronym OSINT stands for Open-source intelligence.
I often think of the essay Writes and Write-Nots by Paul Graham (an authority in, among other fields, artificial intelligence), who argues in his uniquely light yet profound style that writing is thinking and thinking must be cultivated. According to Graham, in the new world with AI there will only be people who can write well and people who cannot write at all.
Hmm, I don’t intend to end up in the latter group.
Reason 2: Why doesn’t this style suit me?
The thing is, I sense something peculiar in the generated text. The odor of ultra-processed food. I perceive the hint of a cabaret magician who has learned a few tricks to satisfy and appease my curiosity. And then there’s the hypocrisy.*
Even if I manually edit entire passages previously produced by the language model, I can still perceive its little calculators clacking away between the lines. (Yes, they are here with us in this room).
So for now, I simply cannot and do not want to use LLMs for writing, alas: the text would not be mine, as I would not be the text.
But that doesn’t mean I’m elevating myself above others, if that’s how it sounded.
I understand and see that for many chatbot users this approach is justified. We all have different expertise, sensitivities, and needs. If you’re not fixated on literature (as a publisher, author, or editor, say), or writing simply isn’t your ambition, LLMs can be a welcome aid in formulating, developing, and refining text. They will probably enhance the quality of the output and make it more comprehensible without compromising it. Among other things, LLMs can perfectly smooth out sharp edges in an escalating email exchange or come up with arguments for negotiating with a difficult client, so why not give them a try?
* In his book Story, distinguished author and storytelling lecturer Robert McKee explains that we can empathize with villains, madmen, and desperate people, but hypocrites are inherently abhorrent to us. “The audience never aligns with a hypocrite,” he writes, and I agree with him.
Reason 3: Be careful, there are people on the other side too!
Let me put it this way: if I had to write to Putin and his ilk, I would use LLMs exclusively, thereby expressing my utmost disdain, revulsion, and distance even on this level. I have no doubt that I would receive similarly prefabricated responses. To me, personal correspondence written with LLMs is pretty much the opposite of expressing respect for the other person.
You’re not going to believe a witness just because they speak assertively!
—Nico Dekens, author of the article on the decline of the OSINT community.
Reason 4: Models don’t know the model of our world
Language models are simplifications of reality. “Models”. The problem is that their conversational output is marketed as a faithful representation of reality. They call it “intelligence”.
The problem of LLM-generated misinformation has many layers. In my opinion, the worst is that inaccuracies are being passed off as facts and are becoming normalized.
You and I both know that we can’t fully trust LLMs, and you probably also take care to verify what they generate. An LLM is not Wikipedia or a diligently tuned database, nor is it Wolfram Alpha, with its actual ability to derive solid data and calculate with it according to the laws of mathematics.
But at the same time, there are more and more users (seniors, children, and simply non-technical laypeople) who consider chatbots an omniscient expert database, a kind of oracle. People then publish the generated outputs or use them as supporting evidence, and new models are fed with them. (You have probably heard that Elon Musk wants to resolve this once and for all by asking Grok to clean itself up and figure out what’s missing, and then all the wisdom of humanity will be in one place. Are we really supposed to believe that?)
We encounter inaccuracies from chatbots on a daily basis. The more we equate LLMs with omniscience, the more often online services will build on these models, and the more often we will be exposed to nonsense. (Yes, I have found myself in an embarrassing situation: the bot that recommends books in our e-shop sometimes recommends a book from a competitor; we are trying to address this.) In the real world, the consequences range from laughable mistakes to (one day, undoubtedly) monumental errors, depending on the influence of the person who acts on the answer. And those limits keep receding. Did you know that some agents (autonomous applications connected to the “real world”) often use LLMs to “control their logic”? Yes: logic, they say.
Imagine if your GPS hallucinated on almost every trip, depending on how the dice happened to fall (and then empathetically expressed regret that it had taken you miles from your destination)… Most chatbots provide a low-reliability service, given what the audience expects of them. It’s a paid tool, yet it can’t consistently quote from a PDF you upload. It can’t calculate as reliably as a half-century-old calculator. Unlike a child, it can’t work out how to rearrange the disks in the Tower of Hanoi. And yet it supposedly jeopardizes your job, because you’re the less intelligent one.
I recently heard an influential friend of mine (whom I otherwise respect) say: “The most intelligent people reject AI because their egos won’t allow them to accept the existence of an entity that can do their job just as well.” I think I understand what he’s getting at, and I agree with the first part of the sentence: the most intelligent people (among whom I don’t count myself, so this is just an observation) are usually the most reserved. The reason, however, is not primarily their inflated egos (in which case they would not be the most intelligent), but rather their insistence on data and evidence. Once they obtain it, they start to ponder, and only later do they come to conclusions.
One perspective, which I borrow from Ivo Velitchkov’s blog, is that the entire output of an LLM is a hallucination; it is merely designed to appear as a true representation of reality. With factual questions well covered in the training data, the answer usually aligns with something close to the truth (because we most often write down true things, so the statistically likely answer is derived from recorded, documented data). But when generalization, abstraction, or transfer to another domain is needed, LLMs often generate something that matches the truth only marginally, or not at all. And the model doesn’t know this; it can’t point it out and say: this answer is 80% true. If we as humans notice the inconsistency, we call it a mistake or a hallucination, but if we happen not to detect the discrepancy, we readily accept the answer as a “fact” (I am paraphrasing Katie Mack here).
Writing is my nemesis
For publishers and authors like me, language is the essence of existence, a livelihood, and a love. From my point of view, every single word matters, true synonyms do not exist, and the authenticity of the creator is above all else. But I also know that this is my very specific view and I don’t want to impose it on anyone.
Nevertheless, let me propose an experiment. Before you next sit down to read or write something, prompt yourself first: why am I reading? Why am I writing? And when you reply, ask yourself again why you responded the way you did, and repeat this three more times, going deeper each time. Feel free to discuss it here. I look forward to hearing from you.