LLM Inflation

Aug 6, 2025 - 13:00

One of the signal achievements of computing is data compression¹: we take in data, make it smaller while retaining all information (“lossless” compression), transmit it, and then decompress it back to the original at the other end.
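As a minimal sketch of that round trip (Python’s zlib standing in for whichever compressor is actually in use; the sample text is invented):

```python
import zlib

# Highly repetitive data compresses well; the text here is just an invented example.
original = b"LLM inflation: the same idea, repeated at much greater length. " * 100

compressed = zlib.compress(original)    # shrink the data...
restored = zlib.decompress(compressed)  # ...then recover it exactly at the other end

assert restored == original             # lossless: every byte comes back
print(f"{len(original)} bytes -> {len(compressed)} bytes")
```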

For many years, compression was an absolute requirement to get things done: storage devices were too small for the data we wanted to store and networks too slow to transmit what we wanted at an acceptable speed.

Today compression is less often an absolute requirement, but it can still make our lives usefully better. For example, the page you’re reading right now has, almost certainly, reached you in a compressed form. It was worth my time making this work, because my site now displays more quickly on your screen and the load on my server is reduced.
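One way to see this for yourself is to ask a server for a compressed response and look at the Content-Encoding header it sends back. A small sketch in Python, with example.com standing in for whichever page you care about (some servers will decline and send the body uncompressed):

```python
import urllib.request

# example.com stands in for whichever page you want to check.
req = urllib.request.Request(
    "https://example.com/",
    headers={"Accept-Encoding": "gzip"},  # advertise that we can handle compressed bodies
)
with urllib.request.urlopen(req) as resp:
    # "gzip" (or "br", "zstd", ...) means the bytes travelled compressed;
    # None means this particular server chose to send them uncompressed.
    print(resp.headers.get("Content-Encoding"))
```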

All of which makes me greatly amused to see that in 2025 we are now sometimes doing the very opposite.

Here’s a simple example. Bob needs a new computer for his job. In order to obtain a new work computer, he has to write a 4-paragraph business case explaining why the new computer will improve his productivity. Creating the necessary prose is torturous for most of us, so Bob fires up the LLM du jour, types in “Please create a 4 paragraph long business case for my manager, explaining why I need to replace my old, slow computer”, and copies the result into his email.

Bob’s manager receives 4 paragraphs of dense prose and realises from the first line that he’s going to have to read the whole thing carefully to work out what he’s being asked for and why. Instead, he copies the email into the LLM du jour and types at the start “Please summarise this email for me in one sentence”. The 4 paragraphs are summarised as “The sender needs a new computer as his current one is old and slow and makes him unproductive.” The manager approves the request.

I have started to refer to this pattern – which I’ve now seen happen several times in very different contexts – as LLM inflation. It is very easy to use LLMs to turn short, simple content into something long and seemingly profound, and just as easy to use them to turn that long, seemingly profound content back into something short and simple.

That we are using LLMs for inflation should not be taken as a criticism of these wonderful tools. It might, however, make us consider why we find ourselves inflating content. At best we’re implicitly rewarding obfuscation and time wasting; at worst we’re allowing a lack of clear thinking to be covered up. I think we’ve all known this to be true, but LLMs allow us to see the full extent of this with our own eyes. Perhaps it will encourage us to change!


Footnotes

¹ And, of course, information theory, but I’m more concerned with the practical effects in this post.
