LLM Inevitabilism

Have you ever argued with someone who is seriously good at debating? I have. It sucks.
You’re constantly thrown off-balance, responding to a point you didn’t expect to. You find yourself defending the weak edges of your argument, while the main thrust gets left behind in the back-and-forth, and you end up losing momentum, confidence, and ultimately, the argument.
One of my close friends won international debate competitions for fun while we were at university (he’s now a successful criminal barrister), and he told me that the only trick in the book, once you boil it all down, is to make sure the conversation is framed in your terms. Once that happens, it’s all over bar the shouting.
Earlier this year I read Professor Shoshana Zuboff’s fantastic book The Age of Surveillance Capitalism. I learned a lot from it (and I’m sure I’ll rave about it in more detail in other posts), including many new terms for phenomena that have long frustrated me.
Being able to put a name to something abstract allows you to more easily build an argument about it, explain the concept to strangers, and unify opposition to it. It’s a key success of Professor Zuboff’s book that it has introduced so many new terms to the lexicon.
The word that is relevant to this post is “Inevitabilism”.
People advancing an inevitabilist world view state that the future they perceive will inevitably come to pass. It follows, relatively straightforwardly, that the only sensible way to respond to this is to prepare as best you can for that future.
This is a fantastic framing method. Anyone who sees the future differently to you can be brushed aside as “ignoring reality”, and the only conversations worth engaging are those that already accept your premise.
“We are entering a world where we will learn to coexist with AI, not as its masters, but as its collaborators.” – Mark Zuckerberg
“AI is the new electricity.” – Andrew Ng
“AI will not replace humans, but those who use AI will replace those who don’t.” – Ginni Rometty
These are some big names in the tech world, all framing the conversation in a very specific way. Rather than “is this the future you want?”, the question is instead “how will you adapt to this inevitable future?”. Note also the vaguely threatening tone, a helpful psychological undercurrent encouraging you to go with the flow, because otherwise you’d be messing with scary powers way beyond your understanding.
I’m not convinced that LLMs are the future. I’m certainly not convinced that they’re the future I want. But what I’m most certain of is that we have choices about what our future should look like, and how we choose to use machines to build it.
Don’t let inevitabilism frame the argument and take away your choice. Think about the future you want, and fight for it.