The Rise of Whatever

This was originally titled “I miss when computers were fun”. But in the course of writing it, I discovered that there is a reason computers became less fun, a dark thread woven through a number of events in recent history.
Let me back up a bit.
Bitcoin
Back in the 00’s, if you wanted to move money between arbitrary people over the Internet, you realistically had one option: PayPal.
The thing about PayPal is that it holds onto your money, but it isn’t a bank? I don’t fully understand the legal architecture here or its implications, but PayPal’s point of view seems to have always been that they can do whatever they want. They’ve always been pretty fussy about the use of PayPal to facilitate commissioning artists for drawings of unicorn wieners, for example, so if they thought you were doing that, they would just lock your account and also keep all your money for six months. For safekeeping, I guess. And interest.
Yet PayPal was the only option for many rinky-dink individuals selling one-off goods and services, so there was some amount of frustration that the only available middleman had exclusive right to say how you were allowed to spend your money, or what kind of indie business you were allowed to run. And if they caught you ignoring the rules then they got to keep your money for half a year.
And then in 2010 or so, I heard about Bitcoin. And it sounded like the wave of the future. Finally, a way to just send money to someone. What a fucking concept. And imagine what you could build with such a system! Websites could have real tip jars. Browsers could have tipping built right in, transferring just a few cents at a time, since transactions would be so effortless.
I downloaded a miner (well, the miner at the time, I think) and ran it for like a day and failed to mine a coin. There was nothing else to really do, so I closed it and forgot all about Bitcoin.
Fast forward a bit, Bitcoin has reached mainstream awareness, and… none of that stuff happened. Bitcoin is not so much a currency as it is an entire ecosystem of schemes. The only mention I’ve heard in the last year of being able to actually buy anything with Bitcoin was gray market estradiol. (Even gray market FIP medication just takes credit cards!) The only browser with built-in tipping is the one spearheaded by a man whose other claims to fame are inventing JavaScript and wanting to outlaw my marriage, and the token it uses apparently had 80 whole sellers in the past 24 hours. Sounds like all of that is going great.
Meanwhile, fifteen years later, the state of the art in sending arbitrary people money seems to be… uh, PayPal. But now we have Stripe, too, which can take credit card payments if you know how to make a website that uses it, but which also forbids drawings of unicorn wieners. Patreon? Stripe and PayPal. Itch? Stripe and PayPal. Ko-fi? Stripe and PayPal. Nothing is fundamentally different.
But the dream has died. It almost came true, and then it was immediately co-opted by a bunch of get-rich-quick grifters and a bunch of turbo-libertarians whose entire identities are defined by the Things that they Own and who want to cryptographically impose that on everyone else too because they’re mad that World of Warcraft nerfed warlock or something.
And I suspect the core problem that has wended its way through the history of cryptocurrency is that the vast majority of people involved do not actually care what the thing they’re flocking to is. What they care about is that it has a graph, and that they get rich if the graph goes up, so they say whatever might make the graph go up. The graph even looks exactly the same for every coin and NFT and Whatever else: x-axis time, y-axis dollars. The only place the thing appears at all is in the title, where you can safely ignore it.
Plenty of people will talk up the supposed benefits of their pet thingamajig, of course, but my suspicion is that many of them don’t actually care that much. They have a vested interest in getting other people to buy into the thing, Whatever the thing may be, because then graph go up.
And so you have what I can only call a culture of Whatever. Bitcoin failed as a currency because the people who got most invested in it do not care about currency — it could be bottled dragon farts for all they care, except that putting it on the computer means there’s no need to actually worry about a product. It’s just something to pump the value of; the underlying asset could be, well, Whatever. And Bitcoin itself is open source, so you can copy it and make your very own coin, your very own Whatever. With NFTs, you can make an entire family of “collectible” Whatevers — a strange descriptor given that you can’t actually collect one of each of them, but who really cares if the description makes sense? It doesn’t matter what the art is, or how the technology works, or what the tokens are attached to. It just has to be something you can convince other people to buy. The actual thing can be Whatever.
I think this adequately explains why the proliferation of these guys helped suck all the air out of Twitter. Tens of thousands of grifters lining every sidewalk, each one passionately hawking an indistinguishable Whatever that they don’t actually care about. Endless, endless fake enthusiasm from people all trying to convince each other to buy into their boilerplate box of nothing. Buy my thing! Haha no don’t worry about how much of it I own — let’s talk about how much of it you should own! Hint: it’s a lot!
Kind of a bummer.
The shape of the Web
The Web is a cool thing because anyone can just put stuff on it. It is the largest town square bulletin board ever devised. Back in the day, your ISP would even give you your own website! I don’t think they do that so much any more, but there are more cheap or free options than ever — hell, you can host a little website on GitHub.
And it used to mostly consist of little things made by people, and that was pretty cool! You would see more than four websites in a day. Websites would have colors! They wouldn’t all be designed for a three-inch-wide screen and then just scaled up when you’re at your desk! Twitter once let you set your own background image for when people looked at your profile.
But the trouble with everyone having a bunch of websites is that you lost track of them all and you didn’t really know when they updated and it was hard to talk back to a website. Also, making your own website is kinda hard? You have to, like, learn things.
And so the entire Web sort of congealed around a tiny handful of gigantic platforms that everyone on the fucking planet is on at once. Sometimes there is some sort of partitioning, like Reddit. Sometimes there is not, like Twitter.
That’s… fine, I guess. Things centralize. It happens. You don’t get tubgirl spam raids so much any more, at least.
But the centralization poses a problem. See, the Web is free to look at (by default), but costs money to host. There are free hosts, yes, but those are for static things getting like a thousand visitors a day, not interactive platforms serving a hundred million. That starts to cost a bit. Picture logs being shoveled into a steam engine’s firebox, except it’s bundles of cash being shoveled into… the… uh… website hole.
Traditionally, the way to pay for keeping your website online has been to slather it in ads and suffer the humiliation of Pepsi trying to sell Pepsi halfway down your page. Ads don’t pay very much, but for a moderate-size endeavor, that’s fine. You write your article and put an ad on it and make twenty cents a month or whatever. I don’t know, I don’t run ads because they’re an embarrassing blight that makes everything it touches worse.
Together, these forces push big platforms in a very specific direction: maximize how many ads people see. To the exclusion of just about anything else. So Engagement becomes king — it’s okay if your users are miserable, so long as they’re here. It’s okay if the ads are obnoxious, as long as they’re seen.
Then this model spread into phone software. And then into videography. And then, somehow, into fucking Windows??
And when the primary focus of the business is on the ads, everything else is sort of ancillary — it’s only important insofar as it keeps people around, to look at the ads. It’s jingling keys. It’s… Whatever.
This is the driving force behind clickbait, behind thumbnails of white guys making 8O faces, behind red arrows, behind video essayists who just read Wikipedia at you three times a week like clockwork, behind suggestion algorithms, behind recipe blogs that all look the same and have a mile of filler fluff, behind video game websites abandoning the idea of articles and instead turning into SEO vultures with inexplicably lengthy articles telling you “the blue key is under a rock by the river” so they have more paragraph breaks to put ads between, behind TikTok’s model of being a constant stream, which I can only guess at because I have never had any interest in TikTok, but I assume it’s a worse version of YouTube Shorts and I already find those pretty irritating.
It’s all the same thing.
Look at it. Look at it, you stupid baby. Look how outlandish or shocking or extreme or dramatic, Whatever it is. Just shut up and look at it, so Home Depot will give me a quarter of a tenth of a cent.
At least when I write a lot, you know it’s because I wanted to write it. Also I’m probably not lying to you because someone paid me to do it!
And the only real hope I have here is that someday, maybe, Bitcoin will be a currency, and circulating money around won’t be the exclusive purview of Froot Loops. Christ.
Did you know there were entire get-rich-quick schemes about this? It’s like writing fake novels. Just make a website with a generic WordPress theme (every website looks the same anyway), write a bunch of bland nothing articles about things that seem a little obscure, and slather it in Google ads. Then let the money roll in from people accidentally finding your website and leaving when they find out it’s useless. But it’s too late because you already got the ad view!
I say “were” because bothering to write generic filler about nothing is passé — now the computer can do it for you!
Artificial reality
If you told me ten years ago that by 2025 we’d have the Star Trek computer, I would’ve been ecstatic. How fucking cool is that! You talk to your computer and it does things!
But we didn’t really get that. We got, I guess, sparkling autocomplete — a fancy chatbot that can string words together in the most inoffensive people-pleasing customer-service voice you’ve ever heard.
The result is something I adamantly do not want to interact with. I do not want to be exposed to LLM output at any time. It’s noise, and I feel like I get a little dumber every time I accidentally start reading it. My brain is already a bit glitchy, and I really cannot afford to have it work even more less good.
And speaking of things that work even more less good, the technology… sucks? It fundamentally doesn’t do the thing that its investors and diehard fans say it does. It just strings together text that is statistically plausible. And every new alleged advancement comes with some invested airhead billionaire boasting about how the computer is as smart as a Ph.D. holder now, and then you see the output and it’s still the most generic banal brain-rotting sludge you’ve ever seen in your life.
Most of my exposure to LLM output is via Google cramming it everywhere they can think of, and in every instance the result is worse. Google Search keeps redesigning its way around my μBlock filters to dedicate an entire third of my desktop screen height to an “AI summary” — which either lightly restates the highlighted part of the top search result anyway, or is just total bullshit. YouTube keeps showing a sprinkling of “AI summaries” under video thumbnails that, without fail, restate the video title in more words. My phone’s fucking weather app has an “AI summary” with incredible insights like “it’ll get warmer over the course of the week”, which I could readily see for myself if this block of white noise weren’t pushing the temperature graph off the bottom of the screen. Over and over, actual information is moved out of the way to make room for an unreliable lossy compression of that information into text that takes longer to read.
But this is worth billions of dollars.
I think what really gets me here, and what no one really talks about, is that the bar has been revealed to be so low. LLM features get bolted onto fucking everything because what they do, what they really do, at their core, is this: Whatever. They do Whatever. And that’s great, because Whatever is something. There’s no such thing as an error, no empty results page, no such thing as a missing feature or an uncovered case. Almost without fail, you’ll get something. Is it useful? Is it correct? Is it remotely based in reality? Who cares? Far more important is that there is output. Whatever is apparently better than nothing. Cheap and inoffensive and disposable, like a red beer cup. We are doing to the Internet what we already did to the ocean: filling it with a great swirling vortex of trash.
Case study 1
“Ah!” the Hacker News commenters cry. “But have you tried it?” they ask with all the indignation of a kindergartener offended that you won’t eat their mud pie.
But yes, thanks: I was once offered this challenge when faced with a Ren’Py problem, so I gritted my teeth and posed my question to some LLM. It confidently listed several related formatting tags that would solve my problem.
One teeny tiny issue: those tags did not and had never existed. I typed this additional context into the computer, and it generated a profuse apology followed by a different set of fictional tags. That was the end of that grand experiment.
The trouble was likely that there was no built-in way to do what I wanted, and no one had ever successfully done it before, so the machine had nothing to draw from… and simply generated something that sounded plausible instead. Because that is what this technology does: it continues a conversation in a way that sounds plausible, as defined by similarity to existing conversations. If there are existing conversations about the topic, great! That makes for a more specific measure of plausibility. If not, even better! Just about anything might be plausible! It can just generate Whatever!
I cannot stress enough that this is worse than useless to me. Not only did it not answer my question, but it sent me on a wild goose chase making sure I had not somehow overlooked the fake API it generated.
Like, just to calibrate here: you know how some code editors will automatically fill in a right bracket or quote when you type a left one? You type " and the result is "|", with the cursor stuck in the middle? Yeah, that drives me up the wall. It saves no time whatsoever, and it’s wrong often enough that I waste time having to correct for it.
And that’s a predictable operation that inserts a single character! What we’ve invented is an entire fake persona that will waste your time, whole paragraphs at a time.
I can’t imagine using this to do any actual work and I don’t understand how anyone else does. This is a whole new kind of failure case we’ve invented. I did also ask people about this problem, and they responded in the ways people might: they said they didn’t know, or they suggested an elaborate and tedious workaround that would technically solve the problem (but introduce new ones). But the LLM statistically generated something that sounds like an API that could exist. It produced an answer that was plausible, thorough, informative, relevant, and contained no useful information whatsoever. It produced the opposite of information! It produced noise.
Why would I want this? Why would I want to use a machine that sometimes generates text that resembles a person confidently lying to me? People are sometimes wrong, sure — that’s why Stack Overflow has downvotes — but this is something else entirely. If a real person did this to you, you would stop asking them questions real fucking fast.
LLM output is crap. It’s just crap. It sucks, and is bad.
Anyway I went on to do the thing I wanted regardless, because I’m a programmer and I know how to make computers do things.
I mean, I get it. I was trying to do something that had never been done before. LLMs are fine at things that appear a zillion times in their training data — in fact, this is probably a big part of the trick, because the things that appear more often in their training data are the things people are more likely to ask about in general and thus the things people are more likely to ask an LLM. But whose creative output consists solely of doing things a million people have already done? Is everyone else working on projects built exclusively out of lists of primes and rebalancing binary trees?
Case study 2
Back in December, I was complaining about something else (surprise, it was Web ads!) and just happened to look at the Visual Studio Code website, most of which was devoted to its LLM code-completion service, Copilot. I don’t care to desecrate this blog with LLM output — it’s on Bluesky if you must — but suffice to say, it wasn’t great. It was a call to a web service, and the generated code failed to encode form data. You know, Computer 101 stuff. Also it was like twice as long as it needed to be. Also it wouldn’t work on HTTPS websites because the web service’s certificate expired three years ago — which is a fun footgun, since you very well might be on HTTP localhost, and then it’ll only break when you go live.
I found it highly unlikely that the latest and greatest API for “get website” couldn’t just encode form data for you, but lo and behold: it can! Copilot just didn’t bother to make use of it. And since Copilot is a Whatever machine and its answers are these one-time disposable things, there’s no mechanism for someone else to come in and go “hey, you forgot to encode the form data”.
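Just to calibrate how low this bar is, here’s a minimal sketch of the thing the generated code failed to do — not the actual Copilot output, just an assumed browser-style fetch call with a made-up URL and made-up field names. The platform does the encoding for you if you hand your fields to URLSearchParams:

    // Minimal sketch (hypothetical URL and fields): POSTing form data with fetch.
    // Passing a URLSearchParams object as the body percent-encodes the fields and
    // sets the application/x-www-form-urlencoded Content-Type automatically.
    async function submitForm(): Promise<unknown> {
      const response = await fetch("https://example.com/api/commissions", {
        method: "POST",
        body: new URLSearchParams({
          subject: "unicorn wiener",
          quantity: "3",
        }),
      });
      if (!response.ok) {
        throw new Error(`Request failed: ${response.status}`);
      }
      return response.json();
    }

The encoding is one constructor call, which is exactly the part the generated code skipped.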
What even is this thing we’ve invented? Stack Overflow, but you only get the answers people scramble to type first so they can get the points? Oh and they just lie to you sometimes? Why would I want this?
And I didn’t cherry-pick this example! They chose it! This was the front-page example for a state-of-the-art LLM integrated with the most popular code editor in the world, all built by one of the richest companies in human history, whose entire business is software and who has specifically invested a zillion dollars in this specific technology. This is the gizmo at its best! And it’s crap!
But it does something. And that’s what’s important.
The broader culture
There are people who use these, apparently. And it just feels so… depressing. There are people I once respected who, apparently, don’t actually enjoy doing the thing. They would like to describe what they want and receive Whatever — some beige sludge that vaguely resembles it. That isn’t programming, though. That’s management, a fairly different job. I’m not interested in managing. I’m certainly not interested in managing this bizarre polite lying daydream machine. It feels like a vizier who has definitely been spending some time plotting my demise.
It makes programming spaces feel bleaker. I don’t want to help someone who opens with “I don’t know how to do this so I asked ChatGPT and it gave me these 200 lines but it doesn’t work”. I don’t want to know how much code wasn’t actually written by anyone. I don’t want to hear how many of my colleagues think Whatever is equivalent to their own output. I don’t want to keep watching people fall for a carnival trick.
A couple days ago I saw someone (whose bio claimed they’re a Bluesky engineer, but who knows) insist that it’s “very stupid” to not use a chatbot for programming. I just cannot comprehend this. If the task is easy, I could just write the code about as fast as I could describe it anyway. If the task is hard, then it’s all the more likely the generated code will be subtly wrong (or overtly wrong). If it’s something I don’t know, I can go find out about it, and now I know more things. What are you all even writing that so much of it consists of generic slop?
But also… why do you care? Why would someone using a really cool tool that makes them more productive… feel compelled to sneer and get defensive at the mere suggestion that someone else isn’t doing the same? I know there are people who oppose, say, syntax coloring, and I think that’s pretty weird, but I don’t go out of my way to dunk on them. I can’t imagine having a stronger reaction than saying “lmao what” and immediately forgetting about it. I might have strong opinions about what code looks like, because I might have to read it, but why would I — why would anyone — have such an intense reaction to the hypothetical editor setup of a hypothetical stranger?
It feels like the same attitude that happened with Bitcoin, the same smug nose-wrinkling contempt. Bitcoin is the future. It’ll replace the dollar by 2020. You’re gonna be left behind. Enjoy being poor. Sure thing, Disco Stu! There have definitely never been any inventions that turned out to be bad ideas or were just plain forgotten about. But the Bitcoin people make more money if they can shame everyone else into buying more Bitcoin, so of course they’re gonna try to do it. What do programmers get out of this? Unless you work at Microsoft and have a lot of stock options, you aren’t getting rich off of how many people use Copilot.
It’s curiously similar to how, as a fitting segue, Microsoft is now gonna factor “AI” use into employee performance reviews:
“AI is now a fundamental part of how we work,” Liuson wrote. “Just like collaboration, data-driven thinking, and effective communication, using AI is no longer optional — it’s core to every role and every level.”
Liuson told managers that AI “should be part of your holistic reflections on an individual’s performance and impact.”
What are we actually saying here — that even Microsoft has to evaluate usage of “AI” directly, because it doesn’t affect performance enough to have an obvious impact otherwise? That the technology is so limp that even its biggest investor has to strong-arm its own employees into using it? That their own employees don’t want to use it?
Genuinely good new tools don’t tend to need coercion to fuel their adoption only a few years into their existence, right? What the fuck is going on here?
Another Bluesky quip I saw earlier today, and the reason I picked up writing this post (which I’d started last week):
Quitting programming as a career right now because of LLMs would be like quitting carpentry as a career thanks to the invention of the table saw.
I’m not trying to put the author on blast or anything, so let’s leave it anonymous, but — my guy? My dude?
What on earth are you talking about?
I don’t know the context for this. What I do know is that a table saw quickly cuts straight lines. That is the thing it does. It doesn’t do Whatever. It doesn’t sometimes cut wavy lines and sometimes glue pieces together instead. It doesn’t roll some dice and guess what shape of cut you are statistically likely to want based on an extensive database of previous cuts. It cuts a straight fucking line.
If I were a carpenter, and my colleagues got really into this new thing where you just chuck 2×4s at a spinning whirling mass of blades until a chair comes out the other side… you know, I just might want to switch careers.
I keep seeing this — people compare LLMs to calculators, or screwdrivers, or digital cameras, or whatever. I’m left wondering if the people saying this stuff have ever used any of those things. A calculator does arithmetic for you — thus automating the tedious, repetitive part — but you still have to know which buttons to press to get the answer you want. You can’t just type the entire problem in and get Whatever — something that sounds plausible, with a microscopic disclaimer that checking it for accuracy is your problem.
Calculators do have limitations at their extremes, and if you’re working with extremes, you have to be aware of those. Table saws will (or, used to) cut through fingers just as happily as wood. Tools have edge cases — at their edges. LLMs have edge cases everywhere, and they are constantly changing, even minute to minute, even for exactly the same input fed to exactly the same model. It’s also possible to adjust or customize tools in various ways, whereas 90% of the times I’ve seen someone talk about their customized LLM, all they’ve done is prepend a paragraph like “Please answer as though speaking to a customer.” The state of the art is to ask the computer nicely to do something, add a disclaimer saying it’s not your problem if the computer is racist, and then charge for access.
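To be concrete about what that “customization” usually amounts to, here’s a sketch against the widely copied OpenAI-style chat-completions endpoint — the model name is a placeholder and the prompt text is the example above, not anyone’s real setup:

    // Sketch of a "customized" LLM: an OpenAI-style chat-completions request
    // where the entire customization is one prepended system message.
    // Model name is a placeholder; the API key is read from the environment.
    async function askCustomizedBot(userQuestion: string): Promise<string> {
      const response = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
        },
        body: JSON.stringify({
          model: "gpt-4o-mini",
          messages: [
            // This one line is the "customization".
            { role: "system", content: "Please answer as though speaking to a customer." },
            { role: "user", content: userQuestion },
          ],
        }),
      });
      const data = await response.json();
      return data.choices[0].message.content;
    }

That’s it. One extra line of politeness bolted onto the front of the request.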
This is not mere automation. This is a completely new type of thing. We’ve never had a machine that can take almost any input and just do Whatever. But I keep watching people act like it’s the same level of invention as the egg slicer and I feel like I’m losing my fucking mind.
But what if it gets better
I don’t know. What if it does? What does that mean? I hear “better” and I read the press release and in the fine print it says that now it can count the number of letters in “Mississippi” correctly or whatever. And then it’s still crap.
What if it didn’t produce crap? I struggle to imagine such a world, in no small part because the hype around the Whatever machine is so staggeringly overblown. My phone has a dedicated Tensor™ chip to simulate artificial intelligence in the palm of my hand, wow! Here’s what it does: tells me it’ll be hot this week.
But if the machine still just fabricates an elaborate plausible fiction when it doesn’t have an answer on-hand, what good is it? I can always just go find the place it got the answer from originally, and at least then I know that someone wrote it. Someone had a reason to think it, even if they were mistaken. Maybe the well is just permanently poisoned — anytime I see anything I know to be LLM output, my first assumption is that it’s nonsense, completely divorced from reality.
I know a lot of people have a lot of gripes with LLMs and generative “AI” that tie them to big grandiose concerns like intellectual property or environmental impact. My gripes are more of a tangled web that I can only summarize as: the vibes are bad. The tone is unbearable. The lying as a fallback is offensive. The advertising keeps focusing on how you can coast through life without caring about your work or family because you can just generate a birthday card or whatever. The people funding and pushing it keep openly salivating at the idea of replacing as much human input as possible with a machine best known for generating titles of books that don’t exist.
I don’t know how you get “better” than this. I don’t know how you make a better Whatever machine.
And then there's the art thing
I glimpsed someone on Twitter a few days ago, also scoffing at the idea that anyone would decide not to use the Whatever machine. I can’t remember exactly what they said, but it was something like: “I created a whole album, complete with album art, in 3.5 hours. Why wouldn’t I use the make it easier machine?”
This is kind of darkly fascinating to me, because it gives rise to such an obvious question: if anyone can do that, then why listen to your music? It takes a significant chunk of 3.5 hours just to listen to an album, so how much manual work was even done here? Apparently I can just go generate an endless stream of stuff of the same quality! Why would I want your particular brand of Whatever?
Nobody seems to appreciate that if you can make a computer do something entirely on its own, then that becomes the baseline.
There is a lot that can be said about image generation (little of it polite), but I’m running out of steam a little here. I’d intended to comment on the ongoing efforts to make better and better photo-quality image generation, but I can’t think of much to say beyond: why the fuck would you work on that? We don’t have enough trouble with, say, the conservative “news” sphere inventing its own alternate reality that millions of people buy into, simply by lying — now we have to give them a machine tailor-made for creating fake photos and videos too? Why does this need to exist? Why is this in my phone’s fucking camera app? Can’t these people go live on an airgapped island somewhere and work on their new horrifying fraud machine by themselves?
Also I could swear I saw Google advertise that Gemini can do your homework for you
This is starting to get away from the main thesis of Whatever but every time I hear about students coasting through school just using LLMs, I wonder what we are doing to humanity’s ability to think critically about anything. It already wasn’t great, but now we’re raising a whole generation on a machine that gives them Whatever, and they just take it. You’ve seen anecdotes of people posting comments and submitting papers and whatnot with obvious tells like “As a large language model…” in them. That means they aren’t even reading the words they claim as their own! They just produce Whatever.
Actually hang on this gets me into conclusion territory.
Enough of Whatever
I remember that Facebook literally proposed running a bunch of its own LLM-driven fake accounts on its own website. Fake people making fake posts about Whatever, so you’ll have more Whatever to look at, so you’ll see more ads along the way. Monetize the rot, I guess.
I can’t imagine publishing a game with, say, Midjourney-generated art, even if it didn’t have uncanny otherworldly surfaces bleeding into each other. I would find that humiliating. But there are games on the Switch shop that do it. Whatever.
It begins to feel like a broad celebration of mediocrity. Finally, society says, with a huge sigh of relief. I don’t have to write a letter to my granddaughter. I don’t have to write a three-line fetch call. I don’t have to know anything, care about what I’m doing, or even have an opinion.
I can just substitute some Content™. I can just ask the computer for Whatever.
But I like programming. I like writing. I like making things and then being able to sit back and look at them and think, holy fuck, I made that. There is no joy for me in typing a vague description into a computer and refreshing my way through a parade of Whatever until something is good enough.
The most obnoxious people like to talk about how Stable Diffusion is “democratizing art” and that is the dumbest thing I’ve ever heard. There is no fucking King of Art decreeing who is allowed to draw and who isn’t. You could do it. You could do it right now. But it’s hard, so you’d rather spend that time crying on Twitter about how unfair it is that learning a skill takes work and thank god the computer can give you all of the admiration with none of the effort now.
This is an incredibly weird moment. There have always been inventions that make some craft easier (but sometimes a little more shoddy as well). There have always been people who resented the idea that the thing they work very hard at is now more accessible. America’s Protestant work culture is deeply entangled with this as well, but I don’t value sweat in and of itself — I have a broader objection.
Because this is something else. What’s being sold to us is a machine that is promised to do everything. That’s far beyond a tiny question like “should you know how to manually focus in order to take a photograph” — it gets at the notion of thinking about, or doing, anything at all.
I don’t think anyone is obligated to do anything in particular. If you don’t want to draw, or write, or compose, or program, or whatever, then don’t! That’s fine.
But I think the core of what pisses me off is that selling this magic machine requires selling the idea that doing things is worthless. Because if doing something has some value, then it must be somehow better than pushing a button and receiving Whatever for essentially no cost. If you’re some assclown like Sam Altman, whose graph-go-up depends on convincing you to replace all your employees with ChatGPT, you have to destroy that idea. It is the greatest threat to your business model. You have to destroy the idea that things are worth doing.
I think that sucks, I think he sucks, and I think his machine sucks. So fuck him and fuck his machine.
Do things. Make things. And then put them on your website so I can see them.