if you ever wonder how I write Pivot, it’s a bit like this. The thing below is not a written text, it’s a script for me to simulate spontaneity, so don’t worry about the grammar or wording. But how are the ideas? And what have I missed?
(Imagine the text below with links to previous Pivots where I said a lotta this stuff.)
‘AI is here to stay’ - what does that mean? What are you actually claiming?
When some huge and stupid public AI disaster hits the news, AI pumpers will dive in to say stuff like “you have to admit, AI is here to stay.”
Well, no I don’t. Not unless you specify what you actually mean when you say that. Like, what is the claim they’re making? Herpes is here to stay too, but you probably wouldn’t brag about it.
We’re talking about the generative AI stuff here. Chatbots. Image slop generators. That sorta thing. Sometimes they’ll claim chatbots are forever because machine learning works for X-ray scans. These people are wasting your time.
Are they saying that OpenAI and its friends, all setting money on fire, will be around forever? Ha, no. That is not economically possible. They’re machines for taking money from venture capitalists and setting it on fire. The chatbots are just the excuse for that. They’re not sustainable businesses. Maybe after the collapse there will be a company that buys the name “OpenAI” and dances around wearing it like a skin.
Are they saying there’s a market for generative AI and so it’ll keep going when the bubble pops? Sure maybe there’ll be a market - but as I’ve been saying for a while now, the prices will be 5x or 10x what they are now if it has to pay its way as a business.
Are they saying you can always run a local model at home? Sure, and about 0.0% of chatbot users do that. In 2025, the home models are painfully slow even on a high-end box. No normal people are going to do this. It’s like saying “see, radio is here forever!” when it’s actually five guys talking in Morse code.
I’ve seen claims that the tools will still exist. I mean sure, the transformer architecture is actually useful for stuff. But mere existence isn’t much of a claim either.
So. If someone says “AI is here to stay,” nail them down on what the heck the precise claim is they’re making. Details. Numbers. What do you mean by being here? What would failure mean? Get them to make their claim properly.
I’ll make a prediction for you, give you an example. When, not if, the venture capitalists and their money pipeline go home and the chatbot prices multiply by ten, the market will collapse. There will be some small providers left. It will be technically not dead yet!! But the bubble will be extremely over. The number of people running an LLM at home will still be negligible.
It’s possible there will be something left after the bubble pops. AI boosters like saying it’s JUST LIKE the dot-com bubble!!! But I haven’t really been convinced by the argument “Amazon lost money for years, so if OpenAI just sets money on fire then it must be Amazon.”
Will inference costs — 80%-90% of compute load — come down? Sure, eventually. Will it be soon enough? Well, Nvidia just had a bad chip generation and is going back to its old chips but putting more of them in modules with heating problems.
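To make the “prices multiply” claim concrete, here’s a toy back-of-the-envelope sketch. Every number in it is a made-up placeholder for illustration, not a real figure for OpenAI or any other provider:

```python
# Toy break-even sketch: how much would a subsidized chatbot price
# have to be multiplied so revenue actually covers costs?
# All numbers below are invented assumptions, not real financials.

def breakeven_multiplier(price_per_m_tokens, cost_per_m_tokens, overhead_share=0.2):
    """Multiplier on the current price needed so that revenue covers
    compute cost plus a fixed overhead share of revenue."""
    # revenue * (1 - overhead_share) must at least cover compute cost
    required_price = cost_per_m_tokens / (1 - overhead_share)
    return required_price / price_per_m_tokens

# Hypothetical: charging $2 per million tokens while the true all-in
# cost is $12 per million tokens.
print(round(breakeven_multiplier(2.0, 12.0), 1))  # → 7.5
```

Plug in different guesses for the subsidy and you land anywhere in the 5x–10x range mentioned above; the point is just that a modest-looking subsidy implies a brutal price jump.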
So there you go. If you wanna say “but AI is here to stay!” tell us what you mean in detail. Stick your neck out. Give your reasons.
As I mentioned before, some spammers and scammers might actually need the tech to remain competitive in their markets from now on, I guess. And I think they might be the only ones (except for a few addicts) who would either be willing to pay full price or start running their own slop generators locally.
This is pretty much the only reason I could imagine why “AI” (at least in its current form) might be “here to stay”.
On the other hand, maybe the public will eventually become so saturated with AI slop that not even criminals will be able to use it to con their victims anymore.
If you wanna say “but AI is here to stay!” tell us what you mean in detail. Stick your neck out. Give your reasons.
I’m gonna do the exact opposite of this ending quote and say AI will be gone forever after this bubble (a prediction I’ve hammered multiple times before).
First, the AI bubble has given plenty of credence to the notion that building a humanlike AI system (let alone superintelligence) is completely impossible, something I’ve talked about in a previous MoreWrite. Focusing on a specific wrinkle, the bubble has shown the power of imagination/creativity to be the exclusive domain of human/animal minds, with AI systems capable of producing only low-quality, uniquely AI-like garbage (commonly known as AI slop, or just slop for short).
Second, the bubble’s widespread and varied harms have completely undermined any notion of “artificial intelligence” being value-neutral as a concept. The large-scale art theft/plagiarism committed to create the LLMs behind this bubble (Perplexity, ChatGPT, CrAIyon, Suno/Udio, etcetera), and the large-scale harms enabled by these LLMs (plagiarism/copyright infringement, worker layoffs/exploitation, enshittification), and the heavy use of LLMs for explicitly fascist means (which I’ve noted in a previous MoreWrite) have all provided plenty of credence to notions of AI as a concept being inherently unethical, and plenty of reason to treat use of/support of AI as an ethical failing.
Ooh good, I’ll add that generative AI will likely become as radioactive to the public as crypto is.
The ideas are generally good.
I think the long-term cost argument could be strengthened by saying something about DeepSeek’s claims to run much cheaper. If there is anything to say about that, I have not kept track.
The ML/LLM split argument might benefit from being beefed up. I saw a funny post on Tumblr (so good luck finding that again) about pigeons being taught to identify cancer cells (a real thing, according to the post; I haven’t verified it) and how, while that is a thing, you wouldn’t leap to putting a pigeon in charge of checking CVs and recommending hires. The post was funnier, but it got to the critical point of what statistical relationships reasonably can be used for and what they can’t, which becomes obvious when it is a pigeon instead of a machine. Ah well, you can beef it up in a later post, or maybe you intended to link an already existing one. There is a value in being concise instead of rambling like I am doing here.
It’s a tricky one, because a lot of ML these days is transformer-based, because transformers are unreasonably effective, and many things a transformer can do well, an LLM can do okay at. (e.g. translation, transcription, OCR …)
A lot of people are super convinced by an impressive demo. We have computers you can just talk to now! That’s legit amazing, actually! The whole field of NLP is 80% solved! The other 20% is where it’s a lying fuckin idiot and that’s probably not fixable …
Contra Blue Monday, I think that we’re more likely to see “AI” stick around specifically because of how useful transformers are as a tool for other things. I feel like it might take a little bit of time for the AI rebrand to fully lose the LLM stink, but both the sci-fi concept and some of the underlying tools (not GenAI, though) are too robust to actually go away.
Some of the worst people you know are going to pivot to “See, AI is useful for cancer doctors, that was what I’ve been saying the whole time. Sentient chatbots? I haven’t written those specific words, you must be very bad at reading. Now, let’s move on to Quantum!”
Part of me suspects that particular pivot is gonna largely fail to convince anyone - paraphrasing Todd In The Shadows “Witness” retrospective, other tech bubbles may have failed harder than AI, but nothing has failed louder.
The notion that “AI = ‘sentient’ chatbots/slop generators” is very firmly stuck in the public consciousness, and pointing to AI being useful in some niche area isn’t gonna paper over the breathlessly-promoted claims of Utopian Superintelligence When It’s Done™ or the terabytes upon terabytes of digital slop polluting the ’net.
I doubt it’ll stop the worst people we know from trying, though - they’re hucksters at heart, getting caught and publicly humiliated is unlikely to stop 'em.
Leave it as it is then, I think it works.
Doing another round of thinking: the insistence that “AI is here to stay” is itself a sign of how this is a bubble that needs continuous hype. Clocks are also here to stay, but nobody needs to argue that they are. How was it Tywin Lannister put it? If you have to tell people you are the king, you are no true king.