Of course AI is a bubble. It has all the hallmarks of a classic tech bubble. Pick up a rental car at SFO and drive in either direction on the 101 – north to San Francisco, south to Palo Alto – and …
An interesting article, but it seems to be missing the big applications of AI. It isn’t all about LLMs and other large models; where AI will definitely be used is in smaller-scale problems where specialized models can be pruned down. There is a bubble, that’s for sure, but it’s in the use of large, unpruned models for menial tasks.
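For anyone unfamiliar with the term, here’s a rough sketch of what magnitude-based pruning looks like. This is toy NumPy code, not any particular framework’s API (real frameworks like torch.nn.utils.prune do this per-layer with masks), and the sparsity level is made up for illustration:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights (toy sketch)."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # Find the magnitude threshold below which weights get dropped.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned = magnitude_prune(w, sparsity=0.75)
print(f"nonzero before: {np.count_nonzero(w)}, after: {np.count_nonzero(pruned)}")
```

The point being: a specialized model with three-quarters of its weights zeroed out can be far cheaper to run, which is exactly the opposite of throwing a giant unpruned model at every menial task.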
Yeah. To me it seems transparently obvious that at least some applications of AI will keep changing the world - maybe in a big way - even after the current boom’s inevitable bust hits the AI-adjacent business side. I agree with Doctorow on everything he’s saying about the business side, but that’s not the only side, and it’s a little weird that he focuses exclusively on that aspect. But what the hell, he’s smart and I hadn’t seen this particular business-side perspective before.
I think he touches on this when he talks about the high-value, fault-intolerant applications. The problem with AI, as it is now, is that it’s very good at producing high-quality bullshit. Maybe its analysis and output are spot on. Or maybe it’s over-fitted to something that has no real connection to what you’re trying to predict. And because much of it remains a black box, telling the two apart often takes up so much time that workers don’t save any time at all. For applications where a better way to dig through a mountain of data would be beneficial, an AI sending you down the wrong rabbit hole can be costly and make the use of that AI questionable.
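The overfitting half of that is easy to demonstrate on toy data. Here’s a sketch (all numbers invented for illustration) where a high-degree polynomial threads every training point perfectly and still loses to a straight line on fresh data from the same process:

```python
import numpy as np

rng = np.random.default_rng(1)

# Ten noisy samples of a simple linear trend.
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(scale=0.3, size=10)

# A degree-9 polynomial can fit all ten training points essentially exactly...
overfit = np.polynomial.Polynomial.fit(x_train, y_train, deg=9)
# ...while a straight line captures the actual trend.
simple = np.polynomial.Polynomial.fit(x_train, y_train, deg=1)

# Fresh data from the same process exposes the difference.
x_test = np.linspace(0, 1, 50)
y_test = 2 * x_test + rng.normal(scale=0.3, size=50)

for name, model in [("degree 9", overfit), ("degree 1", simple)]:
    train_err = np.mean((model(x_train) - y_train) ** 2)
    test_err = np.mean((model(x_test) - y_test) ** 2)
    print(f"{name}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")
```

When the model is a black box, you don’t get to run this comparison: all you see is the confident output, and “spot on” and “memorized the noise” look identical until it’s your time being wasted.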
This has been my own experience with “AI-driven tools”. I work in cybersecurity, and you can’t swing a Cat-5 O’ Nine Tails without hitting some vendor trying to sell you on the next big AI-driven security tool. And they’re crap, one and all. What they do very, very well is churn out false positives that analysts then lose hours to, trying to understand what the fuck the AI saw that it alerted on. And since the AI and its algorithms are the “secret sauce” the vendor is selling, they do exactly fuck all to help the analysts understand the “why” behind an alert. And it’s almost always a false positive. Of course, those vendors will swear up and down that it’s just a matter of better tuning of the AI model on your network. And they’ll sell you lots of time with their specialists to try and tune the model. It won’t help, but they’ll keep selling you on that tuning all the same.
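And the “almost always a false positive” part isn’t even something tuning can fix; it’s baked into the base rates. When real intrusions are rare, even a detector with impressive-sounding accuracy drowns you in false alarms. A quick Bayes’ rule calculation (the rates here are hypothetical, picked to be generous to the vendor):

```python
# Bayes' rule on hypothetical numbers: if only 1 in 10,000 events is a
# real attack, even a very good detector mostly produces false alarms.
base_rate = 1e-4            # fraction of events that are actual attacks
true_positive_rate = 0.99   # detector fires on 99% of real attacks
false_positive_rate = 0.01  # ...and on 1% of benign events

# P(alert) = P(alert | attack) P(attack) + P(alert | benign) P(benign)
p_alert = (true_positive_rate * base_rate
           + false_positive_rate * (1 - base_rate))
# P(attack | alert) via Bayes' rule
p_attack_given_alert = true_positive_rate * base_rate / p_alert

print(f"P(real attack | alert) = {p_attack_given_alert:.1%}")
# -> roughly 1%: about 99 out of 100 alerts are false positives.
```

So even a 99%-accurate black box fires mostly on nothing, and the analyst still has to chase every alert because the one-in-a-hundred real one is the whole point of the tool.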
In the long term, I do think AI will have a place in many fields. Highly specialized AI that isn’t a black box will be a useful tool for people in lots of lines of work. What it won’t be is a labor-saving device; it will just make the people doing those jobs more accurate. But it’s not being sold this way, and we need the current model to collapse and demonstrate that AI is just not ready to take over most roles yet. Then maybe we can start treating AI as a tool to make good people better and not as a way to replace them.