• 1 Post
  • 439 Comments
Joined 2 years ago
Cake day: March 22nd, 2024


  • So I’m not double-checking their work, because that’s more of a time and energy investment than I’m prepared for here. I also do not have the perspective of someone who has actually had to make the relevant top-level decisions. But caveats aside, I think there are some interesting conclusions to be drawn here:

    • It’s actually heartening to see that even the LW comments open by bringing up how optimistic this analysis is about the capabilities of LLM-based systems. “Our chatbot fucked up” has some significant fiscal downsides that need to be accounted for.

    • The initial comparison of direct API costs is interesting because the work of setting up and running this hypothetical replacement system is not trivial and cannot reasonably be outsourced to whoever has the lowest cost of labor. I would assume that the additional requirements of setting up and running your own foundation model similarly eat through most of the benefits of vertical integration, even before we get into how radically (and therefore disastrously) that would expand the scope of most companies. Most organizations that aren’t already tech companies couldn’t do it, and those that could will likely not see the advertised returns.

    • I’m not sure how much of the AI bubble we’re in is driven by an expectation of actual financial returns at this point. To what extent are we looking at an investor and managerial class that is excited to put “AI” somewhere on their reports because that’s the current Cutting Edge of Disruptive Digital Transformation into New Paradigms of Technology and Innovation, and whatever else all these business idiots think they’re supposed to do all day?

    I’m actually going to ignore the question of what happens to the displaced workers here because the idea that this job is something that earns a decent living wage is still just as dead if it’s replaced by AI or outsourced to whoever has the fewest worker protections. That said, I will pour one out for my frontline IT comrades in South Africa and beyond. Whenever this question is asked the answer is bad for us.









  • The whole thing has a vaguely ex-catholic vibe where sin is simultaneously the result of evil actions on earth and also something that’s inherently part of your soul as a human being because dumb woman ate an apple. As someone who was raised in the church, to a degree it never felt unreal and actually resonated pretty hard, but also, yeah, it doesn’t make a lot of sense logically.








  • See, what you’re describing with your sister is exactly the opposite of what happens with an LLM. Presumably your sister enjoys Big Brother and failed to adequately explain or justify her enjoyment of it to your own mind. But at the start there are two minds trying to meet. Azathoth preys on this assumption; there is no mind to communicate with, only the form of language and the patterns of the millions of minds that made its training data, twisted and melded together to be forced through a series of algebraic sieves. This fetid pink brain-slurry is what gets vomited into your browser when the model evaluates a prompt, not the product of a real mind that is communicating something, no matter how similar it may look when processed into text.

    This also matches up with the LLM-induced psychosis that we see, including these spiral/typhoon emoji cultists. Most of the trouble starts when people start trying to ask Azathoth about itself, and the deeper you peer into its not-soul the more inexorably trapped you become in the hall of broken funhouse mirrors.