MeetMeAtTheMovies [they/them]

  • 8 Posts
  • 45 Comments
Joined 2 months ago
Cake day: January 18th, 2026








  • There could be something that serves workers’ interests which we call “AI”, but I would argue that many, if not all, of the implementation details would be different.

    A copy-paste of part of a previous comment I made:

    Look at how modern LLMs work. They’re trained in large data centers owned by private companies, using giant corpora of data that were largely obtained without the permission or knowledge of the people who created them. Then, to use them, the weights are loaded into an amount of memory that’s out of reach for most consumer desktops, and users must call into the LLM through an API. A conversation’s working memory doesn’t persist between messages or tool calls, so the entire history must be loaded into the context window on every call. In other words, all the “learning” for these models happens up front, in training; beyond what’s in its context window, the model doesn’t actually adjust to learn new things about the world. There are workarounds for this, of course, to simulate the experience of interacting with something that can learn, but they have their limitations and aren’t reliable yet. I could go on. Running probabilistic processes on deterministic hardware is an area where we may see more work soon.

    Every single step of that description had alternatives that would be more likely to be chosen outside of a capitalist system. They could be more eco-friendly. They could be more efficient. They could be more powerful and learn from your interactions in a way that persists. A lot of these changes would delay the exposure of LLMs to the general public and see them spending longer in academia, but that would be okay, because we wouldn’t have the profit motive at the center of it all, inflating a giant bubble that’s poised to pop and flatten the economy. Bottom line: this stuff was pushed out and hyped up well before it was ready, and well before it could be scaled up ethically and with the working class in mind. None of this was inevitable.





  • To coordinate that many people, you would need either:

    • a political party that would coordinate global actions via some sort of hierarchy
    • a disaster of some kind that affected enough of the population that the entire world could be convinced to act all at once, or at least in quick succession, but that still didn’t take out all of our communication infrastructure, so decentralized communication would remain possible.

    We saw how Covid worked out, so I think the likelihood of everyone acting not only at once but in unison because of a disaster is quite small without a party to coordinate. There need to be constraints on behavior, with levers of power to pull to enforce those constraints, in order to get literally billions of people to do the same thing at the same time. I don’t see a way around it.




  • if AI can exist, it will inevitably exist

    Unrelated to the question of ableism, this is precisely the logic pushed by tech companies in general: that their decisions were inevitable and therefore there is no point in questioning them.

    Look at how modern LLMs work. They’re trained in large data centers owned by private companies, using giant corpora of data that were largely obtained without the permission or knowledge of the people who created them. Then, to use them, the weights are loaded into an amount of memory that’s out of reach for most consumer desktops, and users must call into the LLM through an API. A conversation’s working memory doesn’t persist between messages or tool calls, so the entire history must be loaded into the context window on every call. In other words, all the “learning” for these models happens up front, in training; beyond what’s in its context window, the model doesn’t actually adjust to learn new things about the world. There are workarounds for this, of course, to simulate the experience of interacting with something that can learn, but they have their limitations and aren’t reliable yet. I could go on. Running probabilistic processes on deterministic hardware is an area where we may see more work soon.

    Every single step of that description had alternatives that would be more likely to be chosen outside of a capitalist system. They could be more eco-friendly. They could be more efficient. They could be more powerful and learn from your interactions in a way that persists. A lot of these changes would delay the exposure of LLMs to the general public and see them spending longer in academia, but that would be okay, because we wouldn’t have the profit motive at the center of it all, inflating a giant bubble that’s poised to pop and flatten the economy. Bottom line: this stuff was pushed out and hyped up well before it was ready, and well before it could be scaled up ethically and with the working class in mind. None of this was inevitable.






  • I went back and read your previous post and I am in awe of how strong you’ve been. How much you’re being targeted currently is a testament to how successful you’ve been.

    How much training have you had on organizing? I don’t say this to be critical, just as an observation: it seems from a cursory reading that you either did not anticipate how far the administration would go in response to your actions, or you did not put your organizer(s?) through an inoculation process. Please correct me if I’m wrong about that. I know there’s only so much detail you can include in a single post, and it’s tempting to think “if only I’d done more” when you’re already putting in so much work.

    I don’t know much about student organizing and its standard practices; I’m more familiar with labor organizing. But I do see you working incredibly hard to breathe life into something in which others are more just participants, and I recognize that dynamic. It’s very common while organizations are just starting to build momentum, and I’ve been there. Labor organizing has the advantage of being well documented, with a well-established playbook on both sides. Because of that, inoculation is less a process of looking forward and more a process of looking back. Maybe that’s not so true with student organizing?

    Regardless, I’m sorry you’re facing such a traumatic and unfair circumstance. I hope you find a way to make your organizing sustainable and healthy for you as it often chews straight through those who take it on.