- cross-posted to:
- canada@lemmy.ca
Here we go again with only caring about mass shootings… Not saying they’re not a problem, but at least in the US, cops have killed 33x as many people as every mass shooting since 1982 combined. Yet abolishing the police is a non-starter for 90% of libs, since they benefit from police violence.
Either way, LLMs are the problem here. There have been five different iterations of every LLM out there, and they’re still all sycophantic, soulless text-prediction algorithms no matter what. They never challenge the user. They all need to fucking go. People are going into psychosis, committing suicide, and murdering others, without a single lawmaker doing anything about the stupid companies that own these chatbots and refuse to take responsibility.
EDIT:
Local police had previously been aware of other worrisome behavior by the perpetrator
Because of course they did. Of fucking course. There has never been a mass shooting anywhere on the entire planet where the fucking PIGS didn’t know about the shooter; they always just think “must’ve been the wind” and make the surprised Pikachu face when it happens.
I hate to say it, but they were going to kill people no matter what. AI was just a helper. If they’d had to google what they needed, they would have done that instead. I can’t dismiss the danger of sycophantic AI, though.
That being said, fuck AI.
Some of them, maybe, but certainly not all. Having all of your questions answered in one place, in a conversational tone, by something that not only never asks questions but will often validate fantasies or imagined grievances is way worse. The friction of having to research things is often enough to dissuade people. Getting exact data is obviously difficult, but there are several suicides that likely would have been prevented without ChatGPT, so I imagine there is a similar effect on murders.



