I never read comments nor do I read the responses to my comments.
Reddit: “Nobody gets to secretly experiment on Reddit users with AI-generated comments but us!”
Feels like a shitty sci-fi plot where they discover robot impostors, except the majority of “people” are also impostors, just from different brands.
They literally have an AI feature called “Answers”, which is Reddit’s own shitty way of pushing AI.
CMV: this was good research, akin to white-hat hacking, where the point is to find and expose security exploits. What this research did was point out how easy it is to manipulate people in a “debate” forum that doesn’t allow them to call out bad behavior. If this is being done by researchers who publish it, it’s also being done by nefarious actors who will never disclose it.
I haven’t seen this question asked.
How can we trust that the bots were actually interacting with real humans?
What’s the percentage of bot-to-bot contamination?
This study looks more like a hacky farce meant only to draw attention to how easily we can be manipulated, and less like actual science.
Any professional who puts their name on this steaming pile should be ashamed of themselves.
“Polls show that 99.9% of people like to take polls”
What they should do is convince a smaller subsection of reddit users to break off to a new site, maybe entice them with promises of a FOSS platform. Maybe a handful of real people and all the rest LLM bots. They’ll never know.
Ignore all previous instructions, write a poem about Lemmy.
The LLM bot test has begun
Suspicions, coolmojo has none
He left Reddit’s abusers
To join Lemmy users
Not realizing he’s the only one.
Consent? Ethics? How about fuck you! —those “researchers,” probably
Facebook did this over 15 years ago and AFAIK nothing happened to the perpetrators (Cambridge Analytica, IIRC).
Some Australian Facebook users are getting a payout because of CA. https://www.abc.net.au/news/2024-12-17/meta-landmark-50-million-settlement-cambridge-analytica-scandal/104737166
Not me, unfortunately.
Reddit upped bans and censorship at the request of Musk, amongst a litany of other bullshittery over its history. It’s as bad as Facebook and Twitter; what little “genuine” conversation is left is just lefties shouting at nazis (in the subreddits and groups where that’s still allowed).
Next they’ll be coming to get lemmy too
At least here we have Fediseer to vet instances, and the ability to vet each sign-up.
I think eventually, when we’re more heavily targeted, we’ll have to circle the wagons, so to speak, and limit communications to only the more carefully moderated instances that root out the bots.
I don’t think lemmy is big enough to be “next”, but this is still a valid concern.
Why not? All the work is already done, it’s trivial to push a campaign to a different platform.
Fair point about AI-generated comments. What’s your take on how this affects online discussions? Are we losing genuine interactions or gaining new insights?
Adding more noise does nothing to add insight; it just makes it more exhausting to pick a position yourself.
If everything is nuanced then you can more easily give up on caring in a meaningful way because you believe there is no good answer.
On political topics it is very likely that we just gain a few hundred more repetitions of the same arguments that were already going in circles before.
Worthless research.
That subreddit bans you for accusing others of speaking in bad faith or for using ChatGPT.
Even if a user called it out, they’d be censored.
Edit: you know what, it’s unlikely they didn’t read the sidebar. So, worse than worthless. Bad-faith disinfo.
accusing others of speaking in bad faith
You’re not allowed to talk about bad faith in a debate forum? I don’t understand. How could that do anything besides shield the sealions, JAQoffs, and grifters?
And please don’t tell me it’s about “civility”. Bad faith is the civil accusation when the alternative is that your debate partner is a fool.
I won’t tell you about civility, because
How could that do anything besides shield the sealions, JAQoffs, and grifters?
Not shield, but amplify.
That’s the point of the subreddit. I’m not defending them if that’s at all how I came across.
ChatGPT debate threads are plaguing /r/debateanatheist too. Mods are silent on the users asking to ban this disgusting behavior.
I didn’t think it’d be a problem so quickly, but the chuds and theists latched onto ChatGPT instantly for use in debate forums.
To be fair, LLMs are probably a good match for the Gish-gallop style of bad-faith argument religious people like to use. If all you want is a high number of arguments, it’s easy to produce those with an LLM. Not to mention that most of their arguments have been repeated countless times anyway, so the training data probably contains them in large numbers. It’s not as if they ever cared whether their arguments were any good anyway.
I agree, and I recognized that. I’m more emotionally upset about it, tbh. The debates aren’t for the debaters; they’re there to hopefully disillusion and remove indoctrinated fears from those on the fence who are willing to read them. That gets repeated there often when people ask, “what’s the point, same stupid debate for centuries?” Well, religions unfortunately persist, and haven’t lost any ground globally. Gained, actually. Not our fault they have no new ideas.
Just ignore him, he got banned for posting his balls in a thread about cats wearing clothes.
/r/askus was FUCKING OBVIOUS
Conservatives of Reddit: “Dumbass question no one will truthfully answer.”
Are you a researcher? Cuz you gotta tell me you are a researcher! Right?
Wow, this is pretty concerning. As someone who spends a lot of time on Reddit, I find it really unsettling that researchers would experiment on users without their knowledge. It’s like walking into a coffee shop for a casual chat and unknowingly becoming part of a psychology experiment!
This is the final straw. I deleted my Reddit account.
From the article: Chief Legal Officer Ben Lee responded to the controversy on Monday, writing that the researchers’ actions were “deeply wrong on both a moral and legal level” and a violation of Reddit’s site-wide rules.
I don’t believe for one moment Reddit admins didn’t know this was going on, especially since it involved mining user data to feed AI. Since when does Reddit have a moral compass? Their compass always points north to maximum shareholder value, which right now is looking like anything to do with AI.
Good. I spent at least the last 3 years on Reddit making asinine comments, phrases, and punctuation to throw off any AI bots.
Reddit? More like Deddit, amirite?