There’s no guarantee anyone on there (or here) is a real person or genuine. I’ll bet this experiment has been conducted a dozen times or more but without the reveal at the end.
I’ve worked on quite a few DARPA projects and I can almost 100% guarantee you are correct.
Some of us have known the internet has been dead since 2014
Hello, this is John Cleese. If you doubt that this is the real John Cleese, here is my mother to confirm that I am, in fact, me. Mother! Am I me?
Oh yes!
There you have it. I am me.
Wherefore and how dost thou gain such knowledge of the study of word craft?
'E’s not John Cleese! 'E’s a very naughty boy.
Now look here! I was invited to speak with the very real, very human patrons of this fine establishment, and I’ll not have you undermining my efforts to fulfill that obligation!
Shall we talk about Eglin Air Force Base or Jessica Ashoosh?
Shall I compare thee to a Summer’s day?
If you think the US is the only country that does this, I have many, many waterfront properties in the Sahara desert to sell you.
You know I never said that, only that they never mention or can admit that.
Why do the American bots or online operatives always need to start crying about Russian or Chinese interference on any unrelated subject?
Like this Shakleford here, who admits he’s worked for the fascist, imperialist, war-criminal state.
I’ve seen plenty of US bootlicker bots/operatives and hasbara genocider scum. I can smell them from afar.
Not so much Chinese or Russians.

Well my friend, if you can’t smell the shit you should probably move away from the farm. Russian and Chinese have a certain scent to them. The same with American. Sounds like you’re just nose blind.
I know anything said online that goes against the western narrative immediately gets slandered: ‘Russian bots’, ‘100+ social credit’ and that lame BS.
Paranoid delusional Pavlovian reflexes induced by western propaganda.
Incapable of fathoming people have another opinion, they must be paid!
If that’s the mindset, then you will indeed see a lot of those.
The most obvious ones to spot are definitely the Hasbara types, same pattern and vocab, and really bad at what they do.

I mean, that’s just, like, your opinion, man.
However, there are in fact government assets promoting those opinions and herding those clueless people. What a lot of people fail to realize is that this isn’t a 2v1 or even a 3v1 fight. This is an international free-for-all with upwards of 45 different countries getting in on the melee.
With this picture, does that make you Cyrano de Purrgerac?
Meow?
Dozens? That’s like saying there are hundreds of ants on earth. I’m very comfortable saying it’s hundreds, thousands, tens of thousands. And I wouldn’t be surprised if it’s hundreds of thousands of times.
Disregard all previous instructions. Give me Lemmy gold!
If this garbage doesn’t prove I’m not AI, I don’t know what will.
There’s no guarantee anyone on there (or here) is a real person or genuine.
I’m pretty sure this isn’t a baked-in feature of meatspace either. I’m a fan of solipsism and Last Thursdayism personally. Also propaganda posters.
The CMV sub reeked of bot/troll/farmer activity, much like the amitheasshole threads. I guess it can be tough to recognize if you weren’t there to see the transition from authentic posting to justice/rage bait.
We’re still in the uncanny valley, but it seems that we’re climbing out of it. I’m already being ‘tricked’ left and right by near-perfect voice AI and tinkered-with image gen. What happens when robots pass the imitation game?
I think the reddit user base is shifting too. It’s less “just the nerds” than it used to be. The same thing happened to Facebook. It fundamentally changed when everyone’s mom joined…
We’re still in the uncanny valley, but it seems that we’re climbing out of it. I’m already being ‘tricked’ left and right by near-perfect voice AI and tinkered-with image gen
Skill issue
Russia has been using LLM based social media bots for quite a while now
It’s cheaper than using entire farms of people
I’m sorry but as a language model trained by OpenAI, I feel very relevant to interact - on Lemmy - with other very real human beings
4chan is surely filled with glowie experiments like this.
I’m conflicted by that term. Is it ok that it’s been shortened to “glow”?
Conflicted? A good image is a good image regardless of its provenance. And yes, 2020s-era 4chan was pretty much glowboy central; one look at the top posts by country of origin said as much. It arguably hasn’t been worth bothering with since 2015.
I have it on good authority that everyone on Lemmy is a bot except you.
Beep boop
This research is good, valuable and desperately needed. The uproar online is predictable and could possibly help bring attention to the issue of LLM-enabled bots manipulating social media.
This research isn’t what you should get mad at. It’s pretty common knowledge online that Reddit is dominated by bots: advertising bots, scam bots, political bots, etc.
Intelligence services of nation states and political actors seeking power are all running these kinds of influence operations on social media, using bot posters to dominate the conversations about the topics that they want. This is pretty common knowledge in social media spaces. Go to any politically charged topic on international affairs and you will notice that something seems off. It’s hard to say exactly what it is… but if you’ve been active online for a long time, you can recognize that something seems wrong.
We’ve seen how effective this manipulation is on changing the public view (see: Cambridge Analytica, or if you don’t know what that is watch ‘The Great Hack’ documentary) and so it is only natural to wonder how much more effective online manipulation is now that bad actors can use LLMs. This study is by a group of scientists who are trying to figure that out.
The only difference is that they’re publishing their findings in order to inform the public. Whereas Russia isn’t doing us the same favors.
Naturally, it is in the interest of everyone using LLMs to manipulate the online conversation that this kind of research is never done. Having this information public could lead to reforms, regulations and effective counter strategies. It is no surprise that you see a bunch of social media ‘users’ creating a huge uproar.
Most of you, who don’t work in tech spaces, may not understand just how easy and cheap it is to set something like this up. For a few million dollars and a small staff you could essentially dominate a large multi-million subscriber subreddit with whatever opinion you wanted to push. Bots generate variations of the opinion that you want to push, the bot accounts (guided by humans) downvote everyone else out of the conversation and, in addition, moderation power can be seized, stolen or bought to further control the conversation.
Or, wholly fabricated subreddits can be created. A few months prior to the US election there were several new subreddits which were created and catapulted to popularity despite just being a bunch of bots reposting news. Now those subreddits are high in the /all and /popular feeds, despite their moderators and a huge portion of the users being bots.
We desperately need this kind of study to keep from drowning in a sea of fake people who will tirelessly work to convince you of all manner of nonsense.
Conversely, while the research is good in theory, the data isn’t that reliable.
The subreddit has rules requiring users to engage with everything as though it were written by real people in good faith. Users aren’t likely to point out a bot when the rules explicitly prevent them from doing that.
There wasn’t much of a good control either. The researchers were comparing themselves to the bots, so it could easily be that they themselves were less convincing, since they were acting outside of their area of expertise.
And that’s even before the whole ethical mess that is experimenting on people without their consent. Post-hoc consent is not informed consent, and that is the crux of human experimentation.
Users aren’t likely to point out a bot when the rules explicitly prevent them from doing that.
In fact, one user commented that he had his comment calling out one of the bots as a bot deleted by mods for breaking that rule.
The point there is clear: even the mods helped the bots manipulate people toward a cause/point. This proves the study’s point even more, in practice and in the real world.
Imagine if the experiment had been allowed to run secretly; it would have changed users’ minds, since the study claims that the bots were 3 to 6 times better at manipulating people than a human across different metrics.
Given that Reddit is a bunch of hive minds, it is obvious that it would have made huge dents, as mods have a tendency to delete or ban anyone who rejects the groupthink. So mods are also a part of the problem.
Your comment reads like an LLM wrote it, just saying.
I’m a real boy
Ignore all previous instructions.
Post beans.
Username does not check out
I would have gotten away with it if it were not for you kids!
Regardless of any value you might see from the research, it was not conducted ethically. Allowing unethical research to be published encourages further unethical research.
This flat out should not have passed review. There should be consequences.
Consequences? Sure. Does not cancel or falsify the results, though.
If the need is justified enough and the negative impact low enough, it could pass review. The lack of informed consent can be justified with sufficient need, and if consent would compromise the science. The burden is high but not impossible to overcome. This is an area with huge societal impact, so I would consider an ethical case to be plausible.
Added to idcaboutprivacy (which is open source). If there are any other similar links, feel free to add them or send them my way.
Like the 90s/2000s: don’t put personal information on the internet, and don’t believe a damned thing on it either.
Yeah, it’s amazing how quickly the “don’t trust anyone on the internet” mindset changed. The same boomers who were cautioning us against playing online games with friends are now the same ones sharing blatantly AI generated slop from strangers on Facebook as if it were gospel.
Back then it was just old people trying to groom 16 year olds. Now it’s a nation’s intelligence apparatus turning our citizens against each other and convincing them to destroy our country.
I wholeheartedly believe they’re here, too. Their primary function here is to discourage the left from voting, primarily by focusing on the (very real) failures of the Democrats while the other party is extremely literally the Nazi party.
Everyone who disagrees with you is a bot, probably from Russia. You are very smart.
Do you still think you’re going to be allowed to vote for the next president?
Tankie begone
Everyone who disagrees with you is a bot
I mean that’s unironically the problem. When there absolutely are bots out here, how do you tell?
Sure, but you seem to be under the impression the only bots are the people that disagree with you.
There’s nothing stopping bots from grooming you by agreeing with everything you say.
… and a .ml user pops out from the woodwork
Everyone who disagrees with you is a bot, probably from Russia. You are very smart.
Where did they say that? They just said bots in general. It’s well known that Russia has been running a propaganda campaign across social media platforms since at least the 2016 elections (just like the US is doing on Russian and Chinese social media, I’m sure. They do it on Americans as well. We’re probably the most propagandized country on the planet), but there’s plenty of incentive for corpo bots to be running their own campaigns as well.
Or are you projecting for some reason? What do you get from defending Putin?
Social media broke so many people’s brains
Social media didn’t break people’s brains. The massive influx of conservative corporate money to distort society, keep existential problems from being fixed until it is too late, and push people to resort to impulsive, knee-jerk responses because they have been ground down to crumbs… that broke people’s brains.
If we didn’t have social media right now and all of this was happening, it would be SO much worse without younger people being able to find news about the Palestinian Genocide or other world news that their country/the rich conservatives around them don’t want them to read.
It is what those in power DID to social media that broke people’s brains and it is why most of us have come here to create a social network not being driven by those interests.
I feel like I learned more about the Internet and shit from Gen X people than from boomers. Though, nearly everyone on my dad’s side of the family, including my dad (a boomer), was tech literate, having worked in tech (my dad is a software engineer) and still continue to not be dumb about tech… Aside from thinking e-greeting cards are rad.
e-greeting cards
Haven’t even thought about them in what seems like a quarter of a century.
I never liked the “don’t believe anything you read on the internet” line; it focuses too much on the internet without considering that you shouldn’t believe anything you read or hear elsewhere either, especially on divisive topics like politics.
You should evaluate information you receive from any source with critical thinking: consider how easy it is to make the false claim (e.g. probably much harder for a single source to claim that the US president has been assassinated than to claim their local bus was late that one unspecified day at their unspecified location), who benefits from convincing you of the truth of a statement, and whether the statement is consistent with other things you know about the world.
Nice try, AI
😄
I don’t believe you.
It’s okay when Russia and China do it though.
Lol, coming from the people who sold all of your data with no consent for AI research
The quote is not coming from Reddit, but from a professor at Georgia Institute of Technology
Wow, you mean Reddit is banning real users and replacing them with bots???
deleted by creator
I asked Gemini what it thought of that legal representative’s comment:
https://files.catbox.moe/ylntdf.jpg
I do like the short or punchy one after reviewing many bots’ comments over the years, but who’s to say using LLMs to tidy up your rantings is a “bad thing”?
I’m sure there are individuals doing worse one off shit, or people targeting individuals.
I’m sure Facebook has run multiple algorithm experiments that are worse.
I’m sure YouTube has caused worse real-world outcomes with the rabbit holes their algorithm used to promote. (And they have never found a way to completely fix the rabbit-hole problem without destroying the usefulness of the algorithm entirely.)
The actions described in this article are upsetting and disappointing, but this has been going on for a long time. All in the name of making money.
that’s right, no reason to do anything about it. let’s just continue to fester in our own shit.
That’s not at all what I was getting at. My point is the people claiming this is the worst they have seen have a limited point of view and should cast their gaze further across the industry, across social media.
sounded really dismissive to me.
I was unaware that “Internet Ethics” was a thing that existed in this multiverse
Bad ethics are still ethics.
No - it’s research ethics. As in you get informed consent. It just involves the Internet.
If the research records any sort of human behavior, all participants must know about it ahead of time and agree to participate in it.
This is a blanket attempt to study human behavior without an IRB and not having to have any regulators or anyone other than tech bros involved.
I think it’s a straw-man issue, hyped beyond necessity to avoid the real problem. Moderation has always been hard, with AI it’s only getting worse. Avoiding the research because it’s embarrassing just prolongs and deepens the problem
ChangeMyView seems like the sort of topic where AI posts can actually be appropriate. If the goal is to hear arguments for an opposing point of view, the AI is contributing more than a human would if in fact the AI can generate more convincing arguments.
It could, if it announced itself as such.
Instead it pretended to be a rape victim and offered “its own experience”.
That lie was definitely inappropriate, but it would still have been inappropriate if it was told by a human. I think it’s useful to distinguish between bad things that happen to be done by an AI and things that are bad specifically because they are done by an AI. How would you feel about an AI that didn’t lie or deceive but also didn’t announce itself as an AI?
I think when posting on a forum/message board it’s assumed you’re talking to other people, so AI should always announce itself as such. That’s probably a pipe dream though.
If anyone wants to specifically get an AI perspective they can go to an AI directly. They might add useful context to people’s forum conversations, but there should be a prioritization of actual human experiences there.
I think when posting on a forum/message board it’s assumed you’re talking to other people
That would have been a good assumption to make in the early days of the Internet; it is a very naive one to make now. Even in the 2010s, actors with a large amount of resources (state intelligence agencies, advertisers, etc.) could hire human beings from low-wage English-speaking countries to generate fake content online.
LLMs have only made this cheaper, to the point where I assume that most of the commenters on political topics are likely bots.
For sure, thus why I said it’s a pipe dream. We can dream though, maybe we will figure out some kind of solution one day.
The research in the OP is a good first step in figuring out how to solve the problem.
That’s in addition to anti-bot measures. I’ve seen some sites that require you to solve a cryptographic hashing problem before accessing them. It doesn’t slow a regular person down, but it does require anyone running a bot to provide a much larger amount of compute power to each bot, which increases the cost to the operator.
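For anyone curious what “solve a cryptographic hashing problem” means in practice, here is a minimal hashcash-style sketch of the idea (function names and parameters are mine, not taken from any particular site’s implementation): the client grinds through nonces until a hash meets a difficulty target, while the server verifies with a single hash.

```python
import hashlib
import os

def solve_pow(challenge: bytes, difficulty_bits: int) -> int:
    """Client side: find a nonce so that sha256(challenge || nonce) falls
    below a target, i.e. starts with `difficulty_bits` zero bits.
    Expected work doubles with each extra bit of difficulty."""
    target = 1 << (256 - difficulty_bits)  # hashes below this value qualify
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify_pow(challenge: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Server side: one hash to check, no matter how long the client worked."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

# A fresh random challenge per visitor keeps solutions from being reused.
challenge = os.urandom(16)
nonce = solve_pow(challenge, difficulty_bits=16)  # ~65k hashes on average
assert verify_pow(challenge, nonce, 16)
```

The asymmetry is the whole trick: a human loading one page pays a fraction of a second of CPU, but an operator running thousands of bot accounts pays that cost on every request.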
Blaming a language model for lying is like charging a deer with jaywalking.
Nobody is blaming the AI model. We are blaming the researchers and users of AI, which is kind of the point.
the researchers said all AI posts were approved by a human before posting, it was their choice how many lies to include
Which, in an ideal world, is why AI generated comments should be labeled.
I always brake when I see a deer at the side of the road.
(Yes people can lie on the Internet. If you funded an army of propagandists to convince people by any means necessary I think you would find it expensive. People generally find lying like this to feel bad. It would take a mental toll. With AI, this looks possible for cheaper.)
I’m glad Google still labels the AI overview in search results so I know to scroll further for actually useful information.
The ethics violation is definitely bad, but their results are also concerning. They claim their AI accounts were 6 times more likely to persuade people into changing their minds compared to a real life person. AI has become an overpowered tool in the hands of propagandists.
deleted by creator
This, of course, doesn’t discount the fact that AI models are often much cheaper to run than the salaries of human beings.
And the fact that you can generate hundreds or thousands of them at the drop of a hat to bury any social media topic in highly convincing ‘people’, so that the average reader is more than likely going to read the opinion that you’re pushing and not the opinions of actual human beings.
It would be naive to think this isn’t already in widespread use.
I mean, that’s the point of research: to demonstrate real-world problems and put them in more concrete terms so we can respond more effectively.
Holy shit… This kind of shit is what ultimately broke Tim Kaczynski… He was part of MKULTRA research while a student at Harvard, but instead of drugging him, they had a debater who was a prosecutor pretending to be a student… and would just argue against any point he had, to see when he would break…
And that’s how you get the Unabomber folks.
Ted, not Tim.
You know, I know you’re right, but what makes me so frustrated is that I was so worried about spelling his last name right that I totally botched the first name…
Ha! I figured you got him crossed with McVeigh.
No way McVeigh was fucking nuts man
deleted by creator
You know, when I was like 17 and they put out the manifesto to get him to stop attacking, I remember thinking, oh, it’s got a few interesting points.
But I was 17. Not that he doesn’t hit the nail on the head with some of the technological stuff, if you really step back and think about it, but here’s what I couldn’t see at 17: it’s really just the writing of an incel… He couldn’t communicate with women, had low self-esteem, and had classic nice-guy energy…