The majority of U.S. adults don’t believe the benefits of artificial intelligence outweigh the risks, according to a new Mitre-Harris Poll released Tuesday.
The majority of U.S. adults don’t understand the technology well enough to make an informed decision on the matter.
To be fair, even if you understand the tech it’s kinda hard to see how it would benefit the average worker as opposed to the CEOs and shareholders who will use it as a cost-cutting tool to make more money. Most workers will be laid off because of AI, so obviously it’s of no benefit to them.
Just spitballing here, and this may be a bit of pie-in-the-sky thinking, but ultimately I think this is what might push the US into socialized healthcare and/or UBI. Increasing automation won’t reduce the population, and as more workers are put out of work by automation, they’ll have more time and motivation to do things like protest.
The US economy literally depends on 3-4% of the workforce being so desperate for work that they’ll take any job, regardless of how awful the pay is. They said as much during the recent labor shortage, citing how this is used to keep wages down and framing it as a “bad thing” that almost 100% of the workforce was employed, because it meant people could pick and choose rather than just take the first offer they got, causing wages to increase.
Poverty and homelessness are a feature, not a bug.
Yes, but for capitalism it’s a delicate balance: too many job openings give labor more power, but too few give people reason to challenge the status quo. That 3-4% may be enough for the capitalists, but what happens when 15-20% of your workforce is unemployed because of automation? That’s when civil unrest happens.
Remember that the most progressive presidential administration in US history, FDR’s, came right after the Gilded Age and the Roaring ’20s crashed the economy. When 25% of Americans were out of work during the Great Depression, social programs suddenly looked much more preferable than food riots. And the wealth disparity now is, relatively, even greater than it was back then.
Very true, but it’s precisely that wealth disparity that concerns me. I’ve seen the current US wealth disparity described as being on par with the disparity in France just before the French Revolution, when the cost of a loaf of bread had soared to more than the average worker made in a day. I worry that more than half a century of anti-union propaganda and the “get what I need and screw everybody else” attitude has beaten down the general public enough that there simply won’t be enough of a unified effort to enact meaningful change. I worry about how bad things will have to get before it’s too much, and how many families will never recover.
But these are also very different times compared to the 1920s, in that we’ve been riding the coattails of the post-WWII economic boom for almost 70 years, and as that continues to slow down we might see some actual pushback. We already have, with every generation being more progressive than the last.
But I still can’t help but worry.
Seems more likely that they’ll have more time not in the sense of having easier jobs but by being laid off and having to fight for their livelihood. In the corporate-driven society we live in today, it’s unlikely that the benefits of new advancements will be spontaneously shared.
Seems more likely that they’ll have more time not in the sense of having easier jobs but by being laid off and having to fight for their livelihood.
This is exactly what I meant.
People who have to fight for subsistence won’t easily revolt, because they’re too busy trying to survive.
People who are unemployed have nothing to lose by revolting. And the more automation there is, the more unemployed people there will be.
So we see it the same way, but I don’t feel very optimistic about it, because it’s going to get much worse before it might get better. All the suffering and struggle that it will take to reform society will be ugly.
Yes, I think it will get worse before it gets better. As long as there is a sociopathic desire to hoard wealth, and no fucks given to our fellow humans, this is how it will be. Capitalism causes these issues, and so capitalism can’t fix them.
Efficiency and productivity aren’t bad things. Nobody likes doing bullshit work.
Unemployment may become a huge issue, but IMO the solution isn’t busywork. Or at least governments should come up with more useful jobs programs.
Of course, there’s nothing inherently wrong with using AI to get rid of bullshit work. The issue is who will benefit from using AI, and it’s unlikely to be the people who currently do the bullshit work.
But that’s literally everything in a capitalist economy. Value accrues to capital. It has nothing to do with AI.
You see, the problem is that AI, in the case of animation and art, isn’t removing menial labor; it’s removing hobbies that people get paid for taking part in.
Who do tractors benefit?
If things become cheaper because of AI, then it benefits everyone.
You could cut housing prices to a tenth of what they currently are and it wouldn’t matter to homeless people who don’t have a job. Things being cheaper doesn’t matter to people who can’t make a living.
Yup.
Cheap production of consumer goods almost always comes at the expense of working conditions and actual happiness.
If you look at the poll, the concerns raised are all valid. AI will most likely be used to automate cyberattacks, identity theft, and to spread misinformation. I think the benefits of the technology outweigh the risks, but these issues are very real possibilities.
Informed or not, they aren’t wrong. If there is an iota of a chance that something can be misused, it will be. Human nature. AI will be used against everyone. Its potential for good is equally as strong as its potential for evil.
But imagine this. You get laid off. At that moment, bots are contacting your bank, LinkedIn, and most of the financial lenders about the incident. Your credit is flagged as your income has dropped significantly. Your bank seizes the opportunity and jacks up your mortgage rates. Lenders are also making use of the opportunity to seize back their merchandise as you’ll likely not be able to make payments and they know it.
Just one likely incident when big brother knows all and can connect the dots using raw compute power.
Having every little secret parcelled over the internet because we live in the digital age is not something humanity needs.
I’m actually stunned that even here, among the tech nerds, you all still don’t realize how much digital espionage is being done on the daily. AI will only serve to help those in power grow bigger.
But imagine this. You get laid off. At that moment, bots are contacting your bank, LinkedIn, and most of the financial lenders about the incident. Your credit is flagged as your income has dropped significantly. Your bank seizes the opportunity and jacks up your mortgage rates. Lenders are also making use of the opportunity to seize back their merchandise as you’ll likely not be able to make payments and they know it.
None of this requires “AI.” At most AI is a tool to make this more efficient. But then you’re arguing about a tool and not the problem behavior of people.
AI is not bots; most of that would be easier to do with traditional code than with a deep learning model. But the reality is there is no incentive for these entities to cooperate with each other.
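To illustrate why none of the layoff scenario above actually needs a deep learning model: an income-drop credit flag is a plain threshold rule, a few lines of ordinary code. This is a hypothetical sketch; the function name and the 50% threshold are made up for illustration.

```python
# A "your income dropped, flag your credit" check needs no AI at all:
# it's a simple threshold rule over reported income.
# The 50% cutoff here is an invented example, not any lender's real policy.

def flag_credit_risk(prev_income: float, new_income: float) -> bool:
    """Flag an account when reported income drops by half or more."""
    if prev_income <= 0:
        return False  # no baseline income to compare against
    drop = (prev_income - new_income) / prev_income
    return drop >= 0.5
```

A laid-off borrower (income falling to zero) trips the rule; a modest pay cut does not. The hard part of the scenario is the data sharing between entities, not the “intelligence.”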
But our elected officials like McConnell, Feinstein, Sanders, Romney, Manchin, Blumenthal, and Markey have us covered.
They are up to speed on the times and know exactly what our generation’s challenges are. I trust them to put forward meaningful legislation, grounded in a nuanced understanding, that will protect the interests of the American people while positioning the US as a world leader on these matters.
Haha
To be fair, this includes those who should regulate tech companies. I’d say that people should be concerned.
Seeing technology consistently putting people out of work is enough for people to see it as a problem. You shouldn’t need to be an expert in it to have an opinion when it’s being used to threaten your source of income.

Teachers have to do more work and put in more time now because ChatGPT has affected education at every level. Educators already get paid dick to work insane hours of skilled labor, and students have enough on their plates without having to spend extra time in the classroom. It’s especially unfair when every student has to pay for the actions of the few dishonest ones. Pretty ironic how it’s set us back technologically, to the point where we can’t use the tech that’s been created and implemented to make our lives easier. We’re back to sitting at our desks with a pencil and paper for an extra hour a week.

There are already AI “books” being sold to unknowing customers on Amazon. How long will it really be until researchers are competing with it? Students won’t be able to recognize the difference between real and fake academic articles. They’ll spread incorrect information after stealing pieces of real studies without the authors’ permission and mashing them together into some bullshit that sounds legitimate. You know there will be AP articles (written by AI) with headlines like “new study says xyz!” and people will just believe that shit.
When the government can do its job and create failsafes like UBI to keep people’s lives and livelihoods from being ruined by AI and other tech, then people might be more open to it. But the lemmy narrative that overtakes every single post about AI, which says the average person is too dumb to be allowed to have an opinion, is not only, well, fucking dumb, but also tone-deaf and willfully ignorant.
Especially when this discussion can easily go the other way, by pointing out that tech bros are too dumb to understand the socioeconomic repercussions of AI.
The majority doesn’t understand anything.
So what?
deleted by creator
I mean, NFTs are a ridiculous comparison, because the people who understood that tech were exactly the ones who said it was ridiculous.
deleted by creator
You can make an observation that something is dangerous without intimate knowledge of its internal mechanisms.
Sure you can, but that doesn’t change the fact that you’re ignorant of whether it’s dangerous or not.
And these people are making ‘observations’ without knowledge of even the external mechanisms.
I’m sure I can name many examples of things I observed to be dangerous, and the observation being correct. But sure, claim unilateral ignorance and dismiss anyone who doesn’t agree with your view.
Most adult Americans don’t know the difference between a PC tower and a monitor, or a modem and a PC, or an ethernet cable and a USB cable.
Or a browser and the internet. It’s a very low bar.
Don’t forget also hard drive vs PC tower
And cloud concepts.
This is so snobby of you.
No, I work in IT and have for over 20 years. It’s merely an observation.
It’s an outdated observation. Everyone today has a basic knowledge of computers
I’d dispute that. The iPadification of tech has people using computers more, but with less actual knowledge of computers.
I’d like some data on that, because the tech world seems to have come a long way in just the last 20 years. Things wouldn’t progress this fast if people didn’t know what they were doing. Saying “kids these days” is such a cliché.
I work with AI and don’t necessarily see it as “dangerous”. CEOs and other greed-chasing assholes are the real danger. They’re going to do everything they can to keep filling human roles with AI so that they can maximize profits. That’s the real danger. That and AI writing eventually permeating and enshittifying everything.
A hammer isn’t dangerous on its own, but becomes a weapon in the hands of a psychopath.
deleted by creator
Humans should be replaced wherever they can be and the value that is generated should go back to everyone.
So, because of greed and endless profit seeking, expect all corporations to replace everything that can be replaced - with AI…?
I mean, they’re already doing it. Not in every role because not every one of them can be filled by AI, but it’s happening.
At first I was all on board with artificial intelligence, in spite of being told how dangerous it was. Now I feel the technology has no practical application aside from providing a way to get a lot of sloppy, half-assed, and heavily plagiarized work done, because anything is better than paying people an honest wage for honest work.
AI is such a huge term. Google Lens is great; when I’m travelling I can take a picture of text and it will automatically get translated. Both the text recognition and the translation are aided by machine learning models.
Generative text and image models have proven to have more adverse effects on society.
I think we’re at a point where we should start normalizing using more specific terminology. It’s like saying I hate machines, when you mean you hate cars, or refrigerators or air conditioners. It’s too broad of a term to be used most of the time.
Yeah, I think LLMs and AI art have overdominated the discourse to the degree that some people think they’re the only form of AI that exists, ignoring things like text translation, the autocompletion of your phone keyboard, Photoshop intelligent eraser, etc.
The value of some forms of AI is debatable (especially in their current form). But there are other types of AI that most people consider highly useful, and I think we just forget about them because the controversial types are more memorable.
AI is a tool, its value is dependent on whatever the application is. Transformer architectures can be used for generating text or music, but they were also originally developed for text translation which people have fewer qualms with.
ignoring things like text translation, the autocompletion of your phone keyboard, Photoshop intelligent eraser, etc.
AFAIK two of those are generative-AI-based, or as you said, LLMs and AI art.
Be the trendsetter. What slang would you use (that I’ll use)?
Why not the type of AI? In that case, LLM.
It’s not a matter of slang; it’s that the term refers to too broad a thing. You don’t need to go as deep as the type of model; something like “AI image generation” or “generative language models” is what you would refer to. We’ll hopefully start converging on shorthand from there for specific things.
I’d like people to make a distinction between AI and machine learning, and between machine learning and neural networks (the word “deep” is redundant nowadays). And then have some sense of the different popular types of neural nets: GANs, CNNs, transformers, diffusion models. It might be nice if people knew what supervised, unsupervised, and reinforcement learning are. Lastly, people should have some sense of the difference between AI and AGI, and of what is not yet possible.
ChatGPT needs to be vastly improved or thrown out to dry.
I’m kind of surprised people are more concerned with the output quality of ChatGPT, and not where the training set is sourced from, like for image models.
Language models are still in a stage where they aren’t really a product by themselves, they really need to be cajoled into becoming a good product, like looking up context via a traditional search and feeding it to the model, or guiding it towards solving problems. That’s more of a traditional software problem that leverages large language models.
Even the amount of engineering to go from text prediction model trained on a bunch of articles to something that infers you should put an answer after a question is a lot of work.
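The “look up context via a traditional search and feed it to the model” step can be sketched roughly like this. Everything here is a hypothetical stand-in, not any real product’s API: `search()` is naive keyword retrieval, and `generate()` just echoes its prompt where a real system would call a language model.

```python
# Rough shape of "retrieve, then generate": fetch relevant documents
# with plain keyword search, then hand them to the model as context.

def search(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Naive retrieval: rank documents by word overlap with the query."""
    terms = set(query.lower().split())
    return sorted(corpus,
                  key=lambda doc: len(terms & set(doc.lower().split())),
                  reverse=True)[:k]

def generate(prompt: str) -> str:
    """Stand-in for an LLM call; a real system would query a model here."""
    return "answer grounded in: " + prompt.splitlines()[0]

def answer(query: str, corpus: list[str]) -> str:
    """Retrieve context, build a prompt, and 'generate' an answer."""
    context = search(query, corpus)
    prompt = "\n".join(context) + "\nQuestion: " + query
    return generate(prompt)
```

The point is that the retrieval and prompt-assembly glue is traditional software engineering wrapped around the model, which is exactly where most of the product work goes.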
This is basically how I feel about it. Capital is ruining the value this tech could have. But I don’t think it’s dangerous and I think the open source community will do awesome stuff with it, quietly, over time.
Edit: where AI can be used to scan faces or identify where people are, yeah that’s a unique new danger that this tech can bring.
I’ve been watching a lot of GeoGuessr lately, and the number of people who can pinpoint a location given just a picture is staggering. Even for remote locations.
The problem is that there is no real discussion about what to do with AI.
It’s being allowed to be developed without much of any restrictions and that’s what’s dangerous about it.
Like how some places are starting to use AI to profile the public Minority Report style.
“Can’t we just make other humans from lower socioeconomic classes toil their whole lives, instead?”
The real risk of AI/automation is if we fail to adapt our society to it. It could free us from toil forever but we need to make sure the benefits of an automated society are spread somewhat evenly and not just among the robot-owning classes. Otherwise, consumers won’t be able to afford that which the robots produce, markets will dry up, and global capitalism will stop functioning.
Agreed. And I don’t see our current economic structure standing up to this. I think we’ll need a system that gives people value that isn’t “What can you produce / what do you own?” The transition period will be brutal and we have to be careful how the elite use their influence during the restructuring. But if we’re motivated enough we could end up with a much better balance of power.
If everyone lived like the average American, we’d need 4-8 Earths to support the population, depending on which study you go by.
Some of those adults voted for Trump. Unfortunately, you can’t trust any of them.
My opinion: the current state of AI is nothing special compared to what it can be. And when it gets close to all it can be, it will be used (as always happens) to generate even more money and no more equality. The movie “Elysium” comes to mind.
The problem is that I’m pretty sure that whatever benefits AI brings, they are not going to trickle down to people like me. After all, all AI investments are coming from the digital landlords and are designed to keep their rent-seeking companies in the saddle for at least another generation.
However, the drawbacks certainly are headed my way.
So even if I’m optimistic about the possible uses of AI, I’m not optimistic about this particular strand of the future we’re headed toward.
Generally, people are wary of disruptive technology. While this technology has potential to displace a plethora of jobs for the sake of increased productivity, companies won’t be able to move product if unemployment skyrockets.
Regardless of what people think, the Pandora’s box of AI is opened and now the only way forward is to adapt.
Yes.
All our science fiction stories prepared us for a world where AI was only possible with a giant supercomputer somewhere, or some virus that exists beyond human control, spread throughout the internet.
We were not prepared for the reality that all at once, any average Joe could create an AI on their home PC.
We absolutely can’t go backwards, and right now we’re in the most important race in history, against every other country and company, to create the best AI.
Whoever can make a self-replicating, self-improving AI first will rule the world. Or rather its AI will.
What companies have decided to call AI is not at all the same as what AI used to refer to and what science fiction stories refer to.
GPT-4 absolutely is on the spectrum of true artificial general intelligence.
We have arrived.
But it’s being used today by doctors to rewrite patient notes to sound more empathetic.
What SciFi depiction of AI had it being used by humans in order to be more empathetic than humans?
We got it badly wrong in terms of predicting what it would look like versus what it actually is.
deleted by creator
I don’t understand why people don’t have the imagination to picture all the possibilities in which AI can help us progress from the absolutely dismal state of the world we currently live in. Yes, there are risks, but I just desperately want technology to progress, even if I myself live somewhat comfortably for now.
My concern is that the people that already own everything today will capture all of the new value created by AI + automation and the rift of inequality will only deepen.
Guillotines aren’t as effective when they have AI-controlled assault drones.
Any war against the rich will be a guerilla war. The National Guard has shinier toys than you.
Wanna bet?
I will bet literally everything I own that you don’t have a drone with air-to-ground missiles.
You’re right, I’ve got an arsenal that’s 2000-10000 years ahead of everything.
I can’t imagine AI controlled assault drones would help rich people at all. If that was a fear, wouldn’t the same fear be around since the invention of tanks or any military advancement?
If some private citizen starts using attack drones, I don’t think it will work out well in most countries. Even if the government didn’t intervene, which it would immediately.
I can’t imagine AI controlled assault drones would help rich people at all.
Rich people sell them. Armies benefit from AI-controlled drones because they can be extremely precise, hit moving targets, and don’t care about connection interference if the brains are on board.
I guess profiting from them, yeah. Guess I was speaking in the OP context as a response to a guillotine
I don’t understand why people don’t have the imagination to picture all the possibilities in which AI can help us progress from the absolutely dismal state of the world we currently live in.
Because one of the primal functions of our brains is to protect us from threats. These people live today. If history teaches us anything, it’s that such inventions benefit elites and it takes years of active civil work to fight back.
AI can help us solve many problems, but risks are there. It doesn’t help that politicians have no idea how to regulate AI properly.
Inequality is a huge problem, but overall technology has clearly increased the standard of living globally. Maybe if I were living in the US, where low-skilled workers are treated poorly despite the tremendous wealth, it would also affect my outlook, I have to admit. Overall I feel that, long term, technology is the only thing that can get the global population to prosperity, and AI has the potential to be a massive boost for scientific progress. There must be some disruption, IMO, mostly due to progressing climate change, for which we have no answer.
It’s easy to imagine how AI can be beneficial in the short term. The problem is imagining how it won’t go wrong in the long term.
Even sci-fi has a hard time figuring that out. Star Trek just stops at a ChatGPT level of intelligence: that’s how smart the ship computer is, and it doesn’t get any smarter. Whenever there is something smarter, it’s always a unique one-off that can’t be replicated.
Nobody knows what the world will look like when we have ubiquitous, smart, cheap AI: not just ChatGPT-smart, but “smarter than the smartest human”-smart, and by a large margin. There is basically no realistic scenario where we won’t end up with AI that is far superior to us.
Even sci-fi has a hard time figuring that out.
Science fiction is just about entertainment. An AI that’s all but invisible and causes no problems isn’t really a character worth exploring.
An AI that’s all but invisible and causes no problems isn’t really a character worth exploring.
Yeah, but don’t you see the problem in that by itself? Even in the best-case scenario, we are heading into a future where humanity’s existence is so boring that it has no more stories worth telling.
We see a precursor to that with smartphones in movies today. Writers always have to slap some lame excuse in there for the smartphones not to work, as otherwise there wouldn’t be a story. Hardly anybody can come up with ideas for an interesting story where the smartphones do work.
No, I don’t see a problem in old tropes dying.
Because we’ve all seen Terminator and The Matrix.
Most US adults never aspire to create anything and thus a tool that is useful for creating is of no use to them.
But if you’re actually creating things, you’ve most likely invested time into learning creative tools. AI seems like it could be useful for quickly generating references, though most of the time there are already useful enough refs on the internet. So far AI has been more of a sidegrade, an alternative way of making something.
A majority of U.S. adults don’t believe jack shit about the benefits of most things.
I’m more angry that I can’t use a copilot at work yet.
Most US adults don’t even know what AI is, and it’s a miracle they don’t drown in their own drool… This sort of “news” is beyond irrelevant.