I have seen so many articles, tweets, posts, etc. in the past few months about AI eradicating all jobs or something along those lines, and robots eliminating all need for human labour.
And then I look at all the jobs I have worked in.
Good luck using AI to get through government bureaucracies. I am sure ChatGPT can help you navigate all the regulations, apply for all the licenses automatically, stay compliant, etc. I am sure that when a company is fined millions it can just say “but…ChatGPT said this would work!”
Good luck telling the CEO to use an AI assistant. I am sure the 70-year-old CEO would rather shout at a phone that might tell them the idea does not work than shout at a group of employees who nod nervously and then implement the idea while ignoring the bad parts.
Good luck replacing humans with robots. The maintenance costs of hardware and software for an army of robots that need fuel, electricity and probably an internet connection MUST BE lower than hiring labour at minimum wage. Right? Did I forget to mention that humans can take care of themselves?
Remember that society is run by humans. Even the rich and the powerful are human and have human needs. They will still want other people to work for them.
What if a singularity-level AI took over the world? I mean, if that is possible and society fails to prevent such an event from happening, then humanity deserves to perish anyway. Also, please don’t tell me you believe in Roko’s basilisk.
Stop worrying and start living your life!
I’m not so concerned with AI stealing jobs (though the current models do steal and profit from labor to function, which is unethical). The more concerning idea is how it will be used as an invisible weapon to reinforce fascist social engineering projects.
When people first learned how much mass surveillance was going on across the world and on the Internet, they would joke wryly about “the NSA agent” who read their stuff and put them on a list whenever they made some sharp political commentary or said something that could be misconstrued as terroristic out of context, as if we each had an agent assigned to us who just sat in a room all day reading our posts and listening in on our phone calls.
The thing is, AI will actually make that kind of thing possible as it becomes more indistinguishable from humans. We could each get our own little tailored propaganda bot that tracks us across the web, not just listening and profiling us constantly, but actively manipulating and deceiving us in subtle ways, even fabricating evidence for its lies on the fly in the form of fake videos, photos, news sources and so on.
My fear is that, as humanity enters this next cycle of deepening fascism and the coordinated anti-democratic effort we’re seeing, the ever more sophisticated tools of oppression available to fascists will ensure we won’t ever emerge again. Instead we will stay in a permanent state of blind, deaf and dumb servitude with no avenue for resistance, or even for understanding the need for resistance.
Exactly what I worry about. I’m toying with AI at home; as a software engineer I kind of have to if I want to remain employable. I see where it’s good and where it’s not so good.
Automating humans’ jobs away completely? We’re a long way off. Most times you see a “wow, AI did this?” result, it was hand-picked by people as the best of dozens, and then usually polished further. It does some really cool stuff, getting about 80% of the way to a semi-usable product, but then it needs humans to fine-tune it. As for coding, sure, it can get you started much quicker or hand you functions, but we’re decades away from being able to interact with an AI to the point where it can maintain your entire ecosystem. I can’t imagine the complexity there.
What it does do a great job at is seeing patterns and pointing out those patterns, which is exactly what the people you describe want it for. “Given this person’s medical history, should we insure them?” “How likely is this person to default on their loan?” And, quite literally Minority Report style, what I’m afraid of: “Given everything we know about this person, what crimes are they likely committing?”
The old “don’t do anything wrong and no one will come looking” kind of goes out the window with that. It’s going to be able to make predictions about you based on others’ behavior. I definitely see some “we got a warrant because our AI told us you were up to no good” and some 99-year-old judge saying that’s okay.
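To make that concrete, here’s a minimal sketch of what “predictions about you based on others’ behavior” looks like in practice. Everything in it is invented: the features, the data and the choice of a scikit-learn logistic regression are just assumptions to show the shape of the technique, not any real system.

```python
# Toy sketch of pattern-based risk scoring (hypothetical features, made-up data).
# The model never sees anything "wrong" you did; it only learns from other
# people's outcomes and then scores you by similarity to them.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is a *different* person: [age, late_payments, neighbourhood_code]
others = np.array([
    [25, 0, 3],
    [40, 2, 1],
    [31, 5, 1],
    [52, 1, 2],
    [23, 4, 3],
    [60, 0, 2],
])
# 1 = defaulted on a loan, 0 = did not
outcomes = np.array([0, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(others, outcomes)

# Now score *you*, based purely on how people who resemble you behaved.
you = np.array([[24, 3, 3]])
risk = model.predict_proba(you)[0, 1]
print(f"Predicted default risk: {risk:.0%}")
```

Swap “default risk” for “likelihood of committing a crime” and you have the warrant scenario above.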
And don’t think that just because governments are heavy and slow they won’t do this. They’ll have some hot-shot contractor come along and build it for them, and they’ll get to use it with little to no oversight because “the government doesn’t own it”.
The jobs thing is a distraction so we don’t see what dystopia AI can really bring us into.
Agreed that this is a legit concern.
The most worrying aspect of AI technology is not AI itself, but the people behind it, which is why regulating AI is so important for our society. And so far society is failing on that front.