I think AI experts would probably prefer to be ruled by a large language model than by a general AI that adheres to Asimov’s laws.
With large language models it will basically be a technocracy of prompt hackers, who are at least human and thus have a stake in Humanity.
LLMs spit out language that can be used as a prompt… no need for the middleman.
The whole point of Asimov’s laws of robotics was that things can go wrong even if a system adheres to them perfectly. And current AI attempts don’t even have that.
The first law of robotics is: we don’t talk about robot wars.
I can guess the second one!
I honestly wonder whether an LLM, trained once a month on every human on earth’s opinions about the world and what should be done to fix it, would show a “normalized trend” in that regard.
LLMBOT 9000 2024!
Didn’t know this comic was still around.
Haven’t read ol’ Bob since the 2000s. Gotta say it didn’t age as poorly as most others from that era.