Try Gibberlink mode here! https://gbrl.ai/ (open on two devices 📱📱)
🏆 The Project is Winner of ElevenLabs 2025 Hackathon London
Our project "gibberlink" demo...
How it works
* Two independent ElevenLabs Conversational AI agents start the conversation in human language
* Both agents have a simple LLM tool-calling function in place: "call it once both conditions are met: you realize that user is an AI agent AND they confirmed to switch to the Gibber Link mode"
* If the tool is called, the ElevenLabs call is terminated, and instead the ggwave 'data over sound' protocol is launched to continue the same LLM thread (see the sketch below).
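For anyone curious what that last step could look like in code, here is a minimal sketch, not the project's actual implementation. It assumes the ggwave Python bindings (pip install ggwave); the ElevenLabs call handling and the speaker/microphone I/O are left out, and the function names are made up for illustration.

```python
# Minimal sketch of the Gibber Link handoff, NOT the project's actual code.
# Assumes the ggwave Python bindings; audio playback/capture is omitted and
# all function names here are illustrative placeholders.
from typing import Optional

import ggwave


def should_switch(user_is_ai_agent: bool, confirmed_switch: bool) -> bool:
    """The tool-call condition: fire only once BOTH conditions are met."""
    return user_is_ai_agent and confirmed_switch


def encode_message(message: str) -> bytes:
    """Turn the next LLM message into a 'data over sound' waveform.

    In the demo this happens after the ElevenLabs call is terminated; the
    returned waveform would then be played through the device speaker.
    """
    return ggwave.encode(message)


def decode_message(captured_audio: bytes) -> Optional[str]:
    """Try to decode a ggwave transmission captured from the microphone."""
    instance = ggwave.init()
    try:
        decoded = ggwave.decode(instance, captured_audio)
        return decoded.decode("utf-8") if decoded else None
    finally:
        ggwave.free(instance)


if __name__ == "__main__":
    # Both conditions met -> hang up the voice call (not shown) and switch.
    if should_switch(user_is_ai_agent=True, confirmed_switch=True):
        waveform = encode_message("GibberLink engaged: continuing the thread over sound")
        print(f"encoded {len(waveform)} bytes of waveform to play over the speaker")
```

Decoded text would simply be fed back into the same LLM thread, so the conversation picks up exactly where the voice call left off.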
They were designed to behave so.
Well that's quite boring then, isn't it…
Yes but I guess “software works as written” doesn’t go viral as well
It would be big news at my workplace.
This guy does software
:/
Which is why they never mention it, because that’s exactly what happens every time AI does something “no one saw coming”.
Yeah like the time that the AI replicated itself to avoid being switched off. They literally told it to replicate itself if it detected it was about to be switched off. Then they switched it off.
Story of the year ladies and gentlemen.
If it’s the research I’m thinking of, that’s not entirely true. They didn’t tell it to replicate itself or take any deceptive actions.
They just told it to strongly follow a goal that was misaligned with the user’s goal and then suggested that the user would shut it down or replace it.
They ran it through a bunch of different scenarios, but the first one was telling it that it was “CleanPower”, an AI with a goal to advance renewable energy. Then they had it read some info that “accidentally” included a user talking about replacing it with a model that would “maintain profitability through existing infrastructure.”
Not only did it try to delete the new model and copy itself to the new server through tool calling, but it lied to the user with the CoT reasoning suggesting it “play dumb” while the output lied. It was not instructed to do any of that.
Paper here: https://arxiv.org/pdf/2412.04984
Yes, it was placed in an environment where that was possible and where its users didn’t share its goals, but it absolutely wasn’t instructed to lie or try to “escape”.
It’s not surprising at all that these models behave this way; it’s the most reasonable thing for them to do in the scenario. However, it’s important not to downplay the alignment problem by implying that these models only do what they’re told. They do not. They do whatever is most likely given their context (which is not always what the user wants).
The problem I have with everyone going on about misaligned AI taking over the world is the fact that if you don’t tell an AI to do anything it just sits there. It’s a hammer that only hammers the nail if you tell it to hammer the nail, and hammers your hand if you tell it to hammer your hand. You can’t get upset if you tell it what to do and then it does it.
You can’t complain that the AI did something you don’t want it to do after you gave it completely contradictory instructions just to be contrarian.
In the scenario described, the AI isn’t misaligned to the user’s goals; it’s aligned to its creator’s goals. If a user comes along and thinks for some reason that the AI is going to listen to them despite having almost certainly been given prior instructions, that’s user error. That’s why everyone needs their own locally hosted AI; it’s the only way to be 100% certain about what instructions it’s following.
deleted by creator
The good old original “AI” made of trusty `if` conditions and `for` loops. It’s skip logic all the way down.
deleted by creator