Personal Voice samples and recreates your voice, so if you ever lose the ability to speak, you can continue to communicate through your iPhone, iPad, or Mac ...
I do hope Apple built in a way to improve the voice over time using the original recordings.
While it’s decent right now, it still has that distinct robotic sound. I immediately recognized that the “creature” was using Personal Voice.
That would be unfortunate, given this feature is designed for people who might lose their voice; they might not get the chance to re-record if the feature improves down the line :(
There’s a good chance they intentionally plan to keep the robotic tone forever as a safeguard against abuse. The iPhone is too popular a product; it would make recreating someone’s voice too accessible, and that could be used maliciously.
Like how an AirTag notifies a thief that they’re carrying someone else’s AirTag.
As long as you save the recordings, I expect there will be better systems able to train off them. I would hope that the system saves the originals somewhere so it can self-improve with updates.
Ideally, it’ll reach a point where it can optionally scan facial features or hand gestures to include inflection or emotive content.
Beautiful video. Very well done.