Virtual assistants and talk-back speakers are among the greatest additions to tech and smart home devices. Alexa, one of the most successful voice assistants, has made its way into countless homes and some workplaces as well. Product developers continue to work on voice modulation to make Alexa sound more human. Although Alexa has improved as a conversationalist, learning when to whisper and offering a follow-up mode, its voice remains distinctive and slightly robotic. Then again, that voice has become iconic, famous no less than a celebrity's.
But in the near future, Alexa may be able to talk using a filter that gives it a more human-like voice, specifically the voice of a deceased loved one. Recently, Rohit Prasad, senior vice president and lead scientist of Alexa AI, demonstrated this experimental feature at Amazon's annual MARS conference, claiming that AI can duplicate a person's voice from just one minute of recorded audio.
The concept is a little unsettling, and the feature could just as easily have been demonstrated with the voice of a living person. Amazon instead opted to have Alexa narrate a bedtime story to a child in the voice of his deceased grandmother. That choice is questionable, but it does illustrate the advances in humanizing the Alexa experience, and the feature could have a broader impact than simply reconnecting a user with someone they've lost. Even hearing the voice of someone in a different time zone, or of a favorite show character, could make Alexa's communications feel more authentic.
Sure, it's a terrifying step forward for AI when a machine can replicate someone's speech after ingesting just a minute of audio. But is this development unexpected? Actually, no. Deepfake technology, a term that first appeared in 2017, has been producing convincing but fake imagery for years.
And perhaps you witnessed the Metaphysic founders' America's Got Talent audition, in which they used an incredibly persuasive AI rendition of Simon Cowell. The act both delighted and alarmed me, not because it seemed like a magic trick, but because I know that AI technology is more sophisticated than many people realize.
In other words, it was inevitable that Alexa would develop mimicry skills. That said, nothing at the MARS conference promised that the capability would come to the best Alexa speakers or to the version of the assistant users currently run on their smartphones, headphones, and other connected Alexa devices. If Alexa ever gets this feature, it probably won't happen right away. It will be interesting to see how people adjust to such AI capabilities. And for a change, Alexa users might also get to choose other voices, like those of celebrities they adore.