Hey Alexa, Bring Back the Dead
It’s a tale as old as Silicon Valley: tech designed with good intentions that, when integrated into society, reveals unintended — and often harmful — consequences.
A few shining examples: Social media companies that let us connect with friends and family across the world but soon became cesspits for misinformation, hate, fake news and data harvesting (the latter might not be unintended). AirTags that let you keep track of your keys, or your dog, or, you know, stalk somebody. Smart home devices designed to make living simple, at the cost of spying on you and capturing data on just about everything you do.
And now we can add another.
Amazon is developing technology that allows its Alexa devices to replicate the voices of the dead. During its annual re:MARS conference, the company showed a snippet of video in which a boy asks Alexa for his bedtime story to be read to him by his grandmother. The device then switches to an A.I.-generated version of his grandmother’s voice and begins reading. Shucks. I’ll admit it’s a sweet concept on the surface.
How exactly it works wasn’t disclosed, but the company did offer some reasoning for why it is pursuing the feature. Describing it, Rohit Prasad, senior vice president and head scientist for Alexa, said the aim was to add more “human attributes of empathy and affect” to users’ interactions with Alexa, with the goal of making “the memories last.”
While that’s a moving thought, especially when applied to those taken away from us too soon, it’s also kind of… creepy.
The Black Mirror vibes are strong with this one
The problem is that the potential consequences are vast. One only needs to look at visual deepfakes to guess where this could go: we already have deepfakes convincing enough to trick even a trained eye, like the quite-brilliant Tom Cruise videos.
Now, audio deepfakes are catching up. The technology is already being used in TV, video games and even podcasts. (A recent example is the Anthony Bourdain documentary, which used deepfake audio of his voice to say things he hadn’t said. Incidentally, it also…